KNOWLEDGE TO LEAD | ISSN 2765-0626 | Volume 5 | Issue 1 | Oct. 2024


HOBY

YOUTH LEADERSHIP

REPUBLIC OF KOREA



In Volume 5, Issue 1 (2024), the following papers have been published.

• Papers produced by participants of the 6th Global Youth Academic Enrichment Program

Participants in this program worked with professors or associate professors from Cornell, Harvard, MIT, or UPenn, as well as Ph.D. teaching assistants, for 10-12 weeks during the summer, including pre-training sessions. With professors and teaching assistants observing the entire research process, from choosing a research topic and conducting research to producing a final manuscript, these papers attest to the authenticity of the participants' efforts, passion, research skills, and research ethics.

• Manuscripts by participants of the Paper Presentation Track at the 4th HOBY Korea International Young Scholars Conference 2024

These papers were evaluated by professors or associate professors from Cornell, MIT, or UPenn.

• Research proposals developed by participants of the Research Mentoring Track at the 4th HOBY Korea International Young Scholars Conference 2024

• Award-winning manuscripts from the 8th International Youth Research Paper Competition 2024

• Research reports by participants of HOBY Korea's 2024 Youth STEM Internship

Papers published in this volume were checked for plagiarism using Turnitin, a tool that identifies content similarities. AI usage was permitted for editing and improving grammar.


TABLE OF CONTENTS

RESEARCH PAPERS – SOCIAL SCIENCES

6    Implementation of Crime Prevention Through Environmental Design (CPTED) at Yongsan International School Seoul (YISS) and Yongsan-gu | An, Claire Eunsae
17   Dynamic Asset Allocation with Markowitz Theory and Proximal Policy Optimization | Baek, Seo-yun
23   Main Obstacle to Achieving Poverty Eradication: Extreme Poverty in Sub-Saharan Africa | Bak, Sunghyun
42   Financial Barriers to Orthodontic Care Access for Children with Special Healthcare Needs in Washington: A Literature Review | Cho, Claire
49   The Reciprocal Relationship Between Humans and Architecture: An Exploration of Architectural Psychology and Case Studies in Design Intentions | Cho, Eunbin
56   A Waft of Miss Dior in Moscow: In Defense of Haute Couture in the Soviet Fashion Scene | Choi, Eun Seo
74   How have Reforms in Greece's Public Administration Influenced the Absorption and Implementation of the EU Cohesion Funds? | Choi, Sunho
98   Unveiling Gender Disparities: A Comprehensive Analysis of South Korea's Social and Policy Challenges and Their Significance for Gender Equality | Chung, Joylynn
111  The Role of Early Childhood Education in Educational Achievement Disparities Between White and Hispanic/African American Students: A Review | Hwang, Alison
119  Music Instrumental Interventions and their Effects on Mental Health and Social Adjustment among Refugee Children and Adolescents | Hwang, Hannah
129  The Impact of the US Presidential Election on Global Business Strategies and Consumer Behavior | Jang, Dongmin
146  A Comparative Analysis of Three Key Policies Under the Trump and Biden Administrations: Fiscal, Trade, Tariffs | Kim, Dayeon
162  Significance of K-pop in Influencing Japanese Consumers' Views on Korea between 2011 and 2017 | Kim, Harhim
169  Comparison Between the Effectiveness of Aiding with Irrigation Infrastructure and Immediate Financial Aid as a Way to Foster Agricultural Productivity and Food Security in Sub-Saharan Africa | Kim, Junseo (Liam)
180  Beyond Sacrifice: Enhancing Human Rights with Minority Inclusion in South Korea's Labor Market | Kim, Yejin
189  The Reasons Why Belarus's Relationship with Russia is Significant to the EU and Surrounding European Nations | Lee, MinSeok
198  China's Soft Power Strategies: The Influence of Panda Diplomacy and Global Engagement | Lee, Haryeong
206  How did Christianity influence the arts and techniques of 17th century Japanese art, with respect to the Jesuit painting Seminario and the Virgin Mary Kannon sculpture? | Lee, JoonSeok
215  How did the People's Republic of China develop its foreign policy to foster the growth of the domestic semiconductor market by changing diplomatic strategies leveraging China-U.S. geopolitical relations? | Oh, Seungwoo
222  The Impact of Song Lyrics on Environmental Awareness | Rim, Jaewook
228  A Comparative Study of Korean and Japanese Policies On Low Fertility Rates and Aging Societies: Unveiling Distinctive Approaches | Sohn, Olivia Chaeri

RESEARCH PAPERS – STEM

246  Investigating the Effect of Piezoelectricity on Spirodela polyrhiza | Ahn, Lugh Kang
256  ICA-DNN: Novel Neural Network Architecture for Prediction of Efficient Power Output in Large-Scale Wave Energy Farms | Aussarbekov, Adilet
268  Correlation between Time of Light Exposure and Plant Growth | Bang, Seoyun
275  Enhanced Adversarial Attack on Voice Conversion Using Layer-Wise Relevance Propagation | Hong, Ryan
286  Exploring the Impact of Water Temperature on the Electrical Resistance of Submerged Copper and Aluminum Wires: An investigation suitable for aiding high school physics education | Im, Youngchan
298  Management of Fine Dust in School Life - Focusing on Cheongshim International High School | Jo, Ajin
317  A Study on the Space Use Status of High School Students and Implications of Future-oriented School Space Using Space Syntax | Jo, Ajin
329  Optimizing Cultured Meat Production: Development of Affordable Conditioned Media Enriched with Afamin and Wnt3a Proteins | Jung, Yae Joon | Kim, Siyoon | Kim, Eyoung | Son, Yubin | Cho, Nayoon
339  Investigation into the tumor-suppressive effects of the IFN-ε gene on glioblastoma proliferation and viability | Kang, Gerald Hyunwoo
348  Comparative Analysis of CT Scan Reconstruction Methods: Filtered Back Projection (FBP) vs. Simultaneous Algebraic Reconstruction Technique (SART) | Kim, Aaron Junung
359  Comparative Analysis of State Vector and Density Matrix Simulation on Hadamard's Gate | Kim, Dohoon
369  A Study on the Clinical Methodology for the Diagnosis of Neurodegenerative Diseases | Kim, Dongeon
380  Nanotechnology for Corrosion Prevention in Deep Marine Environments | Kim, Jinheon
391  Density-Based Clustering Method of Isotropic and Anisotropic Data Using Hill-based Abstraction | Kim, Woojin
406  Evaluation of Noise Reduction Filters on Retinal Images with Varying Noise Levels for Optimal Image Restoration | Kim, Yewon
413  The Effect of Ventilation on Indoor Dust Concentration | Ko, Bisong
421  The Relationship between Telehealth Usage and Healthcare Affordability: A Cross-Sectional Analysis of U.S. States | Koo, William Bon
431  Temporal Dynamics of Physical Intervention on Brassica oleracea Growth | Kwon, Michaela Cho
444  Elucidating Thigmomorphogenesis: Effects of Mechanical Stimulation on the Growth Rate of Basils | Lee, Inho
459  Temperature as an indicator for the extent of DENV transmission | Lee, Kunwoo (Grant)
467  Exploring the Wave Protection Capability of Wave-Block According to the Slope Angle of the Breakwater Surface | Lee, Sangmin & Choi, Yunho
488  Exploring the Temporal Dynamics of Honeybee Acoustic Influence on Nectar Production in Lavandula angustifolia | Lee, Sung Joo
503  Wind Strength and Basil Growth: A Study on Thigmomorphogenesis and Agricultural Implications | Park, Gaon
509  Effectiveness of Air Purifier vs. Natural Resources in Reducing Dust Levels | Park, Sieun
519  The Differentiation of Combined-Cycle Engines for Efficient Hypersonic Flight: The Rocket-Based and the Turbine-Based Combined Cycle Engines | Park, So Young (Amy)
527  Temperature as an indicator for the extent of DENV transmission | Shin, Kayla Kyungwon
537  Comparative Analysis of Photoresist LOR3A and Chromium Sacrificial Layers for Biocompatible Polymer SU8 Kirigami Structure for Neural Organoid Biosensor Fabrication | Song, Jennifer | Tang, Jiayin | Manzoor, Maheen
553  Antibacterial Effect of Silica-Agar Structure Coated with AgNPs | Sung, David Jun
566  Water balance in the Great Lakes | Tung, Sooyeon
578  The Impact of Generative AI on Culture and the Impact of Culture on Generative AI | Yeo, Iksun (Justin)
589  Exploring the Role of Rare Gene Variants in Autism Spectrum Disorder: Insights from Songbird Models | Yoon, Soyeon (Anna)

RESEARCH PROPOSALS

599  How does the educational background (of South Korean students) influence their attitude toward North Korean defectors? | An, Chaehyeon
604  Enhancing the Effectiveness of Biofeedback Therapy for Headache Relief through Relaxation Techniques: ANS Arousal and Psychological Readiness | Chung, Ahyoung
610  The Reason Individuals Prioritize their Own Profits over Addressing the Climate Crisis: Psychological Instincts | Jo, Yunchae
614  Developing the Error Correcting Mechanism of DNA Data Storage Through the Intervention of Optical Discs | Kim, Rinho
618  Enhancing Education through AI mentoring: Exploring Benefits and Challenges of Personalized Learning Support | Lee, ByoungJu
620  Drug-Based Epigenetic Modulation of Aging: Therapeutic Approaches and Mechanistic Insights | Lee, Handong


Implementation of Crime Prevention Through Environmental Design (CPTED) at Yongsan International School Seoul (YISS) and Yongsan-gu

Author
Full Name (Last Name, First Name): An, Claire Eunsae
School Name: Yongsan International School Seoul

Abstract
This research paper investigates the application of Crime Prevention Through Environmental Design (CPTED) principles to mitigate theft rates in the locker rooms of Yongsan International School of Seoul (YISS). It explores the broader implications for urban safety within the Yongsan-gu district. Despite South Korea's relatively low overall theft rates, a recent increase in thefts at YISS called for a detailed analysis of the school's locker room design. The study identifies critical design flaws contributing to the thefts, including poor surveillance, weak territorial reinforcement, and insufficient space management. A comprehensive blueprint analysis of the locker room reveals significant blind spots and areas lacking clear ownership markers. To address these issues, the research proposes targeted interventions based on CPTED principles: enhancing visibility through better lighting and removing obstructive benches, defining personal spaces with secure lockers, and employing mirrors to increase perceived surveillance. Beyond the school, the research extends CPTED strategies to Yongsan-gu, a diverse urban area with challenges related to its structure and the surge of redevelopment projects. Proposed solutions include the installation of solar floor lights to illuminate dark alleyways and the redesign of neglected walls to foster a sense of community ownership and deter criminal activity. This research highlights the potential for CPTED to create safer environments in both educational and urban settings, emphasizing the importance of well-maintained design strategies in crime prevention efforts.

Keywords
Crime Prevention Through Environmental Design (CPTED), Yongsan International School of Seoul, Yongsan-gu District, Urban Redevelopment, Crime Prevention



OBJECTIVE
A few months ago, the principal of Yongsan International School of Seoul (YISS) alerted the school community that there had been a concerning increase in thefts occurring within the school's locker rooms. The stolen items varied, ranging from clothing such as sneakers, P.E. uniforms, and sportswear to money taken out of unprotected wallets. This situation was particularly alarming to me because South Korea experiences relatively low rates of theft. According to data collected by the United Nations Office on Drugs and Crime, there were 10.4 robberies for every 100,000 people in South Korea, compared with 146.4 in the U.S., roughly 14 times the South Korean rate. The idea that people, especially students, would engage in such an activity was very surprising and troubling for me. Therefore, as a measure to address this ongoing issue, I decided to apply the concept of Crime Prevention Through Environmental Design (CPTED). With further research, I decided that I could also effectively implement this concept not only within the school but also on a larger scale within Yongsan-gu, the district in which I live. The objective of this research paper is to examine the relationship between crime/accident rates and CPTED strategies. This will be achieved by analyzing the high theft rates in the locker rooms of YISS, identifying the design and structural flaws that contribute to those crimes, proposing a solution based on CPTED to mitigate the rates of theft, and further applying the analysis to address similar issues in Yongsan-gu to reduce crime on a broader scale.

CRIME PREVENTION THROUGH ENVIRONMENTAL DESIGN (CPTED)
Crime Prevention Through Environmental Design, or CPTED, is a concept first coined in 1971 by criminologist C. Ray Jeffery. In the present day, it is led by the International CPTED Association (ICA). CPTED is a strategy that introduces ways to deter crime by changing the design of buildings and public spaces, since crime is closely related to the design, structure, and layout of one's environment. CPTED can be divided into four main principles. First, the 'surveillance' principle ensures that people can easily see what others are doing; maximizing visibility deters potential offenders from committing crimes in those areas. This principle also applies to accident prevention because it secures clear sightlines between public and private places. The next principle is 'access control', which makes the line between public and private spaces clearer by guiding people in and out of a space using signs and symbolic barriers. The key goal of access control is to prevent criminals from having direct access to a target by restricting access to certain places. Next, the 'territorial reinforcement' principle holds that a space is less attractive to criminals when it looks well maintained and owned by someone. This principle relies on the ownership of spaces, since ownership encourages people to report crimes more promptly when they recognize an intruder. The last principle is 'space management': proper maintenance and cleanliness of a site signal that it is cared for and looked after regularly. To create safer environments and improve the quality of life, CPTED principles are applied alongside and extended by other environmental crime prevention theories, such as the Broken Windows Theory.



PROBLEM AT YONGSAN INTERNATIONAL SCHOOL SEOUL

Figure 1. Blueprint of YISS Locker Room (Current)

To identify why theft is so prevalent in the school's locker rooms, I created a blueprint of the locker room layout. Analyzing the blueprint revealed critical design flaws. Foremost, more than any other place around the school, the locker room is an easy site for thieves because there is no surveillance system within it. The locker room fails to follow the surveillance principle of CPTED, as there is a 'blind spot' within the locker room where theft can occur undetected.

Figure 2. Blueprint of YISS Locker Room (Sections)



As illustrated above, the locker room can be split into five distinct zones. Though visibility is maintained in zones A, B, and D, zones C and E do not provide a clear view of what is going on. Zone E, however, is exempt from surveillance considerations because it functions as a shower room and is designed to give people privacy. On the other hand, zone C represents a complete blind spot: it is located in the corner and dimly lit, creating an ideal environment for thieves. Next, the locker room also fails to meet the territorial reinforcement principle of CPTED. The problem lies in the fact that personal boundaries in the locker room are very vague. The lockers in zones A and B are not fully utilized, with clothing, shoes, and duffel bags frequently scattered on the floor. Furthermore, a critical issue is that the lockers do not have locks, making it impossible for students to mark clear personal territories. Overall, the lack of organization, surveillance, and security in the locker room increases the risk of theft.

APPLICATION OF CPTED AT YISS
To address the ongoing issue, applying CPTED is an effective strategy to reduce the persistent theft. The problem can be tackled through three aspects: open design, securing visibility, and self-reflection. All three are tangible solutions that are easy to implement yet effective in reducing crime.
A. Open design: The open design aspect can be addressed by removing the two wooden benches in the center of the locker room. Removing the benches improves the line of sight and thus visibility throughout the entire locker room. Because the benches act as a border separating zones A and B, removing them allows students to secure a better view around the locker room. Furthermore, because students previously used the benches to hold their belongings, thieves often took advantage of items left out in the open unprotected. With the benches removed, students must put their belongings in the lockers, reinforcing ownership of their belongings.

Figure 3. Blueprint of YISS Locker Room Implementing CPTED (1)



B. Securing visibility: For problematic zones like zone C, where visibility is highly limited due to its secluded location and dim lighting, visibility can be improved and secured simply by adding a set of lights. Because it is realistically difficult to eliminate the corner itself, making the area brighter is a good way to apply CPTED. The dark corner, previously a complete blind spot, can become like any other zone in the locker room with better lighting. Good lighting increases the likelihood of offenders being observed, which deters potential theft.
C. Self-reflection: Mirrors can be installed to reduce rates of theft. This aspect is a mix of CPTED and psychology. First, mirrors can play a role in monitoring crime and eliminating blind spots. A mirror reflects activity around the locker room, so it can be used to detect suspicious behavior. Mirrors allow students to stay aware of their surroundings, letting them notice movement even when they are not looking directly at something. Mirrors can also act as a psychological deterrent to crime: their presence creates a perception of being monitored, which further discourages potential thieves. Since, for privacy reasons, surveillance cameras cannot be installed in the locker room, mirrors serve as an effective immediate measure to deter theft.

Figure 4. Blueprint of YISS Locker Room Implementing CPTED (2)

PROBLEMS IN YONGSAN-GU
CPTED can be extended and applied to the district of Yongsan. The district encompasses 21.87 km² of land, with 16 different 'dongs', or administrative neighborhoods. It is a diverse district in central Seoul, known for its blend of modern culture and historical significance. However, Yongsan faces a variety of challenges concerning its urban area, making CPTED a necessary approach for making Yongsan a better place to live.
1. Redevelopment and Lack of Maintenance
A. Yongsan-gu has attempted to start significant redevelopment projects aimed at modernizing the area. While many plans have been made, barely any have been carried out yet. In the areas where redevelopment is planned, electric poles and power lines can be seen everywhere along the narrow



alleys, the hilly areas are not properly maintained, trash litters the alleys, people put sheets on their roofs to keep rain from seeping into their old houses, roadblocks are broken, the paint on the walls is peeling off, and demolished houses are left neglected.

Figure 5. Current Status of Yongsan-gu Redevelopment Areas - Surveyed and Photographed by Claire An

Areas like those shown above lack the surveillance and maintenance aspects of CPTED, as these streets are neither well maintained nor located in open areas accessible to other people.
2. High-rise Areas
A. As its name suggests, Yongsan (龍山) derives in part from the word 'san' (山), meaning 'mountain'. Yongsan-gu occupies hilly, high ground in downtown Seoul. Its geography has benefits: even when heavy rain hit the metropolitan area in 2022, unlike



Gangnam-gu, where flood damage was severe, Yongsan-gu suffered no significant damage because it was located on high ground. However, the high ground also has negative aspects.
B. High areas are secluded, lying beyond the natural surveillance range of average citizens. This feature can be seen in several areas around Yongsan.

Figure 6. Current Status of Yongsan-gu High Areas - Surveyed and Photographed by Claire An

C. Not only does this feature hinder the surveillance aspect of CPTED, but it is also a factor in potential accidents. This was especially evident during the Seoul Halloween crowd crush in 2022, in which the hilly and narrow structure of Itaewon partially contributed to the deadly crush.



Figure 7. Seoul Halloween Crowd Crush Diagram & Street View

APPLICATION OF CPTED AT YONGSAN-GU
Several areas in Yongsan already have a history of applying CPTED. Especially in towns like Ichon-dong, where many families and students live, several applications of CPTED are already in place. For example, there is the 'yellow road' designed to alert drivers near school zones to watch out for students, there are 'GOBO lights' projected from telephone poles to light up the dark streets, and there are also colorful murals to improve the quality of the dirty old concrete walls.

Figure 8. GOBO Lights

Figure 9. Telephone Poles to Lighten Up the Dark Streets



Figure 10. Murals for CPTED Purposes

However, CPTED is seen less in other areas of Yongsan. Especially in areas that are poorly maintained, no attempt to apply CPTED can be found. This is particularly dangerous because at night the hilly, small alleyways are very dimly lit and far from regular maintenance. Furthermore, though many of these areas are approved for redevelopment, the process takes very long, leaving old, unmaintained houses in a neglected state for years. Therefore, to deter crime in these areas, CPTED can be applied.
A. Solar floor lights: Known as '쏠라표지병' (solar road studs) in Korean, these are rechargeable solar floor-lighting devices that store solar energy during the day and emit light only at night. First, these floor lights illuminate dark alleys, improving residents' safety by relieving the psychological anxiety citizens feel in these areas at night. They also have a crime prevention effect, deterring would-be offenders through natural surveillance. The lights are also effective for access control, as they outline clear pathways for pedestrians, reducing the likelihood of straying into private areas. Furthermore, their greatest value comes from being a very cost-effective solution for dimly lit alleys: they require little maintenance, making them a sustainable way to light up dark alleys, and they are a highly reliable source of light, since they run even during power outages. Solar studs not only help create safe villages but also have a proactive crime prevention effect, improving aesthetics through lighting and relieving the anxiety of people walking down the street at night through natural surveillance. If applied, the solar lights are expected to cover blind spots that streetlights do not reach, help prevent crime, and ease pedestrians' anxiety when passing through dark alleys at night.
B. Redesigning outdated walls: As another CPTED solution, dirty old walls can be redesigned into bright, clean ones. This method relates to the 'Broken Windows Theory'. The 'broken window' is a metaphor for visible signs of disorder in a dirty, unmaintained environment. The theory argues that there is a connection between a person's physical environment and their likelihood of committing a crime. The effectiveness of redesigning and cleaning up outdated walls has already been demonstrated in several cases. For example, in Gasan-dong in Geumcheon-gu, Seoul, villagers worked together to repaint their walls and install lights. According to a Geumcheon-gu official, as the crime-prone area turned into a pleasant setting, the crime rate also decreased.

EFFECTS
The effects of CPTED have already been demonstrated in several areas of Korea. The Daegu Police Agency's CPTED project has had a substantial crime prevention effect. Since 2012, the Daegu police have collaborated with districts and public institutions, together spending 76.5 billion KRW on



a CPTED project. LED crime prevention lights and emergency bells were installed in alleys to create a 'safe road home', and CCTVs were installed in passageways vulnerable to crime, such as dark underpasses. As a result, the number of cases of the five major crimes (murder, robbery, rape, theft, and assault) in Daegu in 2018 was 22,155, a 6.3% decrease from the previous year (23,653 cases). Within Seoul, Mapo-gu has so far seen the greatest benefit from CPTED. As part of the CPTED project promoted by the Seoul Metropolitan Government in 2012, a 1.7 km 'salt road' was created in Yeomni-dong. By adding an exercise space, a guard house, and a guard post, and by repairing the walls, the town saw a crime prevention rate of 78.6%, a resident satisfaction rate of 83.3%, and a 30% decrease in the number of calls reported to the police station. However, the key to making these effects last is regular maintenance. Experts warn that if follow-up management is insufficient, the expected crime prevention effect will not be achieved and CPTED could end up as a mere showpiece. Lee Yun-ho, a professor in the Department of Police at Korea Cyber University, said, "CPTED is not simply improving the physical environment. If management is neglected, the area will quickly decline and return to the way it was before."

CONCLUSION
The findings of this research highlight the role that environmental design plays in preventing crime. At Yongsan International School of Seoul (YISS), the high rates of theft in the locker rooms were linked to poor surveillance, unclear territorial boundaries, and inadequate maintenance. The proposed solutions for the locker rooms include removing benches to improve sightlines, installing additional lighting in blind spots, and using mirrors to increase perceived surveillance. These measures are practical, tangible, and effective in creating a safer environment within the school. By applying CPTED principles, specifically enhancing visibility, defining personal spaces, and maintaining cleanliness, crime rates can be significantly reduced. These principles can also be applied to the broader scope of Yongsan-gu. Solutions such as installing solar floor lights in dimly lit alleys and redesigning outdated walls can similarly enhance safety and deter crime. The success of these interventions in other regions of Korea highlights their potential effectiveness. However, the key to sustaining these benefits lies in regular maintenance and community engagement. Without continuous attention from residents and authorities, the positive impacts of CPTED could diminish over time. In conclusion, this research demonstrates that CPTED is a powerful tool for reducing crime and improving safety, both in small settings and in urban districts. By addressing environmental design flaws and building maintenance into the design, communities can significantly lower crime rates and enhance the quality of life for all residents.



REFERENCES
American Public Transportation Association. (2010, June 24). Crime Prevention Through Environmental Design (CPTED) for Transit Facilities.
동아일보. (2019, January 4). 대구 "셉테드 사업" 효과… 5대 범죄 6% 줄었다. https://www.donga.com/news/Society/article/all/20190103/93550404/1
김미소. (2021, July 15). 강동구, 여성귀갓길 환히 밝혀주는 솔라표지병 설치. 뉴스로. https://www.newsro.kr/article243/newsro134841
경찰청 생활안전과. (2005, September). 환경설계를 통한 범죄예방(CPTED) 방안.
경찰청생활안전국. (n.d.). 범죄예방 환경개선(CPTED) 정책의 바람직한 방향.
하대석. (2017, January 23). 개소리가 세상을 지킨다고? SBS NEWS. https://news.sbs.co.kr/news/endPage.do?news_id=N1004005605
헤럴드경제. (2023, August 31). 꺾인 팻말, 흐릿한 안내선…흉악범죄에 "셉테드" 늘리겠다더니. https://mbiz.heraldcorp.com/view.php?ud=20230831000207
이승재. (2016, October 14). 마포구, 마을안전 "쏠라표지병" 설치. 시사경제신문. http://www.sisanews.kr/news/articleView.html?idxno=21516
Nation Master. (2014). South Korea vs United States: Crime Facts and Stats. https://www.nationmaster.com/country-info/compare/South-Korea/United-States/Crime
Pellington, P. (2022, April 20). The 5 Pillars of CPTED: Natural Access Control. Deep Sentinel. https://www.deepsentinel.com/blogs/crime-prevention-through-environmental-design-natural-accesscontrol/
Penrith City Council. (2014). Penrith Development Control Plan 2014, C1 Site Planning and Design Principles, C1-17. https://www.penrithcity.nsw.gov.au/images/documents/services/healthsafety/Crime_Prevention_Through_Environmental_Design_Control_Plan.pdf
The International CPTED Association (ICA). (n.d.). A Brief History of the ICA. https://www.cpted.net/A-brief-history



Dynamic Asset Allocation with Markowitz Theory and Proximal Policy Optimization

Author
Full Name (Last Name, First Name): Baek, Seo-yun
School Name: Korean Minjok Leadership Academy

Abstract
This study is significant in that it integrates the traditional portfolio proposed by Markowitz with a portfolio based on reinforcement learning. This integration optimizes and maximizes performance by addressing the shortcomings of the two models. The application of reinforcement learning allows the Markowitz-based portfolio to adapt dynamically to rapidly changing market conditions, and the Proximal Policy Optimization (PPO) algorithm improves the stability of this process through its clipping technique. The portfolio is designed by combining Markowitz's portfolio with the reinforcement learning portfolio when determining the proportion of asset distribution at each purchase or sale decision. In back-testing on market data from the previous year, this produced a cumulative return of 117% relative to Markowitz's portfolio and 336% relative to the reinforcement learning portfolio. Furthermore, the potential risks associated with high returns were shown to be effectively managed, as the combined portfolio outperformed both models on the Sharpe ratio and Sortino ratio, which measure risk-adjusted performance.

Keywords
Markowitz Theory, Reinforcement Learning, Proximal Policy Optimization



1. Introduction
In the context of portfolio optimization, it is of paramount importance to determine the proportion of assets to be distributed in order to achieve the greatest possible performance. Previously, Markowitz's Mean-Variance Optimization (MVO), or traditional asset-allocation models designed on its basis, were the primary means of determining asset distribution proportions. However, because most of these models rest on linear assumptions, this approach was not conducive to responding dynamically to changing market conditions in real time or to capturing nonlinear relationships among financial data. In recent times, as artificial intelligence has demonstrated remarkable proficiency in data analysis and pattern recognition, the technology has been proposed as a potential solution to the aforementioned issues. In particular, reinforcement learning has the advantage of enabling the model to adapt rapidly to changes in market conditions through interaction with a given environment, such as financial markets. When models based on this approach were tested with actual market data, they demonstrated superior performance compared to a traditional portfolio. Nevertheless, if only reinforcement learning is employed to determine the proportion of asset distribution, there is a considerable likelihood of incurring losses during the initial stages of learning, when the model is not yet sufficiently adapted to the environment. Furthermore, the exploration process of reinforcement learning is inherently random, which may result in prolonged losses if learning progresses in an erroneous direction. Additionally, it is more volatile than traditional portfolios, rendering it susceptible to risk management challenges. Accordingly, this study sought to address the deficiencies of the two portfolios by developing a hybrid portfolio that combines the distribution weights of the Markowitz portfolio with those of the reinforcement learning portfolio. This hybrid portfolio was designed to support the reinforcement learning model's decision-making, specifically in determining the distribution weights of assets. Furthermore, the introduction of Proximal Policy Optimization (PPO) enabled the model to update its policy stably through the clipping technique.
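For reference, the clipped surrogate objective that PPO maximizes (Schulman et al., 2017) can be written as

    L^{CLIP}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)},

where \hat{A}_t is the advantage estimate and \epsilon is the clipping parameter (commonly around 0.1-0.2). Clipping the probability ratio r_t(\theta) keeps each policy update close to the previous policy, which is the stability property this study relies on.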

2. Related Works
2.1 Portfolio using Reinforcement Learning only
A number of studies have built portfolios with reinforcement learning models and examined their performance, for example using adversarial deep reinforcement learning [1]. Research has also been conducted on value-based algorithms such as the Deep Q-Network (DQN) [2]. However, these studies presented no solution for risk management, given the instability and high volatility of reinforcement learning in the early stages of training.

2.2 Portfolio using Integration of DDPG and Markowitz Theory
Zheng et al. [3] conducted studies that integrate Markowitz Theory with a reinforcement learning model based on DDPG. However, the DDPG algorithm is highly susceptible to noise, which limits its efficacy in financial market scenarios where environmental fluctuations are prevalent. Furthermore, the DDPG algorithm requires a multitude of hyperparameters, rendering the tuning process exceedingly intricate. Moreover, Araújo et al. [4] conducted a study that integrated the two portfolios using knowledge distillation. However, this approach resulted in a portfolio that was overly influenced by the Markowitz Theory, limiting its ability to respond promptly to real-time market data.



3. Methodology
3.1 Data Collection
Daily price data for S&P 500 stocks were downloaded over a ten-year period and accumulated for training.
3.2 Reinforcement Learning-based Baseline Model
The state at each time step was defined by the portfolio balance, the distribution weights, and the rate of return; the action adjusts the portfolio weight of each asset by a ratio between -10% and 10%. Rewards were assigned according to the rate of return of the current portfolio: if the portfolio balance increased, a positive reward equal to the rate of return was given; conversely, if the portfolio balance decreased, a negative reward was given. A policy-based algorithm was selected as the reinforcement learning algorithm because the inherent volatility of asset prices and returns necessitates continuous adjustment of the asset proportions. Additionally, the financial market is characterized by high volatility, which can rapidly result in significant losses. To address this, the PPO algorithm was employed to facilitate stable policy convergence through the clipping technique.
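To make the state/action/reward design above concrete, the following is a minimal sketch of such an environment. The class and variable names are hypothetical (this paper does not publish its code), and details such as transaction costs are omitted.

    # Minimal sketch of the environment described in Section 3.2.
    # Hypothetical names; transaction costs and other details omitted.
    import numpy as np

    class PortfolioEnv:
        def __init__(self, prices, initial_balance=1.0):
            self.prices = np.asarray(prices)    # shape (T, n_assets), daily prices
            self.n_assets = self.prices.shape[1]
            self.initial_balance = initial_balance
            self.reset()

        def reset(self):
            self.t = 0
            self.balance = self.initial_balance
            self.weights = np.full(self.n_assets, 1.0 / self.n_assets)
            self.last_return = 0.0
            return self._state()

        def _state(self):
            # State: current balance, distribution weights, and last return.
            return np.concatenate(([self.balance], self.weights, [self.last_return]))

        def step(self, action):
            # Action: per-asset weight adjustment, clipped to [-10%, +10%].
            w = self.weights + np.clip(action, -0.10, 0.10)
            w = np.clip(w, 0.0, None)
            self.weights = w / (w.sum() + 1e-12)  # renormalize to a valid allocation
            self.t += 1
            asset_returns = self.prices[self.t] / self.prices[self.t - 1] - 1.0
            self.last_return = float(self.weights @ asset_returns)
            self.balance *= 1.0 + self.last_return
            reward = self.last_return             # reward mirrors the gain or loss
            done = self.t >= len(self.prices) - 1
            return self._state(), reward, done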

3.3 Integration with Markowitz Theory
In the baseline model, the portfolio weight was determined through the reinforcement learning algorithm when taking an action at each step. However, the portfolio's performance was markedly diminished at the outset of the learning process, and the agent also failed to adapt to the environment when learning proceeded in the wrong direction. Consequently, the stability of the model was ensured by employing an ensemble technique: the action is taken with the weighted average of the portfolio weights derived from the Markowitz theory's mean-variance optimization and the portfolio weights determined by the reinforcement learning algorithm.
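As one concrete instance of the mean-variance side of this ensemble, the closed-form minimum-variance portfolio can be computed as in the sketch below. This is only an illustration: the exact MVO formulation used in this study (target return, constraints) is not specified here.

    # Closed-form minimum-variance weights: w = C^{-1} 1 / (1' C^{-1} 1).
    # Illustrative only; the study's exact MVO setup is not specified.
    import numpy as np

    def min_variance_weights(returns):
        # returns: (T, n_assets) array of historical asset returns
        cov = np.cov(returns, rowvar=False)   # sample covariance matrix
        inv = np.linalg.pinv(cov)             # pseudo-inverse for numerical safety
        ones = np.ones(cov.shape[0])
        w = inv @ ones
        return w / (ones @ w)                 # normalize so the weights sum to 1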

Figure 1. Workflow of Reinforcement Learning-Markowitz Theory Integrated Model



The Markowitz theory initially determines the model's asset distribution proportions; thereafter, starting from a 1:1 ratio, the weight given to the reinforcement learning algorithm is increased by 0.01 at each step. This compensates for the instability of reinforcement learning in the early stages of learning and enables dynamic adaptation to the market through reinforcement learning, with the objective of maximizing performance.
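A minimal sketch of this blending schedule, under the assumption that the reinforcement learning share starts at 0.5 (the 1:1 ratio) and grows by 0.01 per step until it reaches 1:

    # Hypothetical helper illustrating the ensemble schedule described above.
    # Assumes w_markowitz and w_rl are NumPy arrays of per-asset weights.
    def blended_weights(w_markowitz, w_rl, step):
        alpha_rl = min(1.0, 0.5 + 0.01 * step)   # RL share grows 0.5 -> 1.0
        w = alpha_rl * w_rl + (1.0 - alpha_rl) * w_markowitz
        return w / w.sum()                        # keep a valid allocation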

4. Experiment
4.1 Backtesting Environment
To assess the efficacy of the portfolio, a backtest was conducted using actual market data from August 1, 2023 to August 1, 2024. The environment was configured so that the value of the portfolio could be quantified from the rate of return over time.

4.2 Result and Analysis
In this study, we conducted a comparative analysis of the value changes over time of three portfolios: the portfolio introduced in this study (red graph), the portfolio using Markowitz theory only (blue graph), and the portfolio using reinforcement learning only (green graph).

Figure 2. Portfolio Value Changes Over Time Step

The S&P 500 stocks were classified into three categories based on volatility: high, medium, and low, and stocks were selected from each category to form a portfolio. The implementation of Markowitz theory resulted in an increased proportion of low-volatility stocks and a stable improvement in portfolio value. However, the rate of return declined in the latter half of the period, indicating an inability to adapt to changes in market conditions. The reinforcement learning portfolio exhibited the least favorable performance among the three portfolios, with higher volatility than the Markowitz portfolio. Although performance improved gradually over time, high overall performance was not recorded due to low returns in the early stages. The portfolio introduced in this study recorded stable returns from the beginning through asset allocation based on Markowitz's theory. Furthermore, its rate of return gradually accelerated, reaching the highest portfolio value among the three. This indicates that Markowitz's stable portfolio optimization and the dynamic adaptability of the PPO-based reinforcement learning algorithm are effectively combined to reflect market changes.



Figure 3. Comparison of Performance Metrics Across Different Portfolio Strategies

Table 1. Specific Values of Figure 3

To compare the three portfolios' performance, the cumulative and annual returns, maximum losses, Sharpe ratios, and Sortino ratios are visualized in Figure 3 above. With regard to cumulative returns, which are typically employed to assess the value of a portfolio, the portfolio introduced in this study yielded returns of 168% relative to the Markowitz portfolio and 336% relative to the PPO-based reinforcement learning portfolio. Its maximum loss is also the highest among the three models. However, it is noteworthy that the risk associated with these high returns is effectively managed, as evidenced by the portfolio's performance on the Sharpe and Sortino ratios, which measure performance against risk.
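For reference, the two risk-adjusted metrics can be computed as in the sketch below, assuming daily returns, a zero risk-free rate, and annualization over 252 trading days (the exact conventions used in this study are not stated):

    # Sharpe and Sortino ratio sketches; zero risk-free rate and 252
    # trading days per year are assumptions, not the study's stated setup.
    import numpy as np

    def sharpe_ratio(returns, periods_per_year=252):
        r = np.asarray(returns)
        return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

    def sortino_ratio(returns, periods_per_year=252):
        r = np.asarray(returns)
        downside = np.minimum(r, 0.0)             # keep only below-zero returns
        downside_dev = np.sqrt((downside ** 2).mean())
        return np.sqrt(periods_per_year) * r.mean() / downside_dev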

5. Conclusion
The objective of this study was to enhance dynamic adaptability to the market and achieve stability by complementing the shortcomings of a portfolio based on Markowitz's theory and a portfolio based on a reinforcement learning algorithm. The portfolio introduced in this study demonstrated superior performance compared to a portfolio employing a single strategy in terms of portfolio valuation indicators, including cumulative returns. Additionally, the portfolio exhibited exemplary risk management capabilities. This study is notable for its presentation of a novel portfolio optimization strategy designed to address the dynamic and volatile nature of the financial market. However, the data dependence and complexity inherent to machine learning-based models, including reinforcement learning, represent a significant challenge that requires further investigation. Consequently, future research should aim to enhance the versatility of reinforcement learning models across diverse market environments and pursue additional risk management improvements.



References
[1] Jiang, Z., Xu, D., and Liang, J. (2017). Adversarial Deep Reinforcement Learning in Portfolio Management.
[2] Wang, Y., and Ni, Z. (2019). GraphSAGE with Deep Reinforcement Learning for Financial Portfolio.
[3] Zheng, G., and Yu, Y. (2021). Bridging the Gap Between Markowitz Planning and Deep Reinforcement Learning.
[4] Araújo, T., Dias, R., and Leão, P. (2021). Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management.



Main Obstacle to Achieving Poverty Eradication: Extreme Poverty in Sub-Saharan Africa

Author
Full Name (Last Name, First Name): Bak, Sunghyun
School Name: Korean Minjok Leadership Academy

ABSTRACT
Poverty reduction has been a primary objective of the international community for decades. As a result of global efforts, poverty has declined both in raw numbers and in proportion to the world's population. However, poverty is still the most pressing problem that Sub-Saharan Africa faces today. Sub-Saharan Africa, the poorest region in the world, suffers greatly from poverty, with a large share of its population living below the poverty line. This paper seeks to examine the extent and nature of poverty in Sub-Saharan Africa and then to make some suggestions for poverty reduction in this region. Using the World Bank's data covering the 25 years from 2000 to 2024, this study identifies the share and number of people living in extreme poverty and the countries with high poverty rates. It then analyzes the relationship between income level and poverty rate, the correlation between poverty rate and GDP per capita, and the Gini index of consumption. Finally, social poverty indicators such as child mortality, life expectancy, and literacy rate are examined. This study shows that most Sub-Saharan African countries suffering from poverty have yet to achieve sufficient economic growth, which has led to their classification as Least Developed Countries (LDCs). This paper concludes that these countries, trapped in poverty, need financial help from the outside world, including foreign aid.

KEYWORDS
Sustainable Development Goals, Poverty, Extreme Poverty, Sub-Saharan Africa, World Bank, Income Classification, Income Inequality, GDP Per Capita, Gini Index, Child Mortality, Life Expectancy, Literacy Rate, Development, Least Developed Countries (LDCs), Foreign Aid, Angola, Benin, Botswana, Burkina Faso, Burundi, Cameroon, Cote d’Ivoire, Central African Republic, Chad, Democratic Republic of Congo, Djibouti, Eritrea, Ethiopia, Gambia, Guinea, Guinea-Bissau, Kenya, Lesotho, Liberia, Madagascar, Malawi, Mali, Mozambique, Niger, Nigeria, Republic of Congo, Rwanda, Senegal, Seychelles, Sierra Leone, Somalia, South Africa, South Sudan, Sudan, Tanzania, Togo, Uganda, Zambia, Zimbabwe



1. INTRODUCTION
Poverty reduction has been a primary objective of the international community for decades. For instance, the first of the UN's Millennium Development Goals (MDGs), set in 2000, was to eradicate extreme poverty and hunger. Accordingly, a specific target was set: to reduce by half, relative to 1990 levels, the proportion of people living on less than 1.25 US dollars a day by 2015. When the MDGs were replaced by the Sustainable Development Goals (SDGs) in 2015, "No Poverty" was set as the first of the 17 SDGs. The new SDG called for the "end of poverty in all forms everywhere." It set 7 targets to achieve the goal, including the eradication of extreme poverty for all people everywhere and the reduction of the proportion of people living in poverty by half regardless of age or gender. Due to collective global efforts, considerable progress has been made in poverty reduction. The poverty-focused MDG was particularly successful, achieving its target five years ahead of schedule. The population below the absolute poverty line decreased from approximately 1.9 billion in 1990 to 1 billion in 2011 (Figure 1), and their number fell further to 836 million in 2015. Accordingly, the proportion of those living in extreme poverty fell from 37.8% to 10.8% between 1990 and 2015 (Figure 2). After decades of progress, however, the tempo of global poverty reduction began to slow from 2015. In particular, the Covid-19 pandemic had a negative impact on poverty reduction between 2020 and 2022. In 2020, the number of those living in extreme poverty increased rather than decreased. Likewise, their share rose from 8.9% to 9.7% within a year from 2019. However, the extreme poverty rate slightly decreased to 9% in 2022, owing to the economic recovery. Furthermore, it was estimated that 691 million people (8.6% of the world population) were living in extreme poverty in 2023, just below the pre-pandemic level.

Figure 1. Global Population Living below Poverty Line, 1990-2022. Source: World Bank Poverty and Inequality Platform (2024).



Figure 2. Share of Global Population Living Below Poverty Line, 1990-2019 (in 2017 PPP). Source: World Bank (2022). Retrieved from Statista.

Nevertheless, one cannot deny that global efforts to fight poverty have been largely successful, significantly reducing both the number of people living in poverty and the global poverty rate. However, significant regional differences emerge when we closely examine the regional distribution of poverty. As shown in Figure 3, the Asian region (South Asia, East Asia, and the Pacific) has experienced great success in reducing poverty. On the other hand, Sub-Saharan Africa has not made any meaningful progress over the same period.

Figure 3. Population Living in Extreme Poverty by the Region, 2000-2019. Source: Our World in Data



Africa has historically been known as the poorest continent, but this does not explain why Sub-Saharan Africa has been left behind over the last 25 years. Africa has been the largest recipient of international aid over the past several decades, which could have helped it reduce poverty. Many African countries also have rich natural resources such as oil, gold, and diamonds. Why, then, have Sub-Saharan African countries been trapped in poverty for such a long time? How serious is the poverty problem in Sub-Saharan Africa? This paper deals with the poverty problem in Sub-Saharan Africa and makes some suggestions to solve it. I will first examine the poverty level in the Sub-Saharan region through various poverty indicators and then analyze the impact of extreme poverty on child mortality, life expectancy, and education in the area. The study will conclude with some suggestions for solving the poverty problem in the region.

2. POVERTY DATA
To examine poverty levels in Sub-Saharan Africa, this paper uses World Bank data. The World Bank sets a global poverty line to collect and compare poverty rates across countries. In 2015, the World Bank classified anyone living on less than $1.90 daily at 2011 purchasing power parities (PPPs) as living in extreme poverty. In 2022, it updated the global extreme poverty line to $2.15 per person per day, using 2017 PPPs. The World Bank also tracks poverty at different income levels for countries with different costs of living, drawing the poverty line at an income of $3.65 per day for lower-middle-income countries and $6.85 per day for upper-middle-income countries. Based on the data collected on poverty in each country, the World Bank provides a poverty rate for each country, defined as the ratio of the population whose income falls below the poverty line. In this study, the Poverty and Inequality Platform (PIP) is used intensively, as the World Bank's data can be easily accessed through it. It provides information on key indicators such as poverty, inequality, and shared prosperity. Moreover, the visualizations available at PIP enable multiple ways to explore PIP indicators, from cross-country and regional analyses to country-specific and subnational analyses. In addition, this paper uses visualizations provided by Our World in Data (OWID). Based at the University of Oxford, UK, OWID is an online publication that provides information on key development indicators such as poverty, hunger, inequality, disease, war, and climate change, and is therefore useful for studying poverty. Although this paper concerns poverty mainly in Sub-Saharan Africa, information on Africa as a whole is used in cases where no data could be found for Sub-Saharan Africa specifically.
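For reference, the poverty rate used throughout this paper is the standard headcount ratio: given a poverty line z, incomes y_i, and population size n,

    P_0 = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}(y_i < z),

that is, the share of people whose income (or consumption) falls below z.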

3. POVERTY IN SUB-SAHARAN AFRICA
3.1 Population Living in Extreme Poverty
The regional distribution of poverty visualized below clearly shows that poverty disproportionately affects Africa. As can be seen in Figures 4 and 5, the extreme poverty rate decreased dramatically in most countries between 2000 and 2019, except in Africa. In 2019, many Sub-Saharan countries still had more than 30% of their population living in extreme poverty. According to the World Bank, extreme poverty remains heavily concentrated in Sub-Saharan Africa, especially in countries with fragile and conflict-affected areas.



Figure 4. Share of Population Living in Extreme Poverty, 2000. Source: World Bank Poverty and Inequality Platform (2024).

Figure 5. Share of Population Living in Extreme Poverty, 2019. Source: World Bank Poverty and Inequality Platform (2024).

Sub-Saharan Africa has had the highest extreme poverty rates globally for many decades. While the rate decreased from 56% in 2000 to 36.7% in 2019, it has consistently been more than 25 percentage points above the global average (see Figure 6). Moreover, due to population growth, the actual number of people living in extreme poverty in the region has increased rather than decreased: whereas 375.86 million people in this region lived in extreme poverty in 2000, the number rose to 411.15 million in 2019.



Figure 6. The Share and Number of People Living in Extreme Poverty, 2000-2019. Source: World Bank Poverty and Inequality Platform (2024).

No exact data are available for 2020-2022, but one can conjecture that the share and number of people living in extreme poverty in the region increased due to the COVID-19 pandemic. According to World Bank estimates, the COVID-19 pandemic, together with the adverse global effects of the war in Ukraine, could have led to 28 million people newly living in extreme poverty over the 2020-2022 period, bringing the total to more than 420 million people. In 2024, approximately 429 million people on the African continent are living below the extreme poverty line of $2.15 a day. As the continent's population is about 1.4 billion, approximately one third of its population is currently living in extreme poverty. The increase in the number of people living in extreme poverty in Africa is mainly due to the slow recovery from the economic difficulties caused by the COVID-19 pandemic. Indeed, the recovery appears to be uneven across the world. While lower-middle-income countries, which initially experienced the most serious economic setbacks, had returned to pre-COVID poverty rates by 2022, low-income countries remained above pre-COVID poverty rates in 2022 and experienced a further modest increase in poverty between 2022 and 2023. As a result, the gap has been widening rather than closing. Poverty in Africa is nevertheless expected to decline slightly in the coming years: the number of African inhabitants living below the extreme poverty line is anticipated to decrease to 402 million by 2030. However, if the current trend of rapid poverty reduction in other regions continues, Africa is projected to house 90 percent of the world's impoverished population by 2030.

3.2 Sub-Saharan African Countries with a Large Population Living in Extreme Poverty and High Extreme Poverty Rates
The 22 Sub-Saharan African countries in Table 1 account for almost half the world's population in extreme poverty. The Democratic Republic of the Congo and Nigeria had the largest numbers of people in extreme poverty, 135 million people altogether. Other African nations with large populations of poor people were Ethiopia, Tanzania, Mozambique, and Kenya, which together accounted for 95.6 million people.



Country (Closest Available Data Point) | Number in Extreme Poverty (million) | Share in Extreme Poverty
Democratic Republic of the Congo (2020) | 73.3 | 78.9%
Nigeria (2018) | 61.3* | 30.9%
Ethiopia (2015) | 27.7 | 27.0%
Tanzania (2018) | 26.1 | 44.9%
Mozambique (2019) | 22.6 | 74.5%
Kenya (2021) | 19.2 | 36.1%
Madagascar (2012) | 18.5 | 80.7%
Uganda (2019) | 18.1 | 42.1%
Malawi (2019) | 13.2 | 70.1%
Zambia (2022) | 12.9 | 64.3%
Niger (2021) | 12.8 | 50.6%
South Africa (2014) | 11.2 | 20.5%
Angola (2018) | 9.7 | 31.1%
Ghana (2016) | 7.5 | 25.2%
South Sudan (2016) | 7.5 | 67.3%
Rwanda (2016) | 6.2 | 52.0%
Zimbabwe (2019) | 6.1 | 39.8%
Sudan (2014) | 5.6 | 15.3%
Burkina Faso (2021) | 5.6 | 25.3%
Chad (2022) | 5.5 | 30.8%
Central African Republic (2021) | 3.6 | 65.7%
Lesotho (2017) | 0.7 | 32.4%

Table 1. Sub-Saharan African Countries with the Largest Number and Share of Population in Extreme Poverty. Source: World Bank, Poverty and Equity Briefs (Spring 2024). * My own calculation.

According to the latest available data, the Democratic Republic of the Congo had the highest share of its population living in extreme poverty, at 78.9% in 2020. Mozambique and Malawi had the second and third highest poverty rates, at 74.5% and 70.1%, respectively. Over 60% of the populations of South Sudan, the Central African Republic, and Zambia were in extreme poverty, and over half the inhabitants of Rwanda and Niger (see Table 1). As can be seen in Figure 7, extreme poverty is more prevalent in Africa's rural areas: some 46% of the rural population lives in extreme poverty, whereas only 7% of the urban population does. This indicates that the most affected people are often rural and remote.



Figure 7. Share of Population Living in Extreme Poverty in Africa by Area of Residence, 2018-2024. Source: World Data Lab (2024). Retrieved from Statista.

3.3 Income Classification, GDP Per Capita, and Income Inequality
According to the World Bank's income classifications in 2023, only 10 of 54 African countries were relatively well-off. Seychelles was the only high-income economy in Africa, with the continent's highest GDP per capita at $16,715.27 in 2023. The other nine countries, classified as upper-middle-income, included two North African countries (Algeria and Libya) and seven Sub-Saharan African countries (Botswana, Equatorial Guinea, Gabon, Namibia, Mauritius, Senegal, and South Africa). While Seychelles benefits from a robust tourism sector and rich fisheries, Mauritius has diversified its economy beyond traditional sectors like sugar and textiles. Algeria, Libya, Gabon, and Equatorial Guinea benefit from significant oil reserves and exports, whilst Botswana is known for its diamond mining industry. Senegal is known for iron, gas, and gold, whereas Namibia is known for non-fuel minerals and uranium. South Africa is one of the world's largest gold producers and exports coal and iron ore. On the other hand, the remaining 44 African countries were of either lower-middle or low-income status. A total of 22 countries, painted purple in Figure 8, were classified as low-income economies (see Table 2). As the figure clearly shows, all of them are located in Sub-Saharan Africa. Apart from these 22 countries, only four countries outside Africa were classified as low-income: Afghanistan, North Korea, Syria, and Yemen.



Green: high-income countries; Light green: upper-middle-income countries; Light purple: lower-middle-income countries; Purple: low-income countries

Figure 8. Income Classification by World Bank, 2023. Source: World Bank, "The World by Income," 2023.

Low-Income (22 countries): Burundi, Burkina Faso, Central African Republic, Chad, Congo (Democratic Republic of the), Eritrea, Ethiopia, Gambia, Guinea-Bissau, Liberia, Madagascar, Malawi, Mali, Mozambique, Niger, Rwanda, Sierra Leone, Somalia, South Sudan, Sudan, Togo, Uganda

Lower-Middle-Income (19 countries): Angola, Benin, Cabo Verde, Cameroon, Comoros, Cote d'Ivoire, Congo (Republic of the), Djibouti, Eswatini, Ghana, Guinea, Kenya, Lesotho, Mauritania, Nigeria, São Tomé and Principe, Tanzania, Zambia, Zimbabwe

Upper-Middle-Income (7 countries): Botswana, Equatorial Guinea, Gabon, Namibia, Mauritius, Senegal, South Africa

High-Income (1 country): Seychelles

Table 2. Income Classification, Africa, 2023. Source: Our World in Data.

Figure 9 illustrates the correlation between the share of the population living in extreme poverty and GDP per capita worldwide. Countries with poverty rates of over 30% are all Sub-Saharan countries, and the Democratic Republic of Congo has the worst poverty rate at almost 80%. GDP per capita in this group ranges from $711 (Burundi, 2020) to $6,879 (Angola, 2018). This indicates that the poverty rates of Sub-Saharan countries are not directly proportional to a country's wealth. For instance, Angola is the third largest economy in Sub-Saharan Africa, with abundant diamonds, oil, gold, and copper. Its oil and gas industry, which dominates the Angolan economy, accounts for approximately 30% of its GDP. Despite this, its poverty rate is relatively high, mainly due to social inequality and widespread corruption. Another example is Nigeria, the largest economy in Africa, which benefits from its oil and gas exports. However, not all of Nigeria's population enjoys these economic benefits, and its poverty rate is considerably high, at just above 30%. Indeed, some African countries suffer significantly from severe income inequality. According to the 2015 Gini index of consumption (see Figure 10), income inequality in South Africa was the worst in the world, with a Gini coefficient of 63.0%. Other Sub-Saharan African countries with high Gini coefficients (above 50%) were Botswana, Zambia, Lesotho, the Central African Republic, Comoros, and Eswatini. However, some Sub-Saharan African countries managed to improve on their 1990 inequality levels, reaching a Gini coefficient below 50% by 2015. As a higher Gini coefficient means greater inequality, this means that a small number of high-income individuals receive a larger share of the country's total income, leaving parts of the population in inequality and poverty. This explains why some lower-middle-income economies, such as Angola, Kenya, Nigeria, and Zambia, show relatively high poverty rates despite economic growth.
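As a brief aside on the metric itself, the Gini coefficient cited above can be computed from a list of individual incomes (or consumption values) as half the relative mean absolute difference. The Python sketch below is illustrative only, using made-up toy data; it is not the World Bank's estimation procedure.

# An illustrative Gini computation using the mean-absolute-difference
# definition. The income list is made-up toy data, not real survey data.

def gini(values: list[float]) -> float:
    """Gini coefficient: half the mean absolute difference, relative to the mean."""
    n = len(values)
    mean = sum(values) / n
    total_diff = sum(abs(x - y) for x in values for y in values)
    return total_diff / (2 * n * n * mean)

# A highly unequal toy distribution: the top earner holds most of the income.
incomes = [1, 1, 2, 2, 3, 5, 8, 15, 40, 120]
print(f"Gini: {gini(incomes):.2f}")  # ~0.73, even above South Africa's 63.0%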

Figure 9. Share in Extreme Poverty vs. GDP Per Capita, 2022. Source: Our World in Data.

32


Figure 10. Gini Index of Consumption in 2015 vs. 1990. Source: Our World in Data.

3.4 Child Mortality, Life Expectancy, and Education

The adverse effects of poverty are clearly shown in the following two figures (Figures 11 and 12). Figure 11 illustrates the correlation between child mortality and the share of the population in extreme poverty worldwide. While most countries outside Africa tend to have a child mortality rate of less than 4%, many African countries have much higher rates (see Figure 11). As of 2018, Nigeria had the highest rate at 12%, and Sierra Leone came second at 11.7%. In 2021, Niger, Chad, the Central African Republic, and Mali had mortality rates of 11.5%, 10.7%, 10%, and 9.7%, respectively. Other Sub-Saharan countries, such as the Democratic Republic of Congo, Angola, and Mozambique, showed child mortality rates between 7% and 9%. Although the poverty rates of Sub-Saharan countries are not directly proportional to child mortality, it is clear that poverty has contributed significantly to high child mortality.

Another indicator of poverty is the relatively short life expectancy at birth in Africa (see Figure 12). Whereas the global average life expectancy was over 70 years in 2021, most Sub-Saharan African countries had much lower figures. As shown in Figure 12, all of the top 20 countries with the lowest life expectancy worldwide were in Sub-Saharan Africa. Astonishingly, life expectancy in the Central African Republic was only 40 years as of 2021. Nigeria came next, with a life expectancy of 52 years. Senegal and Mauritania ranked at the top in the region, with the highest life expectancy of 67 years.

Along with poverty, undernourishment is also prevalent in Africa. Undernourishment refers to the condition of not eating enough food to remain in good health. Limited access to food and clean water contributes to poor health, which, in turn, increases the risk of poverty. As of 2022, approximately 38% of the global undernourished population was living in Africa, and around 20 percent of the African population was undernourished in that year. Undernourishment was most prevalent in Middle Africa and Eastern Africa, where it reached 29.1 percent and 28.5 percent, respectively.



Figure 11. Child Mortality vs. Share in Extreme Poverty, 2022. Source: United Nations Inter-Agency Group for Child Mortality Estimation (2023); World Bank Poverty and Inequality Platform (2022).

Figure 12. Share in Extreme Poverty vs. Life Expectancy, 2023. Source: UN World Population Prospects (2024); World Bank Poverty and Inequality Platform (2024).

The literacy rate of Sub-Saharan Africa also presents a gloomy picture (see Figure 13). Overall, Sub-Saharan African countries have the highest illiteracy rates in the world. Whereas the global average literacy rate was 87%, only a few Sub-Saharan African countries had a comparable rate. These included Sao Tome and Principe, Cape Verde, Zimbabwe, and Gabon. On the other hand, less than half of adults living in Chad, Niger, Somalia, and Sierra Leone could read and write in 2022. In particular, Chad's literacy rate was extremely low, at 28%.



Figure 13. Literacy Rate in Sub-Saharan Africa, 2022. Source: World Bank (2023).

As can be seen in Figure 14, over 37.5 million children of official primary school age were not enrolled in school in 2023, accounting for over half the children not attending school worldwide. Notably, the figure for Sub-Saharan Africa has increased, rather than decreased, since 2011. By contrast, Central and Southern Asia saw a remarkable decrease in children not attending school over the same period. According to data from 2020, over 70 percent of the poorest children living in Mali, Niger, Nigeria, and Guinea did not go to school (see Figure 15). The figures for Senegal, Mauritania, Liberia, and Chad were also high, ranging from 58% to 65%. This indicates that poor education of the poorest children is a problem faced not only by low-income countries but also by lower-middle-income and even upper-middle-income countries of Sub-Saharan Africa.

Figure 14. Number of Primary-School-Age Children Who Do Not Attend School, by Region. Source: UNESCO Institute for Statistics (2024).



Figure 15. Share of Poorest Children Not Going to School in Africa, 2020. Source: Terre des Hommes (2020). Retrieved from Statista.

4. SUGGESTIONS FOR POVERTY REDUCTION IN SUB-SAHARAN AFRICA

4.1 Facilitating Development

As this research shows, many Sub-Saharan countries suffer from high levels of poverty that negatively impact millions of lives. Multiple factors are linked to increased poverty: low income, poor health, and lack of adequate education. In general, more people fall into poverty in regions with poor conditions of employment, education, health, and nutrition, and in regions affected by conflict. Consequently, poverty is likely to be more widespread in least-developed and developing countries. Sub-Saharan Africa is such a region, composed largely of least-developed countries with high poverty levels. According to the UN, 45 countries are designated as the world's least-developed countries (LDCs), 33 of which are in Sub-Saharan Africa (see Figure 16). Multiple factors explain the lack of development in Sub-Saharan Africa: unfavorable geography and climate, cultural and historical legacies, uneven distribution of natural and government resources, political instability, war and conflict, and corruption (Acemoglu and Robinson 2010). It is clear that if LDCs in Sub-Saharan Africa fail to overcome these obstacles, they will be left behind and remain poor.

However, it seems extremely difficult for the poorest Sub-Saharan African countries to find the momentum for economic growth without external help. The 22 low-income countries in Table 2 have not escaped low-income status for over three decades; their GDP per capita has grown at a meager 0.26% annually since 1987. These countries seem stuck in a vicious cycle of poverty due to the multiple factors mentioned above, which means there will be a need for concerted engagement and support from outside development partners. Otherwise, the poorest countries will struggle to eradicate poverty, and the overarching goal of the SDGs, "to leave no one behind by 2030," will never be realized.



Figure 16. Least Developed Countries (LDCs), 2023. Source: UNCTAD, "UN List of Least Developed Countries."

4.2 Expanding Foreign Aid to Africa

Although there are controversies over the effectiveness of aid to Africa (Moyo 2009; Calderisi 2007), several economists have pointed out that foreign aid can actually promote economic growth and thus reduce poverty (Tait et al. 2015; Chilinkhwambe 2018; Jena and Sethi 2020; Bila et al. 2023). Accordingly, foreign aid can be valuable in reducing extreme poverty in the poorest regions. Africa has been a primary recipient of foreign assistance since 1960, with the total sum of aid sent to the continent estimated at over $2.6 trillion. Ethiopia has received the highest amount of official development assistance (ODA), at about $5 billion as of 2022. Nigeria is the second largest recipient ($4.54 billion), followed by the Democratic Republic of the Congo ($3.33 billion). The top ten Sub-Saharan African countries receiving bilateral ODA from members of the OECD's Development Assistance Committee (DAC) are shown in Figure 17.

Despite Africa's immense need for investment to tackle economic, health, and climate risks, foreign aid to Africa is not growing as rapidly as would be required to achieve the SDGs. The G7's assistance to Africa peaked in 2006 and has declined since. Aid to African countries accounted for only 25.6% of global aid, totaling $53.5 billion, in 2022. The G7 and EU institutions' share of assistance to Africa stood at a near-50-year low of 25.8% in 2022, a considerable drop from 34.2% in the previous year. Moreover, the EU, France, Germany, and the US announced further aid cuts, amounting to $9 billion, in 2024. Given that DAC donors invested only 0.37% of their Gross National Income (GNI) as aid in 2023, significantly short of their commitment to spend 0.7% of GNI, there is considerable room to increase the volume of international assistance. Undoubtedly, such an increase would significantly help the poorest countries in Sub-Saharan Africa reduce the incidence and severity of their poverty.
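To make the scale of the shortfall concrete, the back-of-the-envelope Python sketch below compares the 0.37% of GNI actually given in 2023 with the 0.7% commitment. The combined DAC GNI figure is a hypothetical round number used purely for illustration, not a sourced statistic.

# A back-of-the-envelope sketch of the ODA shortfall noted above.
actual_share = 0.0037   # share of GNI actually given as aid in 2023
target_share = 0.0070   # long-standing 0.7%-of-GNI commitment

print(f"Scaling factor to meet the target: x{target_share / actual_share:.2f}")  # ~1.89

# With a hypothetical combined DAC GNI of $60 trillion (illustrative only):
gni_trillions = 60
shortfall_billions = (target_share - actual_share) * gni_trillions * 1000
print(f"Illustrative annual shortfall: ${shortfall_billions:.0f} billion")  # ~$198B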



Figure 17. Top Ten Sub-Saharan African Countries Receiving DAC Members' ODA, 2013-2022. Source: https://www.facebook.com/OECDDevelopment/photos/-in-2023-preliminary-data-shows-a-5increase-of-bilateral-aid-to-sub-saharan-afr/871960388297078/ (accessed on 1 August 2024).

5. CONCLUSION

This study investigates the extent and nature of poverty in Sub-Saharan Africa, mainly using the World Bank's data on poverty. The findings are as follows.

First, the study shows that the extent of poverty in Sub-Saharan Africa is disproportionately large. The Sub-Saharan African population living below the extreme poverty line of $2.15 per day reached 429 million in 2024, accounting for more than half of the world's population in extreme poverty. Moreover, Sub-Saharan Africa is the only region that experienced an increase in the number of those living in extreme poverty between 2000 and 2023. If this trend continues, Sub-Saharan Africa will hold almost 90 percent of the world's impoverished population by 2030. This implies that the ambitious goal of the SDGs, mainly SDG 1, to eradicate poverty and leave no one behind by 2030, cannot be achieved unless we urgently address the poverty problems of Sub-Saharan Africa.

Second, countries with large populations in extreme poverty and high poverty rates tend to have low incomes, low GDP per capita, and high Gini coefficients. However, the poverty rates of Sub-Saharan African countries are not directly proportional to a country's wealth. Poverty is a pressing problem not only in low-income countries but also in some lower-middle-income countries in the region.

Third, social indicators show that countries with high poverty rates suffer from high child mortality, life expectancies much lower than the global average, and high illiteracy rates. These phenomena can be explained as the adverse effects of poverty; at the same time, they also contribute to increased poverty by producing a weak and vulnerable labor force.

Finally, the study shows that most Sub-Saharan African countries suffering from poverty have not achieved sufficient economic growth, leading to their classification as Least Developed Countries. These countries, trapped in poverty, need financial help from the outside world, including foreign aid.



As a whole, poverty in Sub-Saharan Africa is a complex and wide-ranging issue that impacts millions of lives every day. Political instability, war and conflict, corruption, uneven distribution of natural and government resources, and climate change all contribute to this multidimensional problem. However, due to the limited scope of this study, these variables could not be thoroughly investigated. In addition, a more thorough examination is needed to identify the precise mechanisms through which foreign aid can benefit Sub-Saharan African countries in poverty. I leave this as a topic for future research.

REFERENCES
[1] United Nations, "We Can End Poverty: Millennium Development Goals and Beyond 2015," https://www.un.org/millenniumgoals/poverty.shtml (accessed on 18 July 2024).
[2] United Nations, "Goal 1: End Poverty in All Its Forms Everywhere," https://sdgs.un.org/goals/goal1#targets_and_indicators (accessed on 18 July 2024).
[3] United Nations, The Millennium Development Goals Report 2013, https://www.un.org/millenniumgoals/pdf/report-2013/mdg-report-2013-english.pdf (accessed on 18 July 2024).
[4] Feng, Juan, "MDG 1: Uneven Progress in Reducing Extreme Poverty, Hunger, and Malnutrition," May 7, 2015, https://blogs.worldbank.org/en/opendata/mdg-uneven-progress-reducing-extreme-poverty-hunger-and-malnutrition (accessed on 18 July 2024).
[5] World Bank, "Poverty Overview," https://www.worldbank.org/en/topic/poverty/overview (accessed on 1 August 2024).
[6] United Nations, The Sustainable Development Goals Report 2024, https://unstats.un.org/sdgs/report/2024/The-Sustainable-Development-Goals-Report-2024.pdf (accessed on 18 July 2024).
[7] Yonzan, Nishant, Daniel Gerszon Mahler, and Christoph Lakner, "Poverty Is Back to Pre-COVID Levels Globally, but Not for Low-Income Countries," World Bank Data Blog, October 3, 2023, https://blogs.worldbank.org/en/opendata/poverty-back-pre-covid-levels-globally-not-low-income-countries (accessed on 1 August 2024).
[8] Chen, James, "What Is Poverty? Meaning, Causes, and How to Measure," Investopedia, April 19, 2024, https://www.investopedia.com/terms/p/poverty.asp (accessed on 1 August 2024).
[9] United Nations, "Ending Poverty," https://www.un.org/en/global-issues/ending-poverty (accessed on 1 August 2024).
[10] The World Bank, "Poverty and Inequality," https://datatopics.worldbank.org/world-development-indicators/themes/poverty-and-inequality.html (accessed on 1 August 2024).
[11] Poverty and Inequality Platform, homepage, https://pip.worldbank.org/home (accessed on 1 August 2024).
[12] World Bank Group, "Poverty and Inequality Update," Spring 2024, https://thedocs.worldbank.org/en/doc/69d007a1a509633933b92b3804d0e5040350012024/original/poverty-and-inequality-spring-update-6.pdf (accessed on 1 August 2024).
[13] Carbone, Giovanni, and Lucia Ragazzi, "Is Poverty Growing Again in Sub-Saharan Africa? Trends and Measures," ISPI, 31 July 2023, https://www.ispionline.it/en/publication/is-poverty-growing-again-in-sub-saharan-africa-trends-and-measures-137866 (accessed on 1 August 2024).
[14] Statista, "Number of People Living below the Extreme Poverty Line in Africa from 2016 to 2030," https://www.statista.com/statistics/1228533/number-of-people-living-below-the-extreme-poverty-line-in-africa/ (accessed on 1 August 2024).
[15] Statista, "African Countries with the Highest Share of Global Population Living below the Extreme Poverty Line in 2024," https://www.statista.com/statistics/1228553/extreme-poverty-as-share-of-global-population-in-africa-by-country/ (accessed on 1 August 2024).
[16] Statista, "Number of People Living below the Extreme Poverty Line in Africa from 2016 to 2030," https://www.statista.com/statistics/1228533/number-of-people-living-below-the-extreme-poverty-line-in-africa/ (accessed on 1 August 2024).
[17] Carbone, Giovanni, and Lucia Ragazzi, "Is Poverty Growing Again in Sub-Saharan Africa? Trends and Measures," ISPI, 31 July 2023, https://www.ispionline.it/en/publication/is-poverty-growing-again-in-sub-saharan-africa-trends-and-measures-137866 (accessed on 1 August 2024).
[18] Yonzan, Nishant, Daniel Gerszon Mahler, and Christoph Lakner, "Poverty Is Back to Pre-COVID Levels Globally, but Not for Low-Income Countries," World Bank Data Blog, October 3, 2023, https://blogs.worldbank.org/en/opendata/poverty-back-pre-covid-levels-globally-not-low-income-countries (accessed on 1 August 2024).
[19] Trading Economics, "Seychelles GDP per Capita," https://tradingeconomics.com/seychelles/gdp-per-capita (accessed on 1 August 2024).
[20] Lloyds Bank, "Angola: Economic and Political Overview," April 2024, https://www.lloydsbanktrade.com/en/market-potential/angola/economical-context (accessed on 1 August 2024).
[21] CIA, "Nigeria," World Factbook, February 2019, https://web.archive.org/web/20200923163518/https://www.cia.gov/library/publications/the-world-factbook/attachments/summaries/NI-summary.pdf (accessed on 1 August 2024).
[22] Hayes, Adam, "Gini Index Explained and Gini Coefficients Around the World," Investopedia, April 14, 2024, https://www.investopedia.com/terms/g/gini-index.asp (accessed on 1 August 2024).
[23] Statista, "Share of Undernourished Population in Africa 2022 by Region," https://www.statista.com/statistics/1250251/share-of-undernourished-population-in-africa-by-region/ (accessed on 1 August 2024).
[24] Acemoglu, Daron, and James A. Robinson, "Why Is Africa Poor?" Economic History of Developing Regions, Vol. 25, No. 1, 2010, pp. 21-50.
[25] Baah, Samuel Kofi Tetteh, Christoph Lakner, and Umar Serajuddin, "Are the Poorest Countries Being Left Behind?" World Bank Blogs, March 14, 2024, https://blogs.worldbank.org/en/opendata/are-poorest-countries-being-left-behind (accessed on 1 August 2024).
[26] Moyo, Dambisa, Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa, London: Allen Lane, 2009.
[27] Calderisi, Robert, The Trouble with Africa: Why Foreign Aid Isn't Working, St. Martin's Griffin, 2007.
[28] Chilinkhwambe, Tamara, "Foreign Aid and Economic Growth in Sub-Saharan Africa," University of Cape Town, 2018, http://hdl.handle.net/11427/29948 (accessed on 1 August 2024).
[29] Bila, Santos, Mduduzi Biyase, Matias Farahane, and Thomas Udimal, "Foreign Aid and Economic Growth in Sub-Saharan African Countries," EDWRG Working Paper Series, University of Johannesburg, February 2023.
[30] Jena, Nihar Ranjan, and Narayan Sethi, "Foreign Aid and Economic Growth in Sub-Saharan Africa," African Journal of Economic and Management Studies, Vol. 11, Issue 1, 2020, pp. 147-168.
[31] Tait, Lauren, Abu Siddique, and Ishita Chatterjee, "Foreign Aid and Economic Growth in Sub-Saharan Africa," Discussion Paper 15.35, University of Western Australia, 2015, https://api.research-repository.uwa.edu.au/ws/portalfiles/portal/96622368/DP15.35_Siddique1.pdf (accessed on 1 August 2024).
[32] OECD, "Recipients of Net Official Development Assistance from Official Donors in Sub-Saharan Africa 2022 (in billion U.S. dollars)" [Graph], in Statista, December 23, 2023, https://www.statista.com/statistics/1360495/sub-saharan-africa-net-official-development-aid-recipients/ (accessed on 15 August 2024).
[33] ONE Data and Analysis, "Official Development Assistance," https://data.one.org/topics/official-development-assistance/ (accessed on 1 August 2024).
[34] ONE Data and Analysis, "G7 Share of Aid to Africa at Fifty-Year Low," June 11, 2024, https://www.one.org/us/press/g7-share-of-aid-to-africa-at-50-year-low/ (accessed on 1 August 2024).



Financial Barriers to Orthodontic Care Access for Children with Special Healthcare Needs in Washington: A Literature Review

Author
Full Name (Last Name, First Name): Cho, Claire
School Name: Issaquah High School

Abstract

Oral hygiene is an essential aspect of general healthcare for children. Among oral care treatments, orthodontic procedures bolster children's long-term oral health. This literature review explores the financial barriers to accessing orthodontic care for children with special healthcare needs in the state of Washington, focusing on the unique challenges these children face in maintaining their oral hygiene. The study reviewed relevant peer-reviewed articles, and the data was analyzed to identify recurring themes and statistics regarding barriers, as well as potential solutions. The results indicate that high treatment costs, limited insurance coverage, and inadequate Medicaid reimbursement are the most significant financial barriers hindering access to orthodontic treatment for children with special healthcare needs, and that the socioeconomic status of families can further exacerbate these financial challenges. Because financial barriers to orthodontic care have implications for the overall health and well-being of children with SHCN, they may also lead to long-term economic consequences. Potential solutions include policy changes to expand insurance coverage, financial aid programs, and initiatives to incentivize specialized orthodontic care. Collaborative efforts by both state policymakers and specialized orthodontists are essential to prevent financial barriers from limiting access to orthodontic care for children with special healthcare needs.

Keywords: children, SHCN, dental, orthodontics, access, barriers, factors, finance, insurance



Financial Barriers to Orthodontic Care Access for Children with Special Healthcare Needs in Washington: A Literature Review

Introduction

Children with special healthcare needs (SHCN) are defined here as children (aged 7-17 years) with mental developmental disorders such as autism spectrum disorder (ASD), Down syndrome, and intellectual disabilities. In addition to mental developmental disorders, many children with SHCN also face physical disabilities and chronic medical conditions, further complicating their healthcare requirements (Nelson & Webb, 2019). Orthodontic treatment plays a vital role in improving not only the aesthetics but also the oral and general physical health of these children. Beyond the superficial alignment of teeth, orthodontic care contributes to better oral function, reducing the risks of dental caries and periodontal disease by ensuring teeth are easier to clean (American Association of Orthodontists, n.d.). Properly aligned teeth enhance speech articulation and chewing efficiency, contributing to better nutrition and overall health (Frazier, 2017). Children with SHCN are more likely to present malocclusion traits (abnormal alignment of the upper and lower teeth) than their peers without SHCN, which most often requires orthodontic care (Akinwonmi et al., 2019). Furthermore, children with autism spectrum disorder frequently experience sensory processing difficulties, making it hard for them to tolerate oral care procedures (Nelson & Webb, 2019). Children with physical disabilities may struggle with manual dexterity, affecting their ability to brush and floss effectively (Nelson & Webb, 2019). Moreover, cognitive challenges associated with intellectual disabilities can hinder their understanding of the importance of oral hygiene, making consistent care more difficult to achieve. Certain medications taken by these children may also cause side effects such as dry mouth or gum overgrowth, further complicating their oral health (Nelson & Webb, 2019).

Given these specific healthcare needs, it is evident that children with SHCN require special attention in terms of healthcare policy and access to orthodontic services. However, significant financial barriers to access exist in Washington. These financial challenges create a gap in care that leaves many children unable to receive necessary orthodontic treatment, ultimately contributing to poor oral and overall health outcomes. Addressing these financial barriers is crucial for ensuring equitable access to orthodontic care for children with SHCN in Washington. Therefore, this study aims to identify and analyze, through a literature review, the financial barriers that hinder access to orthodontic care for children with SHCN in Washington. While various obstacles may limit access to care, this research focuses specifically on financial challenges. The objectives of this study are to: (1) investigate how the high cost of orthodontic treatment affects access to care for children with SHCN; (2) examine the role of insurance coverage, including private insurance and Medicaid, in facilitating or restricting access to orthodontic treatment; (3) analyze the financial burden placed on families of children with SHCN in relation to orthodontic care, considering the complex and often costly medical needs of these children; (4) explore the relationship between socioeconomic status and the ability of families to access orthodontic care for children with SHCN; and (5) identify potential financial solutions and policy changes that could improve access to orthodontic care for children with SHCN in Washington State.
By focusing on these financial aspects, this study seeks to provide a comprehensive understanding of the economic barriers that families of children with SHCN face in accessing orthodontic care. It also aims to inform potential interventions and policy reforms that could alleviate these financial burdens and improve healthcare equity for children with SHCN in the region.



Method

To identify financial barriers to accessing orthodontic care for children with SHCN in Washington State, I conducted a review of literature published between 2000 and 2024. The search was performed using the University of Sydney Library database and PubMed. Key search terms included "children," "SHCN," "dental," "orthodontics," "access," "barriers," "factors," "finance," and "insurance." The search terms were varied and refined over successive iterations to ensure depth, breadth, and comprehensive coverage of relevant topics. Articles were included if they met the following criteria: peer-reviewed status, a focus on regions comparable to the State of Washington, and relevance to orthodontic care for children with SHCN.

In addition to the literature review, I analyzed statistical data from the Washington State Department of Health, the Centers for Disease Control and Prevention (CDC), and other pertinent state and national databases. These sources provided data on the prevalence of SHCN among children in Washington, their access to orthodontic care, and the financial frameworks influencing these services. To explore potential solutions, I analyzed case studies and reports on successful interventions that improved access to orthodontic care for children with SHCN in contexts similar to Washington State, evaluating their applicability to Washington in light of local demographics, healthcare infrastructure, and policy environments. Although direct expert consultations were not conducted, I incorporated findings from expert consultations reported within the reviewed literature to enrich the analysis and recommendations, ensuring that insights from experts in the field were indirectly considered.

This review does not constitute a systematic review of all relevant research evidence; instead, it aims to identify synthesized evidence most useful for informing orthodontic care practices and service delivery for children with SHCN in Washington State. Through this approach, I identified seven articles that best matched my research criteria, focusing specifically on financial barriers to orthodontic care access. The findings aim to provide a relevant and insightful overview of the challenges faced and potential strategies to enhance access to care for this vulnerable population.

Results

Nelson & Webb (2019): This study identified financial constraints as a significant barrier to dental care for children with SHCN. High treatment costs and limited insurance coverage were particularly burdensome for families already facing additional medical expenses.

American Association of Orthodontists (n.d.): The authors found that orthodontic treatment costs can range from $3,000 to $10,000, which is prohibitive for many families, especially those with children with SHCN who often have additional medical expenses.

Iida et al. (2010): This review identified inappropriate insurance coverage as one of the main barriers affecting access to dental care for children with SHCN. It also noted that the socioeconomic status of families with children with SHCN can lead to poor oral health outcomes.

Veliz Mendez et al. (2021): While this study focused on barriers to orthodontic care for patients with neurodevelopmental disabilities, it highlighted financial barriers such as high costs and lack of insurance coverage as significant obstacles to care.

Moursi et al. (2010): The authors emphasized that financial barriers, including lack of insurance coverage, high out-of-pocket expenses, and high deductibles, disproportionately burden children with SHCN and their families due to their significant healthcare needs.



Chen & Newacheck (2006): The authors noted that insurance coverage and financial burden are significant factors for families of children with SHCN. They also highlighted the need for policy changes to address these financial barriers.

Washington State Dental Association (n.d.): The association's SHCN Directory lists several clinics offering reduced-fee or sliding-scale services, indicating a recognition of the financial barriers faced by families seeking orthodontic care for children with SHCN.

Nelson and Webb (2019) found that children with SHCN have nearly twice the odds of having unmet dental needs compared to children without SHCN. Their study reported that 41% of children with autism spectrum disorder had at least one unmet dental need, compared to 22% of children without SHCN, highlighting significant disparities in access to care. Despite the critical need for orthodontic services, specific data for Washington State is lacking. Nationally, there is a shortage of pediatric dentists and orthodontists trained to treat patients with SHCN, indicating a broader issue that extends beyond financial barriers alone.

The Major Financial Barriers to Access

High Cost of Treatment: Financial constraint is a predominant barrier to accessing orthodontic care for children with SHCN. The American Association of Orthodontists (n.d.) reported that orthodontic treatment costs range from $3,000 to $10,000, a prohibitive expense for many families, particularly those already managing other medical and therapy costs. Nelson and Webb (2019) further highlighted that many insurance plans consider orthodontic treatment cosmetic and often deny coverage, leaving families to cover the entire cost out of pocket. This lack of financial support directly contributes to the high rates of unmet dental and orthodontic needs among children with SHCN. Additional financial barriers include high out-of-pocket costs and deductibles, which disproportionately affect children with SHCN due to the significant healthcare services they require (Moursi et al., 2010).

Insurance and Policy Limitations: Insurance coverage is inconsistent and often inadequate, with Medicaid coverage for orthodontic treatment varying by state and frequently limited to severe cases of malocclusion (Nelson & Webb, 2019). Iida et al. (2010) identified inappropriate insurance coverage as a main barrier to dental care for children with SHCN, emphasizing that insurance policies often exclude essential orthodontic care and leave families without viable financial options. These limitations highlight the need for policy reform to ensure that orthodontic care is accessible and affordable for all children with SHCN.

Burden on Family: Parents of children with SHCN face unique challenges, including high levels of stress and time constraints, which can hinder their ability to manage their child's orthodontic care (Nelson & Webb, 2019). The financial burden, combined with the complex healthcare needs of their children, often forces parents to prioritize other medical treatments over orthodontic care. These factors contribute to the overall difficulties families experience when seeking care, suggesting that financial solutions must be accompanied by broader support systems for parents. The financial burden of orthodontic care, combined with other medical expenses, disproportionately impacts families of children with SHCN: these children are nearly twice as likely to have unmet dental needs compared to children without SHCN, a disparity often exacerbated by the socioeconomic status of their families (Iida et al., 2010).



Discussion

The economic implications of limited access to orthodontic care for children with SHCN in Washington State are significant. Financial barriers can strain the state's healthcare system, as untreated dental issues may result in more complex and costly interventions in the future. Early orthodontic intervention can reduce the need for tooth extractions, decrease the risk of trauma to protruded front teeth, and improve bite problems such as crossbites (Baum, 2017). These benefits not only enhance health outcomes but also lead to potential cost savings for families and the healthcare system.

Expanding Medicaid coverage for orthodontic treatment beyond severe malocclusion cases could significantly improve access to necessary care for children with SHCN in Washington. Current Medicaid policies often exclude orthodontic care for all but the most severe cases, leaving many families without support. Washington could also implement initiatives to incentivize specialized care, such as loan repayment programs or tax incentives for orthodontists who treat patients with SHCN, particularly in underserved areas. These policy changes would directly address the financial barriers faced by families and encourage more orthodontists to pursue training in SHCN care.

The WSDA's SHCN Directory already lists clinics offering reduced-fee or sliding-scale services, but expanding these programs through public-private partnerships could further increase access. Creating financial aid programs tailored to low-income families, such as vouchers or grants for orthodontic care, would alleviate some of the financial burdens associated with treatment. Additionally, partnering with private companies and nonprofit organizations to fund these initiatives could enhance their reach and sustainability.

To address financial barriers, Washington should consider expanding Medicaid coverage for orthodontic treatment using a tiered system based on medical necessity, rather than limiting it to severe malocclusion. Implementing loan forgiveness programs for orthodontists who treat a certain percentage of SHCN patients, especially in rural or underserved areas, would help address the shortage of specialized providers. Establishing state-funded grants or low-interest loan programs for families needing orthodontic care could provide immediate financial relief and increase access to care.

The Benefits of Early Orthodontic Intervention

The benefits of early orthodontic intervention include improved oral hygiene and health, enhanced oral function, and increased self-esteem and social interaction. Early orthodontic intervention can significantly improve oral hygiene and overall health outcomes. Proper alignment of teeth reduces the risk of dental caries by 25-30% and decreases the likelihood of periodontal disease, as properly aligned teeth are easier to clean and maintain (American Association of Orthodontists, n.d.). Correcting malocclusion can also improve speech and nutritional outcomes: Frazier (2017) found that orthodontic treatment can improve speech articulation in up to 75% of cases where speech impediments are related to dental misalignment. Orthodontic treatment has also been shown to boost self-esteem and enhance social interactions. The American Association of Orthodontists (n.d.) noted a 15-20% improvement in self-esteem scores among adolescents with visible malocclusion following orthodontic treatment. For children with SHCN, who may already face social challenges, the benefits of improved aesthetics and function extend beyond oral health, positively influencing their overall quality of life.

It is therefore important to pursue complementary solutions. Developing community-based support networks for families of children with SHCN could provide valuable resources and education on oral health; these networks could offer workshops, support groups, and informational materials to help families navigate the complexities of accessing orthodontic care. Additionally, creating mobile dental clinics specifically equipped to serve children with SHCN in underserved areas could alleviate geographical barriers and improve access.

This review acknowledges several limitations, including the scarcity of Washington State-specific sources, which may affect the generalizability of its findings. Nonetheless, the experience of children with developmental disorders is relatively universal across cultures and locations, supporting the relevance of the findings even when comparable or national data are used. Where state-specific data were unavailable, information from similar regions or broader national data was used to provide context and support the analysis.

Further research is needed to evaluate the long-term impact of early orthodontic intervention on children with SHCN in Washington State. Longitudinal studies could provide valuable insights into the benefits of early treatment and help guide future policy decisions. Evaluating the effectiveness of implemented solutions, such as expanded insurance coverage and mobile clinics, would help determine their impact on improving access to care. Additionally, exploring innovative treatment approaches tailored to the unique challenges faced by patients with SHCN, potentially in partnership with Washington's dental schools and research institutions, could enhance the quality of care available to this vulnerable population.

Conclusion

Children with SHCN face significant barriers to accessing orthodontic care, primarily due to high treatment costs, limited insurance coverage, and inadequate policy support. Addressing these barriers requires a multifaceted approach involving policymakers, healthcare providers, and educational institutions. Policymakers should consider expanding Medicaid coverage for orthodontic treatment and creating incentives for orthodontists to specialize in treating SHCN patients. Dental schools in Washington should integrate comprehensive SHCN care into their curricula and partner with research institutions to explore innovative treatment solutions. The Washington State Dental Association should enhance its SHCN Directory and launch a statewide awareness campaign highlighting the importance of orthodontic care for children with SHCN. Healthcare providers must establish collaborative networks to share resources and expertise in treating SHCN patients, ensuring a more coordinated approach to care. Through targeted policy changes, community-based solutions, and ongoing research, Washington State can improve access to orthodontic care for children with SHCN, ultimately enhancing their overall health, well-being, and quality of life.



References

Akinwonmi, B. A., et al. "Orthodontic Treatment Need of Children and Adolescents with Special Healthcare Needs Resident in Ile-Ife, Nigeria." European Archives of Paediatric Dentistry, vol. 21, no. 3, 23 Nov. 2019, pp. 355-362, https://doi.org/10.1007/s40368-019-00492-y. Accessed 20 Aug. 2024.

Alamri, Hamdan. "Oral Care for Children with Special Healthcare Needs in Dentistry: A Literature Review." Journal of Clinical Medicine, vol. 11, no. 19, 1 Jan. 2022, p. 5557, www.mdpi.com/2077-0383/11/19/5557, https://doi.org/10.3390/jcm11195557.

Bastani, Peivand, et al. "Provision of Dental Services for Vulnerable Groups: A Scoping Review on Children with Special Health Care Needs." BMC Health Services Research, vol. 21, no. 1, Dec. 2021, https://doi.org/10.1186/s12913-021-07293-4.

Baum, Alan. "Is There a Benefit to Early Treatment?" American Association of Orthodontists, 10 Nov. 2017, aaoinfo.org/whats-trending/is-there-a-benefit-to-early-treatment/.

Gazzaz, Arwa Z., et al. "Parental Psychosocial Factors, Unmet Dental Needs and Preventive Dental Care in Children and Adolescents with Special Health Care Needs: A Stress Process Model." BMC Oral Health, vol. 22, no. 1, 11 July 2022, https://doi.org/10.1186/s12903-022-02314-y. Accessed 17 July 2022.

Hieronymus, Hanna, et al. "Dental Treatment of Children with Special Healthcare Needs: A Retrospective Study of 10 Years of Treatment." International Journal of Paediatric Dentistry, 9 Apr. 2024, https://doi.org/10.1111/ipd.13186. Accessed 20 Aug. 2024.

Krishnan, Lakshmi, et al. "Barriers to Utilisation of Dental Care Services among Children with SHCN: A Systematic Review." Indian Journal of Dental Research, vol. 31, no. 3, 2020, p. 486, https://doi.org/10.4103/ijdr.ijdr_542_18.

Nelson, Travis, and Jessica R. Webb. Dental Care for Children with SHCN: A Clinical Guide. Cham, Switzerland, Springer, 2019.

"SHCN Directory | Washington State Dental Association." Wsda.org, 2024, www.wsda.org/public/special-needs-directory. Accessed 20 Aug. 2024.



The Reciprocal Relationship Between Humans and Architecture: An Exploration of Architectural Psychology and Case Studies in Design Intentions

Author
Full Name (Last Name, First Name): Cho, Eunbin
School Name: Hankuk Academy of Foreign Studies

Abstract

This study examines the reciprocal relationship between humans and architecture, focusing on how architectural design influences human behavior, emotions, and well-being, while also reflecting human intentions and psychological factors. By exploring the field of architectural psychology, this research aims to identify the dynamic interplay between built environments and human experiences, highlighting how design choices can both respond to and shape human needs and behaviors. The study utilizes a three-step methodology: analyzing architectural elements, investigating architects' intentions, and correlating these findings to reveal systematic connections between design and human psychology. Through case studies of Frank Lloyd Wright's Fallingwater, Le Corbusier's Villa Savoye, and Notre Dame du Haut, this research illustrates how architects' intentions and psychological insights are embedded in their architectural decisions. The findings confirm that a significant, reciprocal correlation exists between human psychology and architecture, underscoring the importance of integrating human-centric considerations into architectural practice. These insights have broader implications for designing built environments that not only reflect but also positively influence human experiences and societal dynamics.

Keywords: Art, Architecture, Design, Psychology



Introduction

The purpose of this study is to examine the correlation between humans and architecture. Since architecture is an essential element of human living, it is crucial for architects to understand what connection exists between humans and architecture. Architecture is designed and built by humans, and human influence can enrich architecture itself. It is also important to understand how architecture affects humans and how humans affect architecture: an understanding of this two-way relationship allows one to more readily reflect human needs in architectural designs. If a reciprocal relationship exists, as I posit it does, architects would be able to express their intentions in their works more precisely and confidently. This study also touches upon the field of architectural psychology. Psychology is the study of the human mind and behavior; architecture is designed by people, whose psychology serves as an additional layer that enriches the architecture. This study aims to answer the question: "Is there any reciprocal relationship between humans and architecture?" I hypothesize that such a relationship exists. Thereafter, the study seeks to answer the follow-on question: "If so, how does architecture affect humans, and vice versa?" By studying the correlation and establishing its existence, it is expected that architects may reflect human desires and inclinations in their architecture, by reference to specific findings of architectural psychology. At the same time, this study recognizes certain restrictions to its methodology. Namely, it is difficult to gain direct knowledge of what human intentions and thoughts are reflected in the elements of architecture, and the analysis will involve some degree of speculation. This restriction can introduce some inaccuracy in respect of the correlation between them; it is addressed by referring to theories of architectural psychology.

Theoretical Background

There is a field of study called "architectural psychology." While psychology is the study of human behavior, this branch of psychology relates human behavior to the environment. Architectural psychology is defined as the science of human experience and behavior in environments. The field examines why, how, and where psychology reaches architecture; the focus is not on psychology or architecture in isolation, but on the overlap between the two. Architectural psychology provides insight into the human experiences and intentions that influence architecture, and it also studies how architectural design affects human behavior, emotions, and well-being. Architecture plays a crucial role in human environments: most human environments are shaped by architecture, so the shape, design, and function of architecture are bound to have some effect on humans. Architectural psychology deals with this two-way influence. When architects create architecture, anything from their ostensible intentions to their most subtle moods can be reflected; the design choices they make reflect their perception, which is eventually exhibited in their architecture. The architecture, in turn, affects humans. A well-designed building that reflects its purpose appropriately can elicit positive human reactions. By the same token, poor urban architectural design has been linked to long-term mental illnesses such as schizophrenia and depression.

Research Methods

The research proceeds in three steps: first, an analysis of the architectural elements of a building; second, an examination of the architect's intentions; and third, connecting and matching the elements with the architect's intentions.



The first step is analyzing the architecture. The purpose of this step is to investigate the characteristics and specifications of each architectural element. A precise understanding of the features and specifications of a building is required before one can connect the architecture with the architect's intentions. There are several architectural elements: color, depth, space, width, mood, light, and material. This first step involves finding hidden elements and dividing them into categories. In addition to identifying each architectural element, one should also analyze the influence that element has on the whole building; in other words, one needs to ask, "What effect is caused by this element?"

The second step is to identify the architects' intentions that manifest in their architecture. Each architect has their own intentions and purpose, and they may have messages or goals that they wish to convey through their work. By investigating these intentions and understanding their aims, one can gain a fuller perspective and a deeper understanding of the architecture.

The last step is connecting the architectural elements with the architect's intentions. Through the two preceding steps, we can identify which elements were made for which reasons. This step matches those correlations and thus fulfills the purpose of this study. Matching these two parts will clearly show the correlation between humans and architecture, and through this matching process, the way in which humans affect architecture and vice versa will become apparent. After identifying how the correlation is expressed, similar expressions will be grouped into categories. If a correlation really exists, regular patterns between forms and intentions will emerge.

The expected result is a conclusion that supports the hypothesis: "A reciprocal relationship between humans and architecture exists." It will also demonstrate how human intentions, emotions, and behaviors are reflected in architecture. Architectural psychology holds that architecture and humans influence each other: human behavior, emotion, and intention are all reflected in the shape and type of architecture, and architecture in turn has physical and psychological effects on humans. This study shows in detail the way in which such correlation is expressed. For instance, it is expected that the architectural element of shape may be chosen to express a positive intent, such as by using rounded and smooth shapes, while a negative intent may be expressed through sharp edges. Through the process of categorizing these shapes and the corresponding human intentions, a systematic correspondence will become apparent.

Case Study 1: Frank Lloyd Wright

Frank Lloyd Wright is one of the most famous architects of the 20th century. The central idea of his architecture is "organic architecture," a type of architecture that has a close association with nature and preserves it during the building process. The first feature of organic architecture is the use of natural materials, which allows a building to blend well with its surrounding environment; wood, for example, is commonly used for this purpose. Another feature is that the building must have originality and adaptability: since organic architecture utilizes surrounding natural components, its design should not ruin the adjacent landscape. Buildings should also have sustainable designs, which minimize environmental impact and utilize natural resources; architects today, for instance, use solar panels and geothermal energy systems. The most illustrative example of organic architecture is Wright's Fallingwater. Wright used rocks that existed around the building as material. How much consideration went into the surrounding environment is apparent from the floor plan of Fallingwater: the layout was measured to protect the trees originally growing at the site.



Wright sought to create buildings that were both functional and emotionally and aesthetically engaging. We can deduce Wright's design concept from the location of Fallingwater: he wanted to build a home that could be part of the surrounding landscape. Several distinctive features reveal his architectural intentions. First, the location of the house. Wright integrated the house with the waterfall that was originally located there; by preserving the waterfall, the architecture gained naturalness. Second, the stairs. The architect made stairs that connect the home to the water below, where people can swim. A peculiar feature of these stairs is a door that can connect or disconnect the house from the outside. Unlike a typical door, which is installed vertically, Wright installed this one horizontally, so that people can easily open and close it as they go downstairs. His use of a window instead of a solid door is also notable: he wanted to divide the space without invoking a feeling that it was blocked, so he used a window to create the effect of a space that is divided but not closed off. Third, the use of surrounding elements. Wright used the surrounding natural elements in various places in Fallingwater. The floors throughout the house are composed of rocks of the kind found around the building. The cantilever resembles a tree branch supported on one side, and the terrace with its cantilever structure gives a sense of coordination with the forest and the trees. The architect's intention was to build a house in harmony with its surroundings. Wright's intentions are reflected in the planning and construction of Fallingwater: he wanted to preserve nature as a whole and to bring harmony between the house and its environment. An advantage of this harmonious approach is that the surroundings themselves can be utilized as building material.

Case Study 2: Villa Savoye by Le Corbusier

The architect of Villa Savoye, Le Corbusier, is regarded as the representative architect of modern architecture and is famous for his iconic buildings. He revolutionized the design world, having a major impact not only on modern architecture but also on city planning, designing iconic buildings and devising influential masterplans. He is famous for saying that "a house is a machine for living in," and this quote helps explain why Villa Savoye is one of the most significant contributions to modern architecture in the 20th century. Le Corbusier emphasized the efficiency of architecture, and this house reflects that thinking.

Villa Savoye's standout feature is its harmonious structure. The house uses the basic elements of architecture and thereby stresses its fundamental, modern character. But the key distinguishing feature of Villa Savoye is that these elements are incorporated into each other, and this harmony exists not only between the various elements but also between its design and structure. The exterior of the house is another important aspect. Villa Savoye used concrete as its exterior material. Concrete ensures easy maintenance, but it also emphasizes the building's completeness, creating the sense that the building is finished and structurally sound. As the architect's goal was to treat architecture as a machine, he favored factory-produced materials. Unlike Wright, he did not use natural elements such as lumber or rocks; he used cement and rebar, both factory-manufactured. Rebar and concrete have nearly the same expansion coefficient, so they can be used together: the concrete does not break apart from the steel, and this discovery enabled the construction of tall buildings using these materials.



As a symbol of modern architecture, Villa Savoye is still considered a great achievement. Through this building, people realized that natural elements are not the only materials that can be used for construction: man-made materials can produce compact structures and crisp building forms. Architects who intend to build dense structures turn to these factory-made materials, whose positive effects include structural integrity and a vivid exterior; notably, they project an image of transparency and compactness. Villa Savoye maximizes these characteristics of compactness, clarity, and transparency, and from Le Corbusier's emphasis on them we can infer that he wanted to show how clear and bright a building can be when made of such materials.

Case Study 3: Notre Dame du Haut

Notre Dame du Haut is another masterpiece of Le Corbusier. It stands in the village of Ronchamp, France, and was built as a religious building to receive pilgrims; it too is considered one of the most significant landmarks of modern architecture. The building is constructed of sprayed, untreated concrete walls with a rough surface. Using concrete was a pragmatic decision, but it also made the building appear more rigid and solid. Le Corbusier was asked to design a new Catholic church: the church in Ronchamp had been destroyed during World War II, and a replacement was needed. Since it was to be a religious building, the architect focused on spatial purity. By stripping modernist aesthetics from the design, he kept the structure uncomplicated; he wanted instead to make a space suited to meditation and reflection. Turning to the exterior of Notre Dame du Haut: the walls are a strong, unbroken white, symbolizing purity of mind. The architect also exploited light through windows of varying designs, seeking to visualize an ethereal atmosphere through the light entering the building. Inside, this light is powerful, heightening the emotional quality of religious activity. Another unique feature is the building's irregular structure: the slopes of the roof, walls, and floor all differ, and the walls are both curved and flat. The combination of these inclined surfaces maximizes the irregularity and creates a unique experience of viewing the building.

Results

The architects' intentions are reflected in the overall design and structure of their buildings. Through three cases, Fallingwater, Villa Savoye, and Notre Dame du Haut, we see that architects project their ideas directly or indirectly onto their architecture. In Fallingwater, the architect expressed his intentions through organic architecture: his aim of drawing on the surrounding nature can be identified in the materials he used, and the design of the stairs and cantilevers reflects his intention to enhance the approachability of nature. The overarching intention in Fallingwater is coexistence with nature. In Villa Savoye, the overall intention is rigor: the elemental, factory-made materials of the structure show that the architect wanted to emphasize the building's rigidity, while the simple structure highlights the abstractness of the architecture. In Notre Dame du Haut, the building shows how the architect shaped his design to fit its purpose as a religious building. The white color of the building keeps the focus off the



building itself, lest its religious meaning be diluted. At the same time, unique features, such as the varied slope designs, highlight the building in its own right. Architects use various materials, design choices, and architectural techniques to express their purposes and ideas in their buildings. The examples here demonstrate that architecture is a reflection of deliberate human intention and emotion.

Conclusion

The result of this study, that a reciprocal relationship between humans and architecture exists, bears a significant social implication. Architecture and humans may appear to exist independently, but together they comprise the environment and society in which we live. Therefore, when designing a building, it is essential to consider the effects it may have on people; thinking about this mutual influence allows for more planned and deliberate development of society. The result of this study can also serve as a warning to architects: the intentions with which they design and build can ripple through society, and their ideas can affect people both physically and mentally. This study confirms that architects carry great responsibility for what they do. In addition, the result of this study gives people a chance to see architecture from a new perspective. People rarely consider how the houses, buildings, and bridges around them affect their lives. By explaining the correlation between architecture and human psychology, this study hopes to bring about positive change in both disciplines.

References

Domestika. "What Is Organic Architecture? 6 Main Characteristics: Blog." Domestika, 1 Nov. 2023, www.domestika.org/en/blog/11518-what-is-organic-architecture-6-main-characteristics.
"Fallingwater Tours." Fallingwater, 29 Apr. 2024, fallingwater.org/visit/fallingwater-tours/.
Faraon, Amer, et al. "PA: Parametricarchitecture." Parametric Architecture, 31 May 2024, parametricarchitecture.com/organic-architecture-harmony-between-nature-and-built-environment/.
Kroll, Andrew. "Architecture Classics: Villa Savoye / Le Corbusier." ArchDaily, 27 Oct. 2010, www.archdaily.com/84524/ad-classics-villa-savoye-le-corbusier.
"Le Corbusier, Chapelle Notre-Dame-Du-Haut, Ronchamp, 1950-1955." Fondation Le Corbusier, 26 July 2023, www.fondationlecorbusier.fr/en/work-architecture/achievements-notre-dame-du-haut-chapel-ronchamp-france-1950-1955/.
Abel, Alexandra. "What Is Architectural Psychology?" Dimensions. Journal of Architectural Knowledge, vol. 1, no. 1, 2021, pp. 201–208, https://doi.org/10.14361/dak-2021-0126.
"Architectural Psychology: What Is It?" JERDE, www.jerde.com/news/8452/architectural-psychology-what-is-it. Accessed 23 Nov. 2023.
Gupta, Shreya. "What Is Architectural Psychology?" RTF | Rethinking The Future, 27 June 2023, www.re-thinkingthefuture.com/architectural-community/a9295-what-is-architectural-psychology/.



Karnik, Pranjali. "The Role of Psychology in Architecture." RTF | Rethinking The Future, 17 Jan. 2023, www.re-thinkingthefuture.com/rtf-fresh-perspectives/a2603-the-role-of-psychology-in-architecture/.
"The Relationship between Architecture, Human Behavior and Construction." Utilities One, utilitiesone.com/the-relationship-between-architecture-human-behavior-and-construction. Accessed 23 Nov. 2023.
Kroll, Andrew. "AD Classics: Ronchamp / Le Corbusier." ArchDaily, 3 Nov. 2010, www.archdaily.com/84988/ad-classics-ronchamp-le-corbusier.
"Notre Dame du Haut." Architecture History, architecture-history.org/architects/architects/LE%20CORBUSIER/OBJECTS/1954,%20Notre%20Dame%20du%20Haut,%20Ronchamp,%20France.html. Accessed 27 June 2024.
Waldek, Stefanie, and Elizabeth Stamp. "Le Corbusier's 15 Most Significant Architectural Works." Architectural Digest, 27 Oct. 2023, www.architecturaldigest.com/story/lecorbusier-the-built-work-book.



A Waft of Miss Dior in Moscow: In Defense of Haute Couture in the Soviet Fashion Scene

Author
Full Name (Last Name, First Name): Choi, Eunseo
School Name: The Peddie School

ABSTRACT

This study illustrates the cultural background and implications of the first Dior show in the post-Stalin Soviet Union, held in 1959 under Nikita Khrushchev, and argues that Dior was the ideal candidate to transform the Soviet fashion scene compared with other Western fashion brands. On the surface, one may presume that the economic nature of the socialist state and Khrushchev's invitation to Dior, one of the most exclusive, capitalist brands to this day, contradict each other. In response to this misconception, I will show how the show in fact suited the host's grander plan: paving the way for the "Thaw" in the period between 1955 and 1964, after Stalin's death. Christian Dior, the father of postwar French couture, strove to revive the glory of French fashion from before WWII, and his artistic vision had shaped the Soviet fashion scene and inspired its designers even before the show. I will also clarify the misunderstandings that Dior had ties with the Nazi regime and that the ultra-feminine aesthetics of his "New Look" conflicted with feminism. Overall, both the media and the Soviet public welcomed the show with enthusiasm; journalists from the U.S. were especially in favor of the Western influence that changed the previously conservative standards of Soviet fashion.

KEYWORDS

Christian Dior, Soviet Union, 1959 Dior Show, The Thaw, Fashion, Cold War



INTRODUCTION

Fig. 1. Note. From "Dior Comes to Moscow: Tracing the Threads of Haute Couture in the Soviet Union," by Sophie Hardie, 2020.

In 1959, for Dior, one of the most coveted luxury fashion brands, to host a show in Moscow at the height of the Cold War may have seemed an improper social and political choice. At the Soviet officials' request, the house of Christian Dior, a French fashion house known for its haute couture, ready-to-wear, fragrances, and cosmetics, displayed over 120 outfits to over 11,000 Soviet spectators in Moscow for over a week (Bronner, 2021). The first of its kind in the city since Russian socialism was established, the show drew substantial local and international attention. Fashion, because of its visual nature, is inevitably linked to the expression of wealth and social class. Ostensibly, the discrepancy between a fashion show promoting extravagant clothing and its location, the "epicenter" of the communist agenda, is stark (p. 2). Many will ask how Christian Dior, one of the most renowned luxury labels in the world at the time, came to be invited in the first place. However, the show served as part of the de-Stalinization process led by Nikita Khrushchev, the first secretary of the Communist Party and premier of the Soviet Union (Gibney, 2024). Originally founded by Christian Dior himself and eventually taken over by his protégé Yves Saint Laurent, Dior promoted an extravagant, ultra-feminine aesthetic with a traditional form of gender presentation, contradicting the "hygienic" aesthetic the Soviets had been promoting for years (Gurova, 2009, p. 73-91). Despite these contradictions, the Dior show in Moscow highlighted the changes Khrushchev was bringing to the Soviet planned economy (Bronner, 2021). Commencing the "thaw," he opened up international trade and gave the public access to foreign media. Besides this show, the Soviet Union also hosted a series of international exhibitions, including the American National Exhibition, which propelled cultural exchange and the beginning of "peaceful competition." These events might appear incongruous with the socialist ideals of the Soviet Union at first glance, but this view overlooks a larger, more complicated picture.



FASHION AS A STRATEGIC MOVE FOR DE-STALINIZATION

While post-Tsarist Russia operated under a socialist regime, remnants of the previous class system produced a divide between rich and poor in fashion consumption. These conflicts between the Soviet political elite and the bourgeoisie over what role fashion should play in society, and the government's interventions in response, characterized the fashion landscape of the Soviet Union (Bronner, 2021). Over the years, the state wished to reformulate the concept of fashion by "civilizing and bringing culture to relatively uncultured social classes" (Gurova, 2009, p. 73-91, as cited in Bronner, 2021, p. 8). The state strove to establish a fashion bureau to create trends that could bridge the gap between rich and poor. This goal, however, encountered an obstacle: the remnants of Tsarist rule, under which the concept of luxury fashion had been relevant (Ruane, 1996). At the beginning of Russia's transition to socialism, the government consistently displayed its "planned ideals," creating a style fit for the lower and middle classes that displaced that of the upper class (Gronow & Zhuravlev, 2015; Bronner, 2021, p. 7). This style embodied the Foucauldian framework in which "power is employed and exercised through a net-like organization," overthrowing the concept of "fashion" by prioritizing functionality over aesthetics and pursuing visual equality across professions, ages, and genders (Klingseis, 2011, p. 84-115, as cited in Bronner, 2021, p. 9). Ideally, this would eliminate competition over who dressed best. Magazines such as Rabotnitsa ("The Woman Worker" in Russian) were used to promote these ideals (Meek, 1952). One excerpt translated by Katharina Klingseis gives an example:

Our "fashion" ought to be plain, comfortable, easy to accomplish, inexpensive, affordable to the woman worker and, above all, meet the requirements of clothing in general, i.e. protect people from cold, dust and mud, etc., while remaining elegant. (Klingseis, 2011, as cited in Bronner, 2021, p. 9-10)

Fig. 2. Note. From 1923 Issue of Rabotnitsa (Working Woman).

These objectives began faltering under the New Economic Policy, which granted a small class of "NEPmen" economic prosperity. It allowed them to consume luxury fashion that remained inaccessible to everyone else, so "hostile remnants from the class society" of the Tsarist era began to resurface (Gronow & Zhuravlev, 2015, p. 41-43, as cited in Bronner, 2021, p. 10). At the same time, the working class, or "repair society," had to make and repair its own clothes, being unable to "make a fashionable coat when the only fabric available is a piece of coarse felt used to make military overcoats" (Gurova,



2009, p. 73-91; Stolyarova, 2023, as cited in Bronner, 2021, p. 10). Yet, despite its initially unsuccessful efforts, the government kept pushing to establish a fashion bureau, directly influencing fashion at the time. It continued to promote the idea of a clean look, replacing the words fashion and beauty with "hygienic." This term did not refer to literal hygiene, such as sanitary conditions, but "correspond[ed] to the conditions of comfort, beauty, and durability" (Zaitsev, 1982, translated by Klingseis, 2011, p. 84-115, as cited in Bronner, 2021, p. 10-11). Fashion trends were created by the government-backed fashion house, the Center for the Creation of New Soviet Dress, later renamed the Fashion Atelier of the Moscow House of Fashion Design (Bronner, 2021). "In addition to design institutes and fashion ateliers," the ministries initiated "a great number of scientific institutes and laboratories that laid the foundation for the design and construction of clothes" (Gronow & Zhuravlev, 2010). This new fashion serving the New Woman incorporated the geometric design principles of Cubism to convey the concept of modernity (Bronner, 2021). Eventually, in the 1930s, during his five-year plan, Stalin shifted his focus toward developing industries that propelled material prosperity, including fashion, to appeal to the growing middle class (Klingseis, 2011, as cited in Bronner, 2021). Ultimately, the government-enforced fashion plans did not last long, as a worker-led political campaign against the Westernization of the Soviet Union broke out in 1949 (Gronow & Zhuravlev, 2015, as cited in Bronner, 2021). Regardless of the plans, the "proletarian reality" had settled in, and the government made constant efforts to bridge this gap (Bronner, 2021, p. 7). This continued conflict among the Soviet elite, the bourgeoisie, and the government over the role and aesthetics of fashion demonstrates that fashion remained relevant. In other words, while one may assert that fashion was inconsequential in a socialist state, arguing that fashion's creation of visible differences among people went against the socialist ideals of equality and the abolition of class differentiation, history tells us otherwise. The opening of the Soviet Union to the international fashion market may also seem contradictory to the socialist ideals of minimal competition and government-controlled enterprise. Yet this was not the case, as Khrushchev ensured that the "opening up" was done in a censored manner. During the Thaw, he began to enable cultural contact with the West as part of his foreign policy (Gronow & Zhuravlev, 2015). His goal was to achieve peaceful coexistence with the United States and its allies, in contrast with Stalin's policies.

Nearly a decade and a half has elapsed since the long literary "freeze" occasioned by Stalinist censorship. Since then, a body of prose has appeared—either in the Soviet literary journals or manuscripts smuggled to the West—which, for the boldness of content and variety of technique, may rival the protest and experiment of the symbolists and various "fellow travelers" writing in the twenties. (Rogers, 1968, p. 198-207)

To begin with, fashion was encouraged by the state because it was one of the ways the Soviets could assert themselves in "peaceful competition with the West" (Bronner, 2021, p. 13).
Fashion was a "symbolic manipulation" that replaced the violent and totalitarian control of Stalin's rule by satisfying people with "ex-bourgeoisie elements such as fashion, glamor, luxury, coziness, and pleasure" (Gurova, 2009, p. 3-4, as cited in Bronner, 2021, p. 13). Furthermore, while the regime promoted the expansion of the fashion industry, it did not want ex-bourgeoisie traditions to make a full comeback, especially not "degenerate Western fashion" (Klingseis, 2011, p. 84-115, as cited in Bronner, 2021, p. 14). To prevent this, the government revived the word "hygiene" from the 1920s to describe fashion practices. Once again, the virtues of "simplicity," "practicality," and "modesty" were emphasized, and fashion came to be dictated by gender norms once more. The public's interest in fashion at the time was also reflected in the popularization of fashion magazines. These new publications were targeted toward women and subcategorized their audience by social role: Rabotnitsa (Working Woman), Krest'ianka (Peasant Woman), and Sovetskaia Zhenshchina (Soviet Woman), along with magazines about the construction of garments such as Modeli Sezona (Fashions of the Season) and Zhurnal Mod (Magazine of Fashions) (Gurova, 2008, as cited in Bronner, 2021).



Based on the translated text of Rabotnitsa 3 from 1958, Klingseis (2011) points out that the individuality mentioned here was quite different from the individuality we know today, even as it showed that the "uniform" dress was no longer the norm.

How young women ought to dress . . . Your wardrobe should reflect individuality, taking fashion into account without imitating it blindly . . . It is not recommendable for a young woman to dress too "fashionably," flamboyantly, garishly, attracting everybody's attention in the street. And it is always nice to see a young woman dressed elegantly, comfortably, simply, and harmonically. (p. 84-115, as cited in Bronner, 2021, p. 14)

However, despite this newly granted freedom, the government still banned some elements of Western fashion, such as bright colors that resembled those of a rebel group (Gurova, 2009, as cited in Bronner, 2021). It also established the department store GUM in Moscow to control which designers could enter the Soviet fashion scene and showcase their work (Gronow & Zhuravlev, 2010). By balancing freedom and censorship, the Soviet government under Khrushchev's rule successfully utilized the fashion industry as a strategic move toward de-Stalinization.

CHRISTIAN DIOR: ARTIST AND FATHER OF POSTWAR FRENCH COUTURE

Fig. 3. A photo taken in 1957 at the Dior Couture Salon. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

One may question why the anti-capitalist Soviet officials chose one of the most capitalist brands for a public fashion show. Yet, regardless of its capitalist nature, Dior was the most elite brand of the time, and inviting it gave the state an opportunity to proclaim a highly cultured and artistic taste that would shape Soviet fashion. Dior was also renowned for being "artistic": in his autobiography, Christian Dior described himself as an artist, yet one who simply knew how to market (Dior, 2018). Beyond his own words, Dior's artistic drive was reflected in his original ambition of founding an "avant-garde gallery with a friend, Jacques Bonjean," where he showcased works by "emerging artists including Max Jacob and Christian Bérard, alongside more established modern masters such as Picasso, Matisse, and Dufy" (Picardie, 2022, p. 27). However, this all fell apart soon after his father Maurice Dior's bankruptcy. He then joined another gallerist friend, Pierre Colle, in giving Alberto Giacometti his first-ever solo show in Paris and championing Salvador Dalí with a series of notable exhibitions. But despite the prescience of their aesthetic choices, Colle and Dior had little commercial success; Dalí's masterpiece The Persistence of Memory, for example, sold for a modest $250.



Fig. 4. Salvador Dalí. (1931). The Persistence of Memory.

After his gallery days were cut short, Dior taught himself to draw, producing fashion illustrations for magazines. Selling these illustrations earned him a living, and eventually, with his younger sister Catherine as his first muse and model, he began to design clothes, creating the fashion brand Dior.

Fig. 5. Photos of Catherine modeling Christian Dior's designs at Hotel de Bourgogne in 1937. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

Furthermore, Dior as a brand had a reputation as the father of postwar French couture. During World War II, Christian Dior worked for the House of Lelong as a couture designer, and after the war



was over, he saw himself as an ambassador for the reinvention of French fashion: "[a]fter long years of stagnation, I believed that there was a genuine unsatisfied desire abroad for something new in fashion . . . To meet this demand, French couture would have to return to the traditions of great luxury" (Dior, 2018, as cited in Bronner, 2021, p. 19-20). He dedicated his life to reviving French couture's prewar glory and successfully influenced women's fashion all over the world, "[creating] dresses that enchant the public" (Bertin, 1954). His new style revolutionized the way women dressed: women went so far as to drop their hemlines to a dramatic fifteen inches off the ground (Parkins, 2012). In the last ten years of his life, leading up to his death on October 24, 1957, Dior became an international household name ("Dior, 52," 1957).

Fig. 6 & 7. Initial designs for the Miss Dior fragrance. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

In addition, Dior held a firm grip on the global fashion market through its licensed fragrances, which had taken over the small-luxury market. At his debut fashion show, Dior ordered the Maison to be "sprayed throughout with the scent of Miss Dior" (Picardie, 2022, p. 269). He dedicated this scent to his sister Catherine, a member of the French Resistance network F2 during WWII, who had recently made it back from Ravensbrück alive without betraying a single one of her comrades. The creation of the fragrance thus began intimately, with Christian as "the tender brother who could not forget his sister's suffering and sacrifice," "creating the floral scent of Miss Dior inspired by Catherine, that still survives as a timeless tribute to the tenderness of Christian's love for his sister" (Picardie, 2022, p. 249).

(Left) Fig. 8. Catherine Dior's deportation files. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.
(Right) Fig. 9. Maurice Dior's letter to Catherine after her return. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.



Miss Dior became one of the main factors granting the House of Dior its status as an international brand, as women of all social classes bought into the fantasy Dior was selling by purchasing its perfume (Parkins, 2012). Ultimately, Dior accounted for a significant portion of French fashion exports and of French exports generally. Interested in this influence, the Soviet government sponsored a trip to France for designers whose mission was to "extract 'useful benefits'" from the famed House of Dior and bring them back to Soviet clothing production (RGANI, as cited in Bronner, 2021, p. 23). Therefore, while some may claim it was obscene for Soviet officials, who most vehemently opposed Western capitalism, to strike a deal with one of the most commercially recognized fashion houses of the time, this claim is flawed: Dior's reputation as an artistic couture house gave the Soviet government an opportunity to publicly assert a high cultural taste that would influence Soviet fashion.

“NEW LOOK”: DIOR’S ULTRA-FEMININE AESTHETICS AND WOMEN’S LIBERATION

Fig. 10. The Dior "New Look" that became the basis of the Dior aesthetic. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

Though the extravagant, ultra-feminine aesthetic that originated with the "New Look" may seem to conflict with Soviet ideals, Dior had already been influencing the Soviet fashion scene before its 1959 show in Moscow. The Soviets and France, despite their many differences, shared a similar fashion landscape: "fashion as a means of revitalization" (Bronner, 2021, p. 17). As articles of the time make evident, there was no denying that Dior was the designer "who guided its [fashion's] transition from misery to majesty" and "created a new 'silhouette'" that still influences the way women dress to this day, a feat few designers could achieve (McAuley, 2023). From fashion history's standpoint, he helped restore a beleaguered postwar Paris as the fashion capital. Each of his collections in this period had a theme. Spring 1947 was "Corolle," or "figure 8," names suggesting the silhouette of the new look with its prominent shoulders, accentuated hips, and small waist (Charleston & Koda, 2004). A look called "Bar" from this collection was christened the "New Look" by Carmel Snow, then Editor-in-Chief of Harper's Bazaar.



Fig. 11 & 12. Backstage photos from the 1947 Dior fashion show. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

Just as in Paris, socialist fashion ideals were also evolving to allow more gendered fashion. Feminist theorist Ilya Parkins argued that Dior personally sought to redefine the image of femininity after WWII (Picardie, 2022). His new style of femininity was drastically distinct from simpler styles such as Chanel's. Chanel's look had been popular during the war, when Paris fashion underwent substantial changes owing to material scarcity and the need for practical clothing for the "now working woman."

Fig. 13. Style of Chanel: military-inspired tweed suit with a simple silhouette. Note. From Rédaction.

After the war, Dior wanted to create a new aesthetic that liberated women from the burden of wartime. In his autobiography, he stated his vision: "In December 1946, as a result of the war and uniforms, women still looked and dressed like Amazons . . . But, I designed clothes for flower-like women" (Dior, 2018). At the time, this statement stirred controversy: even though French women had gained the right to vote in 1944, their status had barely changed. His choice to revive an ultra-feminine aesthetic amid this tension misleadingly portrayed him as condoning a return of women to so-called "pretty things," angering members of the feminist movement. When Dior visited Chicago to promote



"New Look," a mob of angry female protesters awaited him, screaming, "Mr. Dior, we abhor dresses to the floor" (McAuley, 2023, as cited in Bronner, 2021, p. 19). Some Dior models were even physically assaulted in the streets (Picardie, 2022).

Fig. 14. Dior models attacked in the streets of Paris. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

However, despite the controversy, Parkins (2012) carefully attests that Dior's designs were inspired by nostalgia for his childhood during the Belle Époque, noting that his praise of his muses as an "extension of [him]self . . . suggest[s] a fluidity of gender identity that is striking given his overt conservatism" (Dior, 2018, p. 95). In other words, Dior's ultra-femininity was meant not to constrain women but to relieve them of the memories of war by restoring the aesthetics of the prewar world of his childhood.

DIOR IN DEFIANCE OF THE NAZI REGIME

From this historical perspective, one may question why Soviet officials collaborated with Dior, who had worked for the House of Lelong during WWII and made dresses for the wives of Nazi officials and for collaborators. After WWII, the Soviet Union's treatment of Nazi collaborators was extremely harsh: depending on their level of collaboration, they faced punishments ranging from imprisonment to death. Those who managed to escape punishment were still "isolated from the rest of society" and "usually assigned to fulfill the most 'dirty' jobs connected with the extermination of the population" (Kovalev, 1998, p. 43-48). Yet, compared with his contemporaries, Dior had the fewest ties to the Nazis, especially considering that it was nearly impossible to find a brand that survived WWII without any Nazi collaboration. For example, Gabrielle Chanel, the founder of Chanel, became "the mistress of the German intelligence officer Baron Hans Günther von Dincklage" during WWII (Warner, 2011). The Vuitton family, owners of the luxury luggage brand Louis Vuitton, "actively supported the puppet government led by Marshal Philippe Pétain and made money from their business dealings with the Germans" ("Louis Vuitton's," 2004).



Fig. 15. A dress believed to be Christian Dior's work during his time at the House of Lelong. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

Unlike them, Dior was always opposed to the idea of collaboration, even when he could not avoid creating dresses for those wives and collaborators. At the time, Lucien Lelong, for whom he worked, was persuading the Nazi Party not to move French couture to Berlin and had to keep the officials somewhat pleased. In his memoir My Years and Seasons, Pierre Balmain (2021), Dior's friend and colleague at Lelong, records Dior's opinion of the customers they were obliged to dress. The recollection clearly shows Dior's frustration at being forced to dress such clients merely to continue his craft during the war:

The clientele at Lelong during the Occupation consisted mainly of wives of French officials who had to keep up appearances, and of industrialists who were carrying on business as usual. Apart from Madame Abet, the French wife of the German Commissioner, few Germans came to us. Nevertheless, there was still a somewhat unreal, strange atmosphere about the showings. I remember I was standing with Christian Dior behind a screen, scanning the audience awaiting the first showing of 1943, the women who were enjoying the fruits of their husbands' profiteering. "Just think!" he exclaimed. "All those women going to be shot in Lelong dresses!"

Also, most importantly, Catherine Dior, his sister, first muse, and one of the founding members of the House of Christian Dior, was a member of the F2 resistance network during WWII. Known by the code name "Caro," she was tasked to "gather and transmit information on the movements of German troops and warships, and to do so, she made frequent and lengthy trips by bicycle to liaise with other F2 agents" (Picardie, 2022, p. 54). She also used the apartment she shared with her brother to hold meetings that provided intelligence to the British forces planning for D-Day.

66


Fig. 16. Catherine in 1947, aged 30. Note. From Miss Dior: A Story of Courage and Couture, by Justine Picardie, 2022.

Dior supported her work and held her in high regard, which was also why he carried on with his job: his employment allowed him to help Lelong placate the Nazis and to financially support himself, his sister, and, by extension, her work as an F2 agent (Picardie, 2022). His contribution in defiance of the Nazis, in other words, aligned with the Soviets' standpoint and made him a fitting choice to present the show in Moscow.

PUBLIC AND MEDIA RESPONSES

Fig. 17 & 18. Designs by Nadezhda Lamanova. Note. From Fashion East: The Spectre That Haunted Socialism, by Djurdja Bartlett, 2010.

From the Russian Revolution in 1917 to the Thaw, progress in the Soviet fashion industry was stagnant despite the government's efforts at improvement. Until the designers at GUM began replicating designs from Western brands such as Dior, Soviet fashion had a reputation for being tacky, both domestically and internationally. Even insiders, while playing along with the government's plan, were not satisfied with the designs produced by the government-backed fashion houses. To be sure, some prominent Soviet designers disapproved of Western influence. Nadezhda Lamanova, an early Soviet fashion designer before the time of Dior, had refused to work with the French fashion designer Paul Poiret (the "King of Fashion") and publicly advocated "Socialist realism in



fashion" (Bartlett, 2010, p. 43-45). Prominent designers, including L. K. Efremova in the 1950s and 1960s, claimed that French haute couture magazines were useless and inappropriate (Zakharova, 2010). Yet unpublished reports consistently expressed interest in studying French fashion, and early Soviet designs such as Lamanova's were disappearing. As Dior's style seeped into the now more culturally open Soviet fashion scene, Soviet designers' shows in Moscow began to carry a Dior flair, despite the inevitable drawback of a lack of originality, since the designers mostly derived their ideas from Western designers.

Fig. 19. Photo of Western-inspired designs by the Soviet designers. Note. From “Moscow Fashions Go Dior and Ivan League,” by Nicholas Tikhomiroff, 1957.

Fig. 20. Soviet design by Nadezhda Lamanova before Western influence. Note. From Fashion East: The Spectre That Haunted Socialism, by Djurdja Bartlett, 2010.

Still, many U.S. reporters who had taken a negative view of Soviet fashion, especially its conservative styles, favored this Western inspiration, or even "invasion," as some might call it ("Dior in Moscow," 2010). For example, in a 1957 article in The New York Times, "Fashion Designers of the Soviet Bloc Meeting in Moscow: East Germans Critical Hungarian Good Taste," Max Frankel (1957) comments that "if she lived near the Polish-East German border, her hemline would be where it is in New York; as she moved East, it would drop a bit" (as cited in Bronner, 2021, p. 25). He describes a spectrum of "Communist woman" fashion, one end closer to Western fashion and the other remaining in the past.



Fig. 21 & 22. Note. From "Dior in Moscow: A Taste for Luxury in Soviet Fashion Under Khrushchev," by Larissa Zakharova, 2010.

In "Moscow Fashions Go Dior and Ivan League," another New York Times reporter, Nicholas Tikhomiroff (1957), reveals the resemblance between the designs promoted at GUM and Western designs such as Dior's (as cited in Bronner, 2021). He states that the clothes inspired by Dior are "gayer, more colorful, more western," and less conservative than typical Soviet women's clothing, and notes that the outfits from the "Dior-inspired" shows at GUM "display[ed] a larger amount of skin compared to typical Soviet Fashion" (Tikhomiroff, 1957). The presence of a décolleté dress with a low neckline at the show hinted that the standards of modesty in Soviet fashion were changing. In other words, the appearance of Dior-inspired pieces on the 1957 runway, even before Dior's own show in 1959, marked a significant shift in the Soviet fashion landscape, transforming what the Soviet government, fashion designers, and public deemed appropriate and appealing.

Fig. 23. Note. From “Moscow Fashions Go Dior and Ivan League,” by Nicholas Tikhomiroff, 1957.



Notably, GUM, the site of these shows, was a mainstream department store. It was not only the rich elite who attended this luxury fashion event; Soviet citizens from all walks of life appeared, regardless of the wealth gap. In May 1959, articles reporting that Dior had accepted the Soviet invitation for the GUM fashion shows were released, drawing international attention to the project. With two to three presentations a day for a week, the show displayed all 120 outfits, everything from evening gowns to loungewear (Bartlett, 2010). Numerous high-quality photographs of large crowds gathering around GUM demonstrate the international attention the event received. While parts of the five-day show were reserved for the political elite, the rest was open to the public, who showed up to watch models "parading through Moscow . . . ahead of a five-night Christian Dior fashion show" (Cosgrove, 2014).

A private showing tonight of Christian Dior fashions at the French Embassy attracted more than 450 women, with husbands in tow, from Moscow's fifty foreign embassies. A few Russians. Several ballerinas attended the show under the patronage of the Soviet cultural exchange authorities. ("Show in Moscow," 1959)

Fig. 24 & 25. Photos from the 1959 show. Note. From "Dior Comes to Moscow: Tracing the Threads of Haute Couture in the Soviet Union," by Sophie Hardie, 2020.

Staging a Western fashion show before a mainstream audience allowed the Soviet government to acknowledge that its people wore "Western clothing." Though the American press of the time may have described the 1959 show as a last-minute plea to salvage the fashion industry, this plan of opening up to the international fashion market had unmistakably been in motion since 1957, considering the other shows at GUM and the government-funded trip to France mentioned above, which was met with reverence.

In the 1950s the nascent Thaw and its advocating of Soviet coexistence with the West permitted Soviet designers to visit French couture houses for education. Tours of the Dior company, which at the time was headed by a young Yves Saint Laurent, no doubt left Soviet designers giddy with admiration as they requested repeat visits in the name of enrichment. (Hardie, 2020)

While most American journalists highlighted the styles of the clothes and their comparison with Soviet designs, some documented the reactions of the Soviet attendees. The Western media coverage of this high-profile event, including The New York Times article "Dior models held in awe by Russians," confirms that the event gained positive reactions from the Soviet public overall (Emerson, 1959).



Fig. 26 & 27. More photos from the Dior show. Note. From “Dior Comes to Moscow: Tracing the Threads of Haute Couture in the Soviet Union” by Sophie Hardie, 2020.

CONCLUSION

The 1959 Dior show in Moscow served as an effective strategy during the Khrushchev Thaw, liberalizing a Soviet fashion culture that had previously promoted "hygiene" over beauty or luxury as the aesthetic standard for the working class. Though Khrushchev opened the state's gates cautiously and under censorship, the show still epitomized the inevitable process of Westernization. It appealed not only to the "NEPmen," who could afford luxury fashion, but also to the public, who gathered to witness this pivotal shift. Compared with other luxury brands, Dior befitted the role of guiding such a transformation, or liberation, as one might say, since its own goal was likewise to revitalize fashion. Its highly artistic taste encouraged Soviet designers in need of inspiration to learn from French fashion and expand their styles. In addition, Dior's support for Catherine Dior's resistance to the Nazi regime aligned with the Soviet Union's stance toward Nazi collaborators after WWII. In other words, Dior's appearance in Moscow was a timely and symbolic event in Soviet fashion history.

References

Article 10—no title: Dior in Moscow. (1959, June 21). The New York Times. https://www.proquest.com/historical-newspapers/article-10-no-title/docview/114773469/se-2
Balmain, P. (2021). My years and seasons. S.l.: V&A Publishing.
Bartlett, D. (2010). FashionEast: The spectre that haunted socialism. MIT Press. https://www.proquest.com/docview/2131811350/bookReader?accountid=34987
Bertin, C. (1954, September 5). A new look at Christian Dior: The man behind the 'H-line' is a gentleman farmer who decorated his home in the style of the Paris subway and believes in the zodiac and homemade liqueur. The New York Times. https://www.proquest.com/historical-newspapers/new-look-at-christian-dior/docview/113099621/se-2
Bronner, E., & Kaye, J. (2021). Dior flair in Red Square: Moscow's 1959 fashion show and Khrushchev Thaw.
Charleston, B. D., & Koda, H. (2004, January 1). Christian Dior (1905–1957): Essay. The Met's Heilbrunn Timeline of Art History. https://www.metmuseum.org/toah/hd/dior/hd_dior.htm



Cosgrove, B. (2014, June 28). Haute couture and the Cold War: Dior in Moscow, 1959. Time. https://time.com/3880307/dior-fashion-models-in-moscow-during-the-cold-war/
Dalí, S. (1931). The persistence of memory [Painting]. The Museum of Modern Art.
Dior, 52, creator of "New Look," dies: Designer won fame in 1947 for style innovation. (1957, October 24). The New York Times. https://www.proquest.com/historical-newspapers/dior-52creator-newlook-dies/docview/114066955/se-2
Dior, C. (2018). Dior by Dior: The autobiography of Christian Dior. London: V&A Publishing.
Emerson, G. (1959, June 29). Dior models held in awe by Russians. The New York Times. https://www.proquest.com/historical-newspapers/dior-models-held-awe-russians/docview/114767665/se-2
Frankel, M. (1957, June 11). Fashion designers of the Soviet bloc meeting in Moscow: East Germans critical Hungarian good taste. The New York Times. https://www.proquest.com/historical-newspapers/fashion-designers-soviet-bloc-meeting-moscow/docview/114206685/se-2
Gibney, F. B. (2024, April 14). Nikita Khrushchev. Encyclopedia Britannica. https://www.britannica.com/biography/Nikita-Sergeyevich-Khrushchev
Gronow, J., & Zhuravlev, S. (2010). Fashion design at GUM, the state department store at Moscow. Baltic Worlds. https://balticworlds.com/fashion-design-at-gum-the-state-department-store-at-moscow/
Gronow, J., & Zhuravlev, S. (2015). Fashion meets socialism: Fashion industry in the Soviet Union after the Second World War (Vol. 20). Finnish Literature Society. http://www.jstor.org/stable/j.ctvggx2cr
Gurova, O. (2009). The art of dressing: Body, gender and discourse on fashion in Soviet Russia in the 1950s and the 1960s. In E. Paulicelli & H. Clark (Eds.), The fabric of cultures: Fashion, identity, globalization (pp. 73-91). London: Routledge.
Hardie, S. (2020, April 9). Dior comes to Moscow: Tracing the threads of haute couture in the Soviet Union. Pushkin House. https://www.pushkinhouse.org/blog/2020/4/7/dior-in-moscow-1959
Klingseis, K. (2011). The power of dress in contemporary Russian society: On glamour discourse and the everyday practice of getting dressed in Russian cities. Institute for Slavic Languages. https://www.semanticscholar.org/paper/The-power-of-dress-in-contemporary-Russian-society%3A-Klingseis/d5c87ad75713a3e199bf0ac472fe22aabd704ebb
Kovalev, B. (1998). Nazi collaborators in the Soviet Union during and after World War II. Refuge: Canada's Journal on Refugees / Refuge: Revue canadienne sur les réfugiés, 17(2), 43–47. http://www.jstor.org/stable/45411389
Louis Vuitton's links with Vichy regime exposed. (2004, June 3). The Guardian. https://www.theguardian.com/world/2004/jun/03/france.secondworldwar
McAuley, J. (2023, April 8). How Christian Dior rescued Paris from its postwar misery. The Washington Post. https://www.washingtonpost.com/world/how-christian-dior-rescued-paris-from-its-postwar-misery/2017/07/05/858eb506-5ab5-11e7-aa69-3964a7d55207_story.html



Meek, D. L. (1952). A Soviet women's magazine. Soviet Studies, 4(1), 32–47. http://www.jstor.org/stable/148721
Parkins, I. (2012). Poiret, Dior and Schiaparelli: Fashion, femininity and modernity. London: Berg.
Persson, L. (2021, August 20). Sister act: A closer look at the quietly influential life of Catherine Dior. Vogue. https://www.vogue.com/article/a-closer-look-at-the-quietly-influential-life-of-catherine-dior
Picardie, J. (2022). Miss Dior: A story of courage and couture. S.l.: Picador.
Rédaction, L. (2021, August 19). Coco Chanel: Simply chic. Vogue France.
RGANI (Russian State Archive of Contemporary History), f. 5 (Central Committee Apparat), op. 43 (Department of industrial goods for mass consumption), d. 69, l. 83, 89. By way of L. Zakharova, Dior in Moscow, 101.
Rogers, T. F. (1968). Trends in Soviet prose of the "Thaw" period. Bulletin of the Rocky Mountain Modern Language Association, 22(4), 198–207. https://doi.org/10.1353/rmr.1968.0018
Ruane, C. (1996). Clothes make the comrade: A history of the Russian fashion industry. Russian History, 23(1/4), 311–343. http://www.jstor.org/stable/24660930
Show in Moscow: Diplomats and wives view new Paris fashions. (1959, June 12). The New York Times. https://www.proquest.com/historical-newspapers/dior-show-moscow/docview/114772630/se-2
Snow, C. (2017). The world of Carmel Snow. V&A Publishing.
Stolyarova, G. (2023, April 27). Stalin era fashion, during wartime and at parties. The Moscow Times. https://www.themoscowtimes.com/2012/05/22/stalin-era-fashion-during-wartime-and-at-parties-a14947
Tatu, M. (1959, June 14). Le public moscovite à la découverte de Dior. Le Monde. https://www.lemonde.fr/archives/article/1959/06/15/le-public-moscovite-a-la-decouverte-de-dior_2158999_1819218.html
Tikhomiroff, N. (1957, May 26). Moscow fashions go Dior and Ivan League [Photographs]. The New York Times. https://www.proquest.com/historical-newspapers/moscow-fashions-go-dior-ivan-league/docview/114107741/se-2
Warner, J. (2011, September 2). Was Coco Chanel a Nazi agent? The New York Times. https://www.nytimes.com/2011/09/04/books/review/sleeping-with-the-enemy-coco-chanels-secret-war-by-hal-vaughan-book-review.html
Zakharova, L. (2010). Dior in Moscow: A taste for luxury in Soviet fashion under Khrushchev. In Pleasures in socialism: Leisure and luxury in the Eastern bloc (pp. 95–119). Northwestern University Press. https://doi.org/10.2307/j.ctv43vtgm7



How have Reforms in Greece's Public Administration Influenced the Absorption and Implementation of the EU Cohesion Funds?

Author
Full Name (Last Name, First Name): Choi, Sunho
School Name: Cheongshim International Academy

Abstract

This essay explores how reforms in Greece's public administration have influenced the absorption and implementation of EU Cohesion Funds. First, it examines the limited effect of cohesion funding on economic growth in Greece through cohesion funding statistics, economic development data including GDP and HDI, and the fiscal budget balance. It then delves into structural changes and reforms in the Greek government, examining previous literature, organizational charts, power structure charts, news reports, government reports, and corruption surveys from a historical institutionalist standpoint. The essay finds that the centralized governance structure and lack of regional autonomy limited the effectiveness of these funds in fostering long-term economic development. The shift from the Community Support Framework (CSF) to the National Strategic Reference Framework (NSRF) was intended to decentralize fund management, but the historical centralization of power in Greece continued to shape fund allocation, producing a focus on visible, short-term projects rather than sustainable development, as many local authorities were limited in institutional capacity and constrained by the political power framework centered on the Ministry of National Economy (MNE). Moreover, the historical path dependency of centralized control meant that regional authorities were never fully empowered to diverge from the central government's priorities, perpetuating the focus on large-scale, visible infrastructure projects and the neglect of soft investment projects.

Keywords

EU Cohesion Policy, Greece, Integrated Mediterranean Programs, Community Support Frameworks, National Strategic Reference Framework, Regional Development, Historical Institutionalism, Institutional Capacity



INTRODUCTION

To address the research question, we must examine how reforms in Greece's public administration have influenced the absorption and implementation of EU Cohesion Funds. The European Union's Cohesion Policy is a cornerstone of the union's work in promoting socioeconomic and territorial cohesion among its member states. It aims to reduce disparities between regions by providing financial support to less developed areas, thereby fostering balanced development across the Union. For countries like Greece, the EU Cohesion Funds have been central to addressing significant economic challenges since the country's accession to the union in 1981. Greece's economic history within the EU is marked by a series of severe challenges, including high public debt, low competitiveness, and a reliance on external funding. These issues have been exacerbated by external economic crises, particularly during the late 2000s, which exposed the fragility of Greece's economic structure and its dependency on EU support. The role of the EU Cohesion Funds in such a context cannot be overstated, as they have provided crucial resources for public investment, infrastructure development, and social programs aimed at stabilizing and growing the Greek economy. This research tackles a timely and important question, reflecting on areas of improvement in the union's Cohesion Policy in fostering internal economic development in Greece over the past four decades. Given the persistent economic struggles Greece has faced, particularly following the 2009 crisis and the coronavirus pandemic, understanding the funds' exact impact is critical for both policymakers and scholars in Greece and beyond. The broader implications of this research extend beyond Greece, offering insights into how changes in governance structures can affect the effectiveness of the union's Cohesion Policy in other member states facing similar challenges. The research question of this case study is as follows: How have the changes in cohesion funding policy implementation impacted Greece's economic development since 1981? This question is particularly pertinent considering the significant financial resources allocated to Greece under various EU funding programs and the rising criticism of these funds' efficiency and effectiveness among Euroskeptics. To explore this overarching question, the research will also address several sub-questions: How have reforms in Greece's public administration influenced the absorption and implementation of EU Cohesion Funds? What lessons can be drawn from Greece's experience that might be relevant for other EU member states? As will be discussed in the literature review, qualitative assessment studies have failed to answer why the impact of the Cohesion Policy has consistently failed to improve in Greece despite the outsized EU spending in the country over three decades of multiple, evolving programs. The study hypothesizes that this is primarily attributable to weak structural changes that failed to fundamentally reform the public administration between funding periods. This research seeks to reflect on these areas of improvement and to analyze how structural reforms in Greece's public administration have directly influenced the absorption and effective use of EU Cohesion Funds. By addressing these questions, the study aims to provide a nuanced understanding of the interplay between administrative reforms and economic outcomes within the context of EU Cohesion Policy, contributing to the broader discussion of effective governance in EU member states.

LITERATURE REVIEW

The Cohesion Policy is one of the key policies of the union, intended to enable all EU regions to benefit from membership and its common market policies. Cohesion, as a unitary force, keeps the union together and fosters willingness to cooperate within an economically and politically interdependent system. Cohesion among member states and individuals is therefore necessary for effective policy implementation. Greece presents a unique profile as a country that is highly funded by the EU even as Greeks experience a decline in indicators of resilience, engagement, and attitude. While structural cohesion remains high due to the continued financial commitment of the EU, the level of



individual cohesion has been decreasing since the financial crisis and the ongoing migrant crisis (Janning, 2018). Cohesion policy takes up around one-third of the EU's annual budget because of its centrality to the European integration process (Giua, 2024). The policy, more specifically, was motivated by concerns that the single market and the single currency would exacerbate existing regional disparities (Padoa-Schioppa, 1987). As trade barriers and tariffs were removed across the continent, a key concern of member states was the outflow of physical, monetary, and human capital from their less-developed regions to more developed countries, including Germany and France. The Cohesion Policy consists of several financial interventions, denoted the Cohesion and Structural Funds (CSF). The structural funds include the European Regional Development Fund (ERDF) and the European Social Fund (ESF), while the Cohesion Fund (CF) is a standalone fund for weaker regions in the EU. In the 1980s, the conceptualization of the EU's Cohesion Policy moved from country units to regional units. This introduction of a territorial dimension transformed cohesion policy from a sectoral policy into a territorial one (Leonardi, 2006). The transition from focusing on NUTS-1 regions to NUTS-2 regions involved the design of new programs across European states that further integrated regional authorities into the regional policy-making stage, creating a new power-sharing arrangement that succeeded the national policy-making structure. The Commission now requires the active involvement of local authorities in cohesion operational programs (OPs) from the planning to the execution stages. However, as will be discussed later, this has not been effective in Greece, where long-standing political structures and institutions continue to leave power in the hands of centralized national institutions and figures. During the Maastricht negotiations, Greece obtained a legally binding agreement to increase the "already existing structural fund resources" and to create a new fund for cohesion that would help the poorer states with "environmental and transport infrastructure" (Mazzucelli, 1997). For Greece in particular, an EU-15 country that has struggled economically, past economic crises and the near-bailout from the EU that followed have drawn particular interest across the continent to the implementation of cohesion policy in Greece. Since 1981, Greece has received substantial aid from the EU funds (the European Regional Development Fund, the European Social Fund, the Cohesion Fund, and structural support for agriculture). In every sector of public investment, the EU's funds have contributed substantial financial resources to thousands of projects across the entire Greek state. For the 2007-13 program, the EU transferred roughly 22 billion euros to Greece, about 2.4–3.3% of the country's total GDP (Liargovas, 2015). The subject of the EU's Cohesion Policy has been widely covered in academic research, and many quantitative and qualitative studies have debated the effectiveness of the OP funds in regional social and economic development over the years. Prior to the 21st century, project progress and results were not systematically evaluated at either the project or program level, and a substantial amount of funding placed into projects could not be accounted for in project results (Bachtler & Michie, 2007).
This was addressed in the 1988 and 1993 reforms, after (1) the EU began to treat the evaluation of both project results and outcomes as one of the final steps of such work, (2) the funds moved to a multi-annual, long-term programming model over set six-year periods, and (3) systematic evaluation initiatives, including MEANS (Means for Evaluating Actions of a Structural Nature), later superseded by EVALSED (The Guide to Socio-Economic Development), began to shift evaluation from dependence on individual evaluators to a more systematic approach. The initial literature of the early 2000s (Boldrin & Canova, 2001; Midelfart-Knarvik & Overman, 2002) was divided on the effectiveness of such policies, but more recent work has increasingly found evidence of success (Cerqua & Pellegrini, 2018; Crescenzi & Giua, 2016). Some doubts remain, however, as in Liargovas's work: only 37% of the 3,600 SCF policies studied showed evidence of positive growth, and projects in "tourism and culture, urban development, social inclusion" consistently failed to deliver identifiable positive growth. Additional works have found success in smoothing exogenous shocks affecting regional incomes (Giua, 2024).



In terms of Greece, the country became poorer in comparison with the EU-15 average in the years 1982 to 2000 (Dauderstädt, 2012). In the period 2002 to 2013, Greece continued to lose prosperity relative to the EU-28 average, a loss worsened by the severe crisis of 2009 (Liargovas, 2015). A key determinant of a program's success is the institutional capacity and policies of the recipient state (Acemoglu et al., 2004). In Greece, weak institutional frameworks alongside corruption have limited the efficient use of such funds, a decline that deepened after the 2009 debt crisis (Tzifakis, 2015). Despite each Greek citizen receiving 1,369 euros from structural funds in the years 1994 to 1999, Greece, as a highly indebted country, was unable at the onset of the financial crisis to withstand pressure from international financial markets demanding excessive yields. Factors including "corruption, clientelism, ineffective administration, low absorption rates, and decreasing competitiveness" had a negative impact on the effects of Cohesion Policy (Liargovas, 2015).

The analysis of Greece's lack of an independent regional policy-making structure (Andreou, 2010), lingering clientelism, and the corporatist relationship between select interest groups and the ruling party PASOK (Kalaitzidis, 2015) is shared across much of the literature on Greece's administration of the Cohesion Policy. This literature describes, to varying degrees, the populism, cronyism, corruption, and inefficiency that cohesion policies largely failed to overcome. Government efforts to address these problems were limited to "some symbolic changes" that never reached the root of the issues, made chiefly to keep EU funds flowing. Ioannides similarly characterizes Greece's implementation of SCF funds as ritual compliance rather than substantive engagement.

Organized civil society, which in other countries provides public oversight of government and public authorities, is very weak compared to other European countries (Liargovas, 2015). In the same work, Huliaras describes the state of civil society as "cachectic, atrophic or fragile" and attributes this weakness to the overpowering strength and clientelism of the dominant political parties. Owing to modern Greece's junta history, civil society never dominated the political culture: in many cases, political parties encompassed organizations ranging from student councils and labor unions to cultural organizations. Huliaras calls this phenomenon the colonization of civil society by the political parties, or "partitocracy." It is exacerbated by the centralized nature of the Greek government, built on the French model after the late and nonlinear national unification of 1833, with late democratization further empowering central political control over institutions and policy-making. Consequently, civil society's weakness deprives it of independence and autonomy from the state, removing the possibility of any organized, citizen-led intervention (Andreou, 2015).
Studies trace the root of these problems to Greece's history: a late transition to democracy as a third-wave democratization state (Huntington, 1993), a transition whose politics eventually came to be dominated by the PASOK party. The same research argues that integration has entrenched an ineffective state by allowing political elites to promote policies and changes without reciprocal reform requirements to strengthen the foundations of the public service (Kalaitzidis, 2015). Liargovas's 2015 study details the extent to which corruption through "virtual service providers, product substitution, fraudulent reimbursement, work-cost swelling, and large consultancy fees" has consumed a substantial portion of the cohesion funds. The existing literature thus covers Greece's continued struggles with internal leadership, administration, and fraud thoroughly, in both qualitative and quantitative studies. Cohesion levels in Greece are also well documented at the country level. However, how cohesion has changed over the three decades in particular regions and territories of Greece since it began implementing this policy as an EU member state remains a gap in the literature.



Liargovas's 2015 work also points out the "under-representation of investment in education, R&D, and innovation in all CSFs… clear[ly] not used as a means to achieve [the] country's targets in the Lisbon Treaty," noting the very low (1 to 2%) allocation to such areas. The paper notes that had Greece invested heavily in these areas, it could have fundamentally bolstered its weak economic model, a low-value, low-knowledge economy heavily dependent on government funding; the discussion section offers the author's possible explanations of weak institutional capacity, weak structures, and a heavily clientelist political culture, but leaves the underlying institutional development and state mechanisms that produced such problems as suggestions for further research. While evidence suggests that Greece has performed better with infrastructural and construction projects than with softer funding projects (Andreou, 2010), no academic study examines the structural processes underlying this discrepancy between policy areas. Nor has anyone analyzed how all of these developments affected Greece's integration with other EU member states over the three decades. In response to these gaps, this study aims to measure how transformations in the EU's cohesion policy, from the Structural Funds to the "Community Support Framework Funds," affected Greece's financial difficulties as a member state of the Union.

METHODOLOGY

To explore the hypothesis that Greece's underperformance can be attributed to weak structural changes that failed to fundamentally reform the public administration between funding periods, this study examines the variables of economic growth and institutional structure. The evolution of the EU's Cohesion Fund policy is best analyzed through the framework of Historical Institutionalism (HI). HI examines public policy by analyzing how the context of development affects the formulation of policy and the continuity or transformation that follows (Skogstad, 2023). It focuses on the impact of prevailing rational-legal rule frameworks of institutions and of temporal factors, including the order and timing of events, on the activities of politicians, political parties, and public administration. This study will examine causal relationships involving both formal organizations (legally constituted institutions) and behavioral norms (political culture) that result from power struggles in which select actors prevail over others, by dissecting the interdependent constituent components and analyzing them along an interactive, temporal dimension.

For Greece, the focus is on institutional structures as well as the political incentives and power that sustained clientelism and ineffective administration, especially in "soft cohesion" policy, viewed from a temporal perspective. This lens is useful because many of Greece's problems with administering the Cohesion Funds stem from the political relationship between the Ministry of National Economy (MNE) and regional governments, and from the party-based clientelism that dominates the political structure. Much previous literature finds that these features stem from the country's relatively late democratization. In Greece's case, the transition from Community Support Framework Funds to National Strategic Reference Framework Funds marks a significant turning point in its relationship with the EU, reflecting broader changes within the EU's institutional framework, particularly in response to evolving economic and political challenges. This essay will assess the Cohesion Fund's impact on economic growth by examining cohesion funding statistics, economic development data including GDP and HDI, and the fiscal budget balance. Path dependence helps explain why certain funding mechanisms have persisted and how these institutional arrangements have influenced Greece's role within the EU. Because EU policies are interconnected and states and non-state actors interact through multiple channels, HI was selected as the framework of reference. The framework is especially relevant for understanding the varied impacts of different types of EU funding on Greece: traditional power politics are intertwined with the economic and social ties that bind regions together.
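To make the temporal, period-based comparison described above concrete, the following minimal sketch (in Python, not part of the original study's procedure) shows one way end-of-period economic data could be aligned with the funding framework in force and compared across frameworks. All index values are hypothetical placeholders; only the framework labels come from this paper.

```python
import pandas as pd

# Hypothetical end-of-period GDP index values for Greece (1986 = 100).
# Placeholder numbers for illustration only, not data from this study.
gdp_index = pd.Series(
    {1986: 100.0, 1993: 112.0, 1999: 131.0, 2006: 168.0, 2013: 132.0, 2020: 141.0}
)

# Framework in force during each inter-period interval.
frameworks = ["1st CSF", "2nd CSF", "3rd CSF", "1st NSRF", "2nd NSRF"]

# Growth over each programming period, labeled by framework.
growth = gdp_index.pct_change().dropna()
growth.index = frameworks
print(growth.round(3))  # e.g., the 1st NSRF row shows the crisis-era contraction
```

A comparison of this kind underlies the Findings section below: growth is read against the institutional arrangement in force, rather than as a single undifferentiated time series.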



The essay will explore structural changes and reforms in the Greek government by examining previous literature, organizational charts, power-structure charts, news reports, government reports, and corruption surveys and reports. In the context of EU funding, this means that the effectiveness of soft funding initiatives, such as education and job training, must be understood within the broader web of relationships and dependencies that characterize EU–Greece relations and the domestic political relations between the central public administration, such as the Ministry of National Economy (MNE), and local and regional authorities. The EU's Cohesion Fund was designed to "reduce regional disparities and promote economic and social cohesion among member states." However, the differing outcomes of various types of funding, particularly the relatively lower effectiveness of soft funding compared to infrastructure investment, can be traced to how these funds interact with the institutional and socio-economic context of the recipient NUTS-2 regions. While infrastructure investments often yield immediate, tangible benefits through improved roads, bridges, and public works that can secure political support for regional and national administrations or parties, soft funding initiatives typically require a more supportive institutional environment and longer time frames to show measurable results.

This essay uses HI to address real-world empirical questions about the ways changes in institutional structure shape political behavior and outcomes, since changing one set of norms or institutions can have system-wide implications. HI holds that societies tend to resist institutional change because of (1) interest groups disadvantaged by the update, (2) difficulty in predicting the outcomes of reforms, and (3) the up-front costs of investment in re-education (Steinmo, 2008). Early institutional choices can thus entrench policy paths through path-dependent processes, as institutional commitments and political investment accumulate over time. For Greece, this essay's literature review traced the limited success of soft funding to factors such as weak institutional capacity, economic instability, and the challenge of implementing long-term reforms amid ongoing socio-political problems (Liargovas, 2015). HI helps explain why these soft funding initiatives have struggled to achieve their intended outcomes, as they are affected by a range of interconnected factors, including domestic political conditions, the role of non-state actors, and the broader EU policy framework.

To analyze the impact of these funding changes on Greece, this study will use a combination of primary and secondary texts. Primary sources will include official EU resources, such as Cohesion Fund reports and legislative texts, as well as Greek government documents detailing the implementation of these funds and the extent of reform in political structures and public administration. Secondary sources, including academic books and papers on EU cohesion policy, Greek economic development, and the role of EU funding in regional development, will be reviewed to contextualize and interpret the underlying quantitative and qualitative data.
Additionally, public opinion surveys, particularly those from Pew Research and Eurobarometer, will be used to gauge Greek citizens' perceptions of the effectiveness of EU funding. This mix of qualitative and quantitative data will enable a thorough analysis of the factors influencing the relative success of different types of EU funding in Greece. Because a frequent concern in the literature is the gap between visible political reform and the de facto continuity of national clientelist politics in regional development policy, an in-depth look at the actual workings of, and the distribution of power among, the different players in the Greek politics of cohesion funding will be integral.



HISTORICAL BACKGROUND

Figure 1: Evolution of cohesion policy, 1988–2027 (European Court of Auditors, 2022)

Accession of Greece as an EU State (1981) and the Integrated Mediterranean Programs (1986–92)

The EU's Cohesion Policy began with the Treaty of Rome at the foundation of the EEC in 1957. The treaty's preamble declares that "the differences existing between the various regions and the backwardness of less favored regions should be reduced," and the treaty created the ESF and the EIB (Liargovas, 2015). Greece joined in 1981, seeking a stable institutional framework for its new democracy and the development and modernization of its economy. When Greece, followed by Portugal and Spain, joined the EC, economic and social disparity among European states widened. Greece historically saw regional policy as an essential factor in its high national growth rates and had been developing public investment projects of an indicative character (Andreou, 2006). When Greece submitted a memorandum requesting further economic support to restructure its economy, the EC responded in 1985 by committing a seven-year budget for regional development under the Integrated Mediterranean Programmes (IMPs). This was a pivotal step in cohesion policy, representing the EU's new effort toward structural policy development. The Community devoted 6,600 million ECU over the seven years, 1,600 million ECU of it beyond the ordinary ERDF, ESF, and EAGGF allocations, to address the socioeconomic problems of Greek regions and to offer these countries concessions as integration intensified agricultural competition. Greece's Ministry of the Economy (ME) planned the Greek IMP package between 1985 and 1997. The program focused on development, adaptation, and support for employment and income through productive investment, and introduced new power-sharing measures that gave local officials increased administrative power to plan and implement the program under EC oversight (CEC, 1986). It served as a precursor to the subsequent cohesion programs, including the Community Support Frameworks (CSFs).

The Maastricht Treaty that followed the IMPs embedded cohesion policy deep in the Union's objectives, in response to the demands of Spain's Felipe González for EU funding for balanced regional development and environmental policies (Mazzucelli, 1997). As a result, the Maastricht Treaty (now Article 174 TFEU) states that "In order to promote its overall harmonious development, the Union shall develop and pursue its actions leading to the strengthening of its economic, social and territorial cohesion." This principle explicitly set economic and social cohesion as a fundamental objective of the Union and embedded the goals of the CSF in the EU's legal and strategic framework. The treaty founded the Committee of the Regions and devolved decision-making from the national level to a structure shared between EU institutions and regional authorities (Marks, 1993; Hooghe & Marks, 1996).



The First Community Support Framework (1989–1993)

The first Community Support Framework (1st CSF), also known as the 1st Delors package, dispensed an unprecedented 15.4 billion ECU in EU funding to small infrastructure projects across the country. These efforts were led by the national Monitoring Committee (MC) for the CSF, assisted by regional and OP MCs (Andreou, 2006). The funding went to the construction of transportation, healthcare, education, agriculture, and water systems, alongside environmental studies and clean-up efforts. It accompanied the EC's 1988 development policy reform, which coordinated previously discrete, separately administered Community structural policies and created a new power-sharing structure between the EC and regional authorities. The infrastructure investments were largely successful, but large deficiencies in institutional capacity and governance structures prevented the deeper restructuring needed to turn around the low-productivity economy.

Objective 1: Enhance growth and structural progress in less developed regions.
Objective 2: Transformation of regions facing intense deindustrialization.
Objective 3: Tackling long-term unemployment.
Objective 4: Facilitating the integration of young people into the labor market.
Objective 5: Accelerating the change in agricultural production structures and strengthening the development of rural areas.

Figure 2: Main objectives of 1st CSF (Liargovas, 2023)

However, the 1st CSF failed to pioneer the major infrastructure projects necessary for attracting FDI, or the investments in research and human capital necessary for increasing workforce productivity (G.S.I.D., 2005). While the fragmentation of funding into smaller, more local projects helped the available funds be absorbed quickly and fully, it did not produce an integrated regional-development framework, as local authorities at the time lacked both the institutional and the human capital to lead such projects effectively. Human capital investments were limited to inefficient and unpopular seminars (Psicharis, 2004).

The Second Community Support Framework (1994–1999)

The second CSF, in compliance with the new guiding principles established at Edinburgh in 1992, doubled funding to 34.76 billion ECU. It supported 16 sectoral and 13 regional OPs and followed hard negotiations over the distribution of funds among regions and sectors. Prior to the submission of the Regional Development Plan (RDP), domestic competition over funding required the intervention of the Prime Minister himself to confirm the final allocation. This reaffirmed the primacy of national goals over regional development goals, and the RDP was submitted in 1999 to the newly appointed Prodi Commission (Andreou, 2006). The EC, unhappy with such dominance by national authorities, pushed during this period for independent, transparent, and autonomous administrative structures staffed with high-quality human capital. Projects from the previous framework were continued, further investment was added in the energy and industry sectors, and major national infrastructure projects were launched, including "national highways, ports, Hellenic Railway Network, metro, energy projects, telecommunications infrastructure." The government also introduced the OP for Education and Early Vocational Training to improve the Greek education system. Nevertheless, these programs failed to make a lasting impact, as new institutions in Greek higher education did not continue after the initial group of students completed their studies.

Objective 1: Strengthening the development and structural adjustment of regions that are lagging behind.
Objective 2: Transformation of regions facing intense deindustrialization.
Objective 3: Tackling long-term unemployment and making it easier for young people and those at risk of marginalisation to join the market, ensuring equal access for men and women.
Objective 4: Empower workers to adapt to changes in industry and production systems.
Objective 5: Strengthening the development of rural areas with modernization actions and structural changes.
Objective 6: Development actions and structural changes in sparsely populated regions.

Figure 3: Main objectives of 2nd CSF (Liargovas, 2023)

The Third Community Support Framework (2000–2006)

For the 3rd CSF, 11 sectoral and 13 regional OPs were run with a budget of 44.75 billion euros. This CSF worked to further integrate Greece into the knowledge-based economy and European Union institutions through an increase in productivity and employment. However, despite such nominal goals, most of the funding was used for "transportation infrastructure (28%) plus infrastructure related to health, social care, and sewage networks" (Plaskovitis, 2006).

Objective 1: Strengthening the development and structural adjustment of the least developed regions.
Objective 2: Support economic and social transformation in regions with structural problems.
Objective 3: Support the adaptation and modernization of education, training and employment policies and systems.

Figure 4: Main objectives of 3rd CSF (Liargovas, 2023)

Reforms for this framework centered on the new Regulation 1260/1999, which made use of the European regulatory framework for management, implementation, and auditing mandatory. This formed an updated framework for partnership between Greece and the Commission, attempting to introduce regional government-led leadership of the program. Alongside these reforms, the Lisbon Strategy, set in 2000, established a new goal for cohesion policy: "making Europe the most dynamic and competitive knowledge-based economy in the world," in response to rising globalization and accelerating technological innovation. The EC accordingly sought to focus further investment on research and development OPs, and cohesion funding for such priorities roughly tripled in most European countries. To narrow the R&D gap among member states, the new local governing structures were encouraged to draw on local capabilities and networks to strengthen knowledge flows in specific regions.

First National Strategic Reference Framework (2007–2013)

As the EU admitted new member states in 2004 and 2007, the distribution of NSRF funds to Greece decreased: Greece was no longer among the relatively poorest members, since the intake of the new states lowered the EU's average income. The EC minimized co-financing requirements during this period because the Greek government, amid an economic crisis that severely limited its liquidity, was unable to co-finance projects. This "statistical effect" of enlargement left Greece with only 20 billion euros in this period, and 15 billion euros from 2014 to 2020. Despite the decrease in funds, analyses of this period indicate a 2.8% increase in GDP, 7,000 newly created start-up enterprises and 14,000 new jobs, 3,500 research projects, and 257,000 people attending vocational training. As in previous periods, infrastructure-related OPs continued to receive the majority of the SCFs. Furthermore, work on energy policy continued under Directive 96/92 to achieve a smooth transition from a state-led to a market-based energy sector. Entrepreneurship and energy transportation infrastructure were built to promote sustainability during this period. However, Greece's competitiveness failed to improve, and deficiencies in the EU Cohesion Policy persisted (Liargovas, 2015). The period was especially troubled by administrative delays and persistent bureaucratic procedures at all levels. The government's efforts to fully complete the 3rd CSF programs diverted resources away from the 1st NSRF programs, and operational programs still faced numerous certifications and approvals that hurt project efficiency.



While the government carried out a substantial administrative reform that transferred considerable power to the newly elected administrations in each region, this failed to transform the traditional power structures of regional funding.

Environment – Sustainable Development
Enhancing Accessibility
Competitiveness and Entrepreneurship
Digital Convergence
Human Resource Development
Education and Lifelong Learning
Administrative Reform
Technical Application Support
National Contingency Reserve

Figure 5: Main objectives of NSRF Sectoral OPs (Liargovas, 2023)

Second National Strategic Reference Framework (2014–2020)

For the 2nd NSRF, a total of 15 billion euros was invested in Greece for cohesion policy, continuing the focus on developing the less developed provinces within Greece. The funding went to the completion of the Trans-European Transport Network and to environmental projects, including climate change mitigation and enhanced prevention and risk management. For the first time, the funding was designed to support OPs aligned with the EU's thematic goals under the Europe 2020 strategy. This reflects the EU's continued effort to "Lisbonize," that is, to increase the synergy between cohesion policy and the EU's other public policies. Greece's priorities were set out in the Partnership Agreement (PA) between Greece and the EC on May 23, 2014, and covered OPs targeting all eleven themes of the continent-wide program. Such theme-based concentration provisions for each fund have further enabled cohesion policy to focus the available funding and resources on key growth factors. "Competitiveness, human resources, active social inclusion, and the completion of infrastructure" were the key focuses of this PA. During this period, the Greek regions also gravitated toward ERDF priorities (SMEs, ICT, and R&D) rather than ESF priorities of education, social inclusion, and employment (Liargovas, 2015).

OP "Competitiveness, Entrepreneurship and Innovation" (EPANEK): strengthening the competitiveness and extroversion of businesses and the transition to quality entrepreneurship and innovation in sectors such as tourism, agri-food, manufacturing, and services of high domestic value added.
OP "Transport Infrastructures, Environment and Sustainable Development" (YMEPERAA): channeled part of its funding to PEPs, with the goals of completing road and rail transport infrastructure, ports, and airports, strengthening sustainable urban mobility, and limiting the burden on the environment.
OP "Human Resource Development – Education and Lifelong Learning": connecting education, training, and lifelong learning with the labor market, creating sustainable jobs, integrating socially vulnerable groups into the labor market, and improving the education system.
OP "Public Sector Reform": modernizing administrative structures, with emphasis on better coordination, greater efficiency, and better services to citizens.
OP "Agricultural Development": sustainability in the agri-food sector and increasing the added value of rural areas.
OP "Fisheries and the Sea": improving the competitiveness of the aquaculture and processing sectors, the sustainability of sea fishing, and the sustainable development of areas traditionally dependent on fishing.
OP "Technical Assistance": overall support for the implementation of all OPs.

Figure 6: Main objectives of 2nd NSRF Sectoral OPs (Liargovas, 2023)



This new programming period also aimed, ambitiously, to reform the governance structure in ways that would promote effective management and reduce the structural rigidities inhibiting the efficient operation of the OPs. Although funding was allocated to the administrative capacity of the public administration for the first time, it hardly tackled the large deficiencies and backward political culture of Greek public institutions (Liargovas, 2015).

FINDINGS

Economic Data

To examine the economics of cohesion policy, this study presents data on Greece's economic performance, including the amount and types of cohesion funding Greece has received.

Gross Domestic Product: The evolution of Greece's GDP is shown in Figure 7. Greece faced significant economic decline in the aftermath of the debt crisis, with GDP falling from 2008 to 2013 as structural reform policies depressed household spending and industrial production. To this day, Greece suffers from structural economic weaknesses, and GDP growth rates have remained under 2%.

Figure 7: Greece's GDP evolution 2008–2018 in market price-based euro (Liargovas, 2023)

Cohesion Funding in Greece: Cohesion funding makes up a large share of the Greek economy, 2.2% of GDP compared with 0.5% across the EU-28 states. This substantial percentage highlights Greece's critical dependence on European Union funds for regional development and how pivotal these funds are to its weak economy. For the second NSRF period, Greece received a total of 15 billion euros from the European Union. This is lower than the allocations of countries such as Poland or the Balkan EU-13 states because Greece, previously one of the most financially challenged states in the EU, became relatively less low-income as the EU's average income declined with the entry of the new states. Poland, for example, which entered the EU in 2004, has the largest allocation, at around 80 billion euros of NSRF funding. As part of the Lisbonization of this funding scheme, more funding was geared toward structural improvement projects. Yet despite the decrease in funding caused by this statistical effect of enlargement, Greece continued to achieve infrastructural development at about the same rate as in the previous period.
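The funding-intensity comparison above is simple arithmetic; the toy sketch below restates it in Python so the magnitudes are explicit. The annual GDP figure is a rough placeholder (it does not come from this paper), used only to make the calculation concrete.

```python
# Figures quoted in the text.
greece_intensity, eu28_intensity = 0.022, 0.005   # cohesion funding / GDP
nsrf2_envelope_bn, years = 15.0, 7                # 2014-2020 total, bn EUR

# Hypothetical annual Greek GDP (placeholder, bn EUR) for a rough cross-check.
greek_gdp_bn = 180.0
implied_annual_intensity = nsrf2_envelope_bn / years / greek_gdp_bn

print(f"Greece vs EU-28 intensity: {greece_intensity / eu28_intensity:.1f}x")
print(f"Implied annual NSRF II intensity: {implied_annual_intensity:.1%}")
```

Under the placeholder GDP, the implied NSRF II intensity (about 1.2%) is lower than the 2.2% quoted above, which plausibly also reflects national co-financing and other funds; the sketch is only meant to show the form of the calculation.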



Additionally, as discussed in the historical background, the EC's demands for rationalization of the public administration accompanied this new funding period. However, as the analysis below will show, this program merely changed the visible structure of the public administration and failed to reform the fundamental, underlying causes of inefficiency, including top-down policy planning, corruption and corporatism, and clientelist practices throughout the state. As the funding periods progressed from the IMP through the three CSF frameworks, the budget increased meaningfully: from 2.1 billion in the initial 1986–1989 period to 14.3 billion in the 1st CSF, 29.7 billion in the 2nd CSF, and 44.4 billion in the 3rd CSF. National public participation and private participation also increased significantly in the later programs as their structure was reformed.

Figure 8: Co-financed development programs in Greece 1986–2006 (Liargovas, 2023)

Fiscal Budget in Greece: From 2009 to 2015, Greece consistently performed worse on fiscal balance than other Eurozone countries, running a budget deficit roughly two to three times larger in proportion. After 2016, Greece attained a budget surplus of 0.2%, rising to 0.6% in 2017 and 0.9% in 2018. This illustrates how Greece balanced its budget by trimming and reforming its excessive, inefficient, and clientelist welfare regime under the bailout restructuring agreement with the Union.
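A minimal sketch of the comparison behind Figure 9, assuming series shaped like Eurostat's general-government balance data (% of GDP). Greece's 2016–2018 values are the ones quoted above; all other numbers are illustrative placeholders, not actual observations.

```python
import pandas as pd

balance = pd.DataFrame({
    # General-government balance, % of GDP. Placeholders except GR 2016-18.
    "Greece":   {2009: -15.1, 2012: -9.0, 2015: -6.0, 2016: 0.2, 2017: 0.6, 2018: 0.9},
    "Eurozone": {2009: -6.2,  2012: -3.7, 2015: -2.0, 2016: -1.5, 2017: -0.9, 2018: -0.5},
})

# Ratio of deficits in years when both run deficits: the "two- to
# three-fold" gap described in the text.
both_in_deficit = balance[(balance["Greece"] < 0) & (balance["Eurozone"] < 0)]
print((both_in_deficit["Greece"] / both_in_deficit["Eurozone"]).round(1))
```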

Figure 9: Evolution of fiscal balance in Greece and the Eurozone (Eurostat)

Reform of CSF Governance Structure

As part of the Commission-led reforms of public administration, the Greek government consolidated the local administrative structure to simplify and streamline a sprawling local bureaucracy whose institutional capacity to implement cohesion fund projects was very weak. Across all 15 regions, local government mergers consolidated 1,151 governments into 246 newly merged OTAs, and Law 2539/1997 consolidated 5,823 municipalities and community units into 1,033. With the cooperation of these OTAs, the Greek government created development associations and area councils to institute a new framework of cooperation centered on regional rather than national authorities. In total, 6,385 development associations and 492 area councils were established for this purpose.



Additionally, motivated by a similar rationale, the Greek government, with the European Commission, cut the number of public servants by 38%, thereby conserving 38% of personnel-related administrative costs. Greek public administration had initially been overstaffed with appointees politically connected to one of the two dominant parties in a clientelist arrangement. This produced overcrowded public services without the corresponding gains in efficiency and capacity that such large personnel expenditures should yield.

Figure 10: Voluntary Local Government Mergers (Liargovas, 2023)

Outcomes of cohesion policy in Greece – Economic Growth: GVA growth has consistently underperformed GDP growth, a discrepancy that is even more pronounced in comparison with other European Union countries: over the 2000–2006 period, Greece's GVA growth per worker was 0.03 while its GDP growth was 0.04.
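As a toy illustration of why this gap matters, the sketch below compounds the two per-worker growth rates quoted above. The rates are taken from the text; the seven-year horizon is an assumption matching the 2000–2006 programming window.

```python
# Per-worker growth rates quoted in the text.
gva_growth, gdp_growth = 0.03, 0.04
years = 7  # assumed horizon, matching the 2000-2006 programming period

# Cumulative divergence between output and value added per worker.
gap = (1 + gdp_growth) ** years - (1 + gva_growth) ** years
print(f"Cumulative per-worker gap after {years} years: {gap:.1%}")  # ~8.6%
```

A one-percentage-point annual shortfall in value added thus compounds into a roughly nine-point cumulative gap over a single programming period.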

Figure 11: GDP vs GVA Growth per worker over 2000-2006 period (Pontarollo, 2017)



Outcomes of cohesion policy in Greece – Direct Returns: Greece obtained 50.17 million euros in direct returns; however, its contracts-obtained index, at 0.8%, was far below the EU-15 average of 85.5%, while its capital-ownership share, at 99.2%, was far above the EU-15 average of 14.5%. Rather than obtaining contracts or investing in future-oriented projects, Greece predominantly favored acquiring capital, that is, physical infrastructure. In another study, capital ownership accounted for about 35% of all of Greece's direct and indirect returns, in contrast to close to 5% in other EU countries. Conversely, indirect benefits, which account for the predominant majority of returns in other countries, were limited to only 60% in Greece.
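The sketch below simply tabulates the composition figures quoted above so the contrast is explicit; the values are from the text, and only the tabular layout is illustrative.

```python
import pandas as pd

# Direct-return composition, % of direct returns (figures from the text).
returns = pd.DataFrame({
    "contracts_obtained": {"Greece": 0.8, "EU-15 avg": 85.5},
    "capital_ownership":  {"Greece": 99.2, "EU-15 avg": 14.5},
})
print(returns)
# Greece's direct returns are almost entirely capital ownership (physical
# infrastructure) rather than contracts tied to future-oriented projects.
```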

Figure 12: Direct Returns resulting from the implementation of Cohesion Funding (Polverari, 2014)

Outcomes of cohesion policy in Greece – Trade: Cohesion policy is very often associated with rising production and, subsequently, rising exports as industrial capacity and national infrastructure improve. In Greece, however, the increase in exports is nearly non-existent, and Greece ranks lowest among all EU member states in this regard. Cohesion policy failed to create meaningful improvement in the Greek balance of trade.

Figure 13: Additional exports of EU15 to V4 countries as a result of Cohesion policy funding 2004-2015 (Polverari, 2014)



Political Data and Program Reforms

From the 1990s to 2010, Greece maintained its distinctive political culture of two-party governance, sustained by a 3% electoral threshold and a 50-seat bonus granted by law to the party winning the most votes. This favored the two large parties, PASOK and New Democracy (ND), at the ballot box, and the two consistently exercised considerable power not only over politics but also over society. Of the Parliament's 300 seats, the two parties have always held a predominant share, despite the recent rise of SYRIZA. Greek public sentiment placed the blame for poor economic governance and extensive clientelism on PASOK and ND.

Figure 14: The parliamentary strength of the two dominant parties 1990–2009 (Liargovas, 2023)

Policy Networks

The General Secretariat for Investment and Development (GSID), originally part of the MNE, has played a crucial role in implementing Greece's cohesion policy since its inception. Over the years, it has emerged as the central authority within the cohesion policy network, entrusted with overseeing the distribution and management of EU funds aimed at mitigating regional disparities. The significance of this institution has only increased with time, as it continues to harmonize national priorities with EU directives, as illustrated in Figure 15.

Integrated Mediterranean Programs (1986–92): The IMP period from 1986 to 1992 marked a pivotal phase for Greece as the country adjusted to the demands and expectations of EU membership. During this time, the European Union encouraged Greece to establish a range of specialized institutions to better manage the complexities of EU-funded projects. Among these were:

(1) Regional Development Agencies: agencies responsible for the management and implementation of the IMPs, playing a crucial role in coordinating regional projects, managing funds, and ensuring that the objectives of the IMPs were met effectively.

(2) Specialized Monitoring and Evaluation Bodies: bodies created to comply with EU requirements by monitoring and evaluating the progress of the IMPs, keeping projects on track, and reporting to the European Commission on the utilization of funds and the impact of the programs.

(3) Coordination Units within Ministries: units within the relevant ministries that collaborated closely with the European Commission and other EU entities to align national policies with the goals of the IMPs, ensuring that implementation adhered to EU guidelines and facilitating communication between national and EU authorities.

The 1st Community Support Framework (1989–1993): The introduction of the 1st CSF was a milestone in Greece's relationship with the European Union, marking the first structured approach to EU-funded development projects within the country. During this period, the GSID continued to be a key player, guiding the implementation of EU policies and ensuring that Greece could effectively absorb the financial support provided by the EU.



However, this period was largely characterized by the GSID's and the MNE's focus on maximizing the absorption of funds, with less emphasis on the efficiency and effectiveness of program management and monitoring. The priority was to ensure that Greece could access and utilize as much EU funding as possible to address its development needs, particularly in infrastructure and regional development. While this approach succeeded in channeling substantial financial resources into the country, it also highlighted the need for more robust administrative systems and oversight mechanisms, which would become more prominent in subsequent frameworks. The experience gained during the 1st CSF laid the foundation for improvements in how Greece managed and implemented EU-funded programs, setting the stage for more sophisticated and effective frameworks in the years to come.

The 2nd Community Support Framework (1994–1999): The 2nd CSF represented a period of significant institutional development for Greece as it responded to increasing EU demands to improve the management and oversight of EU-funded programs. Most of the key institutions within Greece's core policy network were established during this time, driven by the European Commission's insistence that Greece enhance its administrative capacity. The EU required Greece to create a host of new institutions focused on better management, monitoring, and evaluation of EU co-financed projects, tasked with ensuring that EU funds were used effectively and that projects met the rigorous standards set by the Commission. The 1999 reform of cohesion policy was particularly influential, as it mandated efficient management systems capable of ensuring transparency, accountability, and results-oriented implementation. The Greek government accordingly negotiated new management structures with the Commission, resulting in each Operational Programme being managed by a "special service" within the relevant ministry or region. These changes marked a shift toward a more systematic and professional approach to managing EU funds, reflecting a growing understanding within Greece of the importance and benefits of meeting EU standards. The reforms of this period laid the groundwork for even more extensive changes in the following years, as Greece continued to align its policies and practices with those of the EU (Andreou, 2006: 251–2).

The 3rd Community Support Framework (2000–2006): In 2000, Greece introduced legislation (L. 2860/00) that strengthened the institutional framework for the 2000–2006 period. The General Secretariat was expanded, and new Managing Authorities were set up within various ministries and regions. Monitoring committees, composed of national, regional, and social representatives, were established to oversee these programs. This period saw the creation of a network focused on integrated planning and strict management, though challenges such as political influence and uneven performance among regional authorities remained (Andreou, 2010).
The National Strategic Reference Framework (2007–2020): During the 1st NSRF period in 2007–2013, which led into the 2014–2020 framework, the EU aligned its cohesion policy with the Lisbon Strategy, introducing new governance structures and greater centralization. NSRFs replaced the older Community Support Frameworks, with a focus on achieving Lisbon goals. Despite government claims, these changes made decision-making more complex and increased bureaucratic burdens, complicating the efficient implementation of programs, as can be seen in Figure 15 (Andreou and Lykos, 2011; Andreou and Papadakis, 2012). Pivotal changes were made to the policy network for cohesion policy implementation: instead of the CSF Managing Authorities and Monitoring Committees centralized at the MNE level, managing authorities were created at the level of the 13 regional OPs and the national OPs.



Figure 15: Greek Cohesion Policy Networks (Liargovas, 2015)

Outcomes

Digital Economy and Society Index: Compared with the EU average, Greece scored consistently lower on the general index and on the human capital, connectivity, integration of digital technology, and digital public services indices. This indicates that neither Greece's technological infrastructure nor its human capital is adequately prepared for an information- and knowledge-based economy.

Figure 16: Digital Economy and Society Index 2022 (Liargovas, 2023)

Program focus shifted across the different programs. The Lisbonization that accompanied the transition from the CSFs to the NSRFs ensured that EU programs across many countries pursued the same goals, prompting Greece to adopt the EU-2020 goals and reprioritize its areas of focus in the OP selection process. A continued concern, visible in the data that follow, is that while Greece invests extensively in physical infrastructure, it neglects soft projects such as education, research and development, and the development of human capital. Infrastructure and industry consistently received over half of all funding, while these soft areas were consistently among the least funded in Greece's cohesion policy. Even as priorities shifted across programs, the neglect of these important thematic areas of soft funding persisted, and hard funding projects continued to claim the bulk of cohesion policy funds.



Figure 17: Evolution of Financial Allocations by Category of Intervention (Liargovas, 2015; Pontarollo, 2017)

Based on the Europe 2020 headline areas of employment, R&D and innovation, climate change and energy, education, and poverty and social exclusion, the Greek government set headline national targets for NSRF II. Figure 18 shows Greece's baseline levels as of 2010 (Karvounis, 2015), the target levels for the end of the NSRF II period, and the current levels as of 2024 (Eurostat, 2024; World Bank, 2024).
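One natural way to read a baseline/target/current table like Figure 18 is to compute the share of the baseline-to-target distance actually covered. The sketch below shows that computation; the indicator names and values are hypothetical placeholders standing in for the figure's real entries.

```python
import pandas as pd

# Placeholder headline indicators (the real values appear in Figure 18).
targets = pd.DataFrame({
    "baseline_2010": {"employment_rate": 64.0, "rd_spend_pct_gdp": 0.6},
    "target_2020":   {"employment_rate": 70.0, "rd_spend_pct_gdp": 1.2},
    "current_2024":  {"employment_rate": 66.0, "rd_spend_pct_gdp": 1.5},
})

# Fraction of the baseline-to-target distance covered by 2024
# (values above 1.0 mean the target was exceeded).
progress = (targets["current_2024"] - targets["baseline_2010"]) / (
    targets["target_2020"] - targets["baseline_2010"]
)
print(progress.round(2))
```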



Figure 18: NSRF II Headline targets (2020) compared with current (2024) and baseline levels (2010)

The Stage youth employment program illustrates a problem common to many of Greece's programs: instead of creating long-lasting structural improvements, such programs are geared toward political objectives rather than the interests of the state. The majority of new jobs and educational programs created during the funding periods have been government-funded rather than private-sector positions. This not only deepens the country's dependence on EU funding for daily operations but also contributes to the sustained failure to reform such programs fundamentally.

Figure 19: Stage Program Postings by year (Ioannidis, 2015)



Cohesion Indicator (Janning, 2018): Greece's funding indicator is strong, but its resilience indicator is weak. While Greeks tend to hold a negative view of the European Union, they do recognize the EU's successes in fostering integration. The Greek population's personal connection to the EU, in terms of engagement and attitude, has diminished significantly due to a prolonged loss of confidence. Despite a notable increase in interactions between Greeks and other Europeans and a rise in Greece's Approval indicator, these gains could not offset declines in other areas. In the EU Cohesion Monitor ranking, Greece maintained 16th place in structural cohesion between 2007 and 2017, but its individual cohesion ranking dropped from 14th to 26th during the same period. The financial crisis and subsequent migration crisis led to a sharp decline in Greece's individual cohesion with the EU.

ANALYSIS

Historical Context and Centralized Governance

Greece's troubled policy administration is not merely a contemporary phenomenon but is deeply embedded in the country's institutional history. Greece's centralized governance structure can be traced back to its modern state formation, when the central government retained tight control over regional and local authorities (Liargovas, 2015). This centralization was reinforced through the power struggles of the 20th century, particularly in periods of political instability, when the national government sought to consolidate power and maintain control over resource distribution (Andreou, 2010). This historical legacy has profoundly shaped how Greece manages its EU cohesion funds.

The various reforms of Greece's public administration, including the shifts across the 1st, 2nd, and 3rd Community Support Frameworks (CSFs) and the move to the National Strategic Reference Framework (NSRF), were intended to address these centralized structures by transferring more responsibilities to regional authorities. However, the reforms had limited success in altering entrenched centralization and clientelism, which continued to shape the absorption and implementation of EU cohesion funds. The persistence of centralized decision-making hindered the flexibility and responsiveness needed for effective fund absorption and implementation at the regional level; as a result, the reforms did not achieve their intended outcomes of enhancing regional autonomy and ensuring a more strategic allocation of resources. The highly centralized administration means that key decisions on the allocation of these funds are made at the national level, with minimal input from regional or local authorities. This has produced a uniform, top-down approach to development that prioritizes large-scale infrastructure projects, which are easier to manage and control from the center. Chardas (2012) argued that the lack of autonomy for local governments in Greece significantly limits their ability to steer fund allocation toward region-specific needs, further entrenching the centralized approach in Greece's political culture. Centralized control of EU funds has created a path dependency in which the focus on infrastructure projects has become the norm, driven both by historical precedent and by the ongoing need to demonstrate effective absorption of EU funds to political constituents and groups within the patronage network. This focus on visible, tangible projects has been perpetuated over time (Aggelakis, 2007), even as the underlying needs of the economy and society have evolved. This section draws on a range of sources and literature to examine why decades of cohesion policy reforms failed to address these fundamental weaknesses.

Clientelism and Its Impact on Development Strategy

Clientelism in Greece is another historically entrenched institution that has significantly shaped the country's development strategy. Historically, the Greek political system has been characterized by clientelist networks in which political support is secured through the distribution of state resources and favors.
This system has deep roots in Greece's political culture and has been reinforced by successive governments, making it a persistent feature of the country's governance structure (Andreou, 2010).



Such a political culture of clientelism has created an environment in which policymakers are incentivized to focus on projects that deliver immediate, visible benefits to specific constituencies. Infrastructure projects, particularly in transportation, are highly visible and easily presented to voters as concrete achievements. The persistence of clientelism further undermined the NSRF's decentralization efforts: although the transfer of managing authorities to the regional level during the 1990s and early 2000s was intended to foster a more region-specific, needs-based allocation of EU funds, clientelism continued to influence the distribution of resources, limiting the effectiveness of reforms meant to promote more equitable and strategic use of the funds. Clientelism in Greece operates through networks of political loyalty and patronage, in which resources are allocated on political rather than strategic grounds. This politicized dimension was analyzed by Aggelakis (2007), who found that funding allocations to regional prefectures increased in the years before national elections, supporting the argument that public financing is a politically loaded instrument closely linked to the Greek clientelist network. Additionally, the rationalization process of the transition to the NSRF, undertaken without sufficient investment in building institutional thickness in regional authorities, left regional administrations increasingly dependent on central authorities for guidance. This not only entailed a regional sclerosis of funding distribution but also led to far greater centralization than before, as the new system presented implementation barriers to regional authorities (Aggelakis, 2005). Under the NSRF, regional leaders, now in control of managing authorities, found themselves under pressure to deliver visible results to secure political support, much like their predecessors at the national level. Infrastructure projects, particularly in transportation, continued to be favored because they provided tangible, immediate benefits that could easily be showcased to constituents.

The Ongoing Legacy and Challenges of Centralization

The shift from the Community Support Framework (CSF) to the National Strategic Reference Framework (NSRF) was intended to bring about more strategic, effective, and decentralized management of EU cohesion funds in Greece. However, the historically centralized nature of Greece's policy administration remained a significant barrier to this transition, undermining the intended reforms. Greece's governance structure has long been characterized by centralization, with the national government holding tight control over decision-making on the allocation of EU funds. This centralization, deeply rooted in the country's institutional history, meant that even under the NSRF, designed to promote regional autonomy and strategic alignment with EU objectives, much decision-making power remained concentrated at the national level. This lingering centralization persisted partly because, although the managing authorities were formally transferred to the regional level, many of them lacked the experience, resources, and institutional capacity to manage such significant responsibilities effectively.
As a result, they often continued to rely heavily on guidance and oversight from the central government, which diluted the intended effects of decentralization. Moreover, the historical path dependency of centralized control meant that regional authorities were not fully empowered to diverge from the central government's priorities, perpetuating the focus on large-scale, visible infrastructure projects. Liargovas (2015) argues that Greece's development model, shaped by these historical institutional factors, has left the country with modern infrastructure but without the corresponding human capital and innovative capacity needed to sustain economic growth. This path dependency has locked the country into a self-reinforcing cycle in which infrastructure investment continues to dominate, even as the economy's needs shift toward more knowledge-intensive industries at the onset of the AI and automation era. Despite various opportunities for reform, such as during the economic crisis of the late 2000s, the entrenched nature of clientelism and centralized control has largely prevented a significant shift away from the focus on infrastructure.



Instead, these practices have persisted, continuing to shape the allocation of EU funds in ways that prioritize visible achievements over sustainable development. The result has been a failure to fundamentally grow and reform the economy for competitiveness in the modern knowledge-based economy. The analysis thus shows that reforms of Greece's public administration have been insufficient to overcome the deep-seated problems of centralization and clientelism, which continue to affect the absorption and implementation of EU cohesion funds.

CONCLUSION

Greece has been a significant beneficiary of European Union (EU) cohesion and structural funds, intended to reduce regional disparities and promote economic convergence across Europe. However, rather than fostering balanced growth, Greece has predominantly directed these funds towards infrastructure projects, especially in transportation, while underinvesting in crucial areas like education, research, and human resources. This essay has explored this phenomenon by examining how Greece's deeply rooted centralized policy administration and clientelist political practices have historically shaped this funding strategy, leading to an overemphasis on visible infrastructure achievements at the expense of sustainable long-term development.

The literature review discusses the European Union's Cohesion Policy, which is integral to reducing regional disparities and promoting economic and social cohesion across EU member states. The review highlights Greece's unique position: heavily funded by the EU but struggling with issues like low resilience and decreasing public engagement, especially after the financial and migrant crises. The policy's evolution from a sectoral to a territorial approach is emphasized, with the transition to regional units and the increased involvement of local authorities in operational programs. In Greece, however, centralized power structures have hindered effective implementation. The review also touches on the historical context, including Greece's position during the Maastricht Treaty negotiations and the impact of external economic crises on its economy. It critiques the effectiveness of the Cohesion Funds in Greece, pointing out that institutional weaknesses, corruption, and clientelism have limited their positive impact. The literature identifies a gap in the analysis of cohesion changes at the regional level in Greece over the decades.

The historical background outlines the evolution of the EU's Cohesion Policy and its impact on Greece since its accession to the EU in 1981. It begins with the Treaty of Rome in 1957, which laid the foundation for reducing regional disparities. Greece's integration into the EU, alongside Portugal and Spain, widened economic and social disparities within the Union. The introduction of the Integrated Mediterranean Programs (IMP) in the 1980s marked a significant step in addressing these issues, followed by the Maastricht Treaty, which further entrenched cohesion as a key EU objective. The Community Support Frameworks (CSFs) from the late 1980s to the 2000s provided substantial funding to Greece, primarily for infrastructure projects. Despite these efforts, however, the centralized governance structure and lack of regional autonomy limited the effectiveness of these funds in fostering long-term economic development. The shift from the CSF to the National Strategic Reference Framework (NSRF) was intended to decentralize fund management, but the historical centralization of power in Greece continued to influence fund allocation, producing a focus on visible, short-term projects rather than sustainable development. Because many regional managing authorities lacked the experience, resources, and institutional capacity to exercise their new responsibilities, they continued to rely heavily on central government guidance, which diluted the intended decentralization and perpetuated the emphasis on large-scale, visible infrastructure projects.



REFERENCES

Andreou, George. "EU Cohesion Policy in Greece: Patterns of Governance and Europeanization." South European Society and Politics, vol. 11, no. 2, June 2006, pp. 241–59. DOI.org (Crossref), https://doi.org/10.1080/13608740600645865.
---. "The Domestic Effects of EU Cohesion Policy in Greece: Islands of Europeanization in a Sea of Traditional Practices." Southeast European and Black Sea Studies, vol. 10, no. 1, Mar. 2010, pp. 13–27. DOI.org (Crossref), https://doi.org/10.1080/14683851003606747.
Begg, Iain. The European Union and Regional Economic Integration.
Borrás, Susana, and Helle Johansen. "Cohesion Policy in the Political Economy of the European Union." Cooperation and Conflict, vol. 36, no. 1, Mar. 2001, pp. 39–60. DOI.org (Crossref), https://doi.org/10.1177/00108360121962254.
Boychuk, Gerard W. "'Studying Public Policy': Historical Institutionalism and the Comparative Method." Canadian Journal of Political Science / Revue Canadienne de Science Politique, vol. 49, no. 4, 2016, pp. 743–61.
Di Caro, Paolo, and Ugo Fratesi. "One Policy, Different Effects: Estimating the Region-specific Impacts of EU Cohesion Policy." Journal of Regional Science, vol. 62, no. 1, Jan. 2022, pp. 307–30. DOI.org (Crossref), https://doi.org/10.1111/jors.12566.
Geppert, Kurt, and Andreas Stephan. "Regional Disparities in the European Union: Convergence and Agglomeration." Papers in Regional Science, vol. 87, no. 2, June 2008, pp. 193–218. DOI.org (Crossref), https://doi.org/10.1111/j.1435-5957.2007.00161.x.
Giua, Mara, et al. "EU Cohesion Policy and Inter-regional Risk-sharing: First Evidence and Lessons Learned." JCMS: Journal of Common Market Studies, vol. 62, no. 1, Jan. 2024, pp. 142–67. DOI.org (Crossref), https://doi.org/10.1111/jcms.13483.
Janning, Josef. Crisis and Cohesion in the EU. 2018.
---. Making Sense of Europe's Cohesion Challenge. 2024.
Kalaitzidis, Akis, and Nikolaos Zahariadis. "Greece's Trouble with European Union Accession." Cahiers de La Méditerranée, no. 90, June 2015, pp. 71–84. DOI.org (Crossref), https://doi.org/10.4000/cdlm.7951.
Liargovas, Panagiotis, et al. Η Ευρωπαϊκή Πολιτική Συνοχής και η Ελλάδα (1981–2021) [European Cohesion Policy and Greece (1981–2021)]. 2023, p. 335. DOI.org (Datacite), https://doi.org/10.57713/KALLIPOS-369.
Liargovas, Panagiotis, et al. Beyond "Absorption": The Impact of EU Structural Funds on Greece. Dec. 2015.
Mazzucelli, Colette. France and Germany at Maastricht.
Musiałkowska, Ida, et al. Successes & Failures in EU Cohesion Policy: An Introduction to EU Cohesion Policy in Eastern, Central, and Southern Europe. De Gruyter Open, 2020. DOI.org (Crossref), https://doi.org/10.1515/9788395720451.
OECD. Regional Policy for Greece Post-2020. OECD, 2020. DOI.org (Crossref), https://doi.org/10.1787/cedf09a5-en.
Polverari, Laura. Balance of Competences: Cohesion Review: Literature Review on EU Cohesion Policy. Feb. 2014.



Steinmo, Sven. "Historical Institutionalism." Approaches and Methodologies in the Social Sciences, edited by Donatella Della Porta and Michael Keating, 1st ed., Cambridge University Press, 2008, pp. 118–38. DOI.org (Crossref), https://doi.org/10.1017/CBO9780511801938.008.
Tsoukalis, Loukas. Greece and the EU: A Turbulent Love Affair, Now More Mature?



Unveiling Gender Disparities: A Comprehensive Analysis of South Korea's Social and Policy Challenges and Their Significance for Gender Equality

Author
Full Name: Chung, Joylynn (Last Name, First Name)
School Name: Culver Academies

Abstract

Gender inequality persists across various domains, including education, healthcare, and economic opportunities, affecting social, political, and economic spheres. This paper reviews the current footprint of gender inequality, with a specific focus on its manifestations in South Korea, particularly examining issues such as the recent restructuring of the Ministry of Gender Equality and Family (MOGEF) and its impact. It also draws comparisons with international practices to provide a broader perspective. This paper addresses key issues including workplace dynamics, violence against women, and low birth rates, and proposes solutions informed by global approaches. By integrating personal observations, it offers actionable insights aimed at improving gender equality in South Korea. The analysis underscores the broader significance of these efforts for achieving social justice and highlights the need for effective policy interventions to address these disparities.

Keywords: Gender Equality, Society, Policy, South Korea



1. Introduction: Unveiling Gender Disparities

In contemporary societies, gender inequality remains a pervasive issue impacting fundamental human rights. This issue manifests differently depending on the region, with notable disparities in how it is addressed. For instance, Nordic countries such as Sweden and Norway have made significant progress in reducing gender disparities through comprehensive policies and social frameworks. In contrast, many regions in Asia and Africa continue to grapple with deeply entrenched cultural norms and limited legislative progress. These differences underscore the universal nature of gender inequality, despite the varied circumstances and responses across continents.

In South Korea, gender inequality is particularly pronounced due to longstanding cultural practices and legal frameworks that disadvantage women. Traditional Confucian values and contemporary socioeconomic factors, including an inadequate social welfare system and demographic shifts such as population aging and family dissolution, have exacerbated gender disparities and contributed to rising family income inequality (Kwon & Doellgast, 2018; Shin & Kong, 2014). South Korea also faces high rates of sexual violence, low birth rates, and elevated suicide rates, all intricately linked to gender inequality. Compounding these issues, the government body responsible for gender equality, the Ministry of Gender Equality and Family (MOGEF), is undergoing significant restructuring and faces potential dissolution. This contrasts sharply with global trends, where many countries are strengthening their commitments to gender equality through more robust administrative support.

To identify the problems and risks that South Korean society faces, this paper aims to examine the current situation of gender inequality both within South Korea and in a comparative context. The goal is to provide insights for citizens and policymakers to address these issues more effectively. The paper reviews the general status of gender inequality, with a specific focus on workplace issues in South Korea and other Asian countries. It also explores the development of women's participation in business leadership and addresses critical social issues closely related to gender inequality, such as violence against women and low birth rates. Additionally, the paper discusses early childhood education as a potential remedy for gender inequality and evaluates the ongoing debate regarding the abolition of MOGEF. Finally, it offers personal observations and recommendations for addressing the emerging challenges related to gender inequality in South Korea.

2. Gender Inequality Across Asian Contexts

The Footprint of Gender Inequality

The World Health Organization (WHO) defines sex as "the biological and physiological characteristics that define men and women." In contrast, gender is described as the "socially constructed roles, behaviors, activities, and attributes that a given society considers appropriate for men and women." Unlike sex, which is biologically determined, gender is shaped by societal expectations and roles assigned to individuals based on their sex. This distinction is crucial as it highlights that gender roles are not fixed but are constructed and perpetuated by societal norms and expectations. Gender inequality occurs when men are afforded higher status and greater control over various aspects of life compared to women. This imbalance is rooted in deeply ingrained biases and misconceptions that perpetuate the notion that women are inherently inferior and therefore undeserving of equal treatment (UN Women, 2021). The United Nations defines gender equality as a fundamental human right and a critical component for achieving a more just and equitable world. Gender equality is the fifth of the UN's seventeen Sustainable Development Goals (SDGs). However, despite its importance, progress toward achieving gender equality remains slow. At the current pace, the world will not achieve gender equality by the target year of 2030 (UNSD, 2023). Although there has been a global increase in women's representation in managerial positions by approximately 3%, significant economic disparities persist, with about 2.4 billion women still facing barriers to full economic participation (World Bank, 2022).



Gender inequality is deeply rooted in cultural traditions and stereotypical beliefs about gender roles. Certain industries, such as construction, are often associated with men, while others, like education, are stereotypically linked with women. These stereotypes contribute to a persistent gender divide in various sectors. While there has been some global progress in reducing gender disparities, many countries, particularly in Asia, continue to experience stagnation or even an increase in gender inequality. For instance, in the Middle East, severe restrictions on women's public visibility, including strict dress codes such as mandatory veiling, exemplify how deeply entrenched cultural norms can impact gender equality. In South Korea, the situation reflects a complex interplay between cultural norms and institutional factors. Data from the OECD show a significant increase in female enrollment in higher education, which surpassed male enrollment by 5% in 2013. This progress in educational attainment is a positive development; however, gender inequality in the workplace remains a significant issue. South Korea ranked 118th out of 144 countries in terms of gender gaps in economic and political participation and empowerment (World Economic Forum, 2017). Moreover, the OECD reports that South Korea has the highest gender wage gap among its member nations, with women earning approximately 30% less than men (Yang, 2021). This disparity is largely attributed to cultural factors, including entrenched traditional gender roles and expectations that continue to influence women's opportunities and earnings in South Korean society.

Comparative Insights: Gender Inequality Across Asian Contexts

The concept of hegemonic masculinity, as defined by Ma et al. (2021), refers to the belief that men inherently hold a more dominant social status than women. This notion reinforces the power dynamics that place men in superior positions across various aspects of life. Benevolent sexism, as described by Hideg and Shen (2019), involves seemingly positive comments and beliefs about women performing traditional gender roles. While these comments may appear complimentary, they effectively reinforce gender stereotypes by confining women to specific roles and limiting their opportunities. Although hegemonic masculinity emphasizes the dominance of certain men, particularly those in high-ranking positions, it simultaneously relegates other men and all women to subordinate roles. In East Asian countries like China, Japan, and South Korea, hegemonic masculinity manifests distinctly, influenced by deep-seated cultural traditions and hierarchical social structures. Traditionally, East Asian cultures have placed significant value on Confucian beliefs, which stress strict social hierarchies and roles within organizations and family dynamics (Kinias and Kim, 2012). These cultural norms prioritize maintaining social harmony over achieving equality. The differences in attitudes towards gender equality between Eastern and Western cultures are notable. In Western countries, such as the United States and those of Europe, the belief that women are inherently inferior to men is generally less prevalent than in Eastern countries. For example, studies (Kinias and Kim, 2012) reveal that Japan and South Korea exhibit higher levels of gender inequality than the United States and Europe, reflecting these regional differences in cultural norms and gender perceptions.
Research indicates that women frequently face biases related to expectations about their leadership roles. The "think-leader, think-male" phenomenon, where leadership attributes are stereotypically associated with men, creates barriers for women aspiring to leadership positions (Bullough et al., 2022). Women are often perceived as less suitable for leadership roles due to stereotypes that associate femininity with traits such as communal and compassionate behavior, as opposed to the assertive and dominant traits linked with masculinity. In East Asia, where Confucian practices remain influential, these gender biases are even more pronounced, further hindering women's progress in leadership and professional arenas. Despite significant strides in recent years, Asia still grapples with entrenched gender disparities rooted in Confucian traditions. South Korea, while experiencing economic growth and increasing women's participation in the workforce, continues to face challenges due to the lingering influence of Confucian ideologies. These beliefs promote the subordination of women and restrict their social and professional opportunities (Stainback and Kwon, 2012). The traditional Confucian view that men are the primary



earners and women should be subordinate remains prevalent, contributing to the gender gap in job opportunities and career advancement (Craddock, 2022). In South Korea, Confucian ideals deeply impact both workplace dynamics and family life, resulting in clearly defined gender roles. Cho et al. (2019) have described this phenomenon as the "glass fence," where women are expected to remain at home while men pursue careers. Despite progress in women's economic roles and challenges to traditional Confucian beliefs, South Korea remains one of the least favorable countries for working women due to a high glass ceiling index (Cho et al., 2021). The prevalence of hegemonic masculinity is also reinforced by mandatory military service for South Korean men, which contributes to the perception of men as strong and powerful leaders. Furthermore, the dominance of Chaebol families, which often exclude women from high-ranking positions, exemplifies how traditional masculine values continue to shape South Korean society and perpetuate gender inequality (Ma et al., 2021).

Women's Leadership: Challenges and Pathways

The influence of patriarchal norms and male dominance in countries like China, Japan, South Korea, and India profoundly affects women's roles in families and societies, directly impacting their employment opportunities (Cooke, 2010). In South Korea, deeply ingrained cultural perceptions assert that women should prioritize domestic responsibilities over professional careers, reinforcing the notion that "men's jobs" are unsuitable for women (Ma et al., 2021). These traditional gender roles create a significant divide between men, who are expected to hold high-paying and leadership positions, and women, who are often confined to household duties. This cultural divide is compounded by racial discrimination: burdened with the dual identities of being female and Asian, Asian women face an added layer of complexity in their struggles to attain leadership roles in professional environments. South Korea's corporate culture, heavily influenced by collectivism and military values, perpetuates a significant gender imbalance. Despite women constituting about 40% of the workforce, they remain underrepresented in executive positions. According to the World Economic Forum (2017), South Korea ranks 118th out of 144 countries in terms of gender gap in economic participation, education, health, and political empowerment. Among the 29 Organization for Economic Co-operation and Development (OECD) countries surveyed, South Korea has ranked last continuously since 2016 in The Economist's (2023) glass-ceiling index. This index evaluates factors such as higher education, labor force participation, pay, childcare costs, maternity and paternity rights, business school applications, and representation in senior jobs. Patterson and Walcutt (2016) provide a detailed analysis of the persistent gender discrimination in South Korean workplaces, despite decades of gender policy reforms and improved education for women. Their study identifies several key factors contributing to this issue: inadequate legal enforcement, a weak punishment system, tacit acceptance of the status quo by women, organizational cultural issues rooted in traditional Korean attitudes, and a general lack of knowledge about equal opportunity regulations among companies.
Addressing these issues through comprehensive reforms could significantly reduce gender discrimination and enhance workplace equality in South Korea. Meanwhile, new businesses started by women entrepreneurs have more than tripled, from 693 in 2008 to 2,430 in recent years, though they still account for only 6% of the 35,187 startups recorded in 2017, according to the South Korea Venture Business Association. Startups in the IT industry represent 17.5% of these 2,430 businesses, but only a select few survive beyond five years (Cho et al., 2021). There is a stark difference in the representation of women in multinational corporations compared to fully South Korean companies: multinational firms in South Korea have up to 60% women executives, while South Korean companies have only 2.7% (Cho et al., 2021). Additionally, of the 246 board members at 31 large companies, only 4 were women, and 27 other companies did not have a single woman on their board (Cho et al.,



2021). This disparity underscores the ongoing challenges women face in attaining leadership positions within South Korea's corporate sector. Recent developments indicate a shift in the South Korean corporate landscape, particularly in the food and distribution industry. Female leaders in these sectors are driving change through performance, expertise, and innovative leadership. This industry, traditionally more conservative, is experiencing transformation due to the leadership of women who bring new perspectives and approaches. Companies that place female management leaders at the forefront are benefiting from improved management practices and adaptability to market trends. This trend highlights the potential for breaking traditional gender norms and underscores the importance of supporting women's leadership development to foster innovation and growth in South Korea's business environment.

Gender Equality and Violence Against Women

To achieve gender equality, discrimination against women and the barriers that prevent them from being fully equal with men must be eliminated, and the rights of women must be recognized and promoted. One of the most pervasive barriers is violence against women (Whaley, 2001). Gender inequality, in the form of power imbalances, traditional gender norms, stereotypical hierarchies, and women's low status in society, is a principal cause of violence against women. However, violence against women is not only a consequence of gender inequality: by exploiting the power imbalance between men and women, it widens the gap in social equality while reinforcing women's traditionally lower status. Violence against women reinforces traditional gender norms and perpetuates the patriarchal notion that men have an inherent right to control women. One of the most important ways to prevent violence against women is therefore to achieve gender equality. Even the UN General Assembly, in the 1993 Declaration on the Elimination of Violence Against Women, recognized traditional patriarchal hierarchies as a cause of violence against women. Not surprisingly, wartime displacement and segregation heavily affect sexual violence rates: wartime and post-war societies, in which long-standing social hierarchies and traditional gender norms crumble, see steep increases in sexual violence. For instance, during the Yugoslav conflict, 20,000 women were reported to have been raped; during the Rwandan genocide, reports put the number of women raped upwards of 400,000. This is a recurring pattern, with similar statistics appearing in other major world conflicts, such as those in Kashmir, Peru, and Sudan, as well as the 1990s conflict in Somalia. According to a 2011 World Bank study, 50 different countries saw major increases in sexual violence after conflicts such as civil war (Buvinic, 2013). In South Korea specifically, sexual violence against women has been a continuing social problem since the Japanese colonial period and World War II. Violence against women, or "systematic sexual violence[s]" as Armstrong et al. (2018) put it, committed by soldiers is often left out of history books and thus forgotten. Violence against women is closely tied to gender inequality not only in underdeveloped countries but also in developed countries, including the United States and South Korea.
A 2020 study of the relationship between traditional gender norms and sexual violence rates in the U.S. (Kearns et al., 2020) found that states with higher gender inequality also have higher estimated rates of rape and physical violence against women. This suggests that gender inequality is an important factor in sexual violence rates, at least in societal environments. Historically in the United States, laws treated women as the property of white men, giving women no standing to speak up about sexual violence. Although this is no longer the case, it was only in the late 1920s and early 1930s that the age of consent was raised to 18 or 19, and for a long time judges would not prosecute a father who sexually assaulted a child (Freedman, 2013). In various cultures, sexual violence has been a long-standing means of maintaining power imbalances between men and women (Armstrong et al., 2018). Social media has severely trivialized sexual violence (Gansen, 2017) and exposed young boys to jokes about rape and other sexual assault, thus making



boys think that it is acceptable to "assert their masculinity" through sexual violence (Armstrong et al., 2018). In essence, gender inequality has been shown to be an important factor wherever violence against women increases. In both 2005 and 2010, the WHO reported that in order to reduce and prevent violence against women, countries should first reduce gender inequality (Wall, 2014). It is therefore vital to consider each country's traditional gender norms and patriarchal structures. Although direct evidence tying gender inequality to violence against women remains limited, the highly unequal distribution of victims, who are mostly female, and perpetrators, who are mostly male, points to a clear link between the two (Wall, 2014).

Demographic Impacts: Gender Equality and Low Birth Rates

Social demographers have assumed gender inequality to be one of the factors shaping fertility rates in both developed and developing countries. In the 20th century, most researchers thought that as women gained more access to quality education and work opportunities, fertility rates would drop because women would prefer to have fewer children. Consistent with that assumption, countries with lower gender equality showed higher fertility rates, and researchers found what appeared to be a strong relationship between gender equality and birth rates, with increased gender equality leading to lower birth rates (Lesthaeghe, 2014). In East Asia especially, where work demands are among the most extreme, a large share of working women are simply choosing not to marry at all. In Western European countries, many studies show increased birth rates in regions that have implemented policies expanding childcare for working women and male participation in parental leave. Inequality between individual responsibilities, namely education and workforce participation, and familial or group responsibilities, namely childcare, leads to lower fertility rates: if the gap widens, for instance when women's workforce participation increases while childcare responsibility remains on women's side, fertility rates can only drop (Neyer and Rieck, 2009). Recently, however, researchers have argued for a "U-shaped curve" in the relationship between gender equality and fertility rates. They argue that as societies become more gender-equal, fertility rates are bound to fall at first because women are no longer restricted to household roles; as time passes, however, increased gender equality raises fertility rates because women feel more encouraged to have children in a more supportive society. In developed countries, higher gender equality may thus lead to higher fertility rates, though this remains an assumption, as the evidence is limited and the argument is still contested (Kolk, 2019). In developed European countries, fertility rates dropped during the late 20th century before rising again in the early 21st century. Many studies credit this rise to increased gender equality and egalitarian work-family policies implemented by governments or workplaces. Puur et al. (2008) found that egalitarian attitudes regarding gender equality held by men tended to lead to higher fertility rates, and thus higher birth rates, in the eight European countries with the most egalitarian gender systems.
However, there were select countries in which fertility rates dropped as society took on a more egalitarian gender worldview (Westoff and Higgins, 2009). In a study by Lee and Song (2023), the quality of women's education, women's seats in the national assembly, and women's workforce participation all significantly affected a developed country's fertility rate; these factors also shaped the country's gender equality. The quality of women's education in particular had a strong relationship with fertility rates: in developed countries, as the quality of women's education increased, fertility rates also increased, whereas in developing countries the same factor caused a steep decrease in fertility rates. Gender equality showed similar effects, increasing fertility rates in developed countries while being close to insignificant in developing countries. Overall, as gender equality increases, the fertility rate increases; however, fertility rates and the factors affecting them vary with a country's socioeconomic status. Country-wide gender equality thus has its biggest impact on raising fertility rates after a country has industrialized or fully developed (Lee and Song, 2023).
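One stylized way to express the U-shaped hypothesis discussed above is as a quadratic in a gender-equality index $g$; the functional form and coefficients here are purely illustrative assumptions, not taken from the cited studies:

\[
f(g) = a - b\,g + c\,g^{2}, \qquad b, c > 0,
\]

where $f(g)$ denotes the fertility rate. Fertility first falls as $g$ rises and turns upward again past the minimum at $g^{*} = b/(2c)$, capturing the claim that early gains in gender equality depress fertility while further gains, in more supportive societies, raise it.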



Foundation of Gender Equality: The Role of Early Childhood Education

Early childhood education plays a crucial role in shaping a child's beliefs and behaviors, influencing their future interactions and emotional development (Bakken et al., 2016). At ages 5 to 8, children are especially vulnerable to forming rigid gender stereotypes based on their surroundings, including home, school, and societal influences. Therefore, pre-school and early primary education must focus on countering these stereotypes by promoting gender equality and respect for all genders. Teachers, as key figures in children's lives, have a significant impact through their attitudes and practices in creating gender-inclusive classrooms. By integrating gender equality education at this formative stage, educators can help dismantle stereotypes and foster a more equitable mindset in their students, which is essential for reducing long-term gender inequality. In the Nordic countries, specifically Sweden, Norway, Denmark, and Finland, there have been continued efforts to promote gender equality and to educate students on topics related to gender equality in schools. Broadly, this work focuses on policy and early childhood education, such as changing attitudes toward traditional gender norms through early schooling; deeper topics, such as gender stereotypes and gender gaps in educational opportunity and achievement as well as in the labor force, are primarily reserved for upper secondary school (Heikkilä, 2020). Although each country's policies differ, each has achieved what the World Economic Forum (2013) and the UNDP (2014) have deemed overall gender equality. This is especially visible in the number of women politicians, leaders, and significant business owners in each country. In Finland, only 9.5% of the entire Parliament were women in 1907; by 2019, the ratio had risen dramatically to 47%, following the rewriting of the Gender Equality Act in 1995 (Eduskunta, 2019). Similarly, in Sweden, women make up 47.3% of parliamentary representatives. Overall, the Nordic countries rank first in the share of female members of parliament or government, at an average of 42.5%; by comparison, the Americas stand at 30.6%, Europe (excluding the Nordic countries) at 27.2%, Sub-Saharan Africa at 23.9%, and Asia at 19.9% (Eduskunta, 2019). Rwanda ranked first in the world in 2019 for the number of female politicians in its representative government, and even now its "equality rate" remains broadly similar to that of 2019. The country has been especially commended for its nearly universal primary schooling, in both enrollment and opportunity, and free secondary education through grade 12. In a study led by Ruterana (2017), researchers found that how gender and gender norms are portrayed in children's books and literature shapes children's attitudes toward gender norms in society. In short, when literary works displayed gender stereotypes, children were more likely to latch onto "traditional" gender norms, which ultimately led to more gender inequality. However, when children were given a fairy tale book that portrayed a female character who did not reflect traditional gender roles and norms, they reacted positively toward future efforts to change traditional gender norms, both in the book and in society.
After the study, when researchers asked the children whether they wanted to see change in society, the majority of the children expressed wanting radical or non-traditional changes. Rwanda has historically been a patriarchal society. After the genocide against the Tutsi in 1994, however, many widows took the initiative to establish themselves in a broken society in order to restore their communities. Women were hired in non-traditional pursuits such as construction, police work, justice, and politics. This trend has largely continued, and in 2019, Rwanda had 49 female representatives (61.3%) in the Chamber of Deputies and 36% in the Senate. Women comprise a notable share not only of the Parliament but also of the Ministry, Public Service, and Justice (Randell and Earnest, 2015). In general, countries and regions that have striven to implement gender-equal policies starting from early childhood education see increased levels of gender-equal representation in government seats, among many other outcomes. Because younger children are more easily influenced by traditional gender norms, implementing policies that promote gender equality in the classroom while also teaching students about gender norms and gender equality is vital to future gender equality, not only in workplaces but in society in general. Gender equality education in early childhood education has proven to be significantly



important, as countries like Rwanda, Sweden, and Finland, which all rank in the top 10 of gender-equal parliaments, have a significant number of women in traditionally male-dominated workforces. Countries and regions that prioritize gender-equal policies from early childhood education often see significant improvements in gender representation across various sectors, including government. Implementing policies that focus on gender equality within the classroom and educating young students about gender norms are essential for shaping a more equitable future. Young children are highly impressionable and deeply influenced by traditional gender norms, which can be challenged and redefined through targeted educational practices. For example, nations like Rwanda, Sweden, and Finland, which rank highly for gender equality in their parliaments, also have robust early childhood education programs that emphasize gender equality. By instilling respect for gender diversity from a young age, these countries not only foster more balanced gender representation in the workplace but also contribute to a more inclusive and equitable society as a whole.

Ministry of Gender Equality and Family (MOGEF) in South Korea: A Comprehensive Review of Historical Context, Evolution, and Controversies

The global push for gender equality received a major boost in 1995 with the adoption of the Beijing Declaration by the United Nations, a landmark resolution aimed at addressing gender disparities and promoting women's rights worldwide. This international movement inspired countries to establish dedicated women's policy ministries to address gender issues comprehensively. In South Korea, the growing awareness and advocacy for women's rights, coupled with the global momentum from the Beijing Declaration, led to a significant push for a specialized government body. In response to these demands, the South Korean government established the Ministry of Women's Affairs in 2001. This new ministry assumed various responsibilities from other governmental bodies, including managing women's housing issues previously handled by the Ministry of Employment and Labor, and overseeing the protection of victims of domestic and sexual violence, as well as the prevention of sex trafficking, which had been under the Ministry of Health and Welfare. In 2004, the Ministry further expanded its role to include infant and childcare services, reflecting a broader commitment to addressing issues affecting women and families. The evolution of this ministry continued with a notable reorganization in 2010, when it was renamed the Ministry of Gender Equality and Family Affairs (MOGEF). This change was part of a strategic move to integrate and address a wider range of gender-related issues, including the affairs of multicultural families, which had previously been managed separately. MOGEF's expanded mandate aimed to promote gender equality through a more comprehensive approach, including policy planning and coordination, gender impact analysis, and the development and utilization of women's resources. The Ministry was tasked with fostering increased participation of women in various sectors of society, combating the sex trade, and protecting victims of domestic and sexual violence. By consolidating these functions, MOGEF sought to enhance its effectiveness in promoting gender equality and addressing the diverse needs of women and families in South Korea.
Despite its broad mandate, MOGEF faced substantial criticism, particularly for its perceived focus on women's issues to the detriment of men's concerns. Critics argued that the Ministry's focus on women's rights and gender equality had a divisive effect, exacerbating gender discrimination against men and implying unfair preferential treatment of women's issues. The inclusion of the term "women" in its title was seen as a signal of bias, leading to accusations of reverse discrimination and misrepresentation of gender equality concepts. Additionally, the Korea Institute for Gender Equality Education, established under MOGEF in 2003, faced criticism for its perceived alignment with radical feminism and a lack of balanced representation in addressing gender issues, further fueling calls for the Ministry's abolition. In 2022, following over a decade of controversies, the South Korean president announced plans to dissolve MOGEF. The Ministry of Internal Affairs proposed a government reorganization that would



transfer MOGEF's responsibilities to the Ministry of Health and Welfare. This decision sparked significant opposition from over 8,000 women activists and citizens, who argued that the removal of a dedicated gender equality ministry would roll back progress on gender issues. Comparisons were made to the German model, which merged rather than dissolved its women's ministry, to highlight concerns about the potential negative impact on gender equality efforts. The German model retained a powerful, unified ministry for gender issues, which continued to play a crucial role in legislative processes.

Effective Gender Equality Administration

In most OECD countries, government organizations dedicated to gender equality policies oversee women's policies in various ways. According to "Status and Implications of Gender Equality Policy Promotion Systems at Home and Abroad," published by the Korea Women's Policy Research Institute, 194 countries had established gender equality policy organizations as of 2020. While the number of gender equality organizations with "women" in their title decreased from 90 in 2008 to 76 in 2020, the proportion of gender equality organizations and other departments that include "women" increased from 37.8 percent to 59.2 percent. The emphasis on promoting gender equality across those member countries has led to the use of multidisciplinary bodies to drive gender equality policies. France's Ministry of Gender Equality, Diversity, and Equal Opportunities is responsible for proposing and overseeing the implementation of gender equality legislation, mainstreaming gender equality policies, promoting women's professional equality and economic autonomy, preventing gender-based violence, encouraging women's participation in media, culture, and sports, advancing diversity policies, and combating discrimination. In recent decades, legislative developments, case law, and policy initiatives in European Union member countries have improved many people's lives and helped build more equal and welcoming societies, including for lesbian, gay, bisexual, trans, non-binary, intersex, and queer (LGBTIQ) people. In 2020, the European Commission adopted the LGBTIQ Equality Strategy 2020-2025. In the Netherlands, the Ministry of Education, Culture and Science is likewise responsible for promoting and implementing policies for women's participation in the labor market, equal treatment of men and women in employment, gender equality policies, gender diversity and equal treatment, and LGBTI protection. In the United States, under the Biden administration, the White House established a separate Gender Policy Council to guide government policies that affect women, including economic policy, health care, gender-based violence, and foreign policy. According to the National Assembly Research Service, most countries have a ministry responsible for gender equality policy, an independent body that monitors discrimination, and organizational systems and government bodies, such as judicial institutions, that promote gender equality and discrimination-remediation policies with local and central governments in an integrated manner. Gender equality policies that lower discrimination in the labor market or that increase the time fathers spend on childcare can contribute positively to female labor market participation and per capita income growth.
One simulation study shows that when the disparities between men and women at home and in the labor market are completely removed, the female labor force participation rate increases from 54.4% to 67.5%, and the growth rate of per capita income rises from 3.6% to 4.1% on average over a generation (Kim et al., 2018); a rough compounding illustration of these figures is given at the end of this section. I think the pledge to abolish MOGEF was driven by political populism: it became mired in the word "women" and played to resentment instead of solving problems, and this extended to the dismantling of an organization that society needs. Even if MOGEF is abolished, gender equality issues will still have to be dealt with by other ministries. Welfare provision always creates some other inequality, and I would say that balancing such gaps and imbalances is also an important task of the current government. There are many concerns that the sudden abolition of MOGEF would reduce support for single-parent families and victims of sexual crimes, and that the related work and policies might disappear. If the Ministry were to disappear, the budgets allocated to day care, childcare,



extra-curricular activities, or the training of professionals to defend victims of domestic violence or sex trafficking would be stopped. Women would have to stop working again, which would expose them to economic violence. They would also face an increase in structural violence, as it is difficult for South Korean women to return to the labor market, especially after leaving it for several years to raise children. The criticism and complaints directed at MOGEF should be read as evidence of how many tasks remain in addressing and resolving gender inequality in South Korean society today. The Ministry may have performed poorly and lost credibility, but its roles and functions are needed more than ever. MOGEF should be given enough manpower, budget, and authority to implement gender equality policies. In fact, an investigator at the National Assembly Research Service claimed that MOGEF has not even fully implemented a gender equality policy, the fundamental prerequisite for resolving gender inequality issues, because it mainly spends its money and manpower on issues related to domestic violence, youth crime, and childcare. The organizational structure of MOGEF appears too large and too decentralized to administer its functions in an integrated manner. It may be more efficient to transfer the administrative function of helping youth, families, and socially disadvantaged people back to the Ministry of Health and Welfare, while the wage gap between men and women and the issue of women with career breaks could be handled by the Ministry of Employment and Labor. Largely due to public disinterest, MOGEF has become a marginal organization without a strong presence. South Korea ranks 99th out of 146 countries worldwide in gender equality. For a country whose economy ranks as the 10th largest in the world, gender inequality is a significant and recurring problem. Because of the ongoing gender equality problem, as well as MOGEF's lack of effective gender equality policies, Korean society has grown more averse to the concept of feminism and to feminists. Therein, however, lies the core of the problem: the government cannot shrink a gender gap if society is not willing. MOGEF is not the problem in South Korea; rather, the problem is the attitude toward gender equality in general. The words "feminism" and "feminist" have become closely tied to political affiliation since President Yoon won the South Korean presidential election on a platform that included plans to dismantle MOGEF. Feminist rallies and marches through the streets of Seoul have also colored public perceptions of feminism and of what gender equality would bring. In South Korea, where women make up only 19% of all lawmakers and 4.8% of executives in the top 100 companies (Kang, 2022), the acute social problem of gender inequality remains heavily present. Disbanding MOGEF is therefore not the solution for achieving gender equality; rather, MOGEF is vital to South Korea's social growth and gender equality. Although President Yoon and his fellow cabinet members have blamed feminism for South Korea's low birth rate, fiercely denied the existence of a structural gender gap, and promised to increase punishments for false reports of sexual violence, the existence of MOGEF serves as an implicit message that gender equality remains a commitment of the state.
In the future, MOGEF should implement stricter gender equality policies in a variety of ways. For instance, designing and implementing more effective gender equality training and education in early childhood education may be one solution to gender inequality issues.
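As a rough sense of scale for the Kim et al. (2018) simulation cited above, consider a back-of-the-envelope compounding calculation. Assuming a generation of about 30 years (the generation length is an illustrative assumption here, not a figure from the study), the difference between 3.6% and 4.1% annual growth in per capita income compounds to

\[
\left(\frac{1.041}{1.036}\right)^{30} \approx 1.16,
\]

that is, per capita income roughly 16% higher after one generation under fully equalized conditions at home and in the labor market.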

3. Conclusion: Synthesis and Strategic Recommendations

This paper has explored the pervasive issue of gender inequality in South Korea, highlighting its impact across various domains including education, healthcare, and economic opportunities. Through an in-depth analysis, we have examined the manifestations of gender inequality in South Korea, with a particular focus on the recent restructuring of the Ministry of Gender Equality and Family (MOGEF) and its implications. The persistence of gender discrimination, evidenced by the gender pay gap, career interruptions for women, and the glass ceiling, underscores the need for more effective policy interventions and greater authority for gender equality bodies.



In addition to the issues within the workplace, South Korea's challenges extend to violence against women and declining birth rates, both of which are deeply intertwined with societal attitudes and policy shortcomings. Despite progress symbolized by emerging female leadership, systemic barriers and high rates of crime against women highlight the ongoing need for comprehensive reforms. By comparing South Korea's approach with international practices, this paper has provided actionable insights and proposed solutions informed by global experiences. The broader significance of these findings emphasizes the need for a balanced approach that addresses both gender inequality and potential reverse discrimination. Effective policy interventions, coupled with societal and cultural shifts, are essential for fostering social justice and achieving gender equality. As South Korea continues to confront these challenges, the integration of informed and strategic measures will be crucial in creating a more equitable and inclusive society.

References

Armstrong, Elizabeth A., Miriam Gleckman-Krut, and Lanora Johnson. "Silence, Power, and Inequality: An Intersectional Approach to Sexual Violence." Annual Review of Sociology 44 (2018): 99-122.
Bakken, Linda, Nola Brown, and Barry Downing. "Early Childhood Education: The Long-Term Benefits." Journal of Research in Childhood Education 31 (2017): 255-69.
Bullough, Amanda, et al. "Women's entrepreneurship and culture: gender role expectations and identities, societal culture, and the entrepreneurial environment." Small Business Economics 58.2 (2022): 985-996.
Buvinic, Mayra, Monica Das Gupta, Ursula Casabonne, and Philip Verwimp. "Violent conflict and gender inequality: An overview." The World Bank Research Observer 28, no. 1 (2013): 110-138.
Cho, Y., J. Park, S. J. Han, and Y. Ho. "'A woman CEO? You'd better think twice!' Exploring career challenges of women CEOs at multinational corporations in South Korea." Career Development International 24, no. 1 (2019): 91-108.
Choi, Jaerim, and Theresa M. Greaney. "Global influences on gender inequality: Evidence from female employment in Korea." International Economic Review 63.1 (2022): 291-328.
Cooke, Fang Lee. "Women's participation in employment in Asia: a comparative analysis of China, India, Japan and South Korea." The International Journal of Human Resource Management 21.12 (2010): 2249-2270.
Craddock, Danny S. "The Asian Five Dragons: What's the Relationship of Confucianism and Gender Inequality?" Gettysburg College Student Publication (2022).
Freedman, Estelle B. Redefining Rape. Harvard University Press, 2013.
Gansen, Heidi M. "Reproducing (and disrupting) heteronormativity: Gendered sexual socialization in preschool classrooms." Sociology of Education 90, no. 3 (2017): 255-272.
Heikkilä, Mia. "Gender equality work in preschools and early childhood education settings in the Nordic countries—an empirically based illustration." Palgrave Communications 6, no. 1 (2020): 1-8.



Heise, L. "Violence against women: the hidden health burden." World Health Statistics Quarterly 46, no. 1 (1993): 78-85. PMID: 8237054.
Hideg, Ivona, and Winny Shen. "Why still so few? A theoretical model of the role of benevolent sexism and career support in the continued underrepresentation of women in leadership positions." Journal of Leadership & Organizational Studies 26.3 (2019): 287-303.
Kearns, Megan C., Ashley Schappell D'Inverno, and Dennis E. Reidy. "The association between gender inequality and sexual violence in the US." American Journal of Preventive Medicine 58.1 (2020): 12-20.
Kim, Jinyoung, Jong-Wha Lee, and Kwanho Shin. "Gender inequality and economic growth in Korea." Pacific Economic Review 23, no. 4 (2018): 658-682.
Kinias, Zoe, and Heejung S. Kim. "Culture and gender inequality: Psychological consequences of perceiving gender inequality." Group Processes & Intergroup Relations 15.1 (2012): 89-103.
Kolk, M. "Weak support for a U-shaped pattern between societal gender equality and fertility when comparing societies across time." Demographic Research (2019): 27-48.
Kwon, Heiwon, and Virginia L. Doellgast. "Women, employment and gender inequality in South Korea." The Evolution of Korean Industrial and Employment Relations (2018): 219-240.
Lee, Jesang, and Yoomee Song. "A Study on the Relationship between Fertility and Gender Equality According to Socioeconomic Development." Health and Social Welfare Review 43, no. 4 (2023): 179-193.
Lesthaeghe, Ron. "The second demographic transition: A concise overview of its development." Proceedings of the National Academy of Sciences 111 (2014). https://doi.org/10.1073/pnas.1420441111.
Ma, Gongning, Chunduoer Yang, Zhaojun Qin, and Meixi Guo. "Hegemonic Masculinity in East Asia: China, South Korea and Japan." In 2021 4th International Conference on Humanities Education and Social Sciences (2021): 2405-2410.
Miller, Jennifer M. "Neoconservatives and Neo-Confucians: East Asian growth and the celebration of tradition." Modern Intellectual History 18, no. 3 (2021): 806-832.
Neyer, Gerda, and Dorothea Rieck. "Moving towards gender equality." How Generations and Gender Shape Demographic Change: Towards Policies Based on Better Knowledge (2009): 139-154.
Patterson, Louise, and Brandon Walcutt. "Explanations for continued gender discrimination in South Korean workplaces." Management in South Korea Revisited (2016): 18-41.
Piccigallo, Jacqueline R., Terry G. Lilley, and Susan L. Miller. "'It's cool to care about sexual violence': men's experiences with sexual assault prevention." Men and Masculinities 15, no. 5 (2012): 507-525.
Piscalho, Isabel, Maria João Cardona, Teresa Tavares, and Marta Uva. "To address gender equality since the preschool education: research and practices." International Conference "Mapping Gender Equality: Research and Practices: The National and International Perspective" (2010).
Puur, A., L. S. Oláh, M. I. Tazi Preve, and J. Dorbritz. "Men's childbearing desires and views of the male role in Europe at the dawn of the 21st century." Demographic Research 19 (2008): 1883-1912.



Randell, S., and Jaya Dantas. "Implementing Gender Equality: a comparative analysis of women's empowerment in Rwanda and Australia through education, empowerment and mentoring." Commission on the Status of Women 59 (2015).
Ruterana, Pierre Canisius. "Using children's literature to promote gender equality in education: The case of the fairy tale of Ndabaga in Rwanda." Rwanda Journal 2, no. 2 (2017): 31-43.
Shin, Kwang-Yeong, and Ju Kong. "Why does inequality in South Korea continue to rise?" Korean Journal of Sociology 48, no. 6 (2014): 31-48.
Stainback, Kevin, and Soyoung Kwon. "Female leaders, organizational power, and sex segregation." The Annals of the American Academy of Political and Social Science 639.1 (2012): 217-235.
Westoff, C. F., and J. Higgins. "Relationship between men's gender attitudes and fertility: Response to Puur et al.'s 'Men's childbearing desires and views of the male role in Europe at the dawn of the 21st century.'" Demographic Research 21 (2009): 65-74.
Whaley, Rachel Bridges. "The Paradoxical Relationship between Gender Inequality and Rape: Toward a Refined Theory." Gender and Society 15, no. 4 (2001): 531-55.
"Despite stark gender inequality in South Korea, hostility to feminism is growing." NBC News, 4 Dec. 2022, https://www.nbcnews.com/news/world/stark-genderinequality-south-korea-hostility-feminism-growing-rcna59747. Accessed 20 June 2024.
"Gender equality and violence against women: what's the connection?" Analysis & Policy Observatory, 4 June 2014, https://apo.org.au/node/40036. Accessed 3 June 2024.
"Gender equality: Korea has come a long way, but there is more work to do." OECD, Organization for Economic Co-operation and Development, 25 Oct. 2021, https://www.oecd.org/country/korea/thematic-focus/gender-equality-korea-has-come-a-long-way-but-there-is-more-work-to-do-8bb81613/. Accessed 20 July 2023.
"Global Gender Gap Report 2023." World Economic Forum, www.weforum.org/publications/global-gender-gap-report-2023/. Accessed 5 June 2023.
"The Economist's glass-ceiling index." The Economist, www.economist.com/graphic-detail/glass-ceiling-index. Accessed 10 July 2023.
"UNSD - Demographic and Social Statistics." United Nations, unstats.un.org/unsd/demographic-social/gender/index.cshtml. Accessed 27 July 2023.
"Women as Members of Parliament." Eduskunta, The Finnish Parliament, 2019, https://www.eduskunta.fi/EN/naineduskuntatoimii/kirjasto/aineistot/yhteiskunta/womens-suffrage-110-years/Pages/women-as-members-of-parliament.aspx. Accessed 18 June 2024.
"Women, Business, and the Law 2022." World Bank, World Bank Group, 4 Mar. 2022, www.worldbank.org/en/news/press-release/2022/03/04/new-data-show-massive-wider-than-expected-global-gender-gap. Accessed 3 July 2023.

110


The Role of Early Childhood Education in Educational Achievement Disparities Between White and Hispanic/African American Students: A Review

Author Full Name (Last Name, First Name): Hwang, Alison
School Name: Crean Lutheran High School
Abstract

This paper examines the systemic inequalities in access to early childhood education, driven by neighborhood characteristics and childcare resources, that exacerbate the gaps in school readiness between White, African American, and Latino preschool-aged children in the United States. It further recognizes the key role early childhood education plays in lessening the current gap in the academic performance and overall well-being of these children.



Introduction

The impact of early childhood education on closing the readiness gap between White and Hispanic/African American students for higher-level education and beyond is immense in modern American society. Every individual in the United States should have equal rights and opportunities for education. Education is one of the key factors in achieving higher socioeconomic status; however, the educational gap in America today still has not closed. A report by the California-Mexico Studies Center stated that only 30% of Latino adults 25 years of age and older had earned an associate degree or higher in 2022, compared to 53% of White non-Hispanic, 39% of Black, and 66% of Asian adults (California-Mexico Studies Center, 2023). Through education, people develop fundamental skills and gain valuable lessons that are crucial for participating in society today. "In Mississippi, for example, Black families make up 55 percent of low-income households but comprise 88 percent of Child Care & Development Block Grant (CCDBG) recipients" (Staub, 2017). Today, amid geopolitical crises and a rapidly expanding digital divide, it is more important than ever to educate our children well.

Early childhood education can be defined as a branch of education theory that relates to the instruction of children from birth up to the age of eight. It is pivotal to human development, laying the foundation for later cognitive, social, and emotional growth. Research from Harvard University's Center on the Developing Child indicates that the brain is most malleable in the first few years of life, as it accommodates new interactions and environmental changes (Center on the Developing Child, 2007). Consequently, high-quality early childhood education can enhance future outcomes, strengthening cognitive skills essential for various aspects of life. Investment in it pays off, as children who fall behind in the first few years of childhood struggle to catch up and succeed, even in areas outside of academics.

Indeed, the educational gap in the United States is worsening because opportunities are often predetermined by students' economic and racial backgrounds. When the Pew Research Center surveyed Americans about treatment in employment opportunities, it found that "while 40% of blacks say their race or ethnicity has made it harder for them to succeed in life, just 5% of whites – and 20% of Hispanics – say this" (Patten). It is important to note that this is a systemic issue, as discrimination continues to be a prevalent problem in today's society.

The current study examines how early childhood education affects low-income, underprivileged children. Focusing on programs aimed specifically at preschool children, such as Obama's Preschool for All or Head Start, this paper explains why barriers still exist despite nationwide efforts. It then addresses these socioeconomic barriers and how they relate to students' cognitive performance and well-being. It next asks why the readiness gap remains prevalent in our society today, drawing comparisons across ethnic groups. Finally, it compares the United States' approach to South Korea's, providing insight into the different curricula the two countries have used.

Disparities in Early Childhood Education

This section covers how expanding preschool education can serve as a bridge to closing the gap that minority-group students face in their early years of education. It also assesses programs already implemented by the government, as well as the differing socioeconomic barriers that hinder students' full success. Health and well-being are also discussed, as they directly correlate with academic success.

Recent research shows that there are marked differences in preschool attendance across racial groups in the United States. The enrollment rate for Black non-Hispanic children aged three and four was 61.7% in 2022, a significant improvement over prior years (Census Bureau, 2023). Geography also has a large impact on preschool attendance; African American and Hispanic children are more likely to attend preschool when a larger percentage of their peers in the area are enrolled (diversitydatakids.org, 2013-2017).

Despite progress in other areas, early childhood education remains a pressing global issue, affecting highly developed and developing countries alike. Switzerland, one of the wealthiest nations in the world with the highest density of millionaires (Ventura, 2024), has the second-lowest early childhood education enrollment rate in the world, at just 26 percent (COE - Enrollment Rates by Country, n.d.).



The Swiss national government leaves early childhood education in the hands of the cantons, municipalities, and parents. Factors such as high daycare prices, poor-quality caregivers, and limited parental leave contribute to low early childhood education enrollment, prompting families to rely on grandparents or au pairs instead. The "Swiss problem" highlights the paradox of ECE failures even in the wealthiest nations, where a decentralized governmental approach combines with other barriers to obstruct access to ECE. So the central question arises: why is preschool not emphasized enough to lessen this disproportionate gap, which affects children on an immense scale for a lifetime?

The Importance of the Preschool Period for Different Ethnic Groups

Expanding access to preschool education is a key determinant of the long-term high performance of school-aged children. In the United States, the age range for preschool is 3-5 years old (Cappetta, 2017). Cascio and Schanzenbach (2013) find that early childhood education results in positive outcomes for children from low-income families: attending preschool not only increases individual lifetime earnings but also reduces crime and the need for public assistance. Consequently, there is a need to expand access to preschool education.

In the U.S., access to high-quality education is not universal. Skilled and competent teachers are hard to recruit and retain in highly impoverished school systems (The United States Government, 2021). Rather than providing educational interventions only for children from low-income families who perform below educational standards, the U.S. government has aimed to improve the quality of preschool education nationally by implementing an affordable educational standard for early childhood. Former President Obama's "Preschool for All" initiative called for a dramatic increase in the number of four-year-olds attending public preschool programs and aimed to improve the quality of preschool nationwide (Cascio & Schanzenbach, 2013). It was to be funded by a $75 billion federal investment over ten years, targeted at four-year-olds from low- and moderate-income families (Cascio & Schanzenbach, 2013). Related findings reveal that children who participated in the Head Start program had substantially higher vocabulary test scores, making it significantly less likely for these children to repeat grades. The Head Start program specifically focuses on children in high poverty, providing not only educational resources but also prioritizing the overall well-being of these children through immunization check-ups and by addressing food insecurity (Gray Group International, 2024). Magnuson and Waldfogel (2005) theorized that up to 20 percent of the school readiness gap between Black and White children, and up to 36 percent of the gap between Hispanic and White children, could be closed by implementing universal preschool enrollment for three- and four-year-old children and improving the quality of their care. Expanding access to preschool yields positive results, yet low-income children in the U.S. face specific barriers that directly affect cognitive development.

Addressing Socioeconomic Barriers and Their Relation to Cognitive Development and Performance

The socioeconomic barriers to preschool education that affect low-income children also have a significant impact on their cognitive health and development. The socioeconomic status of families and child health are strongly correlated; scholars believe that child health is an "important pathway" for breaking the "cycle of poverty" (Lee & Jackson, 2017, p. 1845). In the U.S., ethnoracial minority children are disproportionately likely to experience both short- and long-term poverty, experience acute and chronic health problems, live in disadvantaged neighborhoods, and attend poorly resourced schools (Currie 2005; Iceland 2013; Lichter et al. 2012). Moreover, financial and time stressors can prevent low-income families from offering the same extent of stimulating or educational materials and activities to their children as higher-income families (Christensen et al., 2014). As a result, there are discrepancies in the academic performance of children from families with varying incomes and ethnic makeups; the quality of education a child receives is often determined by the socioeconomic makeup of their family.

Another socioeconomic barrier is the lack of parental involvement. According to research, parents in economically disadvantaged areas are less likely to interact with other adults and their children, which makes it harder for them to form social bonds with other parents and get involved in school activities (Li & Fischer, 2017).



One study found that cultural differences based on race/ethnicity directly correlate with parent participation in school involvement, with white parents contributing significantly more than African American and Latino parents (Park & Holloway, 2013). Identifying where and how access to high-quality education exists can begin with zoning restrictions and neighborhood resources, such as libraries, museums, community centers, and other educational centers. The higher the quality of educational resources and preschool education a child receives, the less likely they are to experience negative cognitive and physical health outcomes later on.

Circling back to cognitive development, a long-term study of preschoolers concluded that the socioeconomic status of a family is instrumental. Explicit memory, which includes conscious attention, language, episodic memory, and a sense of time and self, develops throughout the second year of life and is controlled by the hippocampus. The prefrontal cortex supports insight and empathy, balanced emotions, regulation of body systems, the ability to pay attention to others, the taming of fear, and the signaling of physical discomfort or needs (Cooper & Mulvey, 2015).

The Readiness Gap

The quality of preschool education can be gauged by a child's school readiness and academic performance. Children who attend preschool programs are more school-ready than those who do not; but even within preschool, the quality of care children receive can "differ by race and ethnicity." Magnuson and Waldfogel (2005) find that Black children are "more likely to attend preschool than white children, but may experience lower-quality care," while "Hispanic children are much less likely than white children to attend preschool" (p. 169). Additionally, the researchers find that the Head Start program in particular is beneficial: children who attended the Head Start program scored up to seven points higher on vocabulary tests than those who did not (Magnuson & Waldfogel, 2005). As attending quality preschools can create a strong foundation for subsequent academic performance and achievement, there is a dual need to accommodate the needs of families with lower socioeconomic status and the academic offerings of the preschools themselves. The quality of preschool education, consequently, cannot be separated from school readiness and the subsequent academic trajectories of children.

Before entering kindergarten, children should have attended high-quality preschools to cement long-term positive academic outcomes. In their quantitative study of the effects of preschool education, utilizing data ranging from 1960 to 2000, Camilli et al. (2010) found significant positive cognitive effects for children who attended preschool before entering kindergarten, as well as positive results for their overall social progress in later years; Bassok and Loeb (2007), commenting on similar findings, report that this is likely because "children's experiences in the early years have a disproportionately large impact relative to experiences during school-age years and later" (p. 510). The more solid the social and academic foundation a child has, the more likely it is that their program sets them up for later success. Chetty et al. (2011) show that one of the most significant factors in positive outcomes is program quality, pointing out that the quality of kindergarten programs affects later life earnings, college attendance rates, the quality of colleges attended, and homeownership rates. High-quality programs include teachers with bachelor's degrees and specialized training, a comprehensive curriculum, a maximum class size of 20, a child-to-teacher ratio of 10:1 or better, and at least one family support service (Pianta et al., 2009, p. 67). Rather than dismissing early childhood education as "daycare" or a subset of "childcare," parents, educators, communities, and legislators should work collaboratively to ensure that the benefits of high-quality preschool education are accessible to all children, regardless of race, socioeconomic status, and citizenship status.

Evaluating U.S. Programs

U.S. News & World Report reports that the United States has an overall enrollment rate of 67 percent for children aged three to five (Camera, 2021). Turkey is one of only two countries with a lower enrollment rate for the same age group, at less than 30 percent. Despite their economic differences, both nations struggle with similar hurdles, indicating systemic issues that transcend economic status and require multifaceted solutions. Addressing the disparities in Early Childhood Education enrollment rates necessitates a coordinated effort focusing on policy reform and financial support to ensure equitable access to Early Childhood Education for all children.



As mentioned earlier, programs such as "Head Start" or former President Obama's "Preschool for All" can pose a realistic and effective solution for all schools. Nevertheless, it is crucial to note the results of implementing such programs. Cook et al. (2022) examined Memoranda of Understanding (MOUs) between Head Start programs and local education agencies and concluded that these agreements may not be supporting and aligning cross-system transition initiatives to the fullest extent possible. In particular, they found that the MOUs focused more explicitly on children with disabilities (Cook et al., 2022).

South Korea's Approach to Early Childhood Education

To address the global challenge of low Early Childhood Education enrollment rates, wealthier nations in particular should draw inspiration from South Korea's Early Childhood Education model. Many countries use a single educational framework that covers children until they reach compulsory schooling age (normally five or six), instead of maintaining separate frameworks for care and education (Directorate for Education and Skills). Such an approach may struggle to provide comprehensive and specialized support for the diverse needs of children in different age groups. Maintaining a split framework like South Korea's offers greater flexibility in designing curricula that address the unique needs of infants, toddlers, and preschoolers.

Additionally, addressing the economics of Early Childhood Education is crucial, as many countries face funding constraints. The South Korean government still provides funding even though 73 percent of pre-primary children attend private schools. To adequately fund these programs, nations will need to find solutions that increase funding. South Korea's ECE structure emphasizes household stability to accommodate young children. The South Korean government annually invests around 14 trillion KRW (about US$12.7 billion) in ECE, offering subsidies for children aged up to five years old through vouchers known as "i(child)-Happiness Cards," covering full-day care payments for children up to two years old, and offering 220,000 KRW (about US$200) per month for children between three and five years old. This significant investment in ECE reflects a commitment to universal access to quality care and education during the formative years. By providing financial support directly to families, the South Korean government alleviates parental stress by mitigating financial strains on households.

South Korea's educational organizational structure reflects a united effort to ensure adequate regulatory involvement in ECE services. Two ministries are responsible for ECE in South Korea, dividing the functions of monitoring, standard setting, and financing (OECD, 2017). The Ministry of Health and Welfare provides childcare for children up to five years old, and the Ministry of Education organizes kindergartens and primary education for children between the ages of three and six (OECD, 2017). This division of care enables targeted policy and resource allocation tailored to the specific needs of different age groups.

Since 2013, South Korean children from three to five years old have been enrolled in a common national-level early childhood curriculum called the Nuri Curriculum. In 2019, the Nuri Curriculum was revised to emphasize child-centered and play-based learning through five learning areas: physical exercise and health, communication, social relationships, artistic experience, and nature exploration (Yu et al., 2021). These adjustments allow for adaptable lessons tailored to specific contexts. The evolution of the Nuri Curriculum shows South Korea's commitment to continually improving the quality of ECE. The curriculum aligns with contemporary education philosophies that prioritize holistic development over a purely academic focus. With its commitment to accommodating the needs of different ages and contextualizing education accordingly, South Korea's education system is poised for success.
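As a quick arithmetic check on the figures above (an illustration added here, not a calculation from the source), both KRW-to-USD conversions are consistent with an exchange rate of roughly 1,100 KRW per U.S. dollar:

% Implied exchange-rate consistency check for the two reported conversions:
\[
\frac{14\ \text{trillion KRW}}{1{,}100\ \text{KRW/USD}} \approx 12.7\ \text{billion USD},
\qquad
\frac{220{,}000\ \text{KRW}}{1{,}100\ \text{KRW/USD}} = 200\ \text{USD}
\]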
South Korea's educational structure and curriculum framework exemplify efforts to enhance the quality and accessibility of ECE, serving as valuable models for other nations seeking to improve their systems. Countries like Thailand and Vietnam have implemented a "sin tax" on cigarettes, alcohol, and gambling, the proceeds of which are redirected to fund Early Childhood Education programs and services (Unesdoc.unesco.org). Previous empirical findings suggest that South Korea's education model implements strategic measures to enhance governmental support and funding, allowing the country to take successful strides toward increasing Early Childhood Education enrollment and effectiveness.



Conclusion

This paper focused on diverse facets of a central issue: preschool education is not emphasized enough in the United States. By reviewing initiatives already taken, such as Obama's "Preschool for All" and the "Head Start" programs, it illustrated how, even with these government-funded programs, participation still lags; it then connected this to socioeconomic barriers and their effects on children's cognitive performance and overall well-being. Inequalities in access to early childhood education have persisted over many decades despite overwhelming evidence of the importance of early childhood education for later life outcomes. My analysis and key findings indicate that a multitude of factors contribute to this disparity, as addressed earlier: access to quality education is not universal, neighborhoods and families are constrained by socioeconomic status, and too little emphasis is placed on how significant an effect attending preschool has on an individual's later stages of life.

A Call for Action

This calls for urgent action, not limited to educators but extending to parents, school boards, and the government. Even though the United States is a highly developed country, Early Childhood Education is often overlooked in its educational policies. As the country struggles with the challenge of providing accessible, high-quality early education, South Korean initiatives offer valuable insights for policymakers and educators worldwide. By embracing principles of equity, quality, and flexibility in Early Childhood Education policies, nations can build more prosperous and resilient futures in a world marked by friction and instability. South Korea's collective experience and investments are a reminder that countries can establish the foundation for education and development within their local contexts. Consistent government funding and intervention in Early Childhood Education are key factors in addressing equity concerns. Furthermore, ensuring that all children have access to high-quality Early Childhood Education contributes to better outcomes for children, regardless of their socioeconomic background.

If people do not take action on such an urgent matter, not only will minority youth fail, but so will society as a whole. After all, for a country to succeed, its people must lead the way, culturally, economically, socially, and politically, and exceptional results for the future cannot be produced without collaboration. People can mistakenly believe that simply endorsing equal educational opportunities for minorities is itself a form of social justice; in reality, they must enact change so that equality can be achieved. Another possible solution is resource equity: the distribution of resources regardless of race or circumstance. Although this sounds simple, implementing it across the United States is strenuous, because the demographic makeup of states varies widely, with white residents forming the majority in some states and Black residents in others. Regardless, inequality does not have to be defined by one's race or socioeconomic status when meaningful change can and should be implemented to improve the situation. For instance, strategic staffing, such as hiring highly qualified teachers to teach in disadvantaged areas, can even out the unbalanced ratio (Betts et al., 2000). As proud United States citizens, people should recognize the tremendous sacrifices that have been made in the past to stand where they are today, and focus on progressing beyond current standards instead of being satisfied with what is already given. Ultimately, our children are the future, and the need to prioritize early education initiatives remains as vital as ever.

References

Betts, J. R., Reuben, K. S., & Danenberg, A. (2000). Equal resources, equal outcomes? The distribution of school resources and student achievement in California. http://econweb.ucsd.edu/~jbetts/Pub/A28%20PPIC%202000%20Betts%20Rueben%20Danenberg%20PPIC%20R_200JBR.pdf

U.S. Census Bureau. (2023, February 16). Census Bureau releases new educational attainment data. Census.gov. https://www.census.gov/newsroom/press-releases/2023/educational-attainment-data.html



Camera, L. (2021, June 21). U.S. falls behind other developed countries in early childhood education enrollment. U.S. News & World Report. www.usnews.com/news/best-countries/articles/2017-06-21/us-falls-behind-other-developed-countries-in-early-childhood-education-enrollment

Camilli, G., Vargas, S., Ryan, S., & Barnett, W. S. (2010). Meta-analysis of the effects of early education interventions on cognitive and social development. Teachers College Record, 112(3), 579–620. https://doi.org/10.1177/016146811011200303

Cappetta, K. (2017, August 17). Preschool age: What age is pre-K? The Bump. https://www.thebump.com/a/preschool-age

Cascio, E. U., & Schanzenbach, D. W. (2013). The impacts of expanding access to high-quality preschool education. Brookings Papers on Economic Activity, 127-178. https://www.jstor.org/stable/23723436

Center on the Developing Child. (2007). The science of early childhood development (InBrief). www.developingchild.harvard.edu

Christensen, D., et al. (2014). Socioeconomic status, child enrichment factors, and cognitive performance among preschool-age children: Results from the Follow-up of Growth and Development Experiences study. Research in Developmental Disabilities, 35(7), 1789-1801. https://doi.org/10.1016/j.ridd.2014.02.003

COE - Enrollment rates by country. (n.d.). National Center for Education Statistics. https://nces.ed.gov/programs/coe/indicator/cgh/enrollment-rates-by-country

Cook, K. D., Barrows, M. R., Ehrlich Loewe, S. B., Lin, V.-K., & du Toit, N. (2022, September). Facilitating kindergarten transitions: The role of Memoranda of Understanding (MOUs) between Head Start and local education agencies. U.S. Department of Health & Human Services. https://www.acf.hhs.gov/sites/default/files/documents/opre/hs2k_brief_mou_analyses_sept2022%20(2).pdf

Cooper, B. S., & Mulvey, J. D. (2015). Connecting education, welfare, and health for American families. Peabody Journal of Education, 90(5), 659–676. http://www.jstor.org/stable/43909852

California-Mexico Studies Center. (2023, August 1). Report shows widening gap between Latino and white students who graduate college. https://www.california-mexicocenter.org/report-shows-widening-gap-between-latino-and-white-students-who-graduate-college/

Directorate for Education and Skills. (n.d.). OECD. www.oecd.org/en/about/directorates/directorate-for-education-and-skills.html. Accessed 30 July 2024.

diversitydatakids.org. (2013-2017). Neighborhood preschool enrollment patterns by race/ethnicity. https://www.diversitydatakids.org/research-library/research-brief/neighborhood-preschool-enrollment-patterns-raceethnicity

Gray Group International. (2024, April 5). What is the Head Start program? Understanding early education impact. https://www.graygroupintl.com/blog/what-is-the-head-start-program



Lee, D., & Jackson, M. (2017). The simultaneous effects of socioeconomic disadvantage and child health on children's cognitive development. Demography, 54, 1845–1871. https://doi.org/10.1007/s13524-017-0605-z

Li, A., & Fischer, M. J. (2017). Advantaged/disadvantaged school neighborhoods, parental networks, and parental involvement at elementary school. Sociology of Education, 90(4), 355–377. http://www.jstor.org/stable/26383024

Magnuson, K. A., & Waldfogel, J. (2005). Early childhood care and education: Effects on ethnic and racial gaps in school readiness. The Future of Children, 15(1), 169-196. https://www.jstor.org/stable/1602667

OECD. (2017). Starting Strong 2017: Key OECD indicators on early childhood education and care. OECD Publishing. http://dx.doi.org/10.1787/9789264276116-en

Pianta, R. C., Barnett, W. S., Burchinal, M., & Thornburg, K. R. (2009). The effects of preschool education: What we know, how public policy is or is not aligned with the evidence base, and what we need to know. Psychological Science in the Public Interest, 10(2), 49-88. https://doi.org/10.1177/1529100610381908

Chetty, R., Friedman, J. N., Hilger, N., Saez, E., Schanzenbach, D. W., & Yagan, D. (2011). How does your kindergarten classroom affect your earnings? Evidence from Project STAR. The Quarterly Journal of Economics, 126(4), 1593–1660. https://doi.org/10.1093/qje/qjr041

Staub, C. J. (2017). Equity starts early. CLASP. https://www.clasp.org/sites/default/files/publications/2017/12/2017_EquityStartsEarly_0.pdf

Loeb, S., Bridges, M., Bassok, D., Fuller, B., & Rumberger, R. W. (2007). How much is too much? The influence of preschool centers on children's social and cognitive development. Economics of Education Review, 26(1), 52-66. https://doi.org/10.1016/j.econedurev.2005.11.005

The United States Government. (2021, July 23). Fact sheet: How the Biden-Harris administration is advancing educational equity. The White House. https://www.whitehouse.gov/briefing-room/statements-releases/2021/07/23/fact-sheet-how-the-biden-harris-administration-is-advancing-educational-equity/

The World Bank. (2013, March). Early childhood education in Turkey. documents1.worldbank.org/curated/en/987251468110675594/pdf/777230ENGLISH0ECE0EN0july03.pdf

UNESCO. (n.d.). https://unesdoc.unesco.org/ark:/48223/pf0000383668/PDF/383668eng.pdf.multi

Ventura, L. (2024, June 27). Richest countries in the world 2024. Global Finance Magazine. https://gfmag.com/data/richest-countries-in-the-world/

Yu, H. M., Cho, Y. J., Kim, H. J., Kim, J. H., & Bae, J. H. (2021). A mixed-methods study of early childhood education and care in South Korea: Policies and practices during COVID-19. Early Childhood Education Journal. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8360645/



Music Instrumental Interventions and their Effects on Mental Health and Social Adjustment among Refugee Children and Adolescents

Author Full Name (Last Name, First Name): Hwang, Hannah
School Name: Berkshire School

Abstract

Refugee children and adolescents often face significant psychosocial challenges due to trauma and displacement, which traditional therapeutic methods often fail to address. This study aimed to examine the effects of music therapy, particularly instrumental interventions, among refugee children, as a potential alternative method for enhancing mental health and social adjustment. Data were drawn from the "Part project Oldenburg," which investigated the impact of school-based musical interventions on psychosocial health among third-grade refugee children in Germany (n = 161). Using measures of (1) empathy, self-efficacy, optimism, self-concept, and integration into peer groups (FRKJ) and (2) orientation to the host/origin culture (FRACC-Y), the effects of the music intervention were compared to those of a mathematical intervention. Analysis of variance (ANOVA) was performed to compare the mean differences in changes in psychological health and cultural adjustment between the two groups (P < 0.05). Those in the instrumental group showed significantly greater improvement in self-efficacy scores compared to the math group (P = .029). However, no significant group differences were found in the other variables observed (Ps > .05). This study highlights the unique ability of music interventions to enhance self-efficacy among refugee children, contributing to their psychological well-being. Our findings provide insights for developing holistic support strategies for refugee children by emphasizing the potential of music-based interventions in therapeutic and educational settings.

Keywords: Refugee children, Self-efficacy, Music therapy, Mental health, Social adjustment



Introduction

Music has a profound impact on mental health, serving as a powerful tool for emotional regulation and psychological well-being (Dingle et al., 2021). It can reduce stress, alleviate symptoms of depression and anxiety, and improve mood by triggering the release of dopamine, a neurotransmitter central to the brain's reward system (Chanda & Levitin, 2013). Listening to soothing music has been shown to lower cortisol levels, thereby helping to manage stress and promote relaxation (Hou et al., 2017). Furthermore, engaging in musical activities, such as playing an instrument or singing, fosters social connections and provides a creative outlet, which can enhance self-esteem and provide a sense of accomplishment (Dingle et al., 2021). Music therapy has been shown to be particularly effective in treating various mental health conditions, offering a noninvasive and enjoyable way to support mental health recovery and maintenance (Huang & Huang, 2023).

Refugee children and adolescents, having experienced significant trauma and displacement, face unique challenges in their mental health and social adjustment (Dangmann et al., 2022). Traditional therapeutic approaches often fall short in addressing the complex needs of these young individuals, necessitating the exploration of alternative methods (Pacione et al., 2013). Music therapy, with its non-verbal and expressive nature, offers a promising avenue for intervention (Kamioka et al., 2014). By engaging in music interventions, refugee children and adolescents can express their emotions, build resilience, and foster social connections in a safe and supportive environment (Marsh, 2016). Among the various types of music therapy, the active use of musical instruments has been particularly effective in eliciting significant mental health benefits, such as reducing symptoms of depression, post-traumatic stress disorder, and anxiety disorders (Leubner & Hinterberger, 2017; Matney, 2017; Pezzin et al., 2018).

More specifically, music intervention, particularly through the use of musical instruments, can significantly enhance mental health by targeting self-efficacy, optimism, self-concept, and empathy (Ritchie & Williamon, 2011; Taft, 2019). Learning to play an instrument builds self-efficacy by helping individuals master new skills and achieve goals, fostering confidence (Hendricks, 2016). It can also nurture optimism through positive experiences, stress reduction, and social connections (Zarza-Alzugaray et al., 2024). Further, engaging in music helps improve self-concept by enabling self-expression, providing a sense of achievement and recognition, and aiding identity formation (Hash, 2017). Additionally, music can enhance empathy by promoting emotional engagement, facilitating group collaboration, and exposing individuals to diverse cultures and perspectives (Clarke et al., 2015). Overall, music intervention can offer a holistic approach to improving psychological well-being and personal growth (Cuadrado, 2019).

Additionally, instrument-based music interventions facilitate social adjustment by cultivating social skills, enhancing communication, and promoting a sense of belonging (Gooding, 2011). Participating in music groups or ensembles requires cooperation, active listening, and teamwork, which helps individuals develop interpersonal skills and learn to work harmoniously with others (Kniffin et al., 2017). The shared experience of making music creates opportunities for social interaction and bonding, reducing feelings of isolation and promoting inclusivity (Stensæth, 2018).
Furthermore, music often serves as a universal language, breaking down cultural and linguistic barriers and facilitating connections between diverse groups (Letts, 1997). This collaborative and inclusive environment may help individuals adjust socially by building relationships, increasing social confidence, and enhancing their ability to navigate various social settings (Boal-Palheiros & Ilari, 2023; Ilari, 2016).

Therefore, this study aimed to investigate the impacts of a musical instrumental intervention on the mental health and social adjustment of refugee children and adolescents. Specifically, our research compared the effects of an instrumental intervention with those of an academic intervention (math) to assess their impact on features such as empathy, self-efficacy, optimism, self-concept, and social and cultural integration. Our hypotheses were as follows. First, we hypothesized that the Music group would show greater mental health benefits (e.g., empathy, self-efficacy, optimism, self-concept) than the Math group among refugees in Germany. Second, we predicted that the Music group would also show greater social adjustment benefits than the Math group.



The comparative aspect of our study highlights the merits of instrumental interventions, offering valuable insights for practitioners and policymakers. It also contributes to the growing body of evidence supporting the use of music therapy as a vital tool in the holistic treatment of refugee children and adolescents.

Methods

Participants and procedure

The data for this study were obtained from the "Part project Oldenburg: Learning music, cognitive, psychosocial and integrative development of primary school children" dataset. The Part project Oldenburg was funded by the Federal Ministry of Education and Research (BMBF). It collected information on how school-based musical intervention affected cognitive, affective, and integrative development in refugee children and adolescents. The study compared the effects of music programs to those of a mathematical intervention and a control condition without any form of intervention. The participants were third-grade children from seven schools in the city of Oldenburg, Germany; these schools were selected for having a high number of children with migration backgrounds. Data were collected at three time points relative to the intervention (T1: pre-intervention, T2: post-intervention, T3: follow-up). The sample size for the raw data was 220 participants (n = 109 female, n = 111 male; n = 136 German, n = 84 migrant). From these raw data, the current study used only the data of participants who engaged in either the music program or the mathematical intervention. We therefore excluded participants from the no-intervention group, as well as participants with missing demographic data, which resulted in a total sample size of 161 participants. Of these participants, 88 were in the "Music group" and 73 were in the "Math group."

Measures

Empathy, self-efficacy, optimism, and integration into peer group (FRKJ). Psychological health was assessed using the German questionnaire on resources in children and adolescents (Fragebogen zu Ressourcen im Kindes- und Jugendalter; FRKJ) (Wienand, 2017). The FRKJ has a total of 36 items, with four subscales: (1) empathy, (2) self-efficacy, (3) optimism, and (4) integration into peer groups. Responses were measured on a scale ranging from 1 ("never true") to 4 ("always true"). The internal reliability of the scale was adequate, ranging from Cronbach's α = 0.69 to α = 0.89.

Orientation to German culture and orientation to origin culture (FRACC-Y). Cultural adjustment was assessed using the Frankfurt Youth Acculturation Scale (FRACC-Y) (Frankenberg & Bongard, 2013). The FRACC-Y has a total of 12 items: a 7-item factor measuring (1) orientation toward the host culture and a 5-item factor assessing (2) orientation toward the culture of origin. Responses were assessed on a 5-point scale ranging from 0 ("not at all true") to 4 ("completely true"). Both subscales showed sufficient internal reliability (Cronbach's α = 0.62 and 0.79).

Statistical analysis

Statistical analyses were conducted using the jamovi software. First, chi-square and independent t-tests were performed to assess differences in demographics, such as gender, age, household annual income, nationality, and language, between the Music and Math groups. Subsequently, an analysis of variance (ANOVA) was used to compare the mean differences in changes in psychological health and cultural adjustment between the two groups. The significance level for all analyses was set at P < 0.05 (two-tailed).
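Because the statistical pipeline above is described only at a high level (the original analyses were run in the jamovi GUI), the following short Python sketch illustrates the same sequence of tests. It is a minimal illustration, not the authors' code: the file name and all column names (oldenburg_subset.csv, group, age_months, frkj_se_t1, etc.) are hypothetical placeholders, not the actual Part project Oldenburg variables.

# A minimal sketch of the analysis pipeline described above; file and column
# names are hypothetical placeholders, one row per child.
import pandas as pd
from scipy import stats

df = pd.read_csv("oldenburg_subset.csv")  # hypothetical data file

# 1) Demographic comparisons: chi-square for categorical variables,
#    independent-samples t-test for continuous ones.
gender_table = pd.crosstab(df["group"], df["gender"])  # group: "music"/"math"
chi2, p_gender, dof, _ = stats.chi2_contingency(gender_table)

music = df[df["group"] == "music"]
math_ = df[df["group"] == "math"]
t_age, p_age = stats.ttest_ind(music["age_months"], math_["age_months"])

# 2) Change scores (post-intervention T2 minus pre-intervention T1)
#    for each FRKJ / FRACC-Y subscale, e.g. self-efficacy (FRKJ-SE).
df["se_change"] = df["frkj_se_t2"] - df["frkj_se_t1"]

# 3) One-way ANOVA comparing the mean change between the two groups
#    (with only two groups this is equivalent to an independent t-test).
f_se, p_se = stats.f_oneway(
    df.loc[df["group"] == "music", "se_change"],
    df.loc[df["group"] == "math", "se_change"],
)
print(f"FRKJ-SE change: F = {f_se:.3f}, P = {p_se:.3f}")  # significant if P < .05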

Results

Demographic features



When we compared individuals in the Music group to those in the Math group, there were no statistically significant differences between groups in gender (P = .955), age (P = .722), household annual income (P = .895), nationality (P = .944), or language (P = .274). On average, across both groups, 49.1% of participants were female, the mean age was 104 months, and household annual income was low to middle class. Additionally, 84.2% of the participants were born in Germany, and 27.5% were bilingual. These results are shown in more detail in Table 1.

Table 1. Group comparisons in demographic features

                                  Instrumental group   Math group      T(χ²)     P
Gender (% Female)                 48.9%                49.3%           (0.003)   0.955
Age (Months)                      104.08 (7.59)        104.49 (6.62)   -0.357    0.722
Household Annual Income           2.29 (2.00)          2.35 (2.02)     -0.132    0.895
Nationality (% Born in Germany)   84.1%                84.3%           (0.116)   0.944
Language (% Bilingual)            31.0%                23.3%           (1.19)    0.274

Notes. P < 0.05*, P < 0.01**

Mental health and sociocultural adjustment features

In partial alignment with our first hypothesis that the Music group would show greater mental health benefits (e.g., empathy, self-efficacy, optimism, integration into peer groups) than the Math group, individuals in the Music group showed a significantly greater change in self-efficacy (F = 4.857, P = .029). Figure 1 visualizes the change in scores on the self-efficacy subscale. However, there were no group differences in changes in empathy (P = .345), optimism (P = .071), or integration into peer groups (P = .868).

Figure 1. Changes in FRKJ-SE scores by group



In contrast to our second hypothesis that the Music group would also show greater social adjustment benefits than the Math group, we did not find any group differences in changes in scores for orientation to German culture (P = .179) or orientation to the culture of origin (P = .928). Table 2 depicts the results of all of these analyses.

Table 2. Group comparisons in changes on FRKJ and FRAKK scores

               Instrumental group   Math group       F       P
FRKJ-EPA       0.641 (4.62)         -0.116 (4.82)    0.899   0.345
FRKJ-SE        0.715 (4.07)         -0.856 (4.35)    4.857   0.029*
FRKJ-OP        0.246 (3.75)         -1.006 (4.34)    3.31    0.071
FRKJ-IP        0.641 (3.54)         0.531 (4.18)     0.028   0.868
FRAKK-Host     -1.034 (3.85)        -2.688 (3.86)    1.895   0.179
FRAKK-Origin   0 (6.34)             -0.231 (7.96)    0.008   0.928

Notes. P < 0.05*, P < 0.01**; FRKJ-EPA, Empathy Subscale; FRKJ-SE, Self-Efficacy Subscale; FRKJ-OP, Optimism Subscale; FRKJ-IP, Integration into Peer Group Subscale; FRAKK-Host, Orientation to German Culture Subscale; FRAKK-Origin, Orientation to Origin Culture Subscale
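As an illustrative side note (added here, not part of the original analysis): because only two groups are compared, each one-way ANOVA in Table 2 is equivalent to a pooled two-sample t-test on the change scores, so each F value is the square of the corresponding t statistic:

% With k = 2 groups, the one-way ANOVA F statistic equals the squared
% pooled two-sample t statistic computed on the change scores d:
\[
F = \frac{\mathrm{MS}_{\mathrm{between}}}{\mathrm{MS}_{\mathrm{within}}} = t^{2},
\qquad
t = \frac{\bar{d}_{\mathrm{music}} - \bar{d}_{\mathrm{math}}}
         {s_{p}\sqrt{\tfrac{1}{n_{\mathrm{music}}} + \tfrac{1}{n_{\mathrm{math}}}}}
\]
% Example: for FRKJ-SE, F = 4.857 corresponds to |t| = sqrt(4.857) ≈ 2.20.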

Discussion

The findings of our study not only contribute valuable insights into the psychological benefits of music interventions but also come from one of the first studies to focus on a particularly vulnerable population: refugee children. Unlike previous research, which was primarily cross-sectional and often focused on the effects of short-term interventions in the general population (Geipel et al., 2022; Moss et al., 2018), our study employed a longitudinal design to capture the potential causal effect of instrumental music on mental health factors and social adjustment skills over time. Overall, our results indicated a significant improvement in self-efficacy scores in the Music group compared to the Math group. However, the Music group did not show any significant differences in empathy, optimism, integration into peer groups, or adjustment to the host and origin cultures.

Our study marks a key discovery that underscores the unique ability of music to develop self-efficacy. In accordance with our hypothesis that music intervention would bring mental health benefits, we found that those who received the musical instrumental intervention showed a significant increase in self-efficacy. Not surprisingly, past studies have demonstrated that practicing and mastering the complex skills required to play a musical instrument contributes profoundly to self-efficacy by raising a sense of accomplishment (Hendricks, 2016). Further, similar to the concept of self-efficacy, music therapy has also been shown to effectively boost self-esteem in emotionally disturbed adolescents and in undergraduates with anxiety and depression (Haines, 1989; Wu, 2002). This may suggest that the structured and expressive nature of music-making provides individuals with a safe and effective outlet to explore and recognize their self-worth (Wu, 2002). Indeed, since self-efficacy and self-esteem are central to mental health, numerous studies have documented that music positively impacts overall mental well-being (Wesseldijk et al., 2019; Lin et al., 2011; Zhou et al., 2021). Thus, these findings collectively point to the transformative power of music not only in strengthening self-efficacy but also in promoting general mental health, making music a crucial tool for interventions supporting psychological well-being.

More specifically, our study highlighted that music can be an especially powerful tool for the recovery of refugee children, who often face immense psychological and emotional challenges. For example, a study conducted by Begovac and colleagues (2004) highlighted that refugee children often struggled with poor self-image due to the trauma of war and the challenges of their refugee status, which significantly impaired their self-perception. Further, the internalization of stigma towards refugees has also contributed to their low self-esteem (Kira et al., 2014).



In this context, there may be several reasons why music affected self-efficacy in these refugee children. First, music could have enhanced self-efficacy or self-esteem by giving a sense of connection and/or contribution to society and to other individuals. Past studies have posited that participating in music programs enhances social connections and a sense of belonging and community (Barrett & Bond, 2015; Lenette et al., 2016; Perkins et al., 2020). Second, music could have impacted self-worth by nurturing a sense of autonomy, which may be particularly lacking in refugees who have experienced disempowerment and loss of control in their lives. For instance, studies have shown that music may raise persistence through an iterative process of practice and feedback, with positive instruction further reinforcing this persistence and resulting in a sense of autonomy (Hayes, 2019; Küpers et al., 2014; Patston & Waters, 2015). Moreover, the sense of autonomy gained through music practice enhances self-efficacy and motivation, demonstrating that autonomy from music leads to greater fulfillment (Bonneville-Roussy et al., 2020; Valenzuela et al., 2018).

In contrast to our hypothesis, however, we did not find any significant changes in empathy, optimism, integration into peer groups, or orientation to the host and origin cultures following the musical instrumental intervention in refugee children. There may be several reasons why this is the case. First, contrary to the prevalent notion that refugee children may lack baseline levels of empathy and/or optimism, these qualities may actually be preserved or remain intact in these children. For instance, in a previous study of African asylum seekers and refugees, these individuals showed the same, or even higher, levels of empathy towards other people's suffering (Aragona et al., 2020). In another study, refugees and newcomers to Canada showed high levels of empathy, optimism, and openness to a new culture (Magro, 2009). Second, external societal factors, which were not controlled in our study, may have affected the degree of integration into peer groups and orientation to culture. In fact, a previous study of Syrian refugees reported that external challenges like bullying and racial discrimination significantly hindered a sense of belonging, making children feel that they belong "nowhere" (Guo et al., 2019). Even among Somali refugees, unsupportive social interactions significantly impacted cultural adaptation (Jorden et al., 2009). Lastly, other aspects of our study design, such as the duration and intensity of the music intervention, might not have been sufficient to produce measurable changes in these complex outcomes. The multifaceted nature of these psychological and social constructs may require longer and more intensive interventions to yield significant results.

There are several limitations to consider when interpreting our results. First, our study focused on refugee children, which limits the generalizability of our findings on the effects of music intervention to broader populations, such as older adults, other vulnerable groups, or healthy children. Second, we could not compare the effects of different types of musical instruments (e.g., strings, percussion, and wind instruments), which would have provided a more nuanced understanding of how different forms of instrumental intervention influence mental health.
Third, given that we used existing data, we could not examine the impact of the instrumental intervention on other indices of mental health, such as depression, anxiety, and trauma. Future studies should employ a broader range of mental health indices to provide a deeper understanding of this area. Lastly, as our study relied on self-report measures, future studies could use more observational measures of social adjustment or mental health, such as Ecological Momentary Assessments (EMA). Despite these limitations, our study had clear strengths. One significant strength was the longitudinal design, which allowed for the observation of mental health changes over time and the establishment of a potential causal relationship between music interventions and mental health outcomes. Another strength is the use of a sample of refugee children, a special population that has often been underrepresented in other studies.



In conclusion, the findings of our study carry significant implications for refugee care, psychological interventions, education, and policy making. In the care of refugee children, past interventions have focused predominantly on increasing academic performance (Aghajafari et al., 2020), often neglecting the need to enhance self-efficacy and emotional well-being. This study, along with insights from past studies, demonstrates that music-based interventions can be a novel avenue for holistically addressing the needs of refugee children (Hettich et al., 2020; Kankaanpää et al., 2022). Thus, educational interventions that target refugee children should incorporate and invest in more music-based activities (McFerran et al., 2017; Shuler, 2011). Furthermore, our research has implications for recognizing the specific needs of refugee children, rather than treating them as a subset of the broader adult refugee population (Bronstein & Montgomery, 2011; Farmer, 2018). Our findings shed light on the profound impact that targeted interventions like music therapy can have on transforming the lives of refugee children.

References

1. Dingle, G. A., Sharman, L. S., Bauer, Z., Beckman, E., Broughton, M., Bunzli, E., ... & Wright, O. R. L. (2021). How do music activities affect health and well-being? A scoping review of studies examining psychosocial mechanisms. Frontiers in Psychology, 12, 713818.
2. Chanda, M. L., & Levitin, D. J. (2013). The neurochemistry of music. Trends in Cognitive Sciences, 17(4), 179-193.
3. Hou, Y. C., Lin, Y. J., Lu, K. C., Chiang, H. S., Chang, C. C., & Yang, L. K. (2017). Music therapy-induced changes in salivary cortisol level are predictive of cardiovascular mortality in patients under maintenance hemodialysis. Therapeutics and Clinical Risk Management, 263-272.
4. Huang, E., & Huang, J. (2023). Music therapy: A noninvasive treatment to reduce anxiety and pain of colorectal cancer patients – a systemic literature review. Medicina, 59(3), 482.
5. Dangmann, C., Dybdahl, R., & Solberg, Ø. (2022). Mental health in refugee children. Current Opinion in Psychology, 48, 101460.
6. Pacione, L., Measham, T., & Rousseau, C. (2013). Refugee children: Mental health and effective interventions. Current Psychiatry Reports, 15, 1-9.
7. Kamioka, H., Tsutani, K., Yamada, M., Park, H., Okuizumi, H., Tsuruoka, K., … Mutoh, Y. (2014). Effectiveness of music therapy: A summary of systematic reviews based on randomized controlled trials of music interventions. Patient Preference and Adherence, 8, 727–754.
8. Marsh, K. (2016). Creating bridges: Music, play and well-being in the lives of refugee and immigrant children and young people. Music Education Research, 19(1), 60–73.
9. Leubner, D., & Hinterberger, T. (2017). Reviewing the effectiveness of music interventions in treating depression. Frontiers in Psychology, 8, 1109.
10. Matney, B. (2017). The effect of specific music instrumentation on anxiety reduction in university music students: A feasibility study. The Arts in Psychotherapy, 54, 47-55.
11. Pezzin, L. E., Larson, E. R., Lorber, W., McGinley, E. L., & Dillingham, T. R. (2018). Music-instruction intervention for treatment of post-traumatic stress disorder: A randomized pilot study. BMC Psychology, 6, 1-9.
12. Ritchie, L., & Williamon, A. (2011). Measuring distinct types of musical self-efficacy. Psychology of Music, 39(3), 328-344.



13. Taft, S. A. (2019). Harnessing optimism in response to musical challenges. Music Educators Journal, 106(2), 22-27.
14. Hendricks, K. S. (2016). The sources of self-efficacy: Educational research and implications for music. Update: Applications of Research in Music Education, 35(1), 32-38.
15. Zarza-Alzugaray, B., Casanova, O., & Zarza-Alzugaray, F. J. (2024). Musical instrumental self-concept, social support, and grounded optimism in secondary school students: Psycho-pedagogical implications for music education. Education Sciences, 14(3), 286.
16. Hash, P. M. (2017). Development and validation of a music self-concept inventory for college students. Journal of Research in Music Education, 65(2), 203-218.
17. Clarke, E., DeNora, T., & Vuoskoski, J. (2015). Music, empathy and cultural understanding. Physics of Life Reviews, 15, 61-88.
18. Cuadrado, F. (2019). Music and talent: An experimental project for personal development and well-being through music. International Journal of Music Education, 37(1), 156-174.
19. Gooding, L. F. (2011). The effect of a music therapy social skills training program on improving social competence in children and adolescents with social skills deficits. Journal of Music Therapy, 48(4), 440-462.
20. Kniffin, K. M., Yan, J., Wansink, B., & Schulze, W. D. (2017). The sound of cooperation: Musical influences on cooperative behavior. Journal of Organizational Behavior, 38(3), 372-390.
21. Stensæth, K. (2018). Music as participation! Exploring music's potential to avoid isolation and promote health. Music and Public Health: A Nordic Perspective, 129-147.
22. Letts, R. (1997). Music: Universal language between all nations? International Journal of Music Education, (1), 22-31.
23. Boal-Palheiros, G., & Ilari, B. (2023). Music, drama, and social development in Portuguese children. Frontiers in Psychology, 14, 1093832.
24. Ilari, B. (2016). Music in the early years: Pathways into the social world. Research Studies in Music Education, 38(1), 23-39.
25. Geipel, J., Koenig, J., Hillecke, T. K., & Resch, F. (2022). Short-term music therapy treatment for adolescents with depression – A pilot study. The Arts in Psychotherapy, 77, 101874.
26. Moss, H., Lynch, J., & O'Donoghue, J. (2018). Exploring the perceived health benefits of singing in a choir: An international cross-sectional mixed-methods study. Perspectives in Public Health, 138(3), 160-168.
27. Haines, J. H. (1989). The effects of music therapy on the self-esteem of emotionally-disturbed adolescents. Music Therapy, 8(1), 78-91.
28. Wu, S. M. (2002). Effects of music therapy on anxiety, depression and self-esteem of undergraduates. Psychologia, 45(2), 104-114.
29. Wesseldijk, L. W., Ullén, F., & Mosing, M. A. (2019). The effects of playing music on mental health outcomes. Scientific Reports, 9(1), 12606.
30. Lin, S. T., Yang, P., Lai, C. Y., Su, Y. Y., Yeh, Y. C., Huang, M. F., & Chen, C. C. (2011). Mental health implications of music: Insight from neuroscientific and clinical studies. Harvard Review of Psychiatry, 19(1), 34-46.
31. Zhou, Z., Zhou, R., Wei, W., Luan, R., & Li, K. (2021). Effects of music-based movement therapy on motor function, balance, gait, mental health, and quality of life for patients with Parkinson's disease: A systematic review and meta-analysis. Clinical Rehabilitation, 35(7), 937-951.
32. Begovac, I., Rudan, V., Begovac, B., Vidović, V., & Majić, G. (2004). Self-image, war psychotrauma and refugee status in adolescents. European Child & Adolescent Psychiatry, 13, 381-388.
33. Kira, I. A., Lewandowski, L., Ashby, J. S., Templin, T., Ramaswamy, V., & Mohanesh, J. (2014). The traumatogenic dynamics of internalized stigma of mental illness among Arab American, Muslim, and refugee clients. Journal of the American Psychiatric Nurses Association, 20(4), 250-266.
34. Barrett, M. S., & Bond, N. (2015). Connecting through music: The contribution of a music programme to fostering positive youth development. Research Studies in Music Education, 37(1), 37-54.
35. Lenette, C., Weston, D., Wise, P., Sunderland, N., & Bristed, H. (2016). Where words fail, music speaks: The impact of participatory music on the mental health and wellbeing of asylum seekers. Arts & Health, 8(2), 125-139.
36. Perkins, R., Mason-Bertrand, A., Fancourt, D., Baxter, L., & Williamon, A. (2020). How participatory music engagement supports mental well-being: A meta-ethnography. Qualitative Health Research, 30(12), 1924-1940.
37. Hayes, L. (2019). Beyond skill acquisition: Improvisation, interdisciplinarity, and enactive music cognition. Contemporary Music Review, 38(5), 446-462.
38. Küpers, E., van Dijk, M., McPherson, G., & van Geert, P. (2014). A dynamic model that links skill acquisition with self-determination in instrumental music lessons. Musicae Scientiae, 18(1), 17-34.
39. Patston, T., & Waters, L. (2015). Positive instruction in music studios: Introducing a new model for teaching studio music in schools based upon positive psychology. Psychology of Well-being, 5, 1-10.
40. Bonneville-Roussy, A., Hruska, E., & Trower, H. (2020). Teaching music to support students: How autonomy-supportive music teachers increase students' well-being. Journal of Research in Music Education, 68(1), 97-119.
41. Valenzuela, R., Codina, N., & Pestana, J. V. (2018). Self-determination theory applied to flow in conservatoire music practice: The roles of perceived autonomy and competence, and autonomous and controlled motivation. Psychology of Music, 46(1), 33-48.
42. Aragona, M., Petta, A., Kiaris, F., Begotaraj, E., Lai, C., & Spitoni, G. F. (2020). The empathic migrant: Empathy is preserved in African refugees with PTSD. Dialogues in Philosophy, Mental & Neuro Sciences, 13(2).

127


44. Magro, K. (2009). Expanding conceptions of intelligence: Lessons learned from refugees and newcomers to Canada. Gifted and Talented International, 24(1), 79-92. 45. Guo, Y., Maitra, S., & Guo, S. (2019). “I belong to nowhere”: Syrian refugee children’s perspectives on school integration. Journal of Contemporary Issues in Education, 14(1), 89-105. 46. Jorden, S., Matheson, K., & Anisman, H. (2009). Supportive and unsupportive social interactions in relation to cultural adaptation and psychological distress among Somali refugees exposed to collective or personal traumas. Journal of Cross-Cultural Psychology, 40(5), 853-874. 47. Aghajafari, F., Pianorosa, E., Premji, Z., Souri, S., & Dewey, D. (2020). Academic achievement and psychosocial adjustment in child refugees: a systematic review. Journal of Traumatic Stress, 33(6), 908-916. 48. Hettich, N., Seidel, F. A., & Stuhrmann, L. Y. (2020). Psychosocial interventions for newly arrived adolescent refugees: A systematic review. Adolescent Research Review, 5(2), 99-114. 49. Kankaanpää, R., Aalto, S., Vänskä, M., Lepistö, R., Punamäki, R. L., Soye, E., ... & Peltonen, K. (2022). Effectiveness of psychosocial school interventions in Finnish schools for refugee and immigrant children,“Refugees Well School” in Finland (RWS- FI): a protocol for a cluster randomized controlled trial. Trials, 23(1), 79. 50. McFerran, K. S., Crooke, A. H. D., & Bolger, L. (2017). Promoting engagement in school through tailored music programs. International Journal of Education & the Arts, 18(3). 51. Shuler, S. C. (2011). Music education for life: Building inclusive, effective twenty-first- century music programs. Music Educators Journal, 98(1), 8-13. 52. Bronstein, I., & Montgomery, P. (2011). Psychological distress in refugee children: a systematic review. Clinical child and family psychology review, 14, 44-56. 53. Farmer, A. (2018). Finding a new balance: bringing together children’s rights law and migration policy for effective advocacy for migrant children. In Research handbook on child migration (pp. 173-186). Edward Elgar Publishing.

128


The Impact of the US Presidential Election on Global Business Strategies and Consumer Behavior

Author
Full Name (Last Name, First Name): Jang, Dongmin
School Name: Hankuk Academy of Foreign Studies

ABSTRACT

This paper was drafted as a rematch between Joe Biden and Donald Trump for the 2024 U.S. presidential election was being confirmed; however, the landscape then changed significantly: Donald Trump was shot at a rally, and Joe Biden, facing internal pressure, stepped down, passing the candidacy to Vice President Kamala Harris. The election is now a contest between Trump and Harris. The outcome will significantly impact U.S. policies and global business strategies, and Korean and other global companies should prepare for different scenarios. A Trump win could change U.S. investment and trade policies, especially toward China, while Harris is expected to continue Biden’s policies. Trump’s 2016 victory brought shifts in the U.S. stance on multilateral trade regimes, tax cuts, and deregulation, affecting global businesses. Biden’s 2020 election shifted strategies again, with investments in clean energy and infrastructure benefiting sectors like renewable energy and construction. The election’s impact extends to international relations, affecting companies like Huawei that face U.S. restrictions. Political scientists, including Martin Wolf, argue that Trump’s 2016 success was rooted in nationalism, xenophobia, and a cult of personality, leading to a rejection of alliances and scientific truth. Despite the pandemic, Trump garnered substantial support, culminating in the Capitol riot, which reflected deep loyalty among his followers. Kamala Harris is expected to continue Biden’s policies, although she has yet to present distinct campaign promises. The study examines how election outcomes influence global business strategies and consumer sentiment, drawing on case studies and market behavior. It emphasizes that public sentiment drives presidential choices while policy implementation shapes economic markets, and it concludes that presidential candidates must recognize bottom-up power dynamics and the global economic implications of their policies. The study also explores the intersection of property policy and new technologies like AI and virtual currencies, noting the candidates’ lack of focus on these issues. It stresses the need for responsible promises and practical attitudes from candidates, advocating a shift toward a more multilateral economic community away from U.S.-centric paradigms. Taken together, the paper analyzes the implications of the 2024 U.S. presidential election, focusing on how different outcomes could affect global businesses, consumer behavior, and economic policies, and urges candidates to make responsible and practical promises.

KEYWORDS
U.S. Presidential Election 2024, Donald Trump, Joe Biden, Kamala Harris, Global Business Strategies, Trade Policies, Political Sentiment



“When America sneezes, the world catches a cold.”
(Charles Maurice de Talleyrand-Périgord)

Ⅰ. Introduction

A rematch between Biden and Trump had been confirmed for the November 2024 U.S. presidential election; however, as this article was nearing completion, the landscape of the race changed significantly. During a campaign rally, Donald Trump was shot by an assailant. Meanwhile, Joe Biden, who was facing calls to step down even from within the Democratic Party, eventually announced that he would pass the candidacy to Vice President Kamala Harris. Consequently, the U.S. presidential election has become a two-way race between Trump and Harris. Depending on which of the two candidates wins, U.S. policies will differ considerably, and it is clearly time for Korean and global companies to rethink their strategies for doing business successfully in the U.S.

No doubt, the U.S. presidential election is a significant event that determines the political direction of the world’s largest economy, and its outcome has far-reaching implications for global business strategies and consumer behavior. Generally speaking, when the results of the U.S. election are announced, both domestic and global companies adjust their business strategies to align with the new policy environment, and market participants make investment decisions in anticipation of these changes. Korean companies, for instance, should consider both the possibility of a Democratic re-election and a Trump victory in their business planning. A Democratic re-election would be unlikely to force major changes to Korean companies’ business strategies, as many existing policies would be maintained, whereas Trump’s election is expected to bring significant changes to U.S. investment and trade policies, requiring closer scrutiny. With Trump leading in most of the key swing states, the various benefits for green industries under the IRA are likely to be scaled back under a Trump administration. One common prediction is that Trump will increase protectionist measures, including trade restrictions against China.

To take another example, the election of Donald Trump in 2016 signaled a shift in the U.S. stance on multilateral trade regimes such as the World Trade Organization (WTO), creating substantial uncertainty for global businesses that rely on free trade. While the tax cuts enacted under the Trump administration positively impacted corporate profitability, its immigration policies and environmental deregulation could exert long-term pressure on companies regarding workforce and sustainability. Companies like Apple Inc., for instance, have had to navigate the political risks associated with overseas manufacturing and supply chain management and have developed strategies to address these challenges. Additionally, environmental policy changes—such as the withdrawal from the Paris climate agreement—influenced companies in the energy sector to adjust the pace of their transition to green energy. The election of Joe Biden in 2020 once again changed corporate business strategies and consumer behavior; his plans to invest in clean energy and infrastructure sent positive signals to the renewable energy sector and construction-related industries. This shift was reflected in the rising market value of electric vehicle companies—such as Tesla Inc.—and solar panel manufacturers. The changing political landscape and economic policies in the U.S. are also important factors in its relations with China, the European Union (EU), and other major countries, redefining the international alliances and conflicts multinational companies face. As another case in point, China-based global technology companies—such as Huawei Technologies Co.—have had to navigate increased scrutiny and restrictions in the U.S. market, affecting their global business strategies.

As many political scientists, including Martin Wolf, have argued, Donald Trump’s election as President of the United States rested on three forces: part nationalism, part xenophobia, and part the cult of personality. As Wolf suggests, these three foundations, and a success strategy that sits uneasily with



American democracy, later led to a rejection of alliances, multilateralism, international rules, science and truth, and the reality of climate change.1 Yet, even through the 2020 pandemic, Trump won 46.8% of the vote. What is even more alarming is the storming of the U.S. Capitol by Trump’s followers. This sudden political act was a manifestation of blind loyalty: his supporters uncritically believed the “big lie” that the election had been stolen from their leader. Although Biden has stepped down, his policy successor, Kamala Harris, is unlikely to present campaign promises that diverge significantly from Biden’s; the fundamental course will remain unchanged. Therefore, this paper is written under the assumption that Harris—who has yet to present any distinctive policy platform—will maintain Biden’s campaign promises and administrative policies as she approaches the presidential election. Drawing upon these historical and current events, this study analyzes how the outcome of the US presidential election affects the business strategies of global companies and consumer sentiment. Through specific case studies, it explores in detail the macroeconomic volatility implied by the election results and how it manifests in different industry sectors.

Ⅱ. The Analysis of the Impact of the Presidential Election on Global Business and Consumer Behavior

In Andrew Heywood’s Politics, which has established itself as a standard textbook for teaching political science in American universities, we find a passage that summarizes the aftermath of globalization and its dangerous ramifications. Heywood’s summary captures what we could both imagine and have seen in practice. The first reality is that economic decisions are increasingly influenced by global financial markets, which were unstable to begin with. The second is the rise of individualism and the erosion of traditional social bonds; to borrow a term from the German sociologist Ulrich Beck, a ‘risk society’ is an inevitable result of globalization. The third and final category comprises arbitrary trends heading toward environmental crisis and destruction. What Heywood did not mention, but what we cannot ignore in terms of the economic impact that the results of the US presidential election may have on the world, are the following questions, bearing in mind his remark that “Globalization has never enjoyed universal support”2:

1. How to escape from excessive dependence on the US economy
2. What each country is preparing as a countermeasure and exit strategy
3. What Korea, specifically, can prepare given its situation

Considering these three derivative questions, we first take a look at the predictions and economic effects announced by various institutions and organizations regarding the results of the US presidential election.

1.1. Impact of policy commitments

While Trump’s policies share some similarities with those of the Biden administration—such as reviving manufacturing in the U.S.—he has been outspokenly critical of green policies, including support for electric vehicles and environmental regulations for internal combustion engines. Given the precedent set by the first Trump administration, a second Trump administration would likely seek to make these pledges a reality. The first Trump administration issued a total of 321 executive actions, 115 of which rolled back existing regulations, with nearly 100 related to environmental deregulation. Examples include the Clean Power Plan (CPP), introduced by the Obama administration to encourage the use of alternative energy and reduce carbon emissions. In an attempt to nullify the CPP, Trump issued an executive order directing a review of the CPP and introducing the Affordable Clean Energy (ACE) Rule, which favors coal-fired power plants. Some of the Trump administration’s deregulatory efforts were ultimately ruled unlawful by federal courts, but the process of reinstating green policies has been lengthy and costly.

Figure 1: Survey of US adults conducted July 27-Aug 2, 2020.3

As a preliminary indication of the outcome of the November 2024 presidential election, a survey conducted in August 2020 already showed the two candidates’ platforms and voters’ mixed reactions to them. According to data compiled and published by the Pew Research Center, the economy was rated as the most important factor in deciding which candidate to support. With the US economy then entering a contraction, 8 out of 10 registered voters said economic issues were the most important in the 2020 presidential election. Other issues perceived as important by U.S. voters included healthcare, Supreme Court nominations, the coronavirus outbreak, and violent crime. Yet, there was a stark difference in the importance of issues for registered voters supporting Trump and Biden. Trump supporters, for instance, saw the economy (88%) and violent crime (74%) as the most important issues, while Biden supporters rated healthcare (84%) and the coronavirus outbreak (82%) as very important.

1.2. Corporate Case Studies

As is well known, the defining characteristic of the Trump administration as of 2020 was its emphasis on America First and protectionism. Yet the US trade deficit did not improve and continued to widen. More specifically, the US was engaged in a trade war with China. The conflict escalated as the two countries imposed reciprocal tariffs, and the first phase of a trade agreement signed in January 2020 seemed to put a stop to the trade war; however, the US subsequently continued to press China, citing pressure on American businesses and blaming China for COVID-19. China’s fulfillment rate for its committed purchases of US agricultural, energy, and commodity goods, for instance, stood at 53.5% as of September 2020. The US still imposes high tariffs on China and has signed executive orders imposing additional sanctions on WeChat and seeking to ban TikTok and WeChat in the US. Whichever of Trump and Biden becomes president, the largest economic base, that is, the industry on which the United States will steadily focus its entire economic power as a matter of utmost interest, will be semiconductors. If the Internet has been “one of the most fruitful innovation platforms in history” over the past 40 years, then semiconductors now best represent the foundation of future innovative technologies and the tremendous power they will generate. The word “omnibus,” applied to such a general-purpose technology, captures “a concept that grasps at the sheer levels of generality, the extreme versatility on display,”4 and the semiconductor industry will be an essential part of the foundation of global industry for the next 20 years or more.

1.3. International trade and policy change

During his first term, Trump signed several executive orders prioritizing the elimination of bilateral trade deficits with countries running large trade surpluses with the U.S. He also ordered investigations into unfair trade practices by China and imposed special tariffs of 10 percent on Chinese imports, with further tariff increases on certain items, citing intellectual property infringement and forced technology transfer. Additionally, Trump pursued a protectionist trade policy based on the “America First” principle, issuing executive orders banning the use of telecommunications equipment from China’s Huawei and ZTE and renegotiating the North American Free Trade Agreement (NAFTA) along with other U.S. trade agreements. Trump’s second term is expected to continue in the same direction as his first, with his presidential campaign pledge, Agenda 47, outlining his trade policies. Trump has criticized the Biden administration’s trade policies as pro-China and has stated that he would overhaul the tax and trade system to protect domestic manufacturing and impose additional taxes on foreign companies. As part of this, he has proposed a four-year plan to impose a “universal baseline tariff” of up to 10 percent on foreign goods, revoke China’s most-favored-nation status, and ban imports of essential goods—such as electronics, steel, and pharmaceuticals—from China. In addition, the Uyghur Forced Labor Prevention Act (UFLPA), passed under the Biden administration, prohibits the importation into the United States of any goods produced in the Uyghur region of China or by entities on the U.S. Government Designated Entity List (the “UFLPA Covered Goods”). This act is also likely to be enforced by a Trump administration as a primary means of restricting the importation of Chinese goods into the United States.

Now, let us compare the key policies and their expected outcomes between the two candidates. The Biden administration has largely continued the tariff policies initiated by the Trump administration, particularly under Section 232 of the Trade Expansion Act and Section 301 of the Trade Act. Recently, Biden requested the U.S. Trade Representative to triple tariffs on certain steel and aluminum products subject to Section 301 tariffs. This continuation suggests that Biden’s approach to trade policy remains focused on addressing trade imbalances and protecting domestic industries, albeit with some adjustments. When it comes to economic objectives, Biden’s tariff policy aims to 1) address trade imbalances, 2) protect domestic industries, and 3) promote domestic investment. More specifically, the projected impact on the U.S. economy is as follows (see the worked arithmetic after this list):

1) Imports and Exports: The unilateral imposition of tariffs is expected to reduce U.S. imports by $255.1 billion to $499.7 billion and decrease exports by $83.6 billion to $184.4 billion, depending on the scenario.
2) Trade Balance: The U.S. trade balance could improve by $171.5 billion to $315.3 billion.
3) Economic Growth: Additional tariffs are anticipated to negatively impact U.S. economic growth in all scenarios, with the extent of the negative effects varying based on the type and extent of tariffs imposed.
4) Consumer Prices: Tariffs are expected to raise consumer prices, with increases ranging from 1.8% to 10.4% depending on the scenario and the level of retaliation from trading partners.
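As a quick consistency check on the figures above (a back-of-envelope reading added here, not part of the cited estimates), the trade-balance bounds follow directly from netting the import and export effects within each scenario:

$$\Delta TB_{\min} = 255.1 - 83.6 = 171.5, \qquad \Delta TB_{\max} = 499.7 - 184.4 = 315.3 \quad \text{(billions of USD)}$$

That is, each scenario’s projected trade-balance improvement is simply its import reduction net of the retaliation-driven export loss.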
The impact on the Korean economy can be summarized as follows: Korea’s total exports could decrease by $5.3 billion to $24.1 billion. If the U.S. excludes Korea and the USMCA countries from tariffs, Korea’s exports may still decrease by $5.3 billion to $7.7 billion due to indirect effects. If universal tariffs are imposed on Korea, its exports to the U.S. could decrease by approximately $15.2 billion, with a similar decrease in exports to third countries due to indirect effects.

From the perspective of policy implications, however, the continuation of Trump’s tariff policies under Biden has had mixed results. While tariffs may improve the U.S. trade balance in the short term, trade imbalances are ultimately determined by the difference between savings and investment (see the identity below). Universal tariffs could raise questions about their legitimacy and potentially threaten U.S. global leadership. Expanding tariffs to include FTA partners requires a cautious approach: while the improvement in the U.S. trade balance might be modest, such tariffs could significantly increase inflationary pressure on U.S. consumers and lead to legal disputes, destabilizing relationships with allies.

Having observed Biden’s policies and their expected impacts, it is time to examine the background of Trump’s tariff policy, one of his most representative ideas. Trump’s tariff policies during his first term were marked by significant protectionist measures, implemented under Sections 201 and 301 of the Trade Act and Section 232 of the Trade Expansion Act. The Biden administration has largely continued these policies, even requesting the U.S. Trade Representative to triple tariffs on certain steel and aluminum products. The theoretical basis for these policies is the beggar-thy-neighbor approach, which may improve domestic welfare at the cost of global welfare. This leads naturally to the policies’ economic and political objectives. The primary objectives of Trump’s tariff policies include addressing trade imbalances, protecting domestic industries, and promoting U.S. manufacturing jobs. In trade policy, the goal is to negotiate from a position of strength against perceived unfair practices such as intellectual property theft and state-owned enterprise subsidies. Politically, the policies resonate with anti-globalization sentiments, particularly in regions like the Rust Belt. The projected impact on the U.S. economy has the following features: 1) the unilateral imposition of tariffs is expected to reduce U.S. imports by $255.1 billion to $499.7 billion, while retaliatory tariffs could decrease U.S. exports by $83.6 billion to $184.4 billion; 2) the overall trade balance could improve by $171.5 billion to $315.3 billion; 3) the decrease in imports and exports is greater with universal tariffs than with reciprocal tariffs, and the impact on economic growth is negative across all scenarios, with higher tariffs leading to greater negative effects; and 4) consumer prices are expected to rise significantly, with potential increases ranging from 1.8% to 10.4% depending on the scenario. Previous analyses have deemed the Trump administration’s first-term tariff policy unsuccessful in achieving its economic objectives. While tariffs could improve the trade balance in the short term, trade imbalances are ultimately determined by the difference between savings and investment, making lasting impacts through trade policy difficult. Universal tariffs could also threaten U.S. global leadership and increase inflationary pressures domestically. Expanding tariffs to FTA partners requires caution due to potential inflationary impacts and legal disputes, which could destabilize relationships with allies.
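The savings-investment point can be made precise with the standard national-income identity (a textbook relation, included here for clarity rather than taken from the cited brief). Starting from $Y = C + I + G + NX$ and defining national saving $S \equiv Y - C - G$:

$$NX = Y - C - G - I = S - I$$

Net exports equal national saving minus domestic investment, so tariffs that leave the saving-investment gap unchanged cannot durably move the trade balance.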
Promoting trade cooperation over protectionist measures is emphasized for its mutual benefits, as seen in the USMCA and the KORUS FTA. In summary, Trump’s tariff policies for a potential second term are poised to continue the protectionist approach, focusing on economic and political gains. Yet the broader implications for both the U.S. and its trading partners point to significant economic challenges and the need for a balanced approach to trade policy.5 At the time of writing, the 2024 U.S. presidential election was shaping up as a rematch between former President Donald Trump and incumbent President Joe Biden. Both parties will confirm their candidates at national conventions, with the Republicans convening in July in Milwaukee, Wisconsin, and the Democrats in August in Chicago, Illinois. The campaign will lead up to the election of the 47th U.S. president on November 5, 2024.



The Biden administration has largely continued the tariff policies initiated by the Trump administration, particularly under Section 232 of the Trade Expansion Act and Section 301 of the Trade Act. Recently, Biden requested the U.S. Trade Representative to triple tariffs on certain steel and aluminum products subject to Section 301 tariffs. This continuation suggests that Biden’s approach to trade policy remains focused on addressing trade imbalances and protecting domestic industries, albeit with some adjustments. To recapitulate briefly, Biden’s tariff policies continue to focus on protecting domestic industries and addressing trade imbalances, mirroring many of the objectives of the previous administration. Yet these policies come with potential economic costs, including higher consumer prices and negative impacts on both U.S. and global economic growth. The ongoing trade policy debates will likely play a significant role in the 2024 presidential election, influencing voter sentiment and economic strategies.

2. Impact of the US presidential election on consumer psychology

2.1. Economic outlook and consumer sentiment

Before predicting and analyzing the outcome of the November 2024 US presidential election, it is important to examine significant prior research. As the graph below illustrates, presidential elections are generally positive for the economy, acting like a stone thrown into water that sends ripples through the economic order and environment. David Vogel’s Fluctuating Fortunes (1989), a representative work of more than 30 years’ standing pertinent to our research task, examines the dynamics of business-government relations in the United States from 1960 to 1988. Vogel identifies key factors that influenced the ebb and flow of political power held by American corporations during this period. The book focuses on six themes: 1) the political influence cycle, 2) regulation and deregulation, 3) public attitudes and corporate power, 4) the impact of economic hardship, 5) business and political campaigns, and 6) case studies and examples. Vogel provides detailed case studies and examples, such as the effort to repeal the Glass-Steagall Act, to show how business interests exert a significant influence on economic and regulatory policy.6 Overall, the study provides a comprehensive look at the changing landscape of business power in American politics, emphasizing the interplay between economic cycles, public attitudes, and regulatory policy.

Figure 2: BEA, Haver Analytics, White House History, J.P. Morgan Wealth Management. Data as of Q3 2023. The party indicator is that of the serving president at the time. Markers represent election years only (intra-term presidents not pictured).7



Figure 3: CNN’s Road to 270 interactive.8

Unlike the earlier map, CNN’s July 2024 release shows Trump leading, indicating that predictions for the U.S. presidential election have become even more uncertain.

2.2. Case Studies

Elections can have a more direct impact on the economy and markets when candidates’ proposals clearly affect specific industries or regions. For example, the Patient Protection and Affordable Care Act of 2010 (ACA) significantly changed the healthcare sector, while the Infrastructure Investment and Jobs Act of 2021 (IIJA), the Inflation Reduction Act of 2022 (IRA), and the CHIPS and Science Act of 2022 brought a greater focus on the real economy and semiconductors. Policy changes, such as trade agreements, alter expectations for growth and inflation. The North American Free Trade Agreement (NAFTA), for instance, promoted globalization, increased growth, and lowered prices.9 On the issue of inflation specifically, the following is worth noting. As Harold James warns in his latest book, Seven Crashes (2023), negative shocks tend to lead to deflation. Conversely, inflation may seem like a good way to cope with or adjust to the immediate consequences of supply shocks, but it does not and cannot solve the fundamental problem of securing reliable and safe resources across geographically large distances. This is likely why electric vehicle batteries and semiconductors are so important to both presidential candidates, both before and after the election. Consequently, South Korea’s SK Hynix, Samsung Semiconductor, and Hyundai Motor are investing astronomical amounts in the U.S. to try to win this battle.10

Since the incident at Trump’s campaign rally, however, evaluations of the presidential candidates have fluctuated dramatically, with noticeable increases in support for the Republican candidate, Trump. The Democratic leader, the elderly Biden, fell into a serious crisis, facing internal party pressure to withdraw amid already unfavorable public opinion compounded by a recent COVID-19 infection. Furthermore, Biden was criticized for his poor performance in the previous month’s TV debate, leaving him seemingly unable to counter Trump’s rising momentum. The media has been focusing on Trump’s resurgence, even invoking the so-called “Trump risk.” There have been frequent reports about the potential economic repercussions and significant threats if Trump were re-elected. These reports suggest that while his “America First” slogan might benefit U.S. national interests, it could have a substantial negative impact on the economies of major countries, including South Korea, Taiwan, China, and the broader Asian economic region. Korean newspapers have consistently reported that, according to experts, the slogan would cause a considerable shock to major economies despite potentially benefiting U.S. interests, and they warn that South Korea’s record-high trade surplus with the U.S. could backfire, posing significant risks.11

3. Strategic response of global companies

3.1. Rebalancing production base

The table below, published by the Korea Institute for International Economic Policy (KIEP), projects hypothetical outcomes based on the policies of the two candidates after the election, with a focus on the economy, yielding ten different scenarios. The first two scenarios (1-1, 2-1) involve an additional 10% tariff on non-FTA signatories, with either a 25% or a 60% tariff on China. The next two scenarios (3-1, 4-1) apply universal tariffs to both FTA and non-FTA signatories, including Korea, Canada, and Mexico. In scenario 5-1, reciprocal tariffs are imposed based on the tariff-rate difference for non-FTA signatories, with a 60% tariff applied to China. The top 10 countries with which the U.S. runs trade deficits, including China, the EU, Vietnam, Japan, Taiwan, Thailand, India, Malaysia, Switzerland, and Indonesia, are considered. The retaliatory tariff scenarios mirror these five, with trading partners imposing equivalent tariffs on the U.S.12 The structure of the scenario grid is sketched below.
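To make the scenario structure concrete, the following minimal Python sketch enumerates the ten cases. It is an illustration of the grid described above, not KIEP’s model: the labels and rates follow the description, while the “R” suffix for retaliation cases is our own convention.

```python
# Minimal sketch of the KIEP scenario grid described above (illustrative only).
# The five base scenarios follow the text; each is mirrored by a retaliation
# case in which trading partners impose equivalent tariffs on the U.S.

BASE_SCENARIOS = [
    # (label, coverage, add-on tariff, tariff on China)
    ("1-1", "non-FTA signatories", "10%", "25%"),
    ("2-1", "non-FTA signatories", "10%", "60%"),
    ("3-1", "universal (FTA and non-FTA, incl. Korea/Canada/Mexico)", "10%", "25%"),
    ("4-1", "universal (FTA and non-FTA, incl. Korea/Canada/Mexico)", "10%", "60%"),
    ("5-1", "non-FTA signatories, reciprocal by tariff-rate gap", "varies", "60%"),
]

def expand_with_retaliation(base):
    """Return each base scenario plus its mirrored retaliation case."""
    grid = []
    for label, coverage, rate, china in base:
        grid.append({"label": label, "coverage": coverage,
                     "tariff": rate, "china": china, "retaliation": False})
        grid.append({"label": label + "R", "coverage": coverage,  # "R" suffix is ours
                     "tariff": rate, "china": china, "retaliation": True})
    return grid

if __name__ == "__main__":
    grid = expand_with_retaliation(BASE_SCENARIOS)
    print(len(grid))  # 10 scenarios in total, matching the text
```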

Table 1: Retaliatory tariff scenarios.

In the meantime, a very significant change has recently occurred. Just two days after candidate Trump was attacked by an assailant during a speech, he announced the appointment of a 39-year-old vice-presidential candidate, J. D. Vance, who is considered Trump’s alter ego and avatar. The impact of these two events, which are likely to cause a major shift in the presidential race, is already being felt among Americans and in the global economy, especially in Europe and China, even before the election.



Picture 1: (Left) Donald Trump surrounded by Secret Service agents after gunfire erupted at a campaign rally in Butler, Pennsylvania, US, July 13, 2024. Source: BRENDAN MCDERMID / REUTERS. (Right) Republican vice-presidential nominee J.D. Vance waves to supporters during the first day of the Republican National Convention in Milwaukee, on July 15, 2024. Source: Francis Chung / POLITICO.13

Former President Donald Trump survived an assassination attempt at a rally in Butler, Pennsylvania. The gunman, identified as Thomas Matthew Crooks, was shot dead by security, but not before killing one spectator and injuring two others. Trump sustained a minor injury to his ear. Republicans blamed Democratic rhetoric for inciting violence. Trump expressed gratitude to law enforcement and first responders and posted a message on social media affirming his safety and condemning the attack. The incident has intensified the already heated political climate ahead of the Republican convention. As shown in the picture, Trump has nominated J. D. Vance, a 39-year-old senator from Ohio, as his vice-presidential candidate. This move has caused concern in both the European Union and China. Vance is known for his protectionist stance, advocating high tariffs on Chinese imports and opposing extensive financial aid to Ukraine. His nomination suggests a potential increase in trade tensions and changes in international relations if Trump wins the presidency. These policies could lead to economic shifts, affecting global markets and international diplomacy. In fact, Trump wrote on his Truth Social website: “After lengthy deliberation and thought, and considering the tremendous talents of many others, I have decided that the person best suited to assume the position of Vice President of the United States is Senator J.D. Vance of the Great State of Ohio.”14

Former President Trump, buoyed by his rising approval ratings, has been making increasingly strong “America First” statements, which have dampened investor sentiment. In a July 16 interview, he reiterated his dissatisfaction with Taiwan, stating, “Taiwan has taken almost 100% of our semiconductor business.” Meanwhile, Kang Gu-sang, head of the North America and Europe team at the Korea Institute for International Economic Policy (KIEP), suggested that Trump might increase tariffs on countries with significant trade surpluses with the U.S. rather than targeting South Korea specifically. He emphasized that recent trends in trade surpluses indicate this issue will certainly be addressed. On July 17, the day after Trump’s interview, the impact was felt as semiconductor companies saw a sharp decline, leading to a 2.77% drop in the tech-heavy Nasdaq index and a 1.39% decrease in the large-cap S&P 500 index. In South Korea, the negative developments also affected the two largest companies by market capitalization, Samsung Electronics and SK Hynix. SK Hynix declined by over 5% on the 17th, around 3% on the 18th, and about 1% on the 19th, while Samsung Electronics fell by 2.88% on the 19th. The anticipated broader impacts of these events can be summarized as follows:

1. Increased Market Volatility: The semiconductor sector, crucial for many technology-driven markets, is likely to face ongoing volatility.
2. Tariff Implications: Potential increases in tariffs on countries with large trade surpluses with the U.S. could strain international trade relationships.
3. Investment Sentiment: Continued “America First” rhetoric may lead to a prolonged decline in investor confidence, particularly in sectors heavily reliant on international supply chains.



Picture 2: (Upper) Biden and Harris, the candidate for the election. (Lower) Harris and former President Obama.15

Vice President Harris is notable for being the first female vice president in U.S. history, as well as the first Black and the first Asian American vice president.16 These attributes are considered her strengths. In a poll conducted by Politico and Morning Consult in June, which surveyed 3,996 voters, only 34% of respondents believed that Harris would win if she ran for president, while 57% believed she would not. Among Democratic supporters, about 59% thought she would win, but only 13% of Republican supporters and 25% of independents responded positively. Politico reported on July 19 that a recent poll commissioned by a Trump-affiliated super PAC showed Harris’s competitiveness against Trump to be weaker than Biden’s.

Let us take a moment to discuss the new candidate endorsed by Biden, Kamala Harris. Although she has not yet clearly articulated her own policies to counter Trump directly in a one-on-one race, the key issues she is expected to run on align closely with Biden’s agenda. By maintaining core aspects of Biden’s policy agenda, Harris positions herself as a candidate of continuity, seeking to build upon the current administration’s achievements while potentially introducing her own nuanced approaches to these critical issues. Now let us look at the most recent survey data. The Politico article “So you wanted some Harris polling?” comprehensively examines the current public perception of Vice President Kamala Harris, emphasizing both her strengths and her challenges. Its main points are as follows:

1. Public Perception and Polling: Harris has not gained sufficient public confidence regarding her ability to win a future presidential election. Only one-third of voters believe she could win if she became the Democratic nominee. While three out of five Democrats think she would win, her support drops significantly among independents.
2. Leadership and Trustworthiness: According to the poll, 42% of voters see Harris as a strong leader, but this number falls to about one-third among independents. Trustworthiness is another area where she struggles, with 46% of voters viewing her as untrustworthy.
3. Issue Performance: Harris has performed relatively well on issues such as healthcare, gender inequality, and LGBTQ+ rights. However, she has received less favorable ratings on immigration, relations with China, and the Israeli-Palestinian conflict.
4. Support Among Key Demographics: Despite her challenges, Harris has solidified her position among key Democratic supporters, particularly Black voters. She has outperformed President Biden in this demographic and extended her lead over potential rivals in a hypothetical 2028 matchup.17

These points reflect both Harris’s strengths and the challenges she faces in the political landscape.

3.2. New market entry and investment strategy

Given the first Trump administration’s attempts to roll back green regulations, many of the green policies contained in the Inflation Reduction Act (IRA)—a key policy of the Biden administration—are likely to be targeted by a second Trump administration. This is especially true in the electric vehicle battery sector, where Korean companies have invested heavily. Trump has criticized tax incentives for green EVs, notably the 30D credit, saying they will spell the end of the U.S. auto industry, and has publicly expressed negative opinions on EVs, suggesting he may seek to neutralize such incentives in various ways. The UFLPA evaluates whether a product is subject to its regulations by examining the raw materials and intermediates used, not just the final product. If some of the raw materials or intermediates contained in a product made by a Korean company are UFLPA-covered goods, that company may be unable to export the product to the United States. Initially, the UFLPA was applied to cotton, textiles, tomatoes, and polysilicon; however, its scope has been expanded to include lithium-ion batteries, steel, aluminum, tires, and other automotive parts. Consequently, the UFLPA may significantly impact exports to the U.S. of Korean electric vehicles and battery products, which contain many Chinese components, as well as of products made in China.18

4. Changes in consumer behavior

4.1. Political Stability and Consumer Spending

According to recent polls, the 2024 presidential election between former President Donald Trump and incumbent President Joe Biden was expected to be very close. Various surveys indicated a tight race between the candidates. National polls often showed Trump narrowly ahead of Biden. In battleground states, both candidates had strong support in key areas. A notable divergence in economic perceptions exists, with many voters recalling the economy under Trump as stronger than under Biden. Regarding policy impact, voters are weighing the candidates’ past performances and the potential effects of proposed policies on specific areas such as healthcare and infrastructure. While we expect the election results to significantly impact the consumer market, predicting the exact outcome at the moment is difficult given differing forecasts from various media outlets in the US. The same goes for the Cook Political Report survey conducted in July 2024. The polling data indicates a very tight race between President Joe Biden and former President Donald Trump for the 2024 presidential election. Various polls—including those from CBS News and the Cook Political Report—show Biden trailing slightly behind Trump. The race has intensified, with both candidates commanding strong and polarized support bases. Specific polling averages show Trump with a slight edge in national polls, but the margins are close enough that the outcome remains uncertain and could change as the election approaches.

Picture 3: Results of the support survey for the two candidates (July 2024).19

4.2. Protectionist economic policies and consumer preferences

Investopedia, a site that examines the relationship between U.S. price indexes and investor sentiment, offers a somewhat negative prediction for the election’s effect on the economy. In particular, its warning of a downturn in U.S. consumer sentiment holds some useful comparative lessons for the Korean economy as well. For example, it presents the following on the impact on the U.S. economy. The unilateral imposition of tariffs is expected to reduce U.S. imports by $255.1 billion to $499.7 billion, while retaliatory tariffs by trading partners could decrease U.S. exports by $83.6 billion to $184.4 billion; this could improve the U.S. trade balance by $171.5 billion to $315.3 billion. On economic growth: tariffs are projected to negatively impact U.S. economic growth across all scenarios, with the negative effects more pronounced under reciprocal tariffs or extensive universal tariffs; while tariffs may improve the terms of trade in some scenarios, they are likely to reduce overall welfare by decreasing consumption of imported goods. On consumer prices, the main point of this section: unilateral tariffs could increase U.S. consumer prices by approximately 1.8% to 3.6%; if trading partners retaliate, the increase could range from 1.9% to 10.4%. Universal tariffs applied to FTA partners, in particular, could result in the highest price increases.20 A rough sense of these magnitudes is sketched below.
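For intuition only, here is a back-of-envelope pass-through calculation (the import share is an assumption of ours, not an Investopedia figure): if imported goods account for a share $s$ of the consumer basket and a uniform tariff $t$ is fully passed through to retail prices, then

$$\Delta p \approx s \cdot t$$

With a hypothetical import share of $s = 0.15$ to $0.20$ and a 10% universal tariff ($t = 0.10$), $\Delta p \approx 1.5\%$ to $2.0\%$, the same order of magnitude as the 1.8% lower bound cited above; retaliation and broader tariff coverage push the estimates toward the upper end of the range.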



The points below outline the significant changes and pro-Trump predictions following the attack, reflecting a shift in public sentiment and in expectations for future policies and their impacts:

1. The inflationary pressures dubbed “Trumpflation” (a portmanteau of Trump and inflation) resulting from Trump’s policies could pose an additional challenge for macroeconomic management by policymakers.21
2. His tax-cut policies are likely to expand the U.S. budget deficit, and high tariffs could raise import prices, thereby heightening inflationary pressures.
3. Democratic President Joe Biden’s strong push for mandatory electric vehicles would be repealed under Trump, who has vowed to increase tariffs to support the automobile industry and implement “massive tax cuts” to “make America great again.” The emotional appeal behind Trump’s rhetoric is the notion that the U.S. is being “taken advantage of by other countries.”22
4. Trump’s repeated announcements that he would close the borders and complete the wall (the so-called ‘Trump Wall’) to end the illegal immigration crisis are expected to gain more traction.23
5. On the international front, Trump has pledged to end all the global crises allegedly created by the current administration, including the wars between Russia and Ukraine and between Israel and Hamas.

Ⅲ. Conclusion

“Patriotic appeals to ‘buy American’ never sell. But ‘build America’ appeals almost always resonate.”24

One noteworthy source here is an article exploring the correlation between presidential elections and consumer spending, titled “Presidential Elections Affect Consumer Spending.”25 It is notable for reversing the usual reading of the relationship between the U.S. presidential election and its economic outcome. The article points out that consumer spending often increases during election years—except in 2008—due to rising consumer confidence. Yet the causal relationship remains unclear, as the state of the economy has a greater impact on elections than elections have on the economy. The author offers a cautionary tale: marketers should not rely on election fever to drive sales, but they can leverage election-related issues such as economic management, trust, and foreign policy to strengthen brand connections with consumers. This is exactly what the author of this study believes. The simplistic view is that the economic impact of the election is direct and global, but Trump’s rallying cry of “Buy America” was only temporarily effective. In other words, from an economic and consumer-psychology perspective, the influence does not run top-down from the presidential election; rather, economic conditions at the bottom shape the presidential election from the ground up, which mirrors our political and economic reality in Korea as well.

Taken together, it is true that public sentiment and sensitivity are the criteria for deciding whom to choose as the next president, but it is also true that policy pledges and their actual implementation, as well as the aftermath of the election, have a direct impact on domestic and global economic markets. Considering both aspects, my conclusion is that presidential candidates must first and foremost recognize that the power at their disposal comes from the bottom up, not the top down. Second, the U.S. election itself has enormous implications for the global economy and the domestic consumer economy. The newly elected leader will need to weigh these realities and their aftermath carefully to ensure that they sincerely deliver on all promises made during the campaign, while also taking into account the many conditions of the global economy.

This study has analyzed the campaigns of the two contenders for the US presidential election (pre-election) and the resulting changes in economic and consumer sentiment. Yet one item is missing: property policy. Whether in South Korea or the United States, policies and practices that treat real estate as a means of wealth accumulation are entrenched in the capitalist system. As is well known, the modern world has justified private property ownership as arising from the application of human labor to God’s creation in nature; however, the perception of money is rapidly changing, especially with the advent of virtual currencies like Bitcoin. This new monetary system will raise issues around property rights created by new technologies such as robots and artificial intelligence. Neither candidate has addressed this yet.26 Without guidelines or standards, there is a fear of sudden, unexpected policies affecting not only Korea but also the global economic and cultural community. How should we respond to these new threats as the US pushes forward?

This study is significant in that it analyzed hypothetical predictions of the US presidential election outcome and attempted to understand possible future scenarios and their prescriptions. This research, which requires extensive data collection and sharp analysis, deserves more time and analytical power. Yet, at this stage, we believe it makes a valuable contribution by enriching economic forecasts through these hypothetical scenarios and by addressing expectations in unexplored areas, such as the new interpretation of property and labor under the influence of AI. Through this study, and considering the rapidly changing and uncertain landscape of the U.S. presidential election, I appeal for more responsible promises and practical attitudes from the candidates and from political leaders generally. Interest in politics in both the U.S. and Korea is as intense as interest in the economy, and public understanding is high. We have witnessed too many instances in which this interest is leveraged and exploited. My final hope is that only conscientious and feasible promises are presented to the public by the presidential candidates. More importantly, I wish for the formation of a more relaxed and multilateral economic community, moving away from a U.S.-centric economic paradigm.



1 Martin Wolf, The Crisis of Democratic Capitalism (New York: Penguin Books, 2023), pp. 368-369.
2 Andrew Heywood, Politics, 5th ed. (London: Macmillan Education Limited, 1997), p. 169.
3 https://www.pewresearch.org/politics/2020/08/13/important-issues-in-the-2020-election/ (access: 2024.07.16).
4 Mustafa Suleyman, The Coming Wave (New York: Crown, 2023), p. 128.
5 For policy comparisons between the two candidates, especially predictive analysis of hypothetical post-election outcomes, see Young Gui Kim, “2024 U.S. Presidential Election: The Effects of Trump’s Tariff Policy,” World Economy Brief (March 17, 2024).
6 David Vogel, Fluctuating Fortunes: The Political Power of Business in America (New York: Basic Books, 1989).
7 https://www.chase.com/personal/investments/learning-and-insights/article/tmt-november-ten-twenty-three (access: 2024.07.21).
8 https://edition.cnn.com/election/2024/electoral-college-map?game-id=2024-PG-CNN-ratings&gameview=map (access: 2024.07.21); https://www.chase.com/personal/investments/learning-and-insights/article/tmt-november-ten-twenty-three (access: 2024.07.15).
9 https://www.chase.com/personal/investments/learning-and-insights/article/tmt-november-ten-twenty-three (access: 2024.07.15).
10 Harold James, Seven Crashes: The Economic Crises That Shaped Globalization (New Haven and London: Yale University Press, 2023), pp. 24-25.
11 Two articles particularly noteworthy in relation to this situation: https://chatgpt.com/c/af488a64-cdf7-404d-afd2-e56581987c56 and https://v.daum.net/v/20240719075702605 (access: 2024.07.19).
12 Young Gui Kim, op. cit., pp. 2-3.
13 (Left) https://www.lemonde.fr/en/international/article/2024/07/14/donald-trump-escapes-assassinationattempt-republicans-blame-democratic-rhetoric_6683773_4.html; (right) https://www.politico.com/news/2024/07/15/trump-vice-president-jd-vance-00168277 (access: 2024.07.17).
14 https://www.politico.com/news/2024/07/15/trump-vice-president-jd-vance-00168277 (access: 2024.07.17).
15 (Upper) https://www.politico.com/newsletters/west-wing-playbook/2024/07/18/so-you-wanted-some-harrispolling-00169524; (lower) https://edition.cnn.com/politics/live-news/joe-biden-election-drop-out-07-2224/index.html (access: 2024.07.22).
16 https://v.daum.net/v/20240722113310144 (access: 2024.07.22).
17 https://www.politico.com/newsletters/west-wing-playbook/2024/07/18/so-you-wanted-some-harris-polling00169524 (access: 2024.07.22).
18 Park Hyun-joo, “The Impact of a Trump Return to Power on Korean Companies’ U.S. Business” [미대선 트럼프 재집권시 우리 기업의 미국 비즈니스에 미칠 영향], Shin & Kim Newsletter [법무법인세종 뉴스레터] (April 23, 2024). https://www.shinkim.com/kor/media/newsletter/2428.
19 This CBS News/YouGov survey was conducted with a nationally representative sample of 2,636 U.S. adult residents interviewed between Oct. 30 and Nov. 3, 2023. The sample was weighted by gender, age, race, and education based on the U.S. Census American Community Survey and Current Population Survey, as well as past vote. The margin of error is ±2.6 points. https://www.cbsnews.com/news/trump-vs-biden-poll-2024presidential-election-year-out/ (access: 2024.07.15).
20 https://www.investopedia.com/trump-biden-election-could-be-pivotal-for-the-economy-and-your-wallet8604576 (access: 2024.07.15).
21 https://www.investopedia.com/terms/t/trumpflation.asp (access: 2024.07.21).
22 New York Times (July 18, 2024). https://www.nytimes.com/live/2024/07/18/us/trump-rnc-republicanconvention#08f690b7-9805-5159-9ffa-a953ee9b814e (access: 2024.07.21).
23 https://en.wikipedia.org/wiki/Trump_wall (access: 2024.07.21).
24 https://brandingstrategyinsider.com/presidential-election-impacts-consumer-spending/ (access: 2024.07.15).
25 Ibid.
26 Kim Won-dong, Financial and Monetary Capitalism Read through the Humanities [인문학으로 읽는 금융화폐 자본주의] (Seoul: Jisikgonggam, 2024), pp. 386-388.

References
1. Wolf, Martin. The Crisis of Democratic Capitalism. New York: Penguin Books, 2023.
2. Heywood, Andrew. Politics. 5th ed. London: Macmillan Education Limited, 1997.
3. Suleyman, Mustafa. The Coming Wave. New York: Crown, 2023.
4. Kim, Young Gui. “2024 U.S. Presidential Election: The Effects of Trump’s Tariff Policy.” World Economy Brief (March 17, 2024).
5. Vogel, David. Fluctuating Fortunes: The Political Power of Business in America. New York: Basic Books, 1989.
6. James, Harold. Seven Crashes: The Economic Crises That Shaped Globalization. New Haven and London: Yale University Press, 2023.
7. Park, Hyun-joo. “The Impact of a Trump Return to Power on Korean Companies’ U.S. Business” [미대선 트럼프 재집권시 우리 기업의 미국 비즈니스에 미칠 영향]. Shin & Kim Newsletter [법무법인세종 뉴스레터] (April 23, 2024).
8. Kim, Won-dong. Financial and Monetary Capitalism Read through the Humanities [인문학으로 읽는 금융화폐 자본주의]. Seoul: Jisikgonggam, 2024.



A Comparative Analysis of Three Key Policies Under the Trump and Biden Administration: Fiscal, Trade, Tariffs

Author Full Name (Last Name, First Name): Kim, Dayeon
School Name: Chadwick International

Abstract

This study compares significant economic policies undertaken by the Trump and Biden administrations, with an emphasis on fiscal, tax, and tariff issues. Three key policies from each administration were analyzed to better understand their effects on the US economy and the larger consequences for American citizens. Under President Trump, the Bipartisan Budget Act (2018) addressed short-term budget difficulties, the Tax Cuts and Jobs Act (2017) significantly overhauled the tax code, and Section 301 tariffs were introduced to shield domestic industry from unfair Chinese trading practices. In contrast, President Biden's Infrastructure Investment and Jobs Act (2021) aimed to boost long-term economic growth through infrastructure improvements, the American Families Plan (2021) proposed expanded social welfare and tax credits for families, and the administration maintained and adjusted Chinese import tariffs. This study's findings suggest that Trump's policies were motivated by protectionist and short-term economic tactics, whereas Biden's approach prioritized long-term prosperity and social equality. Additionally, it suggests that future economic policy should take a balanced approach that weighs short-term needs against long-term goals, addresses economic inequality alongside other pressing issues, and carefully assesses the effects of trade policies on both domestic and international fronts.



Introduction

The United States has gone through radical changes over the past two centuries. During the twentieth century, the United States acted as a benign global leader in international affairs. However, under former President Trump's "America First" policy, the country abruptly stepped back from its leadership role in economic relations (Ikenberry; Boyer et al., 455). The Biden administration has since been working to restore the nation's global standing (Burns). With the Biden administration coming to an end, it is important to evaluate the impact of its policies in comparison to those of the preceding Trump administration. This descriptive comparative analysis focuses on three critical areas: fiscal, tariff, and tax policies. The paper examines one policy from each administration in each of these domains to provide insight into their rationales, implementation, and impacts on the U.S. economy. The similarities and differences among these policies will be carefully analyzed to understand their implications for the overall economic landscape of the United States. Ultimately, the goal of this paper is to enhance understanding of how different approaches to fiscal management, trade regulation, and taxation can shape economic outcomes and influence the well-being of American citizens.

Discussion

Background

1. Definition of Administrations

(1) Trump Administration

[Figure 1: Donald Trump] “How MAGA World is Taking on its New Opponent”

Donald Trump, the 45th US president, represented the Republican Party and was in office from 2017 to 2021. The economic policies and principles he implemented are referred to as Maganomics, after his slogan "Make America Great Again" (Harte). Trump is a dedicated advocate of protectionism—a strategy aimed at shielding a country's domestic industries from foreign competition through import taxes. Consequently, he implemented policies during his tenure that restricted international trade (Rasure). He planned to grow the economy by boosting manufacturing and job creation, reducing trade deficits, increasing revenue through tariffs, and strengthening the nation's economic independence (Schoenbaum).

Trump's protectionist stance was evident in various measures aimed at limiting imports and promoting domestic industries, and it extended to a broader retreat from international commitments: he withdrew from the Paris Climate Agreement, the Trans-Pacific Partnership, UNESCO, the Iran Nuclear Deal (JCPOA), and the UN Human Rights Council ("Trump's Top Five Withdrawals from International Agreements"). Additionally, he pushed for tax cuts, particularly for corporations and the wealthy, to spur investment growth (Allen). Trump's agenda was a compelling example of trickle-down economics, the complete opposite of Biden's agenda. Trickle-down economics, in simple terms, is the theory that increasing the wealth of the affluent leads them to spend and invest more, allowing that wealth to trickle down through society and ultimately enrich everyone (Mallinder).



(2) Biden Administration

Joe Biden, the 46th US president, represented the Democratic Party. He began his term in 2021, and it ends in January 2025. The economic policies and principles he implemented are referred to as Bidenomics (Schoenbaum). Biden's agenda starkly contrasts with the trickle-down philosophy and prioritizes building a "middle-out" economy, reflecting the belief that "prosperity grows from the bottom up and the middle out." In essence, the middle-out economy rests on the straightforward idea that the economy is fundamentally composed of people, and when people thrive, so does the economy; policies should therefore prioritize enhancing the well-being of the middle class (Hanauer).

[Figure 2: Joe Biden] "Joe Biden Outlines New Steps to Ease Economic Burden on Working People"

Focusing on labor and American workers rather than consumerism, Biden's economic strategy was built on three main pillars: making strategic public investments in the U.S., empowering and educating workers to expand the middle class, and fostering competition to reduce costs and support entrepreneurs and small businesses ("Bidenomics Is Working"). He aimed to address economic inequality within the country and to take drastic action on the economy to support effective reform (Constant).

2. Discussion Development

(1) Trump's Candidacy and Presidency

In June 2015, Donald Trump officially declared his candidacy for the Republican nomination. In June 2016, Joe Biden, then serving as Vice President under Barack Obama, endorsed Hillary Clinton for President. In November 2016, Trump won the election and became the 45th president, with his inauguration taking place on January 20, 2017, which also marked the end of Biden's tenure as Vice President. In the November 2018 midterm elections, Democrats took control of the House of Representatives, which led to a series of investigations into Trump's administration. In September 2019, Speaker Nancy Pelosi announced an impeachment inquiry into Trump concerning his interactions with Ukraine, which led to his impeachment by the House of Representatives in December 2019 on charges of abuse of power and obstruction of Congress (King).

(2) Biden's Candidacy and Presidency

Between January and March 2020, Joe Biden emerged as the leading candidate for the Democratic nomination. He defeated Trump in the November 2020 election and was inaugurated as the 46th President in January 2021. In August 2021, the U.S. completed its withdrawal from Afghanistan, a decision by Biden that faced significant criticism and led to a decline in his approval ratings. By June 2022, inflation had surged to a 40-year high of 9.1%, becoming a major concern for Biden's presidency. In the November 2022 midterm elections, Democrats performed better than anticipated, maintaining control of the Senate but losing the House of Representatives (Fung et al.).

(3) 2024 Election Campaigns and Events

Trump announced his candidacy for the 2024 presidential election in November 2022, while Biden, amid concerns about his age and cognitive abilities, participated in several campaign events in June 2023 as he prepared for reelection. In October 2023, a surprise attack by Hamas on Israel led to an escalation in conflict, with Biden's response facing scrutiny from various Democratic factions. Despite challenges related to his age and performance, Biden actively campaigned for reelection between January and



March 2024, winning key primaries in New Hampshire and Nevada. In May 2024, Trump was convicted on all 34 felony counts of falsifying business records. The first presidential debate between Biden and Trump took place in June 2024, with Biden's performance raising concerns among Democrats about his viability (Dorn). In late July, Biden dropped out of the race and endorsed Kamala Harris, the sitting Democratic Vice President (Shear).

Policy Categorization

(1) Change in Administration

A 2018 study by Branda showed that the change of administration in 2016 sparked discussion and concern. Specifically, while the Democratic former president Obama focused his policies on maintaining foreign relationships, the Republican president Trump took a more rigid, isolationist view. This radical change occurred once again when Democratic President Biden was elected to office (Branda). A 2024 study by Boecker revealed that the results of the 2020 election evoked strong feelings in the public; in general, people saw Biden as more respectable and Trump as more dominant (Boecker 15). Thus, it is evident that a change in political administration provokes strong emotions in citizens, making the election an important event in US politics.

(2) Compared Domains

This paper compares policies created under both the Biden and Trump administrations and evaluates their effectiveness. Table 1 lists the policies, by type, that are analyzed in this study. Fiscal policies utilize government spending and tax strategies to affect economic conditions, especially at the macroeconomic level (Boyle). A 2008 study demonstrated that these policies significantly influence poverty, income distribution, and environmental outcomes (Lopez, Thomas, and Wang). Additionally, a 2010 study revealed that the State's economic involvement, particularly through the fiscal system, is crucial for managing the increasing complexities of the modern world, such as globalization, technological reliance, resource limitations, and social issues (Popa and Codreanu). Both administrations applied different fiscal policies to develop the U.S. economy. This analysis will explore the aims, implementations, outcomes, and limitations of these fiscal policies under each administration. Similarly, tax policies, which refer to public strategies related to taxation intended to enhance tax effectiveness, were established for multiple reasons by each administration (Dias and Ribeiro). A 1984 study indicated that the effect of inflation on effective tax rates contributed to the poor performance of US productivity (Hulten, 239). More specifically, tax policies significantly affect how individuals and businesses make economic decisions. For individuals, tax rates can influence choices about work, savings, and consumption: higher taxes may discourage work or saving, while lower taxes can incentivize increased labor participation and personal investment. For businesses, tax policies shape how they structure their operations and whether to invest in new projects, hire employees, or expand into new markets (Streeter). This analysis will evaluate the aims, implementation, outcomes, and limitations of both administrations' tax policies. Additionally, tariff policies, which involve taxes imposed on foreign products to protect and safeguard domestic industry, were enacted by Presidents Biden and Trump based on their philosophical beliefs ("Tariffs"). A 2020 study established that tariff policies had notable impacts on supply chain strategies (Dong and Kouvelis 25-35). Tariffs impose additional taxes on imported products, raising their prices. This increase in costs can lead businesses either to absorb the higher expenses or to pass them on to consumers, which can disrupt pricing strategies and demand across the supply chain ("Impact of Trade Tariffs").
This analysis will assess the aims, implementations, outcomes, and limitations of these tariff policies to understand their impact on the domestic economy and international trade. Comparing the Biden and Trump administrations' tariff, budgetary, and tax policies reveals their contrasting economic



ideologies and techniques. Trump's strategy relied on protectionism and tax cuts to boost the economy, whereas Biden stressed coalition building, progressive taxation, and social equality. This comparison illustrates their respective effects on the domestic economy, trade relations, fiscal discipline, and voter opinion. Overall, these three policy domains evidently have a significant impact on the United States; thus, it is essential that the policies implemented under different administrations be comprehensively analyzed.

Analytic Results

|         | Trump                                  | Biden                                         |
|---------|----------------------------------------|-----------------------------------------------|
| Fiscal  | Bipartisan Budget Act (BBA) of 2018    | Infrastructure Investment and Jobs Act (IIJA) |
| Tax     | Tax Cuts and Jobs Act (TCJA)           | American Families Plan (AFP)                  |
| Tariffs | Section 301 Tariffs on Chinese Imports | Section 301 Tariffs on Chinese Imports        |

[Table 1: Policies That Have Been Implemented Under Each Administration]

1. Fiscal Policies

The Bipartisan Budget Act (BBA) of 2018, enacted during Trump's presidency, was aimed at addressing federal spending and budgetary issues, particularly in light of the spending caps established by the Budget Control Act of 2011. The policy was introduced during a period marked by political negotiations over government funding, which led to multiple shutdowns. The bipartisan agreement was intended to raise spending limits for both defense and non-defense discretionary programs, following earlier budget agreements from 2013 and 2015 (Daniels and Harrison). The policy increased discretionary spending limits for both defense and non-defense sectors for fiscal years 2018 and 2019. Specifically, it allocated an additional $80 billion for defense discretionary funding in FY 2018 and $85 billion in FY 2019. To avoid a government shutdown, the bill included a continuing resolution (CR) that provided funding through March 23, 2018, allowing time for the completion of appropriations bills. It also set aside nearly $90 billion for disaster relief in response to the hurricanes impacting Puerto Rico, the U.S. Virgin Islands, Florida, and Texas, and the wildfires in California. The policy extended the authorization for the Children's Health Insurance Program (CHIP) by an extra four years, in addition to the six-year extension granted earlier in the year. It also suspended the debt ceiling—the limit on the amount the U.S. government can borrow—until March 1, 2019, allowing unrestricted borrowing during this period. Additionally, the Act expanded access to hardship withdrawals from Section 401(k) retirement plans and included provisions from the Family First Prevention Services Act to boost federal funding for services aimed at keeping children out of foster care systems ("Legislation Introduced to Fund Government"). However, the Congressional Budget Office (CBO) estimated that the Act would increase government deficits by $342 billion over the next decade ("Monthly Budget Review: June 2024").

In contrast, President Biden's signing of the Infrastructure Investment and Jobs Act (IIJA) in November 2021 aimed to modernize and enhance the nation's infrastructure. The policy sought to ease inflationary pressures; fortify supply chains with long-term improvements to ports, airports, rail systems, and roadways; create well-paying union jobs with strong labor standards accessible to all, especially underrepresented communities; and promote sustainable and equitable economic growth for the long term (Government Finance Officers Association; Sharkey). The IIJA authorized $1.2 trillion in total expenditures, with $550 billion earmarked as new federal spending over a five-year period. This funding included $110 billion for roads, bridges, and major projects; $66 billion for passenger and freight rail; $11 billion for safety and research; $39.2 billion for public transit; $25 billion for airports; $17.4 billion for ports and waterways; $54 billion for water infrastructure; $65 billion for power and grid improvements; $46 billion for resilience; $7.5 billion for low-carbon and zero-emission buses and ferries; $7.5 billion for electric vehicle charging; $1 billion for reconnecting communities; and $21 billion for addressing legacy pollution (Caprez).
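As a quick arithmetic check on the figures above, the sketch below (Python; a back-of-the-envelope tally, with category labels shortened from the text) sums the itemized allocations: they account for roughly $471 billion of the $550 billion in new spending, with the remainder presumably covering smaller programs not listed here.

```python
# Quick tally of the itemized IIJA allocations above (figures in billions of USD).
# Category labels are shorthand for the items named in the text.
allocations = {
    "roads, bridges, major projects": 110,
    "passenger and freight rail": 66,
    "safety and research": 11,
    "public transit": 39.2,
    "airports": 25,
    "ports and waterways": 17.4,
    "water infrastructure": 54,
    "power and grid": 65,
    "resilience": 46,
    "low/zero-emission buses and ferries": 7.5,
    "EV charging": 7.5,
    "reconnecting communities": 1,
    "legacy pollution": 21,
}

itemized = sum(allocations.values())
new_spending = 550  # new federal spending authorized over five years

print(f"Itemized categories: ${itemized:.1f}B")                         # ~$470.6B
print(f"Residual (programs not itemized): ${new_spending - itemized:.1f}B")  # ~$79.4B
```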



The Act has already yielded successful outcomes. For instance, Pittsburgh's Fern Hollow Bridge collapsed in 2022, yet reopened less than a year later using $25 million from the IIJA. More than 36,000 transportation improvement projects have been funded since the Act's implementation, with $306 billion allocated to states and direct investment projects by 2023 (Navarra). Notably, the Biden administration has disbursed funds without political bias in order to reassure lawmakers of both parties about its political neutrality ("At Its Two-Year Anniversary"). However, the majority of IIJA competitive grant programs have very weak equity standards, giving administering agencies significant discretion. To strengthen the equity focus, policymakers should increase grant values or federal shares for proposals with direct community benefits, prioritize disinvested communities in proposal review and scoring, mandate authentic community engagement, and have applicants demonstrate benefits to disinvested communities (Rothwell). Additionally, the Act has faced challenges from inflationary pressures, as high prices for materials like concrete, aggregates, pipes, steel, and iron have caused highway construction costs to grow at a 24% annual rate, with costs increasing in 9 of the last 10 quarters and up 53.8% compared to the last quarter of 2020 ("FHWA: Highway Construction Costs"). In conclusion, while the BBA of 2018 was primarily a response to immediate budgetary and disaster relief needs with a focus on short-term fiscal stimulus, the IIJA represents a more strategic, long-term investment in the nation's infrastructure aimed at sustainable economic growth and job creation. Both policies had significant impacts but also faced limitations, particularly related to economic and budgetary pressures.

2. Tax Policies

The Tax Cuts and Jobs Act (TCJA) of 2017, the most significant tax overhaul in three decades, was implemented under Trump's administration with a roughly $2 trillion price tag. The TCJA permanently lowered the corporate tax rate from 35% to 21% and provided temporary reductions in individual taxes. The highest earners were anticipated to benefit the most, while the lowest earners could face higher taxes once the temporary provisions expire ("Trump's Tax Reform Plan Explained"). The Act also eliminated the penalty enforcing the mandate that individuals purchase health insurance and introduced "Opportunity Zones" that offer tax incentives. While 27% of households in the lowest income quintile were expected to receive a tax cut or a larger refund, the most significant tax reductions were projected to go to high-income households, particularly those in the 95th to 99th income percentiles ("The Effect of the TCJA Individual Income Tax Provisions").
Studies indicate that between 60% and 76% of taxpayers in every state received a tax cut, with the average reduction being 1.8% of after-tax income nationwide. However, taxpayers in seven states—Alaska, Louisiana, North Dakota, South Dakota, Texas, Washington, and Wyoming—saw average tax cuts of at least 2.1% of after-tax income, while those in California, New York, and Oregon experienced cuts of at most 1.5% of after-tax income. This difference is partly attributable to the State and Local Tax (SALT) deduction, which enables some taxpayers to lower their federally taxable income by deducting the amount of state and local taxes paid; the TCJA capped this deduction at $10,000. The latter three states had previously benefited more from SALT deductions and therefore gained less from the law, whereas the seven states listed relied less on SALT deductions and saw larger overall tax cuts ("Tax Deduction - Charitable Contributions"). Nonetheless, a significant portion of



low-income households did not see substantial changes in their taxes, which only widened economic inequality within the country. Biden's American Families Plan (AFP) aimed to assist American families and workers through a range of measures, proposing $1.8 trillion in new federal spending for education, preschool, child care, paid family leave, and health care. Biden believed that enhancing support for the middle class would help ensure political stability, create more opportunities, and aid struggling Americans ("FACT SHEET: The American Families Plan"). The AFP included provisions to expand tax credits for middle-class families, such as continuing monthly advance payments of the Child Tax Credit (CTC) and boosting the Earned Income Tax Credit (EITC) for low- to moderate-income workers and families. The plan also aimed to make permanent improvements to the Child and Dependent Care Tax Credit, which would cover up to 50% of eligible child care costs, with limits of $4,000 for one child and $8,000 for two or more children (Crandall-Hollick). The AFP was projected to increase labor force participation and working hours, especially among low-income women, by making child care and paid family and medical leave more affordable, allowing parents to balance work and family responsibilities without sacrificing their careers (IRS). However, the AFP has faced criticism and limitations. Critics argue that the plan prioritizes immediate needs over long-term considerations and represents an unprecedented level of federal intervention in personal family matters. They contend that shifting responsibility from families to government programs and bureaucrats could undermine the traditional values of strong families and self-reliance, potentially reducing families' control over their own circumstances (Zandi and Yaros). Research on paid family leave programs in states like New Jersey and California, as well as in countries like Austria, shows mixed results. In New Jersey, the paid family leave program was associated with an estimated 8% to 9% reduction in employment among young women. In California, new mothers who used paid family leave experienced 7% lower employment and 8% lower annual earnings six to ten years after childbirth compared to those who did not use the program, and the program was associated with lower fertility rates. In Austria, pro-family policies, including paid family leave and subsidized child care, had minimal impact on long-term gender inequality and slightly worsened it, with women's earnings estimated to be two percentage points higher relative to men without such government interventions. Additionally, the AFP relies heavily on tax increases for high-income individuals, which some argue could negatively impact the economy and are misleadingly presented as tax cuts. These tax increases are expected to place significant financial burdens on American families (Greszler et al.).

3. Tariff Policies

| List    | Duty Rate | Effective Date    | Import Goods Value |
|---------|-----------|-------------------|--------------------|
| List 1  | 25%       | July 6, 2018      | $34 billion        |
| List 2  | 25%       | August 23, 2018   | $16 billion        |
| List 3  | 25%       | May 10, 2019      | $200 billion       |
| List 4a | 7.5%      | February 14, 2020 | $300 billion       |
| List 4b | Suspended | Suspended         | $300 billion       |

[Table 2: Rounds of Tariffs Trump Implemented]
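To make the table concrete, the following sketch (Python; illustrative only, using the import values as printed above) computes the statutory duty each list would imply if collected on its full covered value. Actual collections were considerably lower because of product exclusions, import substitution, and the suspension of List 4b.

```python
# Statutory duty implied by each Section 301 list: rate x covered import value.
# An illustrative upper bound only; actual collections were lower (exclusions,
# shifts away from taxed goods, and List 4b never took effect).
lists = [
    # (name, duty rate, covered imports in billions of USD, as printed in Table 2)
    ("List 1",  0.25,  34),
    ("List 2",  0.25,  16),
    ("List 3",  0.25,  200),
    ("List 4a", 0.075, 300),
]

for name, rate, value in lists:
    print(f"{name}: {rate:.1%} x ${value}B = ${rate * value:.1f}B/year")

total = sum(rate * value for _, rate, value in lists)
print(f"Implied statutory total: ${total:.1f}B/year")  # ~$85.0B
```

The roughly $85 billion statutory total sits above the $79 billion increase in imposed tariffs cited below, consistent with exclusions and shrinking covered trade.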



The United States Trade Representative (USTR) conducted an investigation into China's trade practices and found that many of China's policies were unreasonable or discriminatory, creating significant challenges for U.S. companies and workers. China has employed various tactics to facilitate technology transfers from U.S. firms to Chinese entities, including requiring joint ventures, imposing restrictions on foreign investments, and utilizing complicated administrative review and licensing processes. Moreover, China's unfair licensing practices have prevented U.S. companies from earning market-based returns on their intellectual property. The country has also orchestrated and supported investments and acquisitions that result in substantial transfers of technology and intellectual property, consistent with industrial objectives like the Made in China 2025 initiative. Additionally, China has conducted and endorsed cyber intrusions into U.S. networks to access valuable business information ("Section 301 China Tariffs & Exclusions Guide"). Thus, to push China into modifying trade practices judged unfair to American enterprises, Trump imposed Section 301 tariffs on Chinese imports in 2018 (Tankersley and Bradsher). Table 2 shows the tariffs imposed in four rounds. Since Trump was a strong believer in protectionism, he imposed these taxes to boost domestic industries and create jobs, while also generating revenue for the government. However, the tariffs also led to significant costs for US households and businesses. Before factoring in behavioral effects, the $79 billion in increased tariffs amounted to an average annual tax hike of $625 per U.S. household. In practice, actual revenue data showed that the trade war tariffs raised tax collections by $200 to $300 per household per year. The true cost to households was likely higher than both the $625 estimate (which does not account for behavioral impacts) and the $200 to $300 figure (based on actual tax collections), as these amounts do not include the reduction in earnings due to decreased output from the tariffs or the loss of consumer choice when individuals switch to alternatives not affected by tariffs (York). Moreover, interest payments on the national debt increased, reaching $659 billion in 2023: as the national debt grew, so did the interest owed on it. Higher interest payments on government debt can lead to increased federal deficits, potentially necessitating cuts in other areas such as social programs or increases in taxes. Studies have shown that while this might not have had a direct impact on citizens in the short term, it could potentially lead to reduced funding for programs like Social Security, Medicare, and education, or increased taxes in the long term. Economic uncertainty was also heightened by the growing national debt: when the national debt is high, there may be concerns about the government's ability to manage its finances effectively, which can affect consumer confidence and the spending and investment behavior of US citizens. If the government chooses to finance debt by increasing the money supply, it can create inflationary pressures, which erode purchasing power and negatively affect the standard of living ("Economic Effects of Waiting to Stabilize Federal Debt"). For example, washing machine prices increased by about $86 per unit due to the tariffs, leading to an aggregate increase in consumer costs exceeding $1.5 billion.
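The per-household figure is straightforward to reproduce. The sketch below (Python) is a back-of-the-envelope check; the household count of roughly 126 million is an assumption consistent with Census estimates for the period, not a number taken from the sources cited.

```python
# Rough per-household incidence of the tariff increases, before behavioral effects.
tariff_increase = 79e9   # $79 billion in increased tariffs (figure cited above)
households = 126.4e6     # assumed U.S. household count (~Census-era estimate)

print(f"${tariff_increase / households:,.0f} per household per year")  # ~$625
```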
Domestic impacts included thousands of U.S. importers filing lawsuits in the U.S. Court of International Trade (CIT), challenging the tariffs on procedural and statutory grounds. The CIT ruled that while the USTR had the authority to impose the tariffs, it violated the Administrative Procedure Act by not adequately responding to public comments. Numerous studies and analyses have concluded that the tariffs resulted in a net loss of jobs. For instance, a February 2018 study by economists Kadee Russ and Lydia Cox highlighted that jobs in sectors that consume steel are 80 times more numerous than those in steel production, suggesting that the steel tariffs would lead to more job losses than gains (Russ and Cox). Similarly, a March 2018 survey from the University of Chicago Booth School of Business, which polled 43 economic experts, found no consensus that U.S. tariffs on steel and aluminum would enhance American well-being (Cockrell). In August 2018, economists at the Federal Reserve Bank of New York warned that the Trump administration's use of tariffs to address the trade imbalance would likely yield minimal improvement in the trade deficit, since both imports and U.S. exports would fall (Amiti et al.). A March 2019 analysis by the National Bureau of Economic Research indicated that the trade war tariffs did not lower



the pre-tax import costs of Chinese products, meaning that U.S. importers bore the full impact of the import duties through increased after-duty prices (Fajgelbaum et al.). Additionally, an April 2019 study by the University of Chicago found that tariffs on washing machines led to a price increase of $86 per unit ($92 per unit for dryers), resulting in a total rise in consumer costs exceeding $1.5 billion (Hortacsu et al.). Furthermore, an April 2019 report by the IMF projected that a 25 percent increase in tariffs on all trade between China and the U.S. would cause significant economic losses for both countries (International Monetary Fund). An October 2019 study indicated that tariffs on Chinese imports were predominantly reflected in U.S. import prices but only partially passed into retail consumer prices, suggesting that some businesses absorbed the higher tariffs by cutting into their profit margins rather than fully passing the costs on to consumers (Cavallo et al.). Additionally, a December 2019 Federal Reserve study found a net reduction in manufacturing jobs due to the tariffs, showing that the benefits of increased production in protected industries were outweighed by the higher costs of inputs and by retaliatory tariffs (Flaaen and Pierce).

On the other hand, the Biden administration kept the tariffs implemented by Trump and added further increases on $18 billion worth of imports from China. This decision was informed by a comprehensive review conducted by the USTR of the effectiveness of the existing tariffs imposed during the previous administration. The review concluded that while some progress had been made in addressing China's unfair practices, many issues remained unresolved. Among the new tariffs, the rate on electric vehicles rises from 25% to 100% in 2024; on semiconductors, from 25% to 50% in 2025; on steel and aluminum products, from 7.5% to 25% in 2024; on solar cells and modules, from 25% to 50% in 2024; and on medical products, including syringes and personal protective equipment, rates increase significantly, with some reaching 50% (The White House). The review also recommended an exclusion process for machinery used in domestic manufacturing, increased funding for enforcement, and greater collaboration to combat state-sponsored technology theft, recommendations that USTR Ambassador Katherine Tai endorsed (USTR). The tariffs imposed by the current administration amount to an additional tax increase of $3.6 billion and are estimated to reduce long-term GDP by 0.2%, the capital stock by 0.1%, and employment by 142,000 full-time equivalent jobs. Without considering behavioral effects, a proposed universal tariff would raise taxes by $311 billion, while increasing the average tariff rate on Chinese goods to 60% would generate approximately $210 billion; actual revenue collected would likely be much lower due to avoidance and evasion, reduced imports, and decreased incomes, which would lower payroll and income tax revenues overall (York). Various studies and analyses have shown that the tariffs on steel, aluminum, and Chinese goods have led to higher prices and lower aggregate real income in both the U.S. and China.
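To illustrate what such rate changes mean at the border, the sketch below (Python) computes the Section 301 duty on a hypothetical imported electric vehicle at the old and new rates; the $30,000 customs value is an invented example, not a figure from the sources cited.

```python
# Duty owed on a hypothetical Chinese-built EV before and after the 2024 increase.
customs_value = 30_000  # hypothetical declared value in USD (illustrative only)

old_duty = 0.25 * customs_value   # 25% Section 301 rate
new_duty = 1.00 * customs_value   # 100% rate effective 2024

print(f"Old duty: ${old_duty:,.0f}; new duty: ${new_duty:,.0f}")  # $7,500 -> $30,000
```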
In December 2021, Pablo Fajgelbaum and Amit Khandelwal reviewed the data and methods used to assess the trade war's effects and concluded that "U.S. consumers of imported goods have borne the brunt of the tariffs through higher prices, and that the trade war has lowered aggregate real income in both the U.S. and China, although not by large magnitudes relative to GDP" (Fajgelbaum and Khandelwal). In January 2022, the U.S. Department of Agriculture estimated that retaliatory tariffs caused direct export losses of $27 billion from 2018 through the end of 2019 (USDA). A May 2023 report by the United States International Trade Commission, led by Peter Herman and his team, found that the tariffs on steel, aluminum, and Chinese goods were almost entirely passed through to U.S. prices; although industries shielded by these tariffs saw a $2.8 billion increase in production, downstream industries facing higher input costs saw a $3.4 billion reduction in production (USTR). In January 2024, the International Monetary Fund released a paper showing that unexpected tariff shocks generally lead to a greater reduction in imports than exports, resulting in a modest decrease in the trade deficit but persistent GDP losses (York). Additionally, a January 2024 study by David Autor and colleagues found that the tariffs did not provide significant economic benefits to the U.S. heartland.



The study noted that import tariffs had "neither a large nor significant impact on U.S. employment in newly-protected regions," while retaliatory measures from other countries had "clear negative effects on employment, especially in agriculture" (Autor et al., 3).

Conclusion

This research revealed that the policies enacted by the two administrations largely diverged from each other owing to their differing political ideas and ideologies. Based on this investigation, several recommendations can be made. To begin, both administrations' budgetary strategies highlight the importance of balancing short-term requirements with long-term sustainability. As discussed above, Trump's BBA temporarily met immediate demands, while Biden's IIJA promoted long-term economic growth through infrastructure investments; future administrations should attempt to develop policies that combine both approaches. Another recommendation is to implement equitable tax policies for all Americans in order to reduce economic disparity. Future tax reforms should assist lower- and middle-income people while ensuring that the wealthy pay their fair share, and tax policies should be assessed for their long-term effects on economic inequality and government revenue. Furthermore, while protecting domestic businesses is important, the impact on international relations and consumer costs should also be considered: future trade policy should take a balanced approach, combating unfair trade practices while minimizing the burden on domestic firms and consumers. It is important to note that this study relied heavily on secondary sources and analyses, which may vary in their methodologies; primary data and formal models may be useful in future studies. Furthermore, the study is based on existing data and research, which may not fully account for external factors influencing policy outcomes. For example, it is possible that while Trump's tax policy was effective, the COVID-19 pandemic hampered its implementation. Finally, concentrating on one program from each administration per domain cannot capture the full depth and complexity of each government's economic activities; a more thorough evaluation of additional policies would provide a fuller picture of their economic impact.

References

Allen, Katie. “Trump’s Economic Policies: Protectionism, Low Taxes and Coal Mines.” The Guardian, 9 Feb. 2018, www.theguardian.com/us-news/2016/nov/09/trumps-economic-policies-protectionism-low-taxes-and-coal-mines. Accessed 31 July 2024.
Amiti, Mary, et al. “Do Import Tariffs Help Reduce Trade Deficits?” Liberty Street Economics, 13 Aug. 2018, libertystreeteconomics.newyorkfed.org/2018/08/do-import-tariffs-help-reduce-trade-deficits/.
“At Its Two-Year Anniversary, the Bipartisan Infrastructure Law Continues to Rebuild All of America.” Brookings Institution, 15 Nov. 2023, www.brookings.edu/articles/at-its-two-year-anniversary-the-bipartisan-infrastructure-law-continues-to-rebuild-all-of-america/. Accessed 30 July 2024.
Arriaga, Xavier. “Debt Limit Legislation Prevents Default, Sets Spending Caps for Next Two Fiscal Years.” Enterprise Community Partners, 5 June 2023, www.enterprisecommunity.org/blog/debt-limit-legislation-prevents-default-sets-spending-caps-next-two-fiscal-years. Accessed 30 July 2024.
Autor, David, et al. Help for the Heartland? The Employment and Electoral Effects of the Trump Tariffs in the United States. 1 Jan. 2024, https://doi.org/10.3386/w32082. Accessed 21 Apr. 2024.
“Bidenomics Is Working: The President’s Plan Grows the Economy from the Middle Out and Bottom Up—Not the Top Down.” The White House, 28 June 2023, www.whitehouse.gov/briefing-room/statements-releases/2023/06/28/bidenomics-is-working-the-presidents-plan-grows-the-economy-from-the-middle-out-and-bottom-up-not-the-top-down/. Accessed 30 July 2024.
Boecker, Lea, et al. “The Interplay of Social Rank Perceptions of Trump and Biden and Emotions Following the US Presidential Election 2020.” Cognition and Emotion (2024): 1-19.
Boyer, Mark A., et al. “Forum: Did ‘America First’ Construct America Irrelevant?” International Studies Perspectives, vol. 22, no. 4, 16 Aug. 2021, pp. 458–494, https://doi.org/10.1093/isp/ekab013. Accessed 16 Jan. 2022.
Boyle, Michael J. “Monetary Policy vs. Fiscal Policy: What’s the Difference?” Investopedia, www.investopedia.com/ask/answers/100314/whats-difference-between-monetary-policy-and-fiscal-policy.asp. Accessed 30 July 2024.
Branda, Oana-Elena. “Changes in the American Foreign Policy: From Obama to Trump.” International Conference Knowledge-Based Organization, vol. 24, no. 2, 2018.
Burns, William J. “Judy Asks: Can the United States Regain Its Global Leadership?” Carnegie Endowment for International Peace, 15 Oct. 2020, carnegieendowment.org/europe/strategic-europe/2020/10/judy-asks-can-the-united-states-regain-its-global-leadership?lang=en. Accessed 31 July 2024.
Caprez, Katie. “HR 3684: Infrastructure Investment and Jobs Act.” The National Law Review 8 (2021).
Cavallo, Alberto, et al. “Tariff Passthrough at the Border and at the Store: Evidence from US Trade Policy.” National Bureau of Economic Research, 1 Oct. 2019, www.nber.org/papers/w26396.
Cockrell, Jeff. “Will Americans Benefit from New Tariffs on Steel and Aluminum?” The University of Chicago Booth School of Business, 18 Mar. 2018, www.chicagobooth.edu/review/will-americans-benefit-new-tariffs-steel-and-aluminum. Accessed 30 July 2024.
Constant, Paul. “Bidenomics Explained: Why Building the Economy from the Middle Out Might Be the Most Revolutionary Concept in Modern Politics.” Business Insider, 21 Apr. 2021, www.businessinsider.com/how-bidens-middle-out-theory-works-to-boost-economic-growth-2021-4. Accessed 31 July 2024.
Crandall-Hollick, Margot. CRS Insight Prepared for Members and Committees of Congress. 21 Mar. 2021.
Daniels, Seamus P., and Todd Harrison. “Making Sense of the Bipartisan Budget Act of 2018 and What It Means for Defense.” CSIS, 20 Feb. 2018, www.csis.org/analysis/making-sense-bipartisan-budget-act-2018-and-what-it-means-defense. Accessed 30 July 2024.
Davidson, Alex, et al. “Privacy Pass: Bypassing Internet Challenges Anonymously.” Proceedings on Privacy Enhancing Technologies, vol. 2018, no. 3, 1 June 2018, pp. 164–180, https://doi.org/10.1515/popets-2018-0026.
“Donald J. Trump Event Timeline.” The American Presidency Project, www.presidency.ucsb.edu/documents/donald-j-trump-event-timeline. Accessed 30 July 2024.
Dong, Lingxiu, and Panos Kouvelis. “Impact of Tariffs on Global Supply Chain Network Configuration: Models, Predictions, and Future Research.” Manufacturing & Service Operations Management, vol. 22, no. 1, 2020, pp. 25-35.
Dorn, Andrew. “Biden and Trump: Comparing Presidential Policy Track Records.” NewsNation, 30 Jan. 2024, www.newsnationnow.com/politics/2024-election/trump-biden-presidential-records/.
Fajgelbaum, Pablo, et al. The Return to Protectionism. NBER Working Paper Series, 2019.
Fajgelbaum, Pablo, and Amit Khandelwal. “The Economic Impacts of the US-China Trade War.” National Bureau of Economic Research, vol. 14, no. 29315, 2021, pfajgelb.mycpanel.princeton.edu/tradewar_0920.pdf, https://doi.org/10.1146/annurev-economics-051420-110410.
Flaaen, Aaron, and Justin Pierce. Disentangling the Effects of the 2018-2019 Tariffs on a Globally Connected U.S. Manufacturing Sector. 2019, www.federalreserve.gov/econres/feds/files/2019086pap.pdf, https://doi.org/10.17016/FEDS.2019.086.
“FHWA: Highway Construction Costs Continued to Grow at 24% Annual Rate.” The Eno Center for Transportation, 27 Mar. 2024, enotrans.org/article/fhwa-highway-construction-costs-continued-to-grow-at-24-annual-rate/. Accessed 30 July 2024.
“FACT SHEET: The American Families Plan.” The White House, 28 Apr. 2021, www.whitehouse.gov/briefing-room/statements-releases/2021/04/28/fact-sheet-the-american-families-plan/. Accessed 30 July 2024.
Floyd, David. “What Is the Tax Cuts and Jobs Act (TCJA)?” Investopedia, www.investopedia.com/taxes/trumps-tax-reform-plan-explained/. Accessed 30 July 2024.
Frank, Colleen C., Alexandru D. Iordan, and Patricia A. Reuter-Lorenz. “Biden or Trump? Working Memory for Emotion Predicts the Ability to Forecast Future Feelings.” Emotion (2023).
Fung, Katherine, et al. “Joe Biden Timeline Shows Key Turning Points in Presidency.” Newsweek, 11 July 2024, www.newsweek.com/joe-biden-timeline-key-turning-points-presidency-debate-afghanistan-2024-1924001. Accessed 30 July 2024.
Government Finance Officers Association. Building a Better America. Government Finance Officers Association, 2023.
Greszler, Rachel, et al. Why President Biden’s Government Solutions Would Actually Weaken the Infrastructure of American Families. No. 3616, 2021, www.heritage.org/sites/default/files/2021-05/BG3616.pdf. Accessed 30 July 2024.
Hanauer, Nick. “The Transformation at the Heart of Biden’s Middle-Out Economic Agenda.” The American Prospect, 9 Feb. 2023, prospect.org/economy/2023-02-09-biden-middle-out-agenda/.
Handley, Kyle, et al. “Rising Import Tariffs, Falling Export Growth: When Modern Supply Chains Meet Old-Style Protectionism.” International Finance Discussion Paper, vol. 2020, no. 1270, Feb. 2020, https://doi.org/10.17016/ifdp.2020.1270.
Harte, Julia. “What MAGA Means to Trump Voters.” Reuters, 8 Oct. 2018, fingfx.thomsonreuters.com/gfx/editorcharts/USA-ELECTION-TRUMP-MAGA/0H001BBVZ2XL/index.html. Accessed 31 July 2024.
Hitchens, Antonia. “How MAGA World Is Taking on Its New Opponent.” The New Yorker, 29 July 2024, www.newyorker.com/news/dispatch/how-maga-world-is-taking-on-its-new-opponent. Accessed 31 July 2024.
Hortacsu, Ali, et al. “The Production, Relocation, and Price Effects of US Trade Policy: The Case of Washing Machines.” SSRN Electronic Journal, 2019, bfi.uchicago.edu/wp-content/uploads/BFI_WP_201961-1.pdf, https://doi.org/10.2139/ssrn.3374918.
“H.R.6363 - 118th Congress (2023-2024): Further Continuing Appropriations and Other Extensions Act, 2024.” Congress.gov, www.congress.gov/bill/118th-congress/house-bill/6363. Accessed 30 July 2024.
Hulten, Charles R. “Tax Policy and the Investment Decision.” The American Economic Review, vol. 74, no. 2, 1984, pp. 236-241.
Ikenberry, G. John. “Rethinking the Origins of American Hegemony.” Political Science Quarterly, vol. 104, no. 3, 1989, pp. 375–400. JSTOR, https://doi.org/10.2307/2151270. Accessed 31 July 2024.
International Monetary Fund. “World Economic Outlook, April 2019: Growth Slowdown, Precarious Recovery.” IMF, 2019, www.imf.org/en/Publications/WEO/Issues/2019/03/28/world-economic-outlook-april-2019.
IRS. “Earned Income Tax Credit (EITC).” Internal Revenue Service, 25 Jan. 2023, www.irs.gov/credits-deductions/individuals/earned-income-tax-credit-eitc. Accessed 30 July 2024.
King, Becca. “The Trump Presidency Timeline: From 2016 to 2020.” Shortform, 10 Dec. 2023, www.shortform.com/blog/trump-presidency-timeilne/. Accessed 30 July 2024.
Lopez, Ramon, Vinod Thomas, and Yan Wang. The Quality of Growth: Fiscal Policies for Better Results. 2008.
Luis Dias, Sara, and Joao Sergio Ribeiro. “What Is Tax Policies.” IGI Global, 2023, www.igi-global.com/dictionary/taxation-policies-as-an-environmental-protection-instrument/120075.
Mallinder, Jacob. “Economy 101 – Trickle-Down Economics.” Finance Monthly, 2 June 2023, www.finance-monthly.com/2023/06/economy-101-trickle-down-economics/.
“Majority News Release: Legislation Introduced to Fund Government Through March 23.” United States Senate Committee on Appropriations, 2 July 2018, www.appropriations.senate.gov/news/majority/legislation-introduced-to-fund-government-through-march-23. Accessed 30 July 2024.
“Monthly Budget Review: June 2024.” Congressional Budget Office, June 2024, www.cbo.gov/publication/60361/html. Accessed 30 July 2024.
Morgan, Stephen, et al. “The Economic Impacts of Retaliatory Tariffs on U.S. Agriculture.” USDA Economic Research Service, Jan. 2022, www.ers.usda.gov/publications/pub-details/?pubid=102979.
Navarra, Katie. “Here’s What’s Happening with IIJA Funding.” Built: The Bluebeam Blog, 14 Nov. 2023, ebeam.com/iija-construction-funding-2023/. Accessed 30 July 2024.
Popa, Ionela, and Diana Codreanu. “Fiscal Policy and Its Role in Ensuring Economic Stability.” 2010.
Rasure, Erika. “Protectionism: Examples and Types of Trade Protections.” Investopedia, www.investopedia.com/terms/p/protectionism.asp. Accessed 30 July 2024.
Rothwell, Jonathan, and Andre M. Perry. “How Equity Isn’t Built into the Infrastructure Bill—and Ways to Fix It.” Brookings Institution, 17 Dec. 2021, www.brookings.edu/articles/how-equity-isnt-built-into-the-infrastructure-bill-and-ways-to-fix-it/. Accessed 30 July 2024.
Russ, Kadee, and Lydia Cox. “Will Steel Tariffs Put U.S. Jobs at Risk?” Econofact, 26 Feb. 2018, econofact.org/will-steel-tariffs-put-u-s-jobs-at-risk. Accessed 30 July 2024.
Schoenbaum, Thomas. Bidenomics versus Maganomics on Trade Law: Pick Your Poison. IEP@BU Policy Brief, 2024.
“Section 301 China Tariffs & Exclusions Guide.” C.H. Robinson, www.chrobinson.com/en-us/resources/resource-center/guides/section-301-china-tariff-guide/. Accessed 30 July 2024.
Sharkey, Jennifer. “Infrastructure Investment and Jobs Act (IIJA)/Bipartisan Infrastructure Law (BIL).” 2022.
Shear, Michael D. “Live Updates: Biden Drops Out of Presidential Race, Endorses Harris.” The New York Times, 21 July 2024, www.nytimes.com/live/2024/07/21/us/biden-drops-out-election. Accessed 31 July 2024.
Streeter, Jialu L. “How Do Tax Policies Affect Individuals and Businesses?” Stanford Institute for Economic Policy Research (SIEPR), 2022, siepr.stanford.edu/publications/policy-brief/how-do-tax-policies-affect-individuals-and-businesses. Accessed 6 July 2023.
Tankersley, Jim, and Keith Bradsher. “Trump Hits China with Tariffs on $200 Billion in Goods, Escalating Trade War.” The New York Times, 2018.
“Tariffs.” Trade - European Commission, trade.ec.europa.eu/access-to-markets/en/content/tariffs-0. Accessed 30 July 2024.
“Tax Deduction - Charitable Contributions and Others.” Franchise Tax Board, www.ftb.ca.gov/about-ftb/newsroom/tax-news/march-2019/tax-deduction.html. Accessed 30 July 2024.
“The Economic Effects of Waiting to Stabilize Federal Debt.” Congressional Budget Office, 28 Apr. 2022, www.cbo.gov/publication/58055. Accessed 30 July 2024.
“The Effect of the TCJA Individual Income Tax Provisions Across Income Groups and Across the States.” Tax Policy Center, 28 Mar. 2018, ntanet.org/wp-content/uploads/2018/02/the_effect_of_the_tcja_individual_income_tax_provisions_across_income_groups_and_across_the_states.pdf. Accessed 30 July 2024.
“The Impact of Trade Tariffs on Global Supply Chain Strategies.” Disk.com, 5 May 2024, disk.com/resources/the-impact-of-trade-tariffs-on-global-supply-chain-strategies/. Accessed 30 July 2024.
The Wall Street Journal. “Full Debate: Biden and Trump in the First 2024 Presidential Debate.” YouTube, 27 June 2024, www.youtube.com/watch?v=qqG96G8YdcE. Accessed 30 July 2024.
The White House. “FACT SHEET: President Biden Takes Action to Protect American Workers and Businesses from China’s Unfair Trade Practices.” The White House, 14 May 2024, www.whitehouse.gov/briefing-room/statements-releases/2024/05/14/fact-sheet-president-biden-takes-action-to-protect-american-workers-and-businesses-from-chinas-unfair-trade-practices/. Accessed 30 July 2024.
Trish, Barbara. “Big Data Under Obama and Trump: The Data-Fueled US Presidency.” Politics and Governance, vol. 6, no. 4, 2018, pp. 29-39. Accessed 30 July 2024.
“Trump’s Top Five Withdrawals from International Agreements.” TRT World, 2018, www.trtworld.com/americas/trump-s-top-five-withdrawals-from-international-agreements-18543. Accessed 31 July 2024.
United States International Trade Commission. U.S. Imports of Solar Photovoltaic Products: Trends and Global Competition. USITC Publication 5405, July 2023, www.usitc.gov/publications/332/pub5405.pdf. Accessed 30 July 2024.
“U.S. Debt Ceiling: Definition, History, Pros, Cons, and Clashes.” Investopedia, www.investopedia.com/terms/d/debt-ceiling.asp. Accessed 30 July 2024.
“U.S. Trade Representative Katherine Tai to Take Further Action on China Tariffs After Releasing Statutory Four-Year Review.” United States Trade Representative, ustr.gov/about-us/policy-offices/press-office/press-releases/2024/may/us-trade-representative-katherine-tai-take-further-action-china-tariffs-after-releasing-statutory. Accessed 30 July 2024.
York, Erica. “Tariff Tracker: Tracking the Economic Impact of Tariffs.” Tax Foundation, 26 June 2024, taxfoundation.org/research/all/federal/trump-tariffs-biden-tariffs/. Accessed 30 July 2024.
Zandi, Mark, and Bernard Yaros. “The Macroeconomic Consequences of the American Families Plan and the Build Back Better Agenda.” Moody’s, 3 May 2021, www.moodysanalytics.com/-/media/article/2021/american-families-plan-build-back-better-agenda.pdf. Accessed 30 July 2024.



Figures References

Hitchens, Antonia. “How MAGA World Is Taking on Its New Opponent.” The New Yorker, 29 July 2024, www.newyorker.com/news/dispatch/how-maga-world-is-taking-on-its-new-opponent. Accessed 31 July 2024.
Biden, Joe. “Joe Biden Outlines New Steps to Ease Economic Burden on Working People.” Medium, 9 Apr. 2020, medium.com/@JoeBiden/joe-biden-outlines-new-steps-to-ease-economic-burden-on-working-people-e3e121037322. Accessed 31 July 2024.



Significance of K-pop in Influencing Japanese Consumers' Views on Koreans between 2011 and 2017

Author Full Name (Last Name, First Name): Kim, Harhim
School Name: St. Mary's School

Abstract

The relationship between Japan and Korea has been unstable ever since Japan's colonization of Korea in the early twentieth century. Historical disputes arose after Korea's independence and at times hindered the improvement of relations. However, as the popularity of Korean culture has risen and Japan has become one of its main markets, scholars have predicted an easing of the conflicts. To assess Korean culture's power to improve the relationship with Japan, this article asks: What is the significance of K-pop in influencing Japanese consumers' views on Korea between 2011 and 2017? After examining Japanese affinity toward Korean society during the Korean waves, Japanese K-pop fans' positive attitudes toward Koreans, and the reasons such positive attitudes failed to raise the comparatively low affinity rate between 2011 and 2017, the study suggests that the Korean wave's effect on Japanese views of Korean society depends heavily on the share of women and of the younger generation in the total Japanese population. Moreover, the disconnect between citizens' political views and their cultural consumption highlights the limitations of Korean soft power based on international cultural consumption.



Introduction

The bilateral relationship between Japan and Korea has always been fragile due to historical conflicts: Japan's colonization of the Korean Peninsula (1910-1945) left resentment and bitterness in both countries. In particular, the sexual slavery of Korean women by Japan during wartime raised ethical questions that persisted for decades after Korean independence, and the dispute over the historical claim to the Liancourt Rocks only added to the tensions. Although the comfort women issue finally reached an agreement in 2015, the territorial claim to the Liancourt Rocks remains unsettled. Surprisingly, in the midst of this historical dispute, K-pop became a sensation in Japan from 2011 to 2017. The "Korean wave," or Hallyu, a term coined by Chinese media, is the phenomenon of the rising international popularity of Korean culture. Among the vast fields of K-culture, Korean pop music gained particular fame in Japan owing to its appeal to young Japanese women (Kozhakhmetova, 2012). Research further suggested that the projection of K-pop idols allows young Japanese consumers to idealize Korean men in general, assuming they mirror the good traits of the idols (Kozhakhmetova, 2012). This idealization of Korean men by Japanese consumers raises a question about K-pop's capacity to shape one group's views of another. Dinara Kozhakhmetova studied the attitudes of Japanese K-pop consumers toward Korean society (Kozhakhmetova, 2012). Through half a month of interviews in Tokyo with Japanese fans of the K-pop group AKiss, Kozhakhmetova concluded that continuous consumption of Korean pop music can positively affect Japanese youths' perception of Korean society through the idealization of Koreans (Kozhakhmetova, 2012). Another researcher, Anthony J. Giancana, expanded on Kozhakhmetova's study by arguing that favorable attitudes toward Korean society among the Japanese youth generation through K-culture consumption are present but have definite limitations in the populations affected and in their implications (Giancana, 2022). The idea that the Korean wave can serve as a strong source of soft power for South Korea has been tested by many scholars who reached different conclusions: some focused on the potential of Korean culture as soft power, while others pointed out its actual impact in the present. To assess both its impact and its potential, this paper explores the limitations of K-pop's power to create individual affinity toward Korean society during the period from 2011 to 2017, when K-pop's popularity soared. The research uses both qualitative and quantitative analysis, drawing on graphs and data to support its claims. First, the views of Japanese people toward Korean society are laid out for later comparison with reactions to the Korean wave. The paper then explores the rise in K-pop's popularity between 2011 and 2017. Finally, it analyzes the limitations of K-pop's ability to influence Japanese affinity toward Koreans.

1. The Views of Japanese People towards Korean Society

Japan's colonization of Korea ended in 1945 with Japan's defeat in World War II. South Korea and Japan normalized their relations with a treaty in 1965, in which Japan agreed to provide Korea with $300 million in grants and an additional $200 million in low-interest loans (Skabelund, 1994). President Park Chung Hee of the Republic of Korea, who set aside the dominant anti-Japanese sentiment because he saw value in Japan's modernization, began to secure ties with Japan (Magbadelo, 2006). Although Japan and Korea established ties following their normalized relations, the collaboration remained highly vulnerable because of obstacles rooted in their history. Even though the 1965 treaty was supposed to settle wartime grievances, Korea pointed to the absence of an official apology from the Japanese state for its sexual enslavement of Korean comfort women. The matter was urgent for the Korean government, which sought to reassure victims who were already in their 70s and 80s. Japan stood its ground, claiming it had apologized amply and that it was no longer necessary to repeat the process. In addition to the comfort women issue, Japan showed its desire to control the East Sea unilaterally by seizing South Korean fishing boats under a new baseline and repealing the 1965 fishery treaty in 1999 (Magbadelo, 2006). Simultaneously, a great dispute over the claim to the Liancourt Rocks, based on

163


different understandings of history has been a continuous debate that never came to a settlement. The last obstacle to improving the bilateral relationship between Japan and South Korea was the anger of Koreans on Japan’s supposing distortion of the nation’s war history in its textbooks and education systems (Magbadelo, 2006). The tension between the two countries was evident and therefore Japan did not possess much favorable attitude towards the Korean society even after their normalized relationship.

Figure 1. Japan's public opinion on South Korea
The graph above presents a survey conducted by the Japanese government's Cabinet Office on affinity toward Korea from 1980 to 2020. Affinity and attitudes shifted readily under Korea's different administrations, but the percentage reporting no affinity peaked during the presidency of Kim Young-sam of South Korea. After the 2000s, the affinity rate of Japanese people toward Koreans gradually increased. This period is also when Korean pop music started gaining popularity in Japan.

2. The Korean Wave in Japan from 2011 to 2017
2.1 The rise of the Korean Wave in Japan
The "Korean Wave" refers to the phenomenon of the increasing international popularity of Korean culture, encompassing dramas, music, fashion, TV shows, Korean cuisine, and more. The first period of the Korean Wave appeared in the late 1990s, when a Chinese channel began airing the Korean drama What is Love (Ahn, 2020). The drama's mixture of Confucian values and a freewheeling attitude made it popular in China, from which the wave gradually expanded into the rest of East Asia (Ahn, 2020). Japan rose as one of K-dramas' main markets after NHK (Japan Broadcasting Corporation) streamed Winter Sonata for the first time (Ahn, 2020), with around 3.5 million dollars in profit made through related products (Ahn, 2020).

The second Korean Wave is known to have emerged in the mid-2000s, centered on Korean popular music and spreading throughout Asia, parts of Europe, and the United States. The main consumers expanded from niche enthusiasts to teens and young adults (Giancana, 2022). The media age enabled this diversity of fans and geographic spread, as related content circulated globally through social networking. Like these other regions, Japan experienced a rapid increase in exposure to and popularity of the Korean Wave from 2011 to 2017.
2.2 The Korean Wave in Japan from 2011 to 2017
In 2011, a section of YouTube was launched solely for K-pop. The creation of this channel in the rising age of media and social networking gave foreigners easy access to the comparatively new music genre and its entertainment system (Kozhakhmetova, 2012). The channel uploaded more than 5 million videos of K-pop performances in a year, and, with the support of various platforms, K-pop idols eventually entered Japan's Oricon chart, a ranking of album sales compiled from 26,000 music stores across Japan (Kozhakhmetova, 2012). For example, 2PM placed two of its singles fifth or higher on the Oricon chart after its debut in 2008 (Kozhakhmetova, 2012). Oricon chart results are highly regarded for their accuracy (Kozhakhmetova, 2012).

Figure 2. The number of K-pop concerts in Japan, 2012-2017
Alongside K-pop's spread across media platforms, the number of concerts held in Japan rose rapidly between 2012 and 2017. Japan became K-pop's largest overseas market, and by 2012 its media and entertainment market was valued at 194 trillion won a year, five times the size of the Korean market at the time.

Figure 3. Weeks when non-Japanese musicians topped Japan's Weekly Hot 100
Despite the difficulty foreign singers face in topping another country's charts, Korean singers repeatedly finished in first place from 2010 onward (Giancana, 2022). The gap between the number of chart-topping songs by Korean singers and the number by all non-Japanese singers suggests that most K-pop songs performed in Japan were sung in Japanese (Giancana, 2022). According to a KOFICE (Korea Foundation for International Culture Exchange) survey of Japanese respondents conducted from 2014 to 2018, Korean pop music was popular because of the idols' outstanding performances and appealing appearances (Giancana, 2022). A survey of Japanese AKiss fans in Tokyo found that, after exposure to good-looking Korean idols with kind personalities, fans tended to imagine that Korean people in general looked and behaved like their idols (Kozhakhmetova, 2012). In interviews, fans said the distance between themselves and the idols felt closer than ever, even closer than with their own Japanese celebrities (Kozhakhmetova, 2012). Engagement with K-pop also introduced fans to the Korean language and lifestyle, driven by the need to communicate with their idols (Kozhakhmetova, 2012). Thus, such exposure to and idealization of Korean people can be said to have contributed to Japanese affinity toward South Korea, although the extent of that influence still needs to be examined.
3. The Impact of the Korean Wave
Looking back at Figure 1, which shows Japanese feelings of affinity toward South Koreans, the rate of no affinity between 2011 and 2017 remained higher than the rate of affinity, despite the rise in K-pop's popularity and the accompanying idealization of Korean people. This result indicates a clear limit on K-pop's ability to influence the feelings of the majority of Japanese citizens toward Korea.
3.1 The limited population of Japanese consumers of K-pop

Figure 4. The gender gap in Japanese K-pop consumption
According to KOFICE reports from 2016 to 2019, Japanese women were more invested in Korean pop music culture than Japanese men (Giancana, 2022). This gap reflects K-pop's limited reach across the Japanese population, and therefore fewer people developing affinity toward South Korea. In addition to the gender gap, K-pop's spread during the second Korean Wave was largely concentrated among Japanese people in their 20s and 30s. Over time the consumer base grew younger, with 85.9 percent of teenagers eventually calling the Korean Wave a hot topic (Lee et al., 2018). This spread to the younger generation was supported by the media age, which allowed consumers to selectively choose content they liked, avoid controversial or sensitive matters, and share content with others in their communities.
3.2 The disconnection of cultural consumption from political views
Fifteen participants in their 20s and 30s were surveyed on the connection between their cultural interests and political views. Thirteen of the fifteen answered that positive or negative political relations would not influence their cultural consumption (Ahn, 2020). Another survey was taken when the Liancourt Rocks conflict intensified after South Korea's then-President Lee visited the islets. When a Japanese women's magazine surveyed 100 K-culture consumers amid the strained relations between Japan and Korea, 71 percent replied that they would not stop being fans, since entertainment and politics are separate (Ahn, 2020). Most of that 71 percent turned out to be consumers in their 20s and 30s (Ahn, 2020). In contrast, most respondents in their 50s fell within the remaining 29 percent, who said they were displeased by the South Korean president's visit to the Liancourt Rocks (Ahn, 2020).

Conclusion
By examining Japanese affinity rates toward South Korea, the Korean Wave in Japan, and the limits of the Korean Wave's ability to shape those affinity rates, this paper concludes that the Korean Wave affects only a certain portion of the Japanese population and that its cultural influence remains separate from political attitudes. Although these conclusions cannot determine how strong a soft power Korean culture can be, the study suggests improvements in these areas if Korean society is to strengthen its cultural influence.

In contrast to the period from 2011 to 2017, one of the rockier eras in the Japan-South Korea relationship, relations are now improving under President Yoon Suk Yeol, who has repeatedly held summits with Japanese authorities on national security amid rising cross-national conflicts. As political relations improve, the outlook for Japanese views of Korean society appears positive, and bilateral cultural exchanges are likely to flourish.

References
Between Love and Hate: The New Korean Wave, Japanese Female Fans, and Anti-Korean Sentiment in Japan. (2020). Journal of Contemporary Eastern Asia, 19(2), 179–196. https://doi.org/10.17477/jcea.2020.19.2.179
Hallyu White Paper, 2018. (2019). Hallyu's Significance for Us. https://welcon.kocca.kr/cmm/fms/crawling/%5BKOFICE%5D+Hallyu+White+Paper+2018(1)_5875_%EB%AF%B8%EB%A6%AC%EB%B3%B4%EA%B8%B0?atchFileId=FILE_61e85810-9d244ba7-8c6a-b453218168c1&fileSn=1
Japanese Affinity Toward the Republic of Korea Survey. (n.d.). Japanese Government.
Korean NGOs and Reconciliation with Japan - Scientific figure on ResearchGate. Available from: https://www.researchgate.net/figure/Japanese-Public-Opinion-of-South-Korea-Data-sourceCabinet-Office-of-the-Japanese_fig1_368436219 [accessed 19 Aug 2024]
Kozhakhmetova, D. (2012). Soft power of Korean popular culture in Japan: K-pop avid fandom in Tokyo. Lund University Centre for East and South-East Asian Studies. https://lup.lub.lu.se/studentpapers/record/3460120/file/3910984.pdf
Kwak, Young-jin. (2017). 2016–2017 Overseas Hallyu Survey [해외한류실태조사]. Seoul: Korea Foundation for International Culture Exchange. http://kofice.or.kr/b20industry/b20_industry_00_view.asp?seq=295&page=1&find=&search=
Magbadelo, J. O. (2006). Japan and the Two Koreas: The Challenges and Prospects of Confidence-building. World Affairs: The Journal of International Issues, 10(2), 72–87. https://www.jstor.org/stable/48577772
Mutual Perceptions in Japanese and Korean Civic Society. (2015). "Nikkan Shimin Shakai Niokeru Songoninshiki," 3, 29–60. https://www2.jiia.or.jp/en/pdf/digital_library/world/170331_isozaki.pdf
Number of South Korean Pop Music Concerts Held in Japan from 2012 to 2017. (2018). Statista. Statista Research Department. Retrieved August 17, 2024, from https://www.statista.com/statistics/937182/south-korea-number-kpop-concerts-held-in-japan/
Song, S. (2020). The Evolution of the Korean Wave: How Is the Third Generation Different from Previous Ones? Korea Observer - Institute of Korean Studies, 51(1), 125–150. https://doi.org/10.29152/koiks.2020.51.1.125
The Effects of Popular Culture on Japanese and South Korean Attitudes. (2022). Naval Postgraduate School. https://apps.dtic.mil/sti/trecms/pdf/AD1200519.pdf
Unhealed Wounds: Japan's Colonization of South Korea. (1994). Sigma: Journal of Political and International Studies, 12. https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1080&context=sigma
Weeks that Non-Japanese Musical Acts Appeared Number 1 on the Japan Weekly Hot 100. (2021). Billboard Japan "Hot 100."

Comparison Between the Effectiveness of Aiding with Irrigation Infrastructure and Immediate Financial Aid as a Way to Foster Agricultural Productivity and Food Security in Sub-Saharan Africa

Author
Full Name (Last Name, First Name): Kim, Junseo (Liam)
School Name: Korea International School, Jeju Campus

Abstract
To answer the question of why global food insecurity remains a pertinent issue despite decades of global aid, this research paper presents a comparative analysis of the relative effectiveness of immediate financial aid and long-term infrastructural aid in resolving the food crisis in Sub-Saharan Africa, investigating the strengths and drawbacks of each method. As a long-term resolution to food insecurity, this paper suggests a balanced use of both types of aid tailored to the target-specific needs of each region, applying a flexible response to the macro- and micro-level causes of food insecurity.

Keywords
Immediate Aid: A method of global aid that provides supplies, such as cash or surplus food from inventories, of immediate use in mitigating the harms of food shortages in the short term.
Infrastructural Aid: A method of global aid that provides infrastructure the local population can use to seek food security in the long term; it may also enable recipient countries to cultivate their own food self-sustainably if the harms of internal corruption can be mitigated.
Food Insecurity: The state of a region lacking enough food for the local population to maintain nutritional health.
Food Sustainability: The state of continued, secure access to enough food for the local population to maintain nutritional health.

Introduction
Insecurity in agricultural productivity has been a persistent issue, leaving millions of people without a secure food supply. The number of people going hungry is alarming: more than 258 million people in 58 countries faced crisis-level or worse food insecurity in 2022. This figure is a significant increase from 2021, when 193 million people in 53 countries were starving, as global shocks such as climate disasters and the war in Ukraine severely disrupted global food supplies and intensified the crisis (World Food Programme, 2023). Despite decades of immediate aid, including direct cash transfers and donations of surplus food from global inventories, the need for more aid remains large, and millions continue to suffer prolonged hunger and malnutrition. This shows that conventional approaches to food-insecurity relief have been insufficient, so the underlying problems that hinder long-term improvements in food security must be identified. With this in mind, this paper aims to explore how best to design aid programs that not only provide immediate relief but also foster sustainable development toward secure food productivity, with a specific focus on Sub-Saharan African countries, whose agrarian setting holds great potential for achieving food security through the most applicable path to long-term self-sustainability: agricultural improvement.
Food insecurity has been particularly severe in Sub-Saharan Africa. In 2017, 55% of the population suffered from food insecurity, and 28% faced severe food insecurity. The region also had the lowest life expectancy and the highest infant mortality rates in 2020, struggling under dire health conditions in the aftermath of its long-standing food crisis (FAO, 2023). To mitigate the harms of this persistent issue, substantial amounts of food and financial aid have been provided to African countries. With the World Food Programme (WFP) at the forefront, a record-breaking 14.1 billion dollars was collected for food assistance in 2022 (World Food Programme, 2022). Using these funds, WFP aid programs have improved the nutritional conditions of 2.5 million people in food-scarce regions such as the Sahel, Somalia, Burkina Faso, Mali, and South Sudan, where 158,000 hectares of barren fields were turned into productive land. Despite these collaborative efforts, however, the need for aid continues to outweigh the food resources available for donation, as many more people worldwide still require food aid (World Bank, 2022). Ongoing internal political conflicts, often accompanied by war and climate-related shocks, further hinder efforts to fully tackle food insecurity. This has inevitably led to significant funding gaps amid the need for continuous and increased direct food and financial assistance (World Food Programme, 2023).
However, there has been consistent debate over the effectiveness and efficiency of immediate financial aid and long-term infrastructural aid. Immediate financial aid provides money directly, with the aim of producing immediate improvement during times of hardship.
The strength of financial aid in a food shortage is that it enables remedial responses, such as purchasing food and medical supplies, to be carried out quickly with suitable funds. Its shortcoming, however, is that the money provided is highly prone to mismanagement: without precise monitoring, financial aid may not reach its intended beneficiaries if those in charge of the funds divert the money to other purposes (BMC Public Health, 2020). In addition, the impact of financial aid may be short-lived, since money is not sustainable once spent. In contrast, infrastructural aid, focused on improving a region's food security by providing technology that enhances agricultural production, emphasizes laying the groundwork for long-term sustainability by constructing or improving physical infrastructure. This may include irrigation systems, food storage facilities, and innovative irrigation methods that help maximize local food production with the farming practices best suited to the region. Such initiatives may improve local access to food and build resilience against future food crises by encouraging the local population to participate willingly in sustainable food production using the donated infrastructure (Tomkins, 2012). One example is the Sahel Resilience Scale-up project, conducted by the World Food Programme, which yielded successful results by creating 158,000 hectares of new arable land through effective irrigation, thereby helping secure the long-term food security of millions (World Food Programme, 2023). Weighing these strengths and weaknesses, scholars continue to debate which type of aid is more effective in resolving global food insecurity.
Meanwhile, while a body of research compares the relative effectiveness of financial and infrastructural aid in general, research gaps remain. One is the uncertain long-term impact of these forms of aid, specifically the sustainability of the improved conditions. Beyond the broad focus on aid's immediate effects in mitigating global food insecurity, how both types of aid contribute to long-term systemic change remains underexplained. For a more accurate understanding of aid's impacts, there is a need to investigate how they correlate with institutional development and governance reforms in the receiving countries (Kilby, 2006, p. 137). Moreover, the connection between internal corruption and the true effectiveness of aid needs further exploration, particularly for countries with governments destabilized by political turbulence; the specific channels through which corruption affects the effectiveness of foreign aid remain unclear (Dreher & Fuchs, 2011, p. 7). Furthermore, additional empirical research is needed to contrast the long-term impacts of infrastructural aid and direct financial aid on overall economic stabilization and poverty reduction across different parts of the globe (Moyo, 2009, p. 214). This paper aims to bridge these gaps in the hope of providing a guideline for balancing immediate and infrastructural aid so as to maximize the effect of global aid to Sub-Saharan African countries.

Debates on the effectiveness of immediate financial aid
When considering factors that devastate a country, drastic events such as wars or natural disasters may come to mind first. However, countries are also undone by a lack of daily necessities, with food insecurity and the corresponding problems of hunger and malnutrition chief among them. Broadly, there are two forms of aid to address these issues: immediate financial aid and long-term infrastructural aid. While debate continues over which method brings the greatest improvement, aiding countries in need requires weighing the distinct advantages and disadvantages of each approach (Klitgaard, 2010, p. 28). Financial aid, together with other immediate forms of aid such as emergency food, water, and transportation support, produces an instant improvement in the status quo through the rapid relief it delivers to recipients in need. However, some drawbacks hinder the efficiency of these types of aid. Especially in countries with government corruption, immediate aid is open to misuse when there is no way to ensure transparent use of the funds, and it offers no clear path to long-term self-sustainability (Alesina & Weder, 2002, p. 1131). In addition, immediate financial aid may make the recipient country overly dependent on aid, undermining local economies by stunting the growth of local markets and, in effect, disturbing initiatives for the sustainable provision of food (Harvey & Bailey, 2011, p. 5). Hence, material forms of direct aid that immediately improve food conditions must be allocated strategically, in a way that does not hinder sustainable development in the long run.

The questionable transparency in the use of aid money is well illustrated by the immediate aid directed toward Haiti following the disasters of 2008. That year, four hurricanes, which brought devastating losses costing 15 to 20 percent of gross domestic product (GDP), struck amid food riots driven by a downturn in food security. While immediate global aid was provided to mitigate these harms, Haiti's government was not capable of managing the billion dollars' worth of aid. It was later found that $197 million in aid had been used elsewhere, when it was supposed to fund recovery from the hurricanes' losses (Klitgaard, 2010, p. 14). As this case study shows, even as billions of dollars pour into countries in need of relief, internal problems such as inefficient management and governmental corruption can hinder the proper use of aid. What makes such cases more likely is that countries in need of immediate aid often also suffer from internal corruption. According to Transparency International, a leading non-governmental organization fighting corruption, recipient nations of foreign aid tend to share fragile judicial systems, which, paired with dysfunctional public institutions, create fertile ground for corruption and undermine a nation's ability to prepare before humanitarian disasters strike and cause problems in areas such as food security (Nocita, 2020, p. 26). Somalia and Afghanistan, two of the countries most in need of aid, are cases in point: in Transparency International's ranking of 180 countries by severity of internal corruption, Afghanistan places ninth and Somalia first (Transparency International, 2023). Such statistics, showing a strong correlation between intense corruption and dire need for global aid, are alarming: the people most in need of support for sustenance are also those with the least access to efficient government policies for distributing aid. Because such governments, swayed by bribery and extortion, often divert aid to benefit elites, the vulnerable majority suffering from poverty and famine never receive the benefits that global aid was intended to deliver (Nocita, 2020, p. 28). Returning to the case of Haiti: although the funds were meant to rebuild hurricane-damaged houses, the initiative produced little result, as the aid ended up diverted or misused.

Prospects of applying immediate financial aid
Immediate financial aid, however, has its own promise if used properly and transparently. In fact, the global community has leaned toward this method of aid over the past decades for multiple reasons. One advantage is that it directly provides people in need with funds to purchase the goods, usually food itself, that each most urgently needs for survival. Since money is the medium of exchange for goods and services in a capitalist society, financial support for individuals is well suited to allocating aid for customized needs. As a side effect, local economies may become more active, since additional cash can increase demand for local goods and services, such as quality meals in restaurants. In an ideal setting, meeting the increased demand might also create employment opportunities. Moreover, financial aid can keep recipients from resorting to coping mechanisms, such as selling off productive assets in exchange for food, that would harm their future well-being. The long-term health risks of reduced food intake, such as the damage done by prolonged malnutrition, can also be prevented by enabling purchases sufficient to maintain one's nutrition. Even the drawback of immediate financial aid, its inability to improve food security by sustainable means, can sometimes be mitigated if the funds are managed wisely so that portions are kept as savings.

Debatable factors that make immediate financial aid preferable to infrastructural aid
In some cases, immediate financial aid may even be preferable to infrastructural aid. It does not require the lengthy planning, approval, and construction that infrastructural aid demands; rather, it can be used immediately by recipients in whatever way best addresses their urgent needs (Honorati, Gentilini, & Yemtsov, 2015). In addition, while infrastructure built through aid serves a target-specific purpose that may at times fail to address immediate or varied needs, cash, as a medium of exchange for many kinds of goods and services, can address individuals' customized priorities more fluidly. With enough financial aid to give individuals purchasing power, one could also expect local market activity to increase, in contrast to agricultural infrastructure, which lacks the direct power to provide immediate benefits to local economies (The Kenya CT-OVC Evaluation Team, 2012). It has also been found that a year's worth of direct cash transfers that raised household incomes led to higher life satisfaction and more positive beliefs about the future (Kilburn et al., 2018). From these perspectives, financial aid may serve local communities better than infrastructural aid by meeting individual needs flexibly while extending benefits to a larger and more diverse population, including people in remote areas. Although access to diverse goods and services may still be limited by regional differences in local market supply, cash gives a wider range of people the chance to address food insecurity, compared with infrastructural projects that might benefit only certain groups in the region where they are built. Concerns about the misuse of funds through internal corruption can also be addressed if allocated direct cash transfers are made to the individuals themselves. This may secure aid's effectiveness even better than large-scale infrastructural aid, which usually involves numerous stakeholders bound by a massive number of contracts, creating oversight loopholes in the proper management of funds during construction (Owusu-Addo et al., 2016). However, concerns remain about whether individuals will use their cash responsibly, in line with the original goal of alleviating food insecurity. While cash transfers have been shown to improve education and health outcomes and alleviate poverty in various contexts, individuals exercising free will in a free-market economy may be tempted to spend the cash on "temptation goods," such as tobacco or alcohol (Evans & Popova, 2014). Although such harms could be mitigated through appropriate social messaging, there remains a risk that individuals who previously had little or no surplus income for goods unrelated to survival and nutrition, such as tobacco and alcohol, will not be persuaded by those efforts.
As seen, financial aid carries its own risks: it is prone to misuse at the macro level through corruption and at the micro level through individuals' natural attraction to goods that satisfy temptation, which often overpowers the rational decision-making needed to secure food for survival.

Efficiency of immediate aid on agricultural productivity and food security in Sub-Saharan Africa
Now, let us narrow the focus to Sub-Saharan Africa, whose agrarian societies give it the greatest potential for improving food security through self-sustaining agriculture. Immediate aid, whether in the form of cash transfers or food assistance, is a pivotal response for alleviating food insecurity and improving agricultural productivity in Sub-Saharan Africa. The region is still plagued by food insecurity, the consequence of several multidimensional factors such as climatic variability, economic instability, and political unrest. Immediate aid interventions, in the form of cash transfers and food assistance programs, target hunger and poverty mitigation in the short term while aiming to enhance agricultural productivity in the long run (Baro & Deubel, 2006). A general overview of actual cases in which this method of aid worked in Sub-Saharan African countries can guide future aid plans. Cash transfers to recipients in Kenya have significantly improved local food security by directly enabling households to buy food during periods of heightened famine. In this way, direct cash transfers stabilize supplies sufficient for nutritious consumption and also reduce the stress levels of many households (Haushofer & Shapiro, 2016). Financial aid is not the only form of immediate aid: surplus food from global inventories can also be donated directly to help secure adequate food supplies for Sub-Saharan African countries. Direct food transfers produced real results in alleviating hunger among poverty-stricken households in Ethiopia: targeted food aid can avert acute shortages and stabilize household food consumption in times of crisis (Isenman & Singer, 1977). This can then serve as an agricultural buffer in downturns, ensuring adequate nutrition for farmers and their families, maintaining health, and supporting agricultural productivity (Hoddinott & Yohannes, 2002). Furthermore, cash transfers bring multiple advantages, not only empowering beneficiaries with choice and dignity in purchasing decisions but also stimulating the economy and thus boosting agricultural markets and livelihoods (Aker, 2017). Cash transfer programming has also proven successful in emergency response, delivering transfers more cost-effectively and promptly and responding quickly to food insecurity. Such transfers can be better targeted to the poorest, most vulnerable populations, indirectly stabilizing household incomes and, in turn, supporting agricultural productivity (Harvey & Bailey, 2011). Carefully designed and sustained social protection programs, including cash transfer programs, can build household resilience to shocks, encourage investment in agriculture and related productive activities, and improve long-term food security outcomes (Devereux, 2002).

Possible use of immediate aid as a way to tackle food insecurity
A critical review of the last five decades of food aid's evolution calls for nuanced strategies that integrate food aid into broader development objectives aimed at increasing agricultural productivity. Enhanced local food production and market development are strongly advocated, with supporting policies aimed at building resilience against future food crises (Barrett & Maxwell, 2005). To get the best value from aid's impact on food security and agricultural productivity, region-specific adaptive approaches should be used that take local contexts and preferences into account. One such approach could be a transformative social protection strategy for children in Africa, focused on the potential of social safety nets to break intergenerational cycles of poverty and enhance human capital development. Promoting nutrition-sensitive interventions by integrating them with agricultural development programs is also necessary for achieving sustainable food security and long-term nutritious diets for stakeholders (Sabates-Wheeler & Devereux, 2009). Whether in the form of cash transfers or food assistance, immediate aid has crucial potential to improve agricultural productivity and thus, ultimately, food security in Sub-Saharan Africa. Such interventions can meet short-term food needs while building resilience among poor households, helping to realize broader development objectives by laying the basis for sustainable agricultural growth (Gentilini, 2016). While design and implementation challenges remain, the evidence suggests that well-targeted, timely immediate financial aid is likely to produce significant improvements in food security and agricultural productivity in Sub-Saharan Africa.

Debates on the effectiveness of infrastructural aid
In direct contrast to the purpose of immediate financial aid, infrastructural aid emphasizes assistance that can yield long-term, sustainable improvements in food security. Such projects may include initiatives for overall economic stabilization, ridding governments of corruption through oversight that ensures the transparent and proper use of aid, creating healthcare facilities, and constructing infrastructure such as innovative irrigation facilities that raise agricultural output. While infrastructural aid suits conditions where sustainable long-term improvement is desired, it has its downsides: the extensive planning periods and the abundance of costly resources required to build the infrastructure. These factors also expose infrastructural aid to the potential for corruption in the building process, creating a need for effective governance frameworks to ensure that no resource misuse or corruption takes place during construction (Moyo, 2009, p. 48). Despite these drawbacks, infrastructural aid does bring meaningful long-term improvements to poverty and food scarcity by reinforcing opportunities for continuous economic activity and durable public services (Dabla-Norris et al., 2012, p. 59). In addition, concerns about the potential for corruption may be eased through systemic improvements as part of an anti-corruption strategy, specifically by raising the salaries of government officials with funds collected from foreign aid. Paired with institutional reforms of this kind, this could enhance the quality of governance by incentivizing officials to work efficiently and with higher morale, reducing triggers for corruption such as the perception of being underpaid (Quibria, 2017, p. 13). If this can be achieved, prolonged issues such as food insecurity could see lasting improvement, in synergy with the pursuit of a healthier, less corrupt government.

Efficiency of irrigation infrastructure on agricultural productivity and food security in Sub-Saharan Africa
Sub-Saharan Africa faces significant challenges in agricultural productivity and food security due to its reliance on rain-fed agriculture, which is highly susceptible to climatic variability. Introducing and scaling irrigation infrastructure is therefore seen as critical to countering these challenges by stabilizing agricultural production and improving food availability (Sauer & Tsegai, 2007). Irrigation infrastructure deeply affects agricultural productivity. Empirical evidence from Sudan shows that small-scale irrigation has brought large benefits to crop yields and incomes: a dependable water supply reduces dependence on rainfall, enabling farmers to grow crops year-round and plant a variety of crop seeds that produce high yields (Burney & Naylor, 2012). Both biophysical and socioeconomic analyses indicate immense potential for irrigation development in Africa, but while irrigation contributes significantly to agricultural productivity, realizing that potential requires better-targeted investment in areas suitable for irrigation (You et al., 2011). A review of trends in water and agricultural development shows that better management of water resources, for instance through irrigation, can drastically increase productivity; investment in irrigation infrastructure must therefore be paired with policies ensuring efficient and sustainable water use.
Irrigation is also central to food security. Studies of the adoption and impacts of micro-irrigation technologies in India, which carry lessons for Sub-Saharan Africa, show that micro-irrigation increases crop yields, improves household food security, enables year-round production, and minimizes crop failure risks (Namara et al., 2005). Efficient irrigation infrastructure improves water-use efficiency and the reliability of agricultural production, minimizes losses, and optimizes water use for food security. The constraints and opportunities for improving irrigation systems in Ethiopia highlight significant bottlenecks in infrastructure and technical know-how (Moges & Holden, 2008); addressing these bottlenecks is likely to advance food security. Immense economic gains are attributed to irrigated agriculture in Ethiopia, which stimulates the economy through improved agricultural production and jobs and thereby enhances food security by making food more available and accessible to households. However, the huge potential of agrarian water productivity can become reality only if irrigation is practiced sustainably, without harming the environment (Molden et al., 2010). High initial investment costs, limited access to finance, a lack of technical expertise in developing and maintaining irrigation infrastructure, and socio-political and governance issues are among the other constraints on proper implementation in these countries. Much of this potential requires focused investment and capacity building to be realized (Hagos et al., 2009). Governments and development agencies can support irrigation projects in areas of great potential and help farmers adopt advanced irrigation technologies that enhance the efficient and sustainable use of water. The empirical evidence thus supports the effectiveness of irrigation infrastructure in improving agricultural productivity and food security in Sub-Saharan Africa. Challenges remain, but targeted investment, efficient water management, and sustainable practices could unlock this potential (Molden et al., 2007), increasing agricultural productivity for a rising population and alleviating food insecurity in Sub-Saharan Africa.

Prospects of utilizing immediate financial aid and infrastructural aid at the same time
Meanwhile, there have been experimental approaches to resolving food insecurity that deploy immediate and infrastructural aid at the same time. Pilot projects integrating cash transfers with aid for local agricultural infrastructure have succeeded in alleviating hunger while laying the groundwork for long-term food security. The Food Assistance for Assets (FFA) program in Guatemala, conducted by the World Food Programme (WFP) in response to food insecurity brought about by severe droughts, is a successful example of such aid. Since the early 1990s, the program has funded measures such as planting diverse crop seeds on rehabilitated land through cash transfers, and it has installed irrigation systems to recover from agricultural losses. Through these means, Guatemala established both an immediate local capacity to produce food and long-term resilience against future climate shocks, with the project yielding up to a three-fold increase in regional productivity over 20 years (World Food Programme). This case study hints at how a synergistic mix of immediate and infrastructural aid can deliver highly effective results while realizing the respective strengths of each method in recipient nations.

Conclusion
Despite decades of such initiatives, food insecurity remains a pertinent issue that affects the lives of millions through malnutrition. To understand why food insecurity persists despite decades of immediate and infrastructural support, and to provide the best possible aid to Sub-Saharan African nations suffering from long-standing food insecurity, case studies are needed on the comparative advantages of immediate relief versus long-term solutions that reduce dependency on aid (Collier & Dollar, 2002). Such studies could examine the level of governmental corruption in recipient nations, as well as each region's climate conditions, which may determine the efficiency of irrigation-infrastructure aid in fostering self-sustaining food supplies in the long term. This would directly address recipients' target-specific needs and serve as groundwork while combining the benefits of the two types of aid, thereby maximizing aid's efficiency.
Delving further into the prospects of infrastructural aid raises an interesting thought: the current food situation in Sub-Saharan Africa might have been markedly different had substantial infrastructural investments been made 20 years ago. For instance, if transportation networks had been established alongside reinforced irrigation infrastructure, farmers might have been able to engage in long-distance sales of their increased crop yields through better physical and timely access to markets. Several harms arising from macro-level drivers of food insecurity, such as climate conditions, might also have been mitigated through innovative infrastructure. For example, efficient irrigation systems such as aquaponics, which uses the minimal amount of water needed for agriculture and saves surplus water for later use, could have mitigated losses in agricultural produce during drought seasons. Had such assurances prepared farmers in advance for approaching climate challenges, agricultural production could have remained viable even under dire circumstances, potentially securing food resources at all times. Secure storage facilities might have had similar effects by reducing food shortages during post-harvest seasons. What must be remembered, however, is that these are the expected results of infrastructural enhancement only on the premise that transparency is secured during construction. Because infrastructural aid carries this risk and is therefore not, by itself, a complete solution to food insecurity, it must be complemented by the benefits of aid that delivers immediate relief. These initiatives could be strengthened by pairing them with global efforts to build transparent governance structures, through methods such as, but not limited to, open monitoring under reliable global frameworks such as the United Nations. In that case, global agreements on surveillance guidelines that do not infringe on national sovereignty would be needed for the system to operate efficiently and legitimately. In a nutshell, a nuanced approach that draws on the benefits of both immediate financial aid and infrastructural aid is the key to improving food insecurity in Sub-Saharan Africa. By adopting this balanced strategy, Sub-Saharan African countries can take significant strides toward resolving their long-standing food insecurity.

References
Aker, J. C. (2017). The World Bank Economic Review, 31(1), 44-70.
Alesina, A., & Weder, B. (2002). Do Corrupt Governments Receive Less Foreign Aid? American Economic Review, 92(4), 1126-1137.
Baro, Mamadou, and Tara F. Deubel. "Persistent Hunger: Perspectives on Vulnerability, Famine, and Food Security in Sub-Saharan Africa." Annual Review of Anthropology, vol. 35, 2006, pp. 521–38. JSTOR, http://www.jstor.org/stable/25064936.
BMC Public Health (2020). The impact of food insecurity on health outcomes: empirical evidence from sub-Saharan African countries. BMC Public Health (BioMed Central).
Burney, J. A., & Naylor, R. L. (2012). "Smallholder Irrigation as a Poverty Alleviation Tool in Sub-Saharan Africa: Evidence from Sudan." Proceedings of the National Academy of Sciences, 109(31), 12325-12330.
Collier, P., & Dollar, D. (2002). Aid Allocation and Poverty Reduction. European Economic Review, 46(8), 1475-1500.
Dabla-Norris, E., Allen, R., Zanna, L. F., Prakash, T., Kvintradze, E., Lledo, V., & Gollwitzer, S. (2012). Budget Institutions and Fiscal Performance in Low-Income Countries. Journal of International Development, 24(8), 1031-1050.

Devereux, S. (2002). Development Policy Review, 20(5), 657-675.
Evans, David, and Popova, Anna. (2014). Cash Transfers and Temptation Goods: A Review of Global Evidence. World Bank Policy Research Working Paper No. 6886. Available at SSRN: https://ssrn.com/abstract=2439993
FAO (2023). The State of Food Security and Nutrition in the World 2023. Accessed 18 July 2024.
Gentilini, Ugo. (2016). The Other Side of the Coin: The Comparative Evidence of Cash and In-Kind Transfers in Humanitarian Situations. World Bank Studies. Washington, DC: World Bank. doi:10.1596/978-1-4648-0910-1. License: Creative Commons Attribution CC BY 3.0 IGO.
Global Report on Food Crises (2023). World Food Programme. https://www.wfp.org/publications/global-report-food-crises-2023
Hagos, F., Makombe, G., Namara, R. E., & Awulachew, S. B. (2009). "Importance of Irrigated Agriculture to the Ethiopian Economy: Capturing the Direct Net Benefits of Irrigation." IWMI Research Report 128, International Water Management Institute.
Harvey, P., & Bailey, S. (2011). Cash Transfer Programming in Emergencies. Humanitarian Practice Network, Overseas Development Institute.
Haushofer, J., & Shapiro, J. (2016). The Quarterly Journal of Economics, 131(4), 1973-2042.
Hoddinott, J., & Yohannes, Y. (2002). International Food Policy Research Institute (IFPRI) Discussion Paper.
Honorati, Maddalena; Gentilini, Ugo; Yemtsov, Ruslan G. (2015). The State of Social Safety Nets 2015. Washington, D.C.: World Bank Group. http://documents.worldbank.org/curated/en/415491467994645020/The-state-of-social-safety-nets-2015
Isenman, Paul J., and H. W. Singer. "Food Aid: Disincentive Effects and Their Policy Implications." Economic Development and Cultural Change, vol. 25, no. 2, 1977, pp. 205–37. JSTOR, http://www.jstor.org/stable/1152858.
Kilburn, Kelly, et al. "Paying for Happiness: Experimental Results from a Large Cash Transfer Program in Malawi." Journal of Policy Analysis and Management, vol. 37, no. 2, 2018, pp. 331–56. JSTOR, http://www.jstor.org/stable/45105254.
Klitgaard, Robert. Addressing Corruption in Haiti. American Enterprise Institute, 2010. JSTOR, http://www.jstor.org/stable/resrep03087.
Maxwell, Daniel G., and Christopher B. Barrett. Food Aid after Fifty Years: Recasting Its Role. 1st ed., Routledge, 2005.
Moges, S. A., & Holden, N. M. (2008). "Irrigation Potential in Ethiopia: Constraints and Opportunities for Enhancing the System." International Water Management Institute Conference Paper.
Molden, D., Frenken, K., Barker, R., de Fraiture, C., Mati, B., Svendsen, M., ... & Finlayson, C. M. (2007). "Trends in Water and Agricultural Development." Water for Food, Water for Life: A Comprehensive Assessment of Water Management in Agriculture, Earthscan, London.

Molden, D., Oweis, T., Steduto, P., Bindraban, P., Hanjra, M. A., & Kijne, J. (2010). "Improving Agricultural Water Productivity: Between Optimism and Caution." Agricultural Water Management, 97(4), 528-535.
Moyo, D. (2009). Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa. Penguin Books.
Namara, R. E., Upadhyay, B., & Nagar, R. K. (2005). "Adoption and Impacts of Microirrigation Technologies: Empirical Results from Selected Localities of Maharashtra and Gujarat States of India." Research Report 93, International Water Management Institute.
Nocita, Nick. "Where That Used Teddy Bear Really Goes: Corruption and Inefficiency in Humanitarian Aid." Harvard International Review, vol. 41, no. 1, 2020, pp. 24–29. JSTOR, https://www.jstor.org/stable/26917277.
Owusu-Addo, Ebenezer, et al. "Cash Transfers and the Social Determinants of Health: A Conceptual Framework." Health Promotion International, vol. 34, no. 6, 2019, pp. e106–18. JSTOR, https://www.jstor.org/stable/48547018.
Quibria, M. G. "Foreign Aid and Corruption: Anti-Corruption Strategies Need Greater Alignment with the Objective of Aid Effectiveness." Georgetown Journal of International Affairs, vol. 18, no. 2, 2017, pp. 10–17. JSTOR, http://www.jstor.org/stable/26396014.
Sabates-Wheeler, Rachel, & Devereux, Stephen. (2009). Social Protection for Transformation.
Sauer, J., & Tsegai, D. (2007). "Irrigation Infrastructure and Water Use Efficiency in Sub-Saharan Africa." Quarterly Journal of International Agriculture, 46(3), 265-284.
The Kenya CT-OVC Evaluation Team. (2012). The impact of the Kenya Cash Transfer Program for Orphans and Vulnerable Children on household spending. Journal of Development Effectiveness, 4(1), 9–37. https://doi.org/10.1080/19439342.2011.653980
Tomkins, Alan J. (2012). "Combating Food Shortages in Least Developed Countries: Current Development Assistance Approaches." The Law and Development Review, 5(2), Article 3. DOI: 10.1515/1943-3867.1170
World Bank (2022). Rising Food Insecurity. Accessed 14 July 2024.
World Food Programme (2022). Annual Review 2022. Accessed 17 July 2024.
World Food Programme. "Food Assistance for Assets." UN World Food Programme, www.wfp.org/food-assistance-for-assets. Accessed 23 July 2024.
You, L., Ringler, C., Nelson, G., Wood-Sichra, U., & Robertson, R. (2011). "What is the Irrigation Potential for Africa? A Combined Biophysical and Socioeconomic Approach." Food Policy, 36(6), 770-782.

Beyond Sacrifice: Enhancing Human Rights with Minority Inclusion in South Korea's Labor Market

Author
Full Name (Last Name, First Name): Kim, Yejin
School Name: The Masters School

Abstract
In this paper, I examine South Korea's struggle to balance human rights in a labor environment rooted in an authoritarian history, Confucian values, and social norms. The 2024 medical strike highlights the tension between individual aspirations and concern for society as a whole. South Korea's troubled history with human rights, its sociocultural expectations, and its insider-outsider dynamics further complicate this balance. I argue for the greater inclusion of marginalized groups, such as women, non-Koreans, and people living with disabilities, in decision-making processes, which can foster a more equitable society that respects human rights. Through education, cross-cultural exchanges, and increased representation of minorities, South Korea can create a more inclusive society that values diversity and equity.

Keywords
Human rights, South Korea, marginalized groups, cultural dynamics, inclusive decision-making

Introduction
In a world where people are free to chase their dreams, we should all have a chance to succeed. But what if our dreams put others in danger? This question became a very real problem on February 20, 2024, when about 12,000 medical residents trained at some of the best schools in South Korea suddenly quit. Their strike rocked the nation, disrupting medical services with surgery cancellations and treatment refusals. Sadly, a few people even died, including a 17-year-old girl whose ambulance was turned away by at least three hospitals too backed up with patients to help (Kim 2024). The media had a tough time explaining the strike. Some reporters blamed the government: it had done little to improve working conditions for these young doctors, and they had finally had enough (Kim 2024; Kuhn 2024). Others pointed to elitism: these future doctors wanted to secure comfortable lifestyles and maximize their profits in a country where a growing elderly population depends on doctors to survive (Lee 2024; Suh 2024). While this situation is complex, it is not the only time South Korea has struggled to balance human rights with attractive working conditions. This paper examines why this problem occurs and argues that the inclusive participation of marginalized groups, such as youth, women, and non-Koreans, in legislation and positions of power could help create a workplace that respects human rights while remaining attractive to all workers. By understanding the challenges of protecting human rights in different countries, we can see how far we have come and what we still need to do to ensure everyone's rights are respected. To get a better picture of the struggle itself, we first need to look at the historical development of human rights in South Korea.

A Difficult Journey: Sacrifice for Success
In South Korea, human rights are a relatively recent concept that blossomed through struggles with authoritarian rule.1 From 1962 to 1979, Park Chung-hee ruled the country with an iron fist, serving as its third and longest-serving president. Park's approach often overlooked basic human rights in favor of economic advancement and "industrialization." Personal sacrifice was seen as necessary for the greater good. One example was the Midnight Curfew Law (1945-1982). This policy set out to establish social order but played out more as a means of exploiting citizens during the day: force them to rest at night, only to work them senseless after they wake up (Encyclopedia of Korean Culture, n.d.).2 Such policies were rampant despite public opposition, and there seemed no end in sight. Eventually, the director of the president's own intelligence service assassinated him, and General Chun Doo-hwan soon seized power in a coup d'état. Unfortunately, the next president was no better than his predecessor. Chun's self-appointment immediately triggered nationwide unrest, which reached a boiling point in Gwangju, located in the country's southwestern tip, and ended in a massacre. Chun's forces led a brutal crackdown that killed roughly 600 citizens and wounded countless others (Yonhap News, 2005). This tragic event, referred to today as the Gwangju Uprising,3 dashed hopes for democracy and human rights. It wasn't until the fall of Chun's regime and the democratic election of Roh Tae-woo in 1987 that the political situation renewed hopes for expanding civil liberties. This new situation fostered an environment that allowed for discussions of human rights. The National Museum of Korea captured the times as follows: "The form emerged before the content. Representatives of the people selected by the election, division of powers, and political freedoms were introduced first, but it took much time before the principles became realities. Initially, Koreans did not pay for democracy. It was handed to them on a platter. Only later, by paying the price, did they actually get real democracy."4 This historical account might be brief, but it is crucial. The toppling of dictatorships helped to form a country and labor system founded on "multiple unionism at apex levels" (Song 1999, 5). More importantly, it's worth keeping in mind that the history leading to today's democratic environment began forming only about 30 years ago.



There are two additional problems that help explain why human rights remain such a complicated struggle in the country, and both point to culture: sociocultural expectations and insider-outsider dynamics.

Sociocultural Expectations and Gender Equality
The history above hints at why Koreans often submit to the idea that sacrifice for the common good is necessary and, at times, a national duty.5 Unfortunately, putting the group before the individual does not impact all groups equally, and we see this clearly with the largest marginalized group in the country, namely women. Connected to both its male-led dictatorial past and its Confucian hierarchical roots, Korean society has historically categorized and treated people with 'justified inequality.' While the country long ago did away with its caste system, it still treats people unequally, and this shows clearly in the treatment women often face. For instance, women who become pregnant and have children may face challenges in the workplace, with some experiencing job loss or inadequate support. The underlying idea is that such women should not have left the home or burdened companies with their family affairs. Currently, South Korea has one of the world's highest gender pay gaps. According to a 2018 BBC report, among 29 developed nations, South Korea ranked as the worst developed nation in which to work as a woman (Bicker, 2018). This gap in wealth and treatment is a human rights violation. In 1985, South Korea joined the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), a UN agreement between countries that aims to eliminate all forms of discrimination against women and girls and to promote equal rights. In addition, the country has publicly expressed its commitment to achieving the 2030 UN Sustainable Development Goals. In terms of gender equality, these goals include valuing unpaid work and providing public services and protection, undertaking reforms to give women equal rights to economic resources, and adopting legislation for gender equality and the empowerment of women (United Nations, n.d.). Many of these efforts, however, rely on social changes that conflict with the country's traditional roots. South Korea is founded on patriarchal values. Unlike many Western cultures where equality is stressed, Korean culture values group harmony and hierarchical systems of respect. It is common in Korea to address your boss with an honorific title instead of a name to show respect, unlike in many Western workplaces where people speak to one another as equals. So even in language, inequalities are reproduced. This makes it challenging for marginalized people to have their voices heard, and it sustains environments where violence against them can thrive. For instance, on November 6, 2023, a man was arrested after attacking a woman working at a convenience store whom he assumed to be a feminist because of her short hair. In such an environment, it's hard to say that women enjoy equal treatment when it comes to work. Despite these challenges, women's economic participation increased from 34.4% to 48.1% between 1965 and 1999 (KOIS, n.d.). This increase has come to challenge cultural views that women's rightful place is in the home. In addition, their newfound financial empowerment and participation in the non-domestic market have helped shine a light on their unequal treatment in the workplace and society in general. The popularity of feminist literature such as Kim Ji-young, Born 1982 and women's participation in feminist movements like #MeToo have further strengthened their voices.
In order to promote and protect human rights in the workplace, the country must grapple with unraveling cultural expectations that favor hierarchical, top-down structures at the expense of its marginalized populations. One creative approach was that of the CJ Group, the first South Korean company to do away with job titles, deciding instead to attach the honorific suffix "nim" to each employee's name. Earlier attempts to eliminate job titles failed due to the confusion they caused employees and their lack of significant impact (Kim, 2016). This fight between modernity and tradition makes it difficult to ensure equality for all and protect marginalized people, and as I argue, might require more participation from marginalized communities, i.e., a bottom-up approach, to win.

Insider-Outsider Dynamics and Further Marginalized Groups
Another reason why it's difficult to create a fair labor system that harmonizes with human rights in South Korea relates to the society's tendency to interact based on insider-outsider dynamics. This dynamic particularly matters when non-Koreans are involved. South Korea is a small country that issues visas and treats non-Koreans on the assumption that these people are 'guests.' In some cases, this leads to either indifference to foreigners' problems or an appraisal of them as tools contributing in their capacity as guests. One example is the weighing of their earning potential in their own country against what they might make in South Korea. South Korea's minimum wage, for instance, is roughly six times greater than that of the Philippines. Workers from the Philippines are thus seen as guests reaping the rewards of being honored with a job in a 'richer country.' The underlying assumption, however, is that these 'outsiders' and 'guests' will (and should) eventually go back to the Philippines. The government's recent visa program targeting Philippine nationals to work as domestic care workers in South Korea is a case in point. Starting in August 2024, the government launched a pilot program that allows 100 foreign domestic workers to work in Seoul for six months. The program aims to address the shortage of domestic helpers, providing the workers opportunities to earn more than typical salaries in their home countries, though at a pay rate below Korea's set minimum wage. The program offers these women cross-cultural service and technical training, positions them as connectors between companies in both countries, builds their résumés, and gives them a chance to take some extra cash home. Yet, because the program is only six months long, it presents several potential problems beyond mere language and cultural differences. Employers and housing providers may not care how they treat workers, knowing the workers will leave in six months. The short duration might also draw workers who join the program without a serious commitment to the work. Conversely, some of these workers might risk staying on as undocumented immigrants or visa overstayers, either because there are no smooth reintegration programs for them in the Philippines or because they like their jobs in South Korea. The program is rife with potential problems rooted in its birth from insider-outsider thinking. The main issue is that South Korea fails to see these workers as part of its society, even though it needs them. To bridge the gap between labor needs and respect for human rights, the country must first see these people as fellow humans deserving of respect and dignity, rather than as tools to prop up an aging population and weakening economy. Unfortunately, the country has not done this; it has invited these workers without defining clear protections for them as a very new population of laborers who deserve opportunities for integration in the country (Yeung and Bae, 2023). We also see this insider-outsider dynamic in the treatment of Koreans who live with disabilities. Even though people living with disabilities are citizens, their dignity as humans is sometimes not respected when it comes to equity, employment opportunities, and labor.
An example of note is the Brothers Home scandal, which occurred several decades ago but has resurfaced as a hot topic today. The scandal took place in South Korea from 1976 to 1987, before the 1988 Seoul Olympics. While the incident itself does not mirror today's labor problems, it ties historically to the dilemma of labor-based exploitation. The Brothers Home was a welfare facility run by Park In-keun, an army sergeant turned social welfare director. The organization kidnapped young homeless and disabled people, taking them off the streets. The stated goal was to help them, but the facility secretly served as a way to hide them from the public eye in preparation for the '88 Olympics. Survivors described their experiences as "sheer hell." The facility was no different from a concentration camp, with residents called by numbers, abused, and subjected to forced labor. Investigators reported that Brothers Home pocketed more than half of residents' savings or paid them nothing (Choe, 2022). This exploitation relates primarily to the view of these individuals as outsiders: runaways, the homeless, and people living with disabilities. Such discriminatory practices against those perceived as outsiders amount to human rights abuses. Many individuals living with disabilities, for example, face significant challenges navigating infrastructure that hinders or outright prevents their mobility.6 Over the past year, wheelchair users have protested against a transportation system that largely ignores their needs. As one respondent stated in a recent BBC report, "a lot of people in South Korea think disabled people live comfortably on welfare" (Marsh, 2023). In reality, they receive on average only about 300,000 won per month, roughly one-seventh of the monthly minimum wage. They need jobs and a means of traveling to their places of employment. The three examples above, the gender pay gap, the inconsiderate treatment of foreign workers, and the disregard of people living with disabilities, have several overlapping aspects that make me believe culture plays a role in complicating human rights efforts in the country. While Koreans respect equality, equity is a more controversial concept. Equity is the idea of providing support to certain groups that need extra help to compete fairly with others. One example is when the government gave extra points on the college entrance exam to students living in rural areas. Such policies are either not created at all or heavily criticized. Several cultural reasons might explain this phenomenon. One points to Korea's historical homogeneity. As a group, Koreans often see themselves as sharing identical or mostly similar histories, which reduces the perceived need for equity. In other words, for many, especially the privileged, equality and equity feel like oppression. Another view interprets equity as a form of favoritism for certain groups over others, giving them routes to cheat the system. Because of these cultural tendencies, Koreans often care more about the majority than the minority, which fundamentally conflicts with the type of thinking we need to ensure human rights.
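As a rough arithmetic check on the benefit gap described above, here is a minimal sketch in Python. The hourly minimum wage (9,620 KRW, the 2023 statutory rate) and the standard 209-hour month used to convert it to a monthly salary are my assumptions, not figures from the cited BBC report:

    # Rough check: how does the average disability benefit compare to
    # a full-time monthly minimum wage? Assumed inputs (not from the
    # cited report): 2023 minimum wage of 9,620 KRW/hour and the
    # standard 209-hour month used in Korean wage calculations.
    HOURLY_MINIMUM_WAGE_KRW = 9_620
    HOURS_PER_MONTH = 209
    AVERAGE_BENEFIT_KRW = 300_000

    monthly_minimum_wage = HOURLY_MINIMUM_WAGE_KRW * HOURS_PER_MONTH
    gap = monthly_minimum_wage / AVERAGE_BENEFIT_KRW

    print(f"Monthly minimum wage: {monthly_minimum_wage:,} KRW")  # 2,010,580 KRW
    print(f"Gap: about {gap:.1f}x the benefit")                   # about 6.7x

Under these assumptions, the benefit comes to roughly one-seventh of a full-time minimum-wage income, consistent with the figure in the text.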

Empowering Marginalized Voices: The Pathway to Inclusive Human Rights Reform
Historically, South Korea has been a very homogeneous society, sharing the same backgrounds without much exposure to different cultures. This leads Koreans to treat people from minority groups as different or separate from themselves. To strengthen human rights efforts, however, we must change this majority-rules mentality to one that values and considers minorities more. To cultivate a more inclusive society, the government should take steps to provide minorities with a platform to voice their needs and concerns. Education is a good place to start. Hiring more teachers with foreign backgrounds and teachers living with disabilities can raise cultural awareness and visibility, providing students with diverse perspectives and encouraging them to be open to differences. Beyond simply having these people present, though, I think it's important for Koreans to teach their children by example, illustrating exactly what accepting others means. Students need to see their Korean teachers speaking English. Korean teachers should show their openness to cultural exchange by eating non-Korean foods, celebrating overlapping cultural practices like birthdays or holidays, and, most importantly, treating foreigners as their equals. This matters at every level of education; if left unfixed, the problem will simply reproduce the same inequalities we see today. In addition to hiring more teachers, the country needs to increase its efforts at interacting with other cultures. As I write this paper, I can't help but think of the positive effects of the Hallyu movement and how it has brought kimchi into everyday speech around the world. However, Korea needs to do the same in reverse and open itself up to cultures that encourage acceptance of marginalized peoples, rather than simply absorbing American (white) culture and sending Korean culture abroad. Korea has no doubt had a difficult past with its neighbors; however, it should still strive to find connections with them. Increasing exchanges between secondary and post-secondary students for sports, short-term study abroad, and friendly academic competitions can build strong relationships between culturally different people and enrich youth with cross-cultural experiences.



While education is often seen as a fail-safe for solving most problems, in this case change also requires the courage to give power to those who need it. Today, few people can name any non-ethnic Koreans who have played significant roles in legislation. Perhaps the two most famous are Jasmine Lee, a naturalized citizen of Philippine descent who served in Parliament from 2012 to 2016, and John Linton, a Korean-born naturalized citizen of American descent who became a lawmaker in the National Assembly this year. These figures have helped bring visible diversity into the country's political scene. Yet, even though Korea tends to see power in numbers, as mentioned above, it has never elected a Korean-Chinese (joseonjok) citizen to Parliament, despite this group being the largest ethnic minority living in the country, amounting to roughly 17% of the entire non-ethnic Korean population (KOSIS, 2023). This reluctance to share power is not limited to the political arena; few non-Koreans, and even few Korean women, head companies or appear in the public eye in positions of power. It's hard to say what the country can do to get women and minorities into higher positions in companies. A quota system would not be a workable solution because such roles are often filled one person at a time. To some extent, I believe Koreans already in positions of power should be more open to forming strong connections with women and minorities and empowering them to take on these roles.1 There are several cultural difficulties with doing so, especially considering that Koreans often care a great deal about their image. Most executives want to be seen with their peers or with those who can increase their social status. However, people in power should not be afraid of being judged by others. Giving women and minorities more platforms of power can greatly increase their visibility in the country, which will help us see them as humans rather than mere laborers. This new way of thinking and the added visibility of women and minorities in respectable positions will help motivate society to create the labor laws these groups need. Finally, two remaining factors in ensuring human rights protections need attending to, namely accessibility and legislation. Many minorities living in the country struggle to access and understand the law. Korean laws are complicated and difficult to find, and even accessing information by phone can be a nightmare. There are separate government numbers for different needs: 120 for public information, 119 for emergencies, 118 for reporting spam, 117 for school bullying, 116 for the standard time, 114 for directory assistance, 113 for reporting spies, 112 for the police, 111 for reporting terrorism, 110 for general information, 100 for consulting, 1345 for immigration-related needs, and 1331 for reporting human rights abuses. The lack of a central agency or place to get legal and public help creates an environment where foreigners can feel powerless. To improve accessibility, we need more centralization and an organization dedicated to assisting the various needs of non-Koreans living in the country. The country has only recently taken a step in this direction, creating the Korean Immigration and Integration Agency, or imincheong, on July 1, 2024.
Many in the country still fear that this organization and such centralization might serve as a source of Gestapo-style monitoring and "crackdowns on undocumented" foreigners living in the country (Lee, 2024). In terms of needed legislation, the current system heavily favors South Korean men; the gendered structure of the system alone makes this plain. According to recent statistics, there are 1,097 female judges, a record share of women in the field, representing 35% of all judges. Of the 14 Supreme Court justices, only three (21%) are women, and there are concerns that the number may decrease further. Meanwhile, 50.8% of high courts have no female judges at all (Kwon, 2023). As can be imagined, this lack of representation bleeds into daily life, with men receiving lighter punishments and women defendants often having difficulty communicating with judges. It also reproduces a system where the needs of women and other marginalized groups go overlooked. The country must establish a legal system that can better attend to these groups' needs through their members' active participation in lawmaking. Again, we are reminded of the need for greater representation of minorities and marginalized members in Parliament.

1. It could be argued that former President Park Geun-hye, the first woman elected president by the nation, rose to power through ties with politicians that go back historically to her father, former President Park Chung-hee.


Conclusion
South Korea faces various challenges in balancing human rights with an attractive, fair labor system, challenges rooted in its cultural history, social norms, and political structure. The country's authoritarian past and Confucian values have left a legacy in which sacrifice for the common good overshadows individual rights and equality. We see this especially with marginalized groups such as women, non-Koreans, and people living with disabilities, who are still fighting today for recognition and fair treatment in society and the workplace. While education and cross-cultural exchanges can foster inclusion and help create a society that values diversity and equity, empowering marginalized communities through representation in government and in corporations can create a visibly more equal society where we all live as neighbors and respect one another as humans. By taking a bottom-up approach that includes marginalized people in decision-making processes, South Korea can create a society based on equity that truly respects human rights while remaining attractive to all workers.

Endnotes
1. While South Korea technically became a "democratic" government in 1948 under Syngman Rhee's leadership (1948-1960), it began to show signs of moving towards authoritarianism with massive election fraud in 1960, which triggered the April 19 Revolution and led to the deaths of many protestors at the hands of the government (https://www.youtube.com/watch?v=Bamd-GpJvdo).
2. Restricting nighttime mobility maximized the exploitation of citizens during the day.
3. It is also referred to as the "Gwangju Democratization Movement" and the "Gwangju Massacre."
4. https://www.youtube.com/watch?v=Bamd-GpJvdo
5. The Asian Financial Crisis in 1997 forced society to return to prioritizing economic stability over civil liberties. However, this time citizens took a more active role, volunteering their sacrifices, as epitomized by the Gold Collection Campaign. Citizens nationwide donated their precious gold items, including their traditional dol rings, to help repay the country's $57 billion debt to the IMF. While these efforts were remarkable, accumulating $20 million in only a week, the sacrifices were stark and even led to cases of gwarosa, or death by overwork (Breen, 2010).
6. The view of "outsiders" as a singular group explains why people with disabilities find solidarity in broader agenda items. For example, people living with visible disabilities might not have the same needs as those who carry invisible disabilities.

References (By Order of Appearance)
Kim, Heejin. 2024. "Deaths from Doctor Shortage Fuel Election Angst in Korea." Bloomberg. April 8. (Last Accessed Jul 6, 2024: https://www.bloomberg.com/news/articles/2024-04-07/deaths-from-doctor-shortage-fuel-election-angst-in-south-korea).
Kuhn, Anthony. 2024. "Doctors in South Korea Walk Out in Strike of Work Conditions." NPR. February 29. (Last Accessed Jul 6, 2024: https://www.npr.org/transcripts/1234996311).



Lee, Eunwoo. 2024. "South Korean Doctors Should Return to Duty." The Diplomat. March 8. (Last Accessed Jul 6, 2024: https://thediplomat.com/2024/03/south-korean-doctors-should-return-to-duty/).
Suh, Kyoung-ho. 2024. "Are Doctors Really a Special Case." Korea JoongAng Daily. March 5. (Last Accessed Jul 6, 2024: https://koreajoongangdaily.joins.com/news/2024-03-05/opinion/columns/Are-doctors-really-a-special-class/1995401).
Encyclopedia of Korean Culture. n.d. "Yagan Tonghaeng Geumjibeop" (trans: Midnight Curfew Law). (Last Accessed Jul 6, 2024: https://encykorea.aks.ac.kr/Article/E0035247).
Yonhap News. 2005. "5-wol Dan-che, '5.18 Gwanlyeon Samangja 606 Myeong'" (trans: The May Organization, 606 Deaths During the May 18 Democratic Uprising). May 13. (Last Accessed Jul 10, 2024: https://n.news.naver.com/mnews/article/001/0001001551?sid=103).
Song, Ho Keun. 1999. "Labor Unions in the Republic of Korea: Challenges and Choice." Labor and Society Programme (Discussion Paper). https://library.fes.de/pdf-files/gurn/00164.pdf.
Bicker, Laura. 2018. "#MeToo Movement Takes Hold in South Korea." BBC. March 26. (Last Accessed Jul 20, 2024: https://www.bbc.com/news/world-asia-43534074).
United Nations. n.d. "Goal 5: Achieve Gender Equality and Empower All Women and Girls." (Last Accessed Jul 20, 2024: https://www.un.org/sustainabledevelopment/gender-equality/).
Korean Overseas Information Service (KOIS). n.d. "Women's Role in Contemporary Korea." Asia Society. (Last Accessed Jul 17, 2024: https://asiasociety.org/education/womens-role-contemporary-korea).
Kim, Da-sol. 2016. "Korean Companies Seek to Drop Job Titles." The Korea Herald. April 5. (Last Accessed Jul 27, 2024: https://www.koreaherald.com/view.php?ud=20160405001029).
Yeung, Jessie, and Bae, Gawon. 2023. "South Korea Needs More Babies and Workers. It's Hoping Foreign Housekeepers Will Fix That." CNN. September 1. (Last Accessed Aug 3, 2024: https://edition.cnn.com/2023/09/01/asia/south-korea-migrant-domestic-worker-intl-hnk/index.html).
Choe, Sang-Hun. 2022. "Decades After a 'Living Hell,' Korean Victims Win a Step Toward Redress." The New York Times. Aug 25. (Last Accessed Aug 10, 2024: https://www.nytimes.com/2022/08/25/world/asia/korea-abuse-brothers-home.html).
Marsh, Nick. 2023. "South Korea: Protesting for 20 Years and Still No Equal Rights." BBC. Jan 27. (Last Accessed Aug 10, 2024: https://www.bbc.com/news/world-us-canada-64369810).
KOSIS. 2023. "Gukjeok (Jiyeok) Mit Yeollyeongbyeol Deungnog Oegugin Hyeonhwang" (trans: Status of Foreigners Registered by Nationality (Region) and Age). (Last Accessed Aug 14, 2024: https://kosis.kr/statHtml/statHtml.do?orgId=111&tblId=DT_1B040A8&vw_cd=MT_ZTITLE&list_id=A8&seqNo=&lang_mode=ko&language=kor&obj_var_id=&itm_id=&conn_path=MT_ZTITLE).
Lee, Hyo-Jin. 2024. "Immigration Agency Plan Gains Momentum in National Assembly." Korea Times. February 2. (Last Accessed Aug 19, 2024: https://www.koreatimes.co.kr/www/nation/2024/08/113_368127.html).
Kwon, Seok Chun. 2023. "Itorok Namseongjeogin Daehanminguk" (trans: A Republic of Korea This Hypermasculine). Oct 23. (Last Accessed Aug 19, 2024: https://www.lawtimes.co.kr/opinion/192298).
Breen, Michael. 2010. "Financial Crisis in 1997-1998: Shock and Recovery." Korea Times. October 17. (Last Accessed Aug 18, 2024: https://www.koreatimes.co.kr/www/nation/2024/08/113_74690.html).



The Reasons Why Belarus's Relationship with Russia Is Significant to the EU and Surrounding European Nations

Author
Full Name (Last Name, First Name): Lee, MinSeok
School Name: Blair Academy

Abstract
The close relationship between Russia and Belarus has had a notable impact on the geopolitical landscape of Europe, especially in regard to recent conflicts and sanctions. The Russian Federation has faced international censure and pressure from nations such as the US, Ukraine, and Poland since the annexation of Crimea in 2014 and charges of human rights violations. Amid this international opposition, Belarus has stood out as a faithful and loyal ally of Russia. Belarus has supported Russia's military operations, housing Russian nuclear warheads and acting as a transit corridor for the invasion of Ukraine in 2022. Deep cultural, religious, and political ties have historically existed between Belarus and Russia, including a shared imperial and Soviet heritage. These ties have produced agreements on mutual cooperation and drawn the two nations closer together. Belarus has played a significant role in enabling Russia to evade international sanctions by serving as a transit state for the export of Russian gas and oil to Europe. Through this strategic alliance, Russia manages to reduce the economic effects of sanctions. Nevertheless, this also prompts nearby EU and NATO countries to raise security concerns. The relationship highlights Belarus and Russia's long-standing and intricate interdependence, as well as its important consequences for regional and international geopolitics.

Keywords: Belarus, Russia, sanctions, natural gas, oil



Introduction
In recent years, Russia's reputation has been damaged by its annexation of the Crimean Peninsula in 2014 and its abuses of human rights. Many nations, such as Ukraine, the United States (US), and Poland, have publicly denounced Russia and turned their backs on the country, imposing sanctions. China did not impose sanctions and has maintained a neutral stance on Russia's actions. Nevertheless, one nation has maintained the closest of ties with Russia since the late 1990s and early 2000s. Bordering Russia on its east, Belarus has provided arms and even allowed Russia to cross its territory into Ukraine in the recent invasion. Beyond physical aid, the nation has also acted as a transit state for exporting Russian gas to the European Union (EU) and other nations that sanctioned Russia. Compared to other nations that experienced Russian occupation in the past, and considering Polish and Ukrainian discontent with Russia, Belarus has expressed the most positive view of Russians. Belarusians closely resemble Russians ethnically, and Russian is an official language of Belarus alongside Belarusian. The two nations are also East Slavic, Orthodox Christian nations, and both were once part of the Kievan Rus', alongside Ukraine. This sense of brotherhood as Eastern Slavs, reinforced by numerous invasions by the Mongols, Poles, and Lithuanians, strengthened their relationship. For several centuries Russia and the Belarusians lived together, until Belarus became an autonomous state in 1917 after the Russian Revolution. In 1922, however, the Byelorussian People's Republic was absorbed into the Union of Soviet Socialist Republics (USSR). From then until 1991, Belarus and Russia were part of one nation. Even after the breakup of the USSR, the two nations agreed to create a Union State in 1997. One event that clearly projected the two nations' friendship came in June 2023, when President Vladimir Putin announced the official installation of nuclear warheads in Belarus, aggravating the already hostile atmosphere in the region. The president of Belarus, Alexander Lukashenko, has also discussed uniting the two nations into one with Putin. While this relationship is tight, it also matters to the nations adjacent to the two countries. The relationship is significant to nations bordering Russia and Belarus because sanctions on Russia can be neutralized as Russian exports, such as natural gas and oil, are transported through Belarus to Europe. Beyond neutralizing sanctions, neighboring nations are also directly affected by this relationship. Statistics and graphs showing the circulation of Russian products, especially energy, through Belarus display the country's role as a transit state for Russian exports.

Discussion
Background
1. Modern History of Belarus, Focused on Its Association with Russia
After the Russian Revolution in 1917, the Byelorussian People's Republic came into existence alongside the collapse of the Russian monarchy. The Russian Civil War then raged until 1922, and the Byelorussian People's Republic (BPR) was absorbed into the Russian Soviet Federative Socialist Republic (RSFSR). In 1922, the USSR formed, consisting of the RSFSR, the Ukrainian SSR, the Byelorussian SSR, and the Transcaucasian SFSR. These Soviet republics had nominal autonomy and sovereignty within the USSR. However, Byelorussian territory had been divided in half between Poland and the RSFSR during the Polish-Soviet War of 1919-1921. From 1922 to 1939, new industries were established in the Byelorussian Soviet Socialist Republic (BSSR) with the start of the five-year plans, and there were several purges following the Holodomor (Britannica). In 1939, the BSSR expanded to the west following the invasion of Poland by the USSR.



Two years later, on June 22, 1941, the USSR was invaded by the Third Reich, and the BSSR was occupied by the Germans in August of 1941 (Britannica). The republic became part of the German-occupied region called Reichskommissariat Ostland, meaning the Reich's Commissariat of the Eastland, until its liberation in August of 1944, exactly three years after the start of the occupation (Britannica). After the end of the Second World War, the Cold War descended upon the globe, and the USA and the USSR confronted each other as foes. In 1986, however, a catastrophe struck the BSSR: the Chernobyl disaster. This disaster contaminated roughly two-thirds of Belarus's land area and also contributed to the collapse of the USSR (Atlas Obscura).

2. History of Russia's Natural Gas and Oil Industries, from the 19th Century to Modern-Day Exports to Europe
After the start of the Industrial Revolution, Russia entered the oil business in the 19th century, obtaining oil from Baku, Azerbaijan, then part of a khanate annexed by the Russian Empire. In 1829, there were 82 hand-dug oil wells throughout the Empire, but oil production remained modest (Carnegie Endowment for International Peace). Moreover, corruption was significant, and the region was underdeveloped compared to the industrial centers of the cities. Entering the 1870s, however, the Russian Empire opened the Baku region to private corporations to compete in oil production. The result was a drastic growth in entrepreneurial activity and the establishment of drilled oil wells (Carnegie Endowment for International Peace). In 1874, Russian crude oil output reached 600,000 barrels, rising to 10.8 million barrels within just a decade (Carnegie Endowment for International Peace). By the 1880s, about 200 oil refineries operated in the Baku oil fields, which earned the name the "Black City." Although the Baku oil fields were the world's biggest oil producer at the start of the 20th century, Russian extraction technologies began to fall behind those of other Western nations. Russia's share of total oil exports dropped from 31% to 9% between 1904 and 1913 (Carnegie Endowment for International Peace). After the Russian Empire collapsed and the USSR was established, the government began turning oil and natural gas into the energy pedestal of the nation's production. Additional abundant oil deposits were found in the Volga-Ural region and Western Siberia beginning in 1929 (Carnegie Endowment for International Peace). While more deposits were found during WW2, drilling and extraction in earnest began in 1955, after the war (Carnegie Endowment for International Peace). Enormous investments and efforts were put into extraction, which led Soviet industry to surpass its previous levels of production. The Volga-Ural region and Western Siberia turned out to be a jackpot for the USSR: in the late 1950s, Soviet oil production nearly doubled, putting the nation in second place after the United States. Soviet production equaled approximately three-quarters of the total output of all Middle Eastern countries combined (Carnegie Endowment for International Peace). The Baku oil fields, on the other hand, failed to return to their pre-war levels of production. The Volga-Ural and Western Siberian oil fields peaked in 1975 and went into decline afterwards under immense government pressure to increase output (Carnegie Endowment for International Peace). The first natural gas extracted from Western Siberia was recorded in 1953, and large deposits were found not long after. However, industrial utilization of natural gas was only established in 1965, because planning for a network of gas pipeline systems throughout the USSR was considered insufficient until then (Carnegie Endowment for International Peace). This led to a boom in the oil and natural gas industries of Western Siberia and the USSR as a whole, and made those industries cover a significant portion of the nation's economy and exports.
After the collapse of the USSR, Russia relied on oil and natural gas exports to wealthier nations to sustain its economy (ScienceDirect). Throughout the late 1990s and early 2000s, Russia exported its natural gas and oil to Europe through gas pipelines. Since the invasion of Ukraine, however, sanctions from the EU and other countries have made Russian natural gas and oil unavailable to many European nations, leading prices to rise significantly.

Literature Review
1. Brotherhood and Historical Unity Between Belarus and Russia
The relationship between Belarus and Russia is framed as a great unity of brothers and friendship between the peoples. The prime ministers of Belarus and Russia have cooperated quite successfully, supporting and relying on each other in hard times (Soyuz). For Belarus, Russia is a protective, reliable, and strategic partner, a comrade-in-arms. The ambassador of Belarus stated that the two nations remember their history, that this shared history unites them, and that together they are stronger while divided they are weakened (Soyuz). Belarus is losing its sovereignty and independence to Russia, falling within the range of Vladimir Putin's power. The installation of Russian nuclear warheads on Belarusian soil shows how Belarus has succumbed to Russia's authority and is gradually stepping closer to Russia. Russia's control and influence within Belarus are enormous and increasing in many fields. In the post-Cold War era, the president of Belarus, Aleksandr Lukashenko, utilized Soviet slogans of "brotherhood and unity" with Russia but also made sure Belarus did not get too involved or close with the Russian Federation (The New York Times). After the annexation of Crimea in 2014, President Lukashenko had to weigh the possibility of Belarus also becoming part of its greater neighbor. In 2020, however, Lukashenko suppressed hundreds of thousands of protestors calling for democracy and an end to his authoritarian regime (The New York Times). This created further chaos and turbulence within Belarus, and President Lukashenko's reputation was ruined. Fortunately for Lukashenko, Putin stepped in and provided Belarus with cheap energy, an economic lifeline, and security assistance (The New York Times).

Figure 1. Putin (left) with Lukashenko (right). (Source: Sky News, https://news.sky.com/video/vladimir-putin-arrives-in-belarus-for-talks-with-alexander-lukashenko-12771642, 19 Dec. 2022)

This made Belarus highly dependent on Russia and showed how Russia had become a vital partner for Belarus. During the Russian invasion of Ukraine, Belarus provided its land for Russia to transport military supplies and aid for its army. The US and the EU imposed sanctions on Belarus for its violations of human rights and its crushing of pro-democracy protests. The installation of nuclear warheads was also part of the agreements included in the Union State, with the warheads to be controlled exclusively by the Russian government (The New York Times). Many human rights advocates and candidates running against Lukashenko have been jailed, and pro-democracy movements are diminishing alongside the growth of pro-Russian propaganda inside Belarus. The alliance between Belarus and Russia is an important geopolitical partnership in Eastern Europe. Due to its strategic significance, this alliance has been a concern for Europe, especially amid the ongoing conflict in Ukraine and the regional geopolitics of Eastern Europe. The shared Soviet past between Belarus and Russia forms the historical foundation of their relationship (The New York Times). Through numerous treaties and accords, Belarus and Russia sought to maintain their strong and tight bond following the collapse of the USSR. While the Union State treaty intended both nations' political, economic, and military systems to be integrated by 1999, the process has largely stalled (The New York Times). Even so, the two nations have consistently kept their alliance, grounded in their East Slavic ancestry and geopolitical interests, despite several disputes over Belarus's economic dependence on Russia. Russia's strategic moves in the Ukraine war were made possible by the use of Belarusian land. While no Belarusian troops have fought in the war, Belarus served as an essential base of operations for Russian forces advancing on Kyiv in the early phases of the invasion. The war's dynamics changed dramatically as a result of this logistical support, enabling Russia to conduct attacks with more efficiency (The New York Times). Belarus significantly integrates its military capabilities with those of Russia by housing Russian soldiers and the Wagner Group. By raising tensions with nearby NATO members like Poland, Lithuania, and Latvia, these measures have drawn attention to Belarus's growing role as a Russian stooge in the region.

2. Belarus's Role in the Russian Invasion of Ukraine
Throughout Russia's invasion of Ukraine, Belarus has allowed Russian troops to cross its land to assault Ukraine from the north. While Belarusian soldiers have not been deployed or used arms in the war, concerns have been raised about the nation's direct involvement as the war progresses. In 2023, Belarus housed the Wagner Group, a Russian private military group, after its rebellion against the Russian Federation (NPR). Since the invasion, Belarus has allowed Russian nuclear missiles and warheads to be placed on its territory and has supported Russia's invasion throughout. Numerous joint military exercises between Belarus and Russia have been conducted to train troops (NPR). The West has accordingly regarded Belarus as a supporter of Russia's barbaric and vicious invasion (NPR).

Case Studies
The close relationship between Russia and Belarus, especially in the present situation, is extremely important to European security and international trade. As the war between Russia and Ukraine continues, the atmosphere in Eastern Europe has grown tense with fear of Russia expanding its influence even further, into regions such as the Baltics, and Belarus can play a significant role in assisting Russia both physically and economically. Numerous instances of economic and military cooperation between the two nations have occurred since the outbreak of the Russo-Ukrainian war. Belarus's dependence on Russia's resources, such as oil and natural gas, alongside armed personnel, has bound the two countries together as close partners, especially while Russia is heavily sanctioned by numerous nations (The New York Times). Moreover, the geographic location of Belarus makes it a valuable partner for Russia. Since Russia has received sanctions from countries across the globe, it is crucial for Russia to acquire as many allies as possible. Considering the economic and trading restraints on the Russian Federation, having Belarus as a partner is highly beneficial to Russian exports internationally. For instance, since the early 2000s, Belarus has helped Russia sell its exports to EU nations by acting as a transit state for natural resources (German Economic Team). Oil and natural gas are also crucial for Europe's lighting and indoor heating, which pushes prices even higher. The Yamal and Mozyr pipelines that pass through Belarus run directly to Germany and Poland, among the more powerful and influential nations of the EU.



1. Germany vs. Russia and Belarus
While another pipeline network runs through Ukraine, it was deactivated when Russia invaded Ukraine. Germany faced severe crises in 2022 when Russia closed the gas valves running through Belarus and the Nord Stream after the invasion of Ukraine: the natural gas price index rose drastically for households as well as for German industries (Nature News). In this way, Russia turned the sanctions imposed on it to its advantage by using Belarus as a pathway.

Figure 2. Natural gas price growth in Germany from 2020 to 2022. (Source: Nature Energy, https://www.nature.com/articles/s41560-023-01260-5, 4 May 2023)

Figure 2 displays the growth in natural gas prices in Germany from 2020 to the end of 2022. As shown in the graph, the gas price index for households, for the Title Transfer Facility (TTF), and for industries rose significantly after the invasion of Ukraine. The TTF is a virtual market for buying and reselling natural gas, and as shown, it rose drastically after the invasion due to the closing of the gas valves.

Figure 3. The main gas pipelines flowing from Russia to Europe and the Caucasus. (Source: https://www.bbc.com/news/world-europe-59246899, accessed 12 Nov. 2021)



Figure 3 displays all the main gas pipelines that flow from Russia to Europe. From the Volga-Ural deposits, the pipelines connect to a network leading to European nations such as Germany and Poland. The Northern Lights pipeline links to the Nord Stream, Yamal-Europe, and Transgas. Among them, the Yamal-Europe and part of the Northern Lights pass through Belarus.

Figure 4. The oil pipelines that pass through Belarus to Europe. (Source: jamestown.org/program/belaruss-role-in-east-european-energy-geopolitics/, accessed 31 Jan. 2020)

Figure 4 shows the oil pipelines that pass through Belarus to reach Europe. The oil pipelines all connect at Mozyr, and from there the network branches off to various European nations. Additionally, a significant portion of the northern Druzhba pipeline, which connects to Poland and Germany, is located in Belarus.

2. Bulgaria vs. Russia and Belarus
Nations located in Eastern Europe, such as Bulgaria, trade with Russia and Belarus more than Western European nations such as Germany or France do, and they help circulate Russian natural gas and oil. For instance, Bulgaria imported Russian oil from August to October 2023, netting the Russian Federation 430 million euros in direct taxes, despite EU sanctions against trading with Russia (Politico). Russia has benefited greatly from those deals with Bulgaria, and several EU diplomats have argued that Bulgaria's trading exemption would only fuel Russia's war and threaten the security of the EU (Politico). This does not only apply to Europe but also to nations in Asia and Oceania that imposed sanctions on Russian exports. Here too, Belarus can act as a transit state for trading Russian exports to other countries. Additionally, gas prices rise sharply when Russian gas is not directly available to European nations, profiting Russia when its natural gas is eventually bought. This allows Russian export products to flow into international markets involving the West instead of being sanctioned and restricted. The fact that sanctions imposed on Russia can be neutralized through Belarus raises concern for the EU, US, and other Western-allied nations, because economic restraints on international trade would then fail to thwart Russia's actions.

Conclusion
The two nations are mutual partners, and Belarus's role as a transit state neutralizes sanctions imposed on Russia. Their economic and geopolitical positions are also vital to each other, making cooperation between the two nations successful. Russia has been Belarus's biggest economic partner as well as the most influential nation in Belarusian affairs. While the two may benefit from this relationship, the EU and NATO do not perceive it as a positive development for other nations, especially in Europe. Territorial and political disputes over security and military issues can be expected among adjacent countries such as Lithuania, Poland, and Latvia. Given the geopolitical and political situation the two nations are in, the relationship does the West no good and runs counter to its policies. The mutual relationship between Belarus and Russia has a real impact on nearby nations and allows Russia to profit from its natural resource industries while evading sanctions.

References
Jack, Victor. "Bulgaria Moves to End Russia Sanctions Opt-out as Pressure Mounts." POLITICO, 13 Dec. 2023, www.politico.eu/article/bulgaria-end-russia-sanctions-opt-out-price-cap-loophole/.
"Kochanova: Belarus and Russia Are Good and Close Friends." Soyuz.By, soyuz.by/en/politics/kochanova-belarus-and-russia-are-good-and-close-friends. Accessed 31 July 2024.
Hopkins, Valerie. "Belarus Is Fast Becoming a 'Vassal State' of Russia." The New York Times, 22 June 2023, www.nytimes.com/2023/06/22/world/europe/belarus-russia-lukashenko.html.
Masters, Jonathan. "The Belarus-Russia Alliance: An Axis of Autocracy in Eastern Europe." Council on Foreign Relations, www.cfr.org/backgrounder/belarus-russia-alliance-axis-autocracy-eastern-europe. Accessed 31 July 2024.
German Economic Team in Belarus. Belarus as a Gas Transit Country. Research Center of the Institute for Privatization and Management, 4 Mar. 2004, www.files.ethz.ch/isn/125717/RU_21.pdf.
Mammadov, Rauf. "Belarus's Role in East European Energy Geopolitics." Jamestown, 31 Jan. 2020, jamestown.org/program/belaruss-role-in-east-european-energy-geopolitics/.
Ermolaev, Sergei. "The Formation and Evolution of the Soviet Union's Oil and Gas Dependence." Carnegie Endowment for International Peace, 29 Mar. 2017, carnegieendowment.org/2017/03/29/formation-and-evolution-of-soviet-union-s-oil-and-gas-dependence-pub-68443.
Siccardi, Francesco. "Understanding the Energy Drivers of Turkey's Foreign Policy." Carnegie Russia Eurasia Center, Carnegie Europe, 28 Feb. 2024, carnegieendowment.org/research/2024/02/understanding-the-energy-drivers-of-turkeys-foreign-policy?lang=en&center=russia-eurasia.
Sullivan, Becky. "Why Belarus Is So Involved in Russia's Invasion of Ukraine." NPR, 11 Mar. 2022, www.npr.org/2022/03/11/1085548867/belarus-ukraine-russia-invasion-lukashenko-putin.
Rostovtsev, Mikhail Ivanovich, and David R. Marples. "Belarus." Encyclopædia Britannica, Encyclopædia Britannica, inc., 30 July 2024, www.britannica.com/place/Belarus/The-emergence-of-the-Belorussian-Soviet-Socialist-Republic.
Richter, Darmon. "Exploring Chernobyl's Imprint on Neighboring Belarus." Atlas Obscura, 21 Sept. 2020, www.atlasobscura.com/articles/belarus-chernobyl-radiation-contamination.
Kutcherov, Vladimir, et al. "Russian Natural Gas Exports: An Analysis of Challenges and Opportunities." ScienceDirect, Elsevier, 15 June 2020, www.sciencedirect.com/science/article/pii/S2211467X2030064X.
Ruhnau, Oliver, et al. "Natural Gas Savings in Germany during the 2022 Energy Crisis." Nature News, Nature Publishing Group, 4 May 2023, www.nature.com/articles/s41560-023-01260-5.



China’s Soft Power Strategies: The Influence of Panda Diplomacy and Global Engagement Author

Full Name

:

Lee, Haryeong

:

HPrep Academy

(Last Name, First Name)

School Name

Abstract
Panda diplomacy has gained significant public attention in recent years due to the growing global interest in pandas. As a form of soft power, it exemplifies China's ability to influence other nations through traditions and culture rather than coercion or financial incentives. Unlike much of China's soft power, which is met with skepticism, panda diplomacy has proven remarkably successful. Pandas have become iconic symbols of China, promoting the nation's culture worldwide. This success is financially beneficial, as tourists visit China to see the famous pandas in person. China also generates substantial revenue through international panda loans, earning billions of dollars annually. This essay investigates how China's use of panda diplomacy, which reflects the good relationships China has with other countries, has been a successful tool of Chinese soft power.



I. Introduction
There is general agreement that China does not excel at soft power. Soft power is the ability to influence or persuade others, particularly countries, to obtain desired results without making threats or requiring payments. However, panda diplomacy represents a notable exception to this generalization. Panda diplomacy is a form of soft power, a diplomatic strategy in which nations use their customs and traditions to influence international relations. In China's case, pandas became emblematic of its international image. Since the Tang Dynasty, China has given pandas to other countries as diplomatic gifts symbolizing friendship, goodwill, and strengthened diplomatic relations (Diplo, 2021). Pandas have also facilitated the establishment of "guanxi," the Mandarin term for relationships and connections. In 1984, however, the practice of panda diplomacy changed significantly. While pandas were initially offered as gifts, they are now loaned for ten years or longer. Countries that receive pandas must pay $1,000,000 a year per panda, and panda cubs born abroad must return to China before turning four. Partly thanks to panda diplomacy, pandas, especially giant pandas, have increased in population. In 2016, the International Union for Conservation of Nature (IUCN) announced that there were more than 1,800 pandas around the world and that they were no longer an endangered species (China Highlights, n.d.). A notable example of China's unsuccessful soft power, by contrast, is the Belt and Road Initiative (BRI). While the BRI has succeeded in expanding China's economic influence, it has also drawn criticism for creating debt dependency among participating countries and for adverse environmental and social impacts. These issues have led to negative perceptions of China's intentions and raised doubts about the effectiveness of economic aid as a tool of soft power. In addition to the BRI, other examples of China's less successful soft power efforts include the China Dream campaign, the global expansion of Confucius Institutes, and the country's handling of the COVID-19 pandemic (European Bank for Reconstruction and Development, n.d.). This research paper primarily focuses on how panda diplomacy affects the relationship between China and three countries to which China has sent pandas: South Korea, the United States, and Japan. The study explores why panda diplomacy is seen as a result of China's economic or political relationship with these countries. Additionally, it examines the broader context of Chinese soft power, with a particular emphasis on panda diplomacy. The central argument is that if South Korea, the United States, and Japan continue to maintain positive economic and political ties with China, the practice of panda diplomacy is likely to increase, serving as a strategic tool to further solidify these relationships.
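To make the scale of the loan fees described above concrete, here is a minimal sketch in Python. The per-panda annual fee comes from the text; the pair size and ten-year term are illustrative assumptions, not fixed terms of any particular agreement:

    # Back-of-envelope cost of a hypothetical panda loan, using the
    # $1,000,000-per-panda annual fee stated above. The pair size and
    # ten-year term are illustrative assumptions.
    ANNUAL_FEE_PER_PANDA_USD = 1_000_000

    def total_loan_fee(num_pandas: int, years: int) -> int:
        """Total fee a host country pays over the life of a loan."""
        return num_pandas * years * ANNUAL_FEE_PER_PANDA_USD

    # A pair of pandas on a ten-year loan:
    print(f"${total_loan_fee(2, 10):,}")  # $20,000,000

On these assumptions, a single pair generates $20 million over a decade, which is why loan revenue grows quickly as the number of agreements increases.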

II. Literature Review
According to Foreign Policy, an American publication focused on diplomatic relations, China struggles to wield soft power effectively. A survey by the Eurasia Group Foundation revealed that people in Singapore, the Philippines, and South Korea generally do not view China's soft power favorably: three-fourths of respondents expressed a preference for U.S. soft power, while only one-third favored China's approach. This presents a significant challenge for Beijing, especially in countries like the Philippines and South Korea, where China's objectives often conflict with local policy. China's soft power frequently raises concerns, as countries receiving Chinese funding for soft power assets are anxious about China's international ambitions. Its financial, cultural, educational, and research investments are frequently scrutinized and viewed with suspicion. In fact, China's soft power efforts often have the opposite effect, repelling other countries rather than drawing them in, since China's actions can be perceived as bullying (Linetsky, 2023). According to Reuters, panda diplomacy, a practice initiated by China in 1949, involves China lending or gifting pandas to other countries as a form of goodwill. These pandas, often referred to as "ambassadors," are used to enhance China's global image. Initially pandas were gifted, but by 1984 China began loaning them due to their declining population. Typically, a pair of pandas is loaned for 10 years at an annual fee of $1 million. At the end of the loan period, the pandas are returned to China, along with any cubs born abroad, before the cubs reach their fourth birthday. Historically, China has used panda loans as a diplomatic tool, often in conjunction with financial relations.



According to a 2013 Oxford University study, China loaned pandas to Canada, France, and Australia during discussions about uranium trade. China has also used pandas to signal displeasure. In 2010, for instance, China recalled two pandas, Tai Shan and Mei Lan, from the U.S. after Washington planned a meeting between Barack Obama and the Dalai Lama, the Tibetan Buddhist leader. China's continued efforts to increase the panda population, especially that of giant pandas, have been largely successful: the number of pandas in the wild rose from approximately 1,100 in the 1980s to around 1,900 in 2023, and the species is no longer classified as endangered. Additionally, there are over 700 pandas in zoos and breeding centers worldwide (Reuters, 2024). Although China has encountered setbacks in its broader soft power efforts, panda diplomacy stands out as a notable success. To begin with, panda diplomacy has enhanced China's cultural appeal: with their adorable appearance, pandas symbolize friendship and peace, capturing global attention and fostering a positive image of China. Panda diplomacy also carries tourism and economic benefits. Pandas attract visitors to host countries' zoos, boosting local economies, and the media coverage surrounding them amplifies these benefits by generating additional revenue and promoting bilateral cultural exchange. Finally, panda diplomacy plays a strategic role in bilateral relations: panda loans are often linked to broader diplomatic and economic engagements, enabling China to build goodwill and strengthen ties with other nations, potentially yielding favorable diplomatic or financial outcomes.

III. Case Studies
Panda diplomacy has emerged as a significant issue in recent years, and it can best be analyzed through specific case studies. This paper focuses on three countries: South Korea, the United States, and Japan. These nations are selected because of their active trade and frequent diplomatic interactions with China, and because China has sent pandas to all three, both past and present. Table 1 illustrates that the United States, Japan, and South Korea are China's key trading partners and have received the highest numbers of gifted or loaned pandas.

Table 1. Countries' Trade Volume and Number of Gifted or Loaned Pandas by China

Country          Trade Volume (2019, millions USD)    Gifted or Loaned Pandas
United States    541,820                              11
Japan            314,747                              9
South Korea      284,538                              4
Taiwan           227,881                              2
Germany          184,743                              4
Australia        167,712                              2
Vietnam          162,083                              0
Malaysia         124,112                              2
Brazil           114,681                              0
Russia           109,742                              2

(Source: Author's research; International Monetary Fund, "Exports, FOB to Partner Countries," accessed August 15, 2020, https://data.imf.org/regular.aspx?key=61013712; and International Monetary Fund, "Imports, CIF from Partner Countries," accessed August 15, 2020, https://data.imf.org/regular.aspx?key=61013712.)

1. Panda Diplomacy in South Korea
To examine why panda diplomacy reflects a favorable relationship between China and recipient nations, it is crucial to investigate whether these countries have mutual relationships with China. South Korea received pandas from China twice: first in 1994 and again in 2016. In the years before 1994, economic relations between China and South Korea had begun to improve: indirect trade and economic exchanges were permitted, and China participated in the 1988 Olympics held in Seoul. Additionally, in August 1992, China and South Korea formally established diplomatic relations.



These positive relations continued, and in September 1994 South Korea was gifted two pandas, Li Li and Ming Ming. However, due to the Korean government's financial difficulties during the International Monetary Fund (IMF) crisis, the two pandas were returned to China in 1997. Roughly twenty years later, South Korea received another pair of pandas, Ai Bao and Le Bao. This was notable given the deteriorating political relations between South Korea and China in 2016: South Korea's decision to deploy the American THAAD anti-ballistic missile defense system met strong objections from China, which responded with an unofficial boycott of South Korea, restricting imports, curbing Chinese group tours to South Korea, and canceling K-pop concerts. Despite these tensions, the two countries had signed the bilateral China-South Korea Free Trade Agreement in 2015 to increase trade and boost both countries' GDP. Although political relations were negative, likely because of the two countries' economic interdependence, China sent South Korea the pandas, and they remain in Yongin. In July 2020, Fubao, the first giant panda born in South Korea, was born to Ai Bao and Le Bao. Fubao arrived while people were enduring the COVID-19 pandemic; the cub's adorable appearance and behavior gave emotional support to Koreans, and Fubao became famous nationwide. Figure 1 shows Le Bao, Fubao, and Ai Bao. In addition, China's ambassador, Xing Haiming, visited Everland and presented a thank-you plaque on Fubao's third birthday. Fubao recently returned to China under the loan policy requiring cubs to return before age four, which saddened many Koreans and sparked widespread interest in the role pandas play in strengthening Sino-Korean relations. To gauge how much Koreans know about panda diplomacy, I interviewed two South Koreans. Asked first whether they knew about panda diplomacy, both respondents answered that they had learned about it through Fubao. Asked how much they knew about it, both knew only its basic definition. The last question was whether they thought panda diplomacy benefits the South Korea-China relationship. One respondent answered, "Yes, South Korea and China's relationship used to be unstable and bad, but I believe that the panda diplomacy lessened the tension. However, recently, due to Fubao's return to China, the relationship between those two countries had become uncertain." The other respondent said, "I've heard that panda diplomacy, while intended to foster goodwill between South Korea and China, can lead to perceptions of manipulation, financial burdens, and political symbolism that strain bilateral relations. Balancing this symbolic gesture with substantive actions is essential to mitigate potential negative impacts." These interviews suggest that many Koreans learned about panda diplomacy through Fubao, and that opinions differ on whether it positively affects the South Korea-China relationship. As mentioned earlier, Koreans' affection for pandas can positively influence their perception of China. Moreover, Everland's panda videos have gone viral on Chinese social media platforms, which could lead to new civilian exchanges and public diplomacy programs, ultimately influencing political and diplomatic relations as well (Arirang News, 2023).

Figure 1. Le Bao, Fubao, and Ai Bao
2. Panda Diplomacy in the United States
The United States received two pandas, Ling-Ling and Hsing-Hsing, in April 1972. The U.S. and China have a long shared history, but the key events date from 1941. That year, during Japan's occupation of China, Americans were generous to the Chinese people, sending resources and food. Madame Chiang knew that the Bronx Zoo badly wanted pandas, so two baby pandas were caught in the wild and shipped to the United States. A nationwide naming contest followed, producing the names Pan-dee and Pan-dah. Though the names were not inventive, the pandas served as a tool for solidifying the relationship, for China gave something at a point when it had very little to give the United States. In 1949, Mao Zedong established the People's Republic of China, while Chiang Kai-shek, leader of the Nationalist government, fled to Taiwan. Chairman Mao was openly anti-American, and there was no hope for America to build a relationship with China. While China used panda diplomacy to reward its communist friends, it did not send pandas to the democratic nations of the world. By the early 1950s, no U.S. zoo could obtain new pandas to replace those that died, and the country would be without pandas for the next twenty years. By 1972, China's economy was weak and unstable, whereas America was rich and powerful. Chinese leaders could foresee that if they improved their relationship



with Washington, other democratic nations would follow. Both countries decided to improve relations, and as part of the initiative China agreed to dispatch pandas to the U.S. That year, the total value of merchandise trade between China and the United States was 95 million U.S. dollars. On November 22, 1972, President Nixon declared that the United States would lift travel restrictions on U.S. aircraft and ships heading to China. U.S.-China relations thus made some progress in 1972, although there were no changes of particular significance. After relations improved, Ling-Ling and Hsing-Hsing, the first pair of pandas gifted by China to a democratic country, were sent to the U.S. President Nixon remarked, "I think 'pandemonium' is going to break out right here at the zoo." For China to send two of its most precious national treasures to a country it had been openly feuding with only a few years earlier showed that the relationship would not be short-lived, and the Chinese government gave the gift extensive media coverage. China went on to send the U.S. more pandas than any other country: 11 pandas to four zoos. By then, pandas were being loaned rather than gifted, owing to the species' vulnerability to extinction, and the loans were often renewed. Nevertheless, in 2019 Donald Trump declared that "China raided our factories, offshored our jobs, gutted our industries, stole our intellectual property, and violated their commitments under the World Trade Organization." President Trump had won his mandate partly by taking an adversarial stance toward China, and in 2018 and 2019 he imposed tariffs on Chinese goods, making them more expensive for American consumers. The tariffs effectively priced Chinese goods out of the U.S. market and exposed the weakness of China's export-reliant growth model. China retaliated in kind with tariffs of its own on U.S. goods, beginning what economic histories now call the trade war. Within two or three years, China's exports to the U.S. had fallen by around 20 percent. Today China is a major world power, more able and willing to show its assertive side. Chong Ja Ian, an associate professor in the Department of Political Science at the National University of Singapore, observed, "I want to emphasize that many great powers act in a similar way. China's no exception." As expected, China allowed the panda loans to expire instead of renewing them, and pandas began leaving U.S. zoos; by late 2024, the last pandas were set to leave Zoo Atlanta. American zoos were left wondering why they had been caught up in what has been called punitive panda diplomacy. Still, there is hope. Xi Jinping stated, "Not long ago, three pandas at the National Zoo in Washington DC returned to China. I also learned that the San Diego Zoo and the Californian people very much look forward to welcoming pandas back. We are ready to continue our cooperation on panda protection with the US." Xi indicated that the San Diego Zoo would get its pandas back, since panda diplomacy is a win-win for both sides (CNA Insider, 2024). [1]

3. Panda Diplomacy in Japan
Japan received pandas from China in 1972 and 2011.
Japan's former foreign minister, Makiko Tanaka, is the daughter of Kakuei Tanaka, who normalized relations with China 50 years ago. On September 29, 1972, her father, then Prime Minister, met his counterpart, Zhou Enlai, in Beijing to sign a joint communique. Tanaka said her father was prepared to risk his life to make amends with China over Japan's wartime aggression. Given the tense ties between the two countries now, however, Tanaka sees little hope for politics to mend the relationship: "It will be the private sector. If business, scientists, and cultural exchanges were promoted more, there would be a sense of closeness." Kakuei Tanaka had said that leaving the China issue dangling was not good for the future of Japan and that it was important to create a win-win situation for both countries. He faced fierce domestic opposition to his trip to China and was prepared to resign if his mission failed. Over the past decade, the relationship



between the two countries has been strained by territorial disputes and wartime history. Makiko Tanaka also remarked, "If you cooperate on something good, any country will definitely be on board. But we don't have that with China, instead, we are just bonding together and being confrontational." Since normalization, however, China has become Japan's largest trading partner, and Japan has given China more than 25 billion dollars in development aid over the years (Associated Press, 2022). In December 1971, the Chinese and Japanese trade liaison offices had begun discussing the possibility of restoring diplomatic and trade relations, and ties were eased by unofficial trade and people-to-people exchanges. Thanks to these improved relations, Japan received two pandas, Kang Kang and Lan Lan. After they arrived, tens of thousands of Japanese went to the airport and visited the zoo that housed them, and Japan reaped economic benefits from both its citizens and increased tourism. Later, during the COVID-19 pandemic, panda cubs named Xiao Xiao and Lei Lei were born in Japan; only 1,080 people were selected to see them.

Figure 2. Ling-Ling and Hsing-Hsing

The next pandas were sent from China in 2011, the year of the Great East Japan Earthquake and the Fukushima nuclear accident. After the disaster, Japan faced severe economic challenges and sought to enhance its relationship with China, its economically robust neighbor, to aid its recovery. Japanese leaders emphasized Japan's substantial role in China's modernization as a rationale for this renewed focus, highlighting that Japanese companies had created 9.2 million jobs in China and contributed a total of 5 billion yuan in taxes since 1979. Additionally, more than 200,000 Chinese students have studied at Japanese universities. Japan's support for China went beyond economic aid, including favorable loans, assistance with infrastructure and environmental projects, energy development solutions, and research and development support. According to Japanese diplomat Keii Ide, these contributions significantly accelerated China's growth. Consequently, Japan's shift in policy toward China reflected both a recognition of its past support and a strategic move to leverage China's economic strength for Japan's recovery (Zabrovskaia, 2013). [2]

IV. Conclusion
In conclusion, panda diplomacy can be considered China's most successful exercise of soft power, and pandas are gifted or loaned when a country has positive relations with China. It is also evident that panda diplomacy helps countries improve their economic and political relationships with China. Panda diplomacy has been effective in increasing the panda population, and giant pandas are no longer classified as endangered. Similar diplomacy could be practiced by other countries with endangered species, raising both the animals' populations and their fame; South Korea, for instance, could pursue Asian black bear diplomacy, or Costa Rica sea turtle diplomacy. One limitation is that in 2016 South Korea received pandas from China despite a strained and deteriorating political relationship between the two nations; since China is said to send pandas to countries with positive relations or expectations of improved relations, this case appears paradoxical. The scarcity of scholarly research on the political dynamics between China and South Korea complicates the task of elucidating the political context preceding and during 2016. It is plausible that the economic relationship matters more than the political one, since past research indicates that China gifts or loans pandas to its trade partners. Looking ahead, there is reason for optimism that panda diplomacy will further strengthen bilateral relationships with China and inspire other countries to implement similar conservation efforts for endangered species in their own countries. Additionally, China can refine its soft power strategies and apply them to future projects.



References
Arirang News. (2023, July 28). Fubao sensation and China's panda diplomacy [Video]. YouTube. https://youtube.com/watch?v=Q5SgK83FEIo
Associated Press. (2022, September 26). Japan and China relations 50 years on [Video]. YouTube. https://www.youtube.com/watch?v=JLMmKtGwrk0
Bevan, M., & Parry, Y. (2023, December 8). Panda diplomacy: How China uses pandas to signal which countries they like, and which ones they don't. ABC News. https://www.abc.net.au/news/2023-12-09/what-is-china-panda-diplomacy/103185260
China Highlights. (n.d.). How China protects giant pandas - pandas now not endangered! https://www.chinahighlights.com/giantpanda/protect-panda.htm
CNA Insider. (2024, April 20). US-China rivalry: Why America is losing pandas - but it won't be forever [Video]. YouTube. https://www.youtube.com/watch?v=WrwSgd3OpZY&t=63s
Congressional Research Service. (2022, May 25). The People's Republic of China's panda diplomacy. https://crsreports.congress.gov/product/pdf/IF/IF12122/2
Diplo. (2021). Panda diplomacy. https://www.diplomacy.edu/topics/panda-diplomacy/
European Bank for Reconstruction and Development. (n.d.). Belt and Road Initiative (BRI). https://www.ebrd.com/what-we-do/belt-and-road/overview.html
Fatima, Z. (2021, February 23). Panda diplomacy. Centre for Strategic and Contemporary Research. https://cscr.pk/explore/themes/politics-governance/panda-diplomacy/
Fraser, M. (2023, April 14). China's panda diplomacy isn't as cuddly as it seems. UnHerd. https://unherd.com/newsroom/chinas-panda-diplomacy-isnt-as-cuddly-as-it-seems/
Hsu, C.-W. (2024, Spring). The softest of powers. Cabinet. https://www.cabinetmagazine.org/issues/68/hsu.php
Huaxia. (2022, October 1). Japanese people fascinated with China's giant pandas for half a century. Xinhua. https://english.news.cn/20221001/17f06c517ea740d7af7deee7704f26d1/c.html
Jacobs, J. (2019, April 30). Ru Yi & Ding Ding arrive in Moscow. GiantPandaGlobal.com. https://www.giantpandaglobal.com/zoo/moscow-zoo/ru-yi-ding-ding-move-arrive-in-moscow/
Kim, I. (n.d.). Korea's relations with China and Japan in the post-cold ... University of Connecticut. https://ciaotest.cc.columbia.edu/journals/ijoks/v2i1/f_0013360_10856.pdf
Linetsky, Z. (2023, June 28). China can't catch a break in Asian public opinion. Foreign Policy. https://foreignpolicy.com/2023/06/28/china-soft-power-asia-culture-influence-korea-singapore/
Ma, Y. (2024, February 23). Sino-U.S. trading relationship. Statista. https://www.statista.com/topics/4698/sino-us-trading-relationship/#topicOverview
MasterClass. (2022, June 17). What is soft power? 5 examples of soft power. https://www.masterclass.com/articles/soft-power
Masuda, T. (2022, October 30). 50 years of affection for Ueno pandas. The Japan News by The Yomiuri Shimbun. https://japannews.yomiuri.co.jp/society/general-news/20221030-67918/
Ministry of Foreign Affairs of Japan. (1972, September 29). Joint communique of the Government of Japan and the Government of the People's Republic of China. https://www.mofa.go.jp/region/asia-paci/china/joint72.html



Nye, J. (2019, April 1). Soft power and public diplomacy revisited. Harvard Kennedy School. https://www.hks.harvard.edu/publications/soft-power-and-public-diplomacy-revisited
Onion, A. (2023, November 15). Major milestones in US-China relations. History.com. https://www.history.com/news/us-china-relations-history-diplomacy-taiwan
Reuters. (2024, June 19). Explainer: What is China's panda diplomacy and how does it work? https://www.reuters.com/world/china/what-is-chinas-panda-diplomacy-how-does-it-work-2024-06-18/
Review of Foreign Relations. (n.d.). Relations between the United States and the People's Republic of China. https://www.mofa.go.jp/policy/other/bluebook/1972/1972-1-3.htm
Soft Power 30. (n.d.). What is soft power? https://softpower30.com/what-is-soft-power/
PON Staff. (2024, July 30). Panda diplomacy and business negotiations: Applying soft power. Program on Negotiation, Harvard Law School. https://www.pon.harvard.edu/daily/business-negotiations/panda-diplomacy-and-business-negotiations-applying-soft-power/
Tamura, M., & Yoshida, K. (2023, February 26). Panda diplomacy long serves as gauge of Japan-China relations. The Japan News by The Yomiuri Shimbun. https://japannews.yomiuri.co.jp/world/asia-pacific/20230226-93534/
Taylor, M. (2024, February 26). A brief history of "panda diplomacy" - with new additions to global zoos. BBC News. https://www.bbc.com/future/article/20240226-a-brief-history-of-panda-diplomacy--with-new-additions-to-global-zoos
Yang, D., & Lin, C. A. (2022, September 10). Are pandas effective ambassadors for promoting wildlife conservation and international diplomacy? MDPI. https://www.mdpi.com/2071-1050/14/18/11383
Zabrovskaia, L. (2013, November 26). Acta Asiatica Varsoviensia (Poland).
Zhang, L. (2021, June). Pandas: China's most popular diplomats. AEI. https://www.aei.org/wp-content/uploads/2021/06/Pandas-Chinas-Most-Popular-Diplomats.pdf



How did Christianity influence the arts and techniques of 17th century Japanese art, with respect to the Jesuit painting Seminario and the Virgin Mary Kannon sculpture?

Author
Full Name (Last Name, First Name): Lee, JoonSeok
School Name: Blair Academy

Abstract
The introduction of Christianity to Japan by the Jesuits created waves of religious and cultural change that reshaped Japan's artistic styles. Before the Portuguese Jesuits arrived in the mid-sixteenth century, Japan had little interaction with the West and was not acquainted with Western religion and culture. Gradually, with the introduction of Christianity, a portion of the Japanese population converted, and this influence spread to Japanese painting and sculpture. This essay addresses the following research question: "How did Christianity influence the subjects and techniques of 17th-century Japanese art, with respect to the Jesuit Seminario paintings and the Virgin Mary Kannon sculpture?" The question invites a close exploration of how the Japanese adopted Western religion and artistic styles, how these influenced 17th-century Japanese art, and how this acceptance showcases the evolution of Japanese art through foreign influence. The methodology includes a visual analysis of the Seminario paintings and the Virgin Mary Kannon sculpture, alongside secondary peer-reviewed research on Christianity's impact on 17th-century Japanese art. By examining these artworks and synthesizing existing scholarship, the study aims to provide a comprehensive understanding of Christian influence on Japanese art during this period. This research contributes to the study of Japanese history by providing insight into the brief period of Christian influence on Japanese art prior to the anti-Christian persecutions in Japan and the Meiji Restoration's pro-Western reforms in the 19th century. It shows how foreign religious concepts could strongly influence local art forms, and how Christian adherents in Japan held onto their faith through artistic expression during periods of severe religious persecution. In terms of limitations, Christian-influenced artworks available for study are rare because of the persecution of Christians beginning in the late 1610s and the Tokugawa Shogunate's isolationist policy, under which many artworks were destroyed. As a result, two of the best-preserved sources have been selected for study: the Virgin Mary Kannon and the Jesuit Seminario paintings. These two sources may not fully represent the entire spectrum of Christian influence on Japanese art.



Historical Background
Prior to the introduction of Christianity, Japanese religious art was dominated by Buddhist and Shinto imagery and styles. The arrival of European missionaries (beginning with Francis Xavier in 1549) brought new artistic subjects and techniques, including Christian iconography and more realistic representational styles. Christianity was first introduced to Japan in the mid-16th century, primarily through the efforts of Jesuit missionaries, including St. Francis Xavier. There were communication difficulties between the Jesuits, who had only one interpreter, named Anjiro, and the local Japanese, who did not fully understand the concepts of Christianity. Given that the Christian concept of "god" did not align with the Japanese understanding of "kami," the polytheistic Shinto deities and natural forces, it was hard for the Japanese to grasp the idea of Jesus Christ as the son of God or the existence of a monotheistic Christian God. Nonetheless, Christianity gained acceptance and a considerable number of converts: the Jesuits aimed to convert members of the ruling class first, so that the elite could influence those below them to convert as well. Missionaries from Spain, Portugal, and Italy focused on converting the Japanese population and used art as a visual medium to facilitate their evangelization. In 1583, the Jesuits established the first painting Seminario (Portuguese for "seminary," a school for training) in Japan, led by an Italian artist, to introduce European painting techniques and materials to Japanese artists. Along with Western culture came Christian iconography and themes, such as depictions of Jesus, the Virgin Mary, and other biblical scenes, and Japanese artists also began portraying European figures, ships, and objects in their artworks, often on folding screens. This period, often referred to as Japan's "Christian Century," saw a flourishing of Christian art, particularly in southwestern Japan, before the Tokugawa Shogunate outlawed Christianity in 1614. Before the Tokugawa ban, Toyotomi Hideyoshi had opposed Christianity: though he initially tolerated the missionaries, he later reversed course and restricted the religion to protect his own interests. His earlier opposition was strengthened by strong voices of opposition from the Buddhists, leading the Tokugawa Shogunate eventually to implement an outright ban on Christianity (Boscaro 219). This led to the emergence of "hidden Christians" (Kakure Kirishitan) who continued practicing secretly, and their practices drove Japanese Christian art to evolve into a more covert style. This will be discussed in this essay through the Virgin Mary Kannon sculptures, which subtly incorporate the Christian image of Mother Mary into the Buddhist deity Kannon. This allowed Christian adherents to embed Christian elements in what was outwardly a Buddhist statue, enabling them to secretly worship the Virgin Mary and Jesus Christ. The Jesuit painting schools, which likely produced the Seminario paintings, played an important role in introducing European painting techniques and religious subjects to Japanese artists, but their influence was limited by the increasing restrictions on Christianity.
During the period when Christianity was prohibited, Japanese artists began incorporating Christian themes and European artistic elements into their work, creating a hybrid style. A key example is the Virgin Mary Kannon sculpture, a Christian image disguised as the Buddhist deity Kannon. These sculptures maintained the basic form of Kannon while incorporating Christian elements, allowing hidden Christians to continue venerating Mary. Overall, Christian influence on 17th century Japanese art was characterized by an initial period of open adoption and experimentation, followed by a shift toward hidden or disguised imagery as persecution intensified. This produced unique artworks that blended Japanese and European elements in response to the changing religious and political climate.

Literature Review
To better understand the extent to which Christianity influenced the techniques and subjects of 17th century Japanese art, it is necessary to review secondary sources and identify the key ways in



which Christianity shaped the style and subject matter of Japanese painting and sculpture. This literature review surveys the sources relevant to the thesis question. Ogawa's thesis analyzes how the mixed style used by the cloistered and hidden Christian communities on Ikitsuki Island during the persecutions of the Tokugawa Shogunate (ca. 1640-1873) was indicative of the impact of Christianity on Japanese art (Ogawa 18-19). Ogawa shows how the hidden Christian communities co-opted local visual symbols and elements, which would not attract suspicion, to resist and subvert the oppression of the Tokugawa Shogunate and to sustain Jesuit Christian worship. This source is central to understanding how Christianity influenced the techniques and subjects of 17th century Japanese art, as it shows how Christian art could adapt and integrate into Japanese culture even under heavy persecution. The influence of Christianity on the hidden Japanese-Christian communities shows how Western cultural and artistic traditions shaped Japanese art. On Ikitsuki Island, the Kakure Kirishitan developed a hybrid style of art during the Tokugawa shogunate's persecution: "Due to the lack of material resources, they fused their recollected memories of the scripture with indigenous traditions, creating a unique expression of the Christian faith" (Ogawa 18). When the Jesuits first arrived in Japan, they brought gifts from Europe, including watches, music boxes, eyeglasses, and Christian paintings, which introduced Japanese officials to European artistic traditions (Ogawa 19). At the Seminary of Painters, the Jesuit missionary Niccolo taught young Japanese artists the new medium of oil painting and exposed them to European styles and techniques, which were distinctly different from traditional Japanese painting. Japanese artists took European principles and melded them with their own artistic styles. As Ogawa notes, "the stylistic difference between the two cultures lay in the attitude towards reality. While European Renaissance painters valued optical reality, traditional Japanese artists honored the essence of the objects" (Ogawa 22). European painters relied on "linear and atmospheric perspectives" to create a three-dimensional view of buildings and to convey the distance between foreground and background. By contrast, traditional Japanese painters emphasized "flat, stylized, and calligraphic qualities and depict idealized beauty as interpreted by the artists. [...] In addition, clear contour lines and the lack of foreshortening also give the image a very linear quality. Furthermore, Japanese artists utilized poetry and tales to capture and idealize the essence of the objects" (Ogawa 23). The "poetry and tales" refers to the fact that certain scroll paintings, like the Illustrated Legends of the Kitano Tenjin Shrine, include texts or short passages of literature that tell a story as the drawings proceed; in a single-page work, the only writing within the art is typically the painter's name or signature. The merger of these two styles eventually produced a uniquely Christian-Japanese approach, with hybrid styles radically different from Japanese art prior to Christian influence.
For instance, in one of the paintings, the "shadow on the left side of the Virgin and the angel's contour, as well as the bottom contour of the Child Jesus enhances roundness of the flesh" (Ogawa 23). Rather than leaning toward the Japanese style of flat, two-dimensional shapes, the painting leans toward the style of the Italian Renaissance. Hioki's treatise demonstrates how sophisticated European crafting and artistic techniques such as three-dimensional perspective, oil painting, and realistic representation were adopted by Japanese artisans in modernizing the aesthetics of Jesuit folding screens, showing how Japanese artisans adapted their styles to the latest technologies provided by the Jesuit missionaries. For example, the artwork Europeans Playing Music can be appreciated both as a traditional Japanese folding screen and as a European landscape painting, and this difference in perspective generates a uniquely European-Japanese art style encompassing both the culture of 17th century Japan and that of early modern Catholic Europe (Hioki 5-21). Hioki further contends that "the artists successfully presented traditions of East and West in concordance [with] divergent artistic traditions and aesthetic sensibilities co-existing without commingling or overcoming each other" (Hioki 5).



The Virgin Mary Kannon exemplifies this "successful blending of East and West," of Christian and Buddhist iconography, symbolizing cultural and religious hybridity (Foxwell 332). While the mother and child resemble the Buddhist deity, Foxwell points to "elements such as the tactile rendering of the spherical mandorla and the fluidity of the child's pose during an age when fidelity to nature was perceived as an overwhelmingly Western value" (Foxwell 332). Created by Japanese Christians during the Tokugawa shogunate's persecution, this figure allowed the faithful to practice Christianity secretly under the disguise of Buddhist worship. Combining elements from across the world, the Virgin Mary Kannon highlights the adaptability and resilience of the hidden Christians. This fusion ultimately reveals the dynamic nature of cultural identity, capable of evolving and sustaining itself through innovative expression.

Case Studies

Figure 1. Westerners Playing Music, early seventeenth century, Shizuoka Museum of Art

Westerners Playing Music, by an anonymous Japanese painter, is assumed to be composed of elements copied from Western paintings.



Figure 2. Portrait of the Zen Monk Daruma, Early 16th century, Tokyo National Museum

Figure 3. Illustrated Legends of the Kitano Tenjin Shrine, late 13th century, The Museum of Modern Art

Based on a visual analysis of the primary sources (the Seminario paintings of Japan and the Virgin Mary Kannon statue), there is clear evidence that Christianity had a significant, though eventually waning, influence on seventeenth century Japanese art. The introduction of Christian themes such as the sacrifice of Jesus Christ, the adoration of the saints, and the figure of Mother Mary shows how remarkable an impression the Jesuit influences made during this period. Foremost, the Seminario paintings showcase how European missionaries brought oil paints to Japan, introducing techniques such as chiaroscuro (the Italian term for the balance and pattern of light and shade in a painting) and linear perspective, which Japanese artists had never used before the arrival of the Jesuit missionaries. As is evident in Figures 2 and 3, traditional Japanese artworks lack significant light and color contrast. Additionally, their depiction of distance is limited to lines representing land or mountains, indicating approximate rather than precise distances, and they lack the detailed light contrast seen in Figure 1. Moreover, it is difficult to judge the magnitude of the mountains, as they appear



similar in size to a man standing behind a monster and a mythical Asian goblin wearing red cloth. In contrast, Figure 1, a Japanese artwork painted with Western techniques, clearly depicts the distance between the people with instruments and the mountains or the port through the use of perspective and chiaroscuro. Thus, the introduction of Western art to Japan marked a transformation toward more realistic portrayal, moving away from a flat painting style with little variance in color and light.

Figure 4. Nara Daibutsu, 8th century, Todai-ji temple

The Nara Daibutsu, also known as the Great Buddha of Nara, is a monumental statue located in the Todai-ji temple in Japan. It was commissioned by Emperor Shomu between 743 and 751 to promote Buddhism and bring peace and prosperity to Japan.

Figure 5. Kamakura Daibutsu, 1252, Kotoku-in temple

The Virgin Mary Kannon sculptures, which disguised Christian figures as Buddhist deities, exemplify the fusion of Christianity and East Asian beliefs. Most statues of Buddha, like the Nara Daibutsu or the Kamakura Daibutsu, are seated in the lotus position, symbolizing meditation and peacefulness. The Buddha's legs



are crossed with the soles facing upward. His right hand is raised with the palm facing outward in the abhaya mudra, symbolizing fearlessness and protection, while his left hand rests on his lap in the dhyana mudra, indicating meditation and spiritual perfection. His face is calm and serene, with half-closed eyes, portraying a state of meditation and inner peace. This combination of mudras and meditation posture represents the Buddha's teachings and his merciful, protective nature. The Virgin Mary Kannon sculptures, however, are mostly shown holding a baby, so their hands cannot assume positions equivalent to the Buddha's. The Virgin Mary Kannon sculptures are also radically smaller than the various Daibutsu, or Great Buddhas, spread across Japan: the statue in Figure 4 is 14.92 meters tall, and the statue in Figure 5 is 13.35 meters tall, showing how immense such statues are in general. Since the Kannon sculptures were made while Christian persecution was active, it was impossible for Christians to carve massive statues like those in Figures 4 and 5.

Table 1. Heights of Daibutsu Statues Across Japan

Daibutsu      Height
Nara          14.92 m
Kamakura      13.35 m
Gifu Great    13.63 m
Tokyo         13.00 m
Showa         21.35 m
Nihon-ji      31.05 m
Ushiku        120.00 m
Takaoka       15.85 m
Asuka         2.75 m

The nine Buddha statues listed above are 27.32 meters tall on average, emphasizing their monumental presence and the importance of Buddhism in Japanese society. The Virgin Mary Kannon sculptures, by contrast, are considerably smaller, often shorter than an average man (about 1.7 meters), reflecting their secretive role in preserving the Christian faith during the period of persecution. This stark contrast in size also highlights the differing degrees of public prominence and intended visibility between the dominant Buddhist statues and the covert hybrid-Christian ones.

Another notable difference between the two is the variance in gender depiction. There are numerous Buddhist deities, yet the Buddha is represented only as male, whereas the Virgin Mary Kannon is depicted as both male and female. In Hogai Kano's painting Hibo Kannon (Figure 6), for instance, the figure is depicted as male, offering another version of the image. It is worth asking why the Virgin Mary, a female figure, is painted as a male. One surmise is that Hogai offered another interpretation of the Virgin Mary Kannon by synthesizing Christianity and Buddhism in a single work. Okakura Kakuzo, "who wrote about the painting in both English and Japanese," states that "Kannon is 'the Universal Mother'" (Foxwell 334). The Christian God, the father of Jesus Christ, is in turn the universal father; since these two iconic figures are both parents of the universe, Hogai may have merged the two, presenting another interpretation of the creator of the universe.

Beyond size, there are also characteristics of the Virgin Mary Kannon not found in other Buddhist statues: the presence of the baby and the virgin's veil. In every interpretation of the Virgin Mary Kannon, these two elements are present. Whether the figure is rendered as male or female, the veil always appears: the sculptures in the Treasure of Kawaguchi City and the Nantoyoso collection, as well as Hogai Kano's painting, all include it. Buddha statues, by contrast, wear no such accessories, nor do they have a baby nearby to represent Jesus Christ, whereas the Virgin Mary Kannon always has the baby on her lap or close by. These distinctive characteristics separate the Virgin Mary Kannon from the Buddha, and vice versa.

Altogether, the impact of Christianity on Japanese religious art is also clear from the shift from Buddhist and Shinto religious imagery and styles to the inclusion of Christian iconography (Jesus, Mother Mary, and the saints), the increasing adoption of a more realist style of painting, and the mixing of European and Japanese styles. The Nanban art style, in particular, was created during this period.
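As a quick check, the 27.32-meter average follows directly from the nine heights listed in Table 1:

\[
\bar{h} = \frac{14.92 + 13.35 + 13.63 + 13.00 + 21.35 + 31.05 + 120.00 + 15.85 + 2.75}{9} = \frac{245.90}{9} \approx 27.32 \text{ m}
\]

Note that the 120-meter Ushiku Daibutsu pulls the mean far above the typical statue; the median height of the nine is 14.92 meters, which still dwarfs the roughly human-scale Virgin Mary Kannon figures.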



Figure 6. Hogai Kano, Hibo Kannon, 1883, Yamanashi Prefectural Museum of Art

Analytic Results
The case studies and the primary and secondary source analysis show that Christianity had a material impact in shaping the subject matter and techniques of 17th century Japanese art. Early in the period, Japanese artists integrated new techniques such as linear perspective, oil painting, and chiaroscuro, while Christian iconography and themes became more prominent in Japanese artworks. This also allowed hybrid styles such as Nanban art to emerge, combining Japanese and Christian artistic techniques. As discussed by Ogawa (2010), the hidden Christian communities developed a hybrid style of art based on the influence of European painters and European-style gifts (Ogawa 19). This led Japanese painters to create works that were more three-dimensional, using shadows, contours, and linear perspective in ways that clearly departed from the flat, calligraphic Japanese style. It is also striking that the Virgin Mary Kannon tradition continued even after the persecution had ended.



Conclusion
Christianity had a brief but lasting impact on 17th-century Japanese art, forming a unique blend of Western and Japanese artistic traditions that emerged in the painting, sculpture, and subject matter of Japanese art. The Jesuit painting Seminario and its wide collection of oil paintings exemplify the initial open adoption of Christian iconography, European linear perspective, and oil painting, while the Virgin Mary Kannon sculpture showcases the covert artistic strategies used by hidden Christians during times of persecution. A few key limitations of this study should be highlighted. Foremost, Christian-influenced artworks are scarce because of the persecution of Christians beginning in 1614 and the isolationist policy adopted by the Tokugawa Shogunate thereafter; many works were destroyed during the Tokugawa era, which limited the number of artworks available for study and led me to select the two most renowned and well-preserved sources: the Virgin Mary Kannon and the Seminario paintings. By relying on only two specific sources, however, the analysis may not be fully representative of the entire spectrum of Christian influence on Japanese art. Finally, the study must rely on secondary sources, given the limited historical documents and records available from 17th century Japan. The impact of Christianity on Japanese art during this period was characterized by the introduction of novel Western techniques such as oil painting and perspective, the incorporation of Christian iconography, and the creation of hybrid art forms. As persecution intensified, Christian art had to become more covert, leading to innovative techniques that combined Christian iconography with traditional Japanese forms as a means of subterfuge. This research illustrates the evolving and complicated interplay between Jesuit and Japanese artistic traditions during the 17th century, showing how religious art can remain adaptable in the face of persecution and how this brief cultural exchange left an enduring mark on Japanese artistic development.

References
Foxwell, Chelsea. "'Merciful Mother Kannon' and Its Audiences." The Art Bulletin, vol. 92, no. 4, 2010, pp. 326-47. JSTOR, http://www.jstor.org/stable/29546135. Accessed 1 Aug. 2024.
Hioki, Naoko Frances. The Shape of Conversation: The Aesthetics of Jesuit Folding Screens in Momoyama and Early Tokugawa Japan (1549–1639). Graduate Theological Union, 2009.
McCall, John E. "Early Jesuit Art in the Far East. III: The Japanese Christian Painters." Artibus Asiae, vol. 10, no. 4, 1947, pp. 283-301.
Ogawa, Suharu. Surrender or Subversion? Contextual and Theoretical Analysis of the Paintings by Japan's Hidden Christians, 1640-1873. MS thesis, University of Cincinnati, 2010.
Ruiz-de-Medina, Juan G. Cultural Interactions in the Orient 30 Years before Matteo Ricci. Catholic University of Portugal, 1993.



How did the People's Republic of China develop its foreign policy to foster the growth of the domestic semiconductor market by changing diplomatic strategies leveraging China-U.S. geopolitical relations?

Author
Full Name (Last Name, First Name): Oh, Seungwoo
School Name: Seoul International School

Introduction
Since 2014, China has focused heavily on expanding its semiconductor industry. The Chinese Communist Party intended to boost the scale of the domestic semiconductor industry, aiming to amplify the monetary value of the semiconductor market through both direct capital support and investment in technology. From 2000 to 2014, the Chinese government sponsored private-sector companies to invent novel CPU chips, such as the Loongson, Arca, and MPRC CPUs, through the national 863 programs under the banner of "Core, High, and Basic," or Hegaoji. In 2014, more than 138.7 billion yuan was spent on the development of the domestic semiconductor industry, sponsored by the government and state-owned enterprises (SOEs), including financial aid from China Development Bank Capital (CDB Capital) and China National Tobacco Corporation (CNTC). [1] The expansion of the semiconductor market was realized through the Chinese government's constant cooperation with the private sector. China also invested hundreds of billions of dollars into its domestic semiconductor industry through state-led investment funds and new public-private partnerships, while reforming its tax incentive system to boost research and development capabilities. [2] To accelerate development, the Chinese government has attempted to increase domestic production while significantly decreasing foreign dependency on semiconductors. Although China imported over $430 billion worth of semiconductors in 2021, its purchases declined in subsequent years as zero-COVID restrictions limited economic growth. [3] China is trying to augment domestic semiconductor production by improving government support programs, and it is trying to shield its domestic market from overdependence on foreign countries by weaponizing its rare earth materials. Ultimately, over the past several



decades, the Chinese government changed its foreign policy strategy for the semiconductor industry: from passively fostering the growth of the domestic market through technological advancement and financial aid, to actively stimulating the industry's growth by weaponizing its rare earth materials, of which China has been the prime supplier to the U.S., accounting for over 70% of U.S. imports of rare-earth compounds and metals. [4] This changing attitude prompted the U.S. to develop a survival strategy to resist the threat of rare earth weaponization.

Growth of Chinese Semiconductor Market through Governmental Support
Through constant national efforts to secure a stable semiconductor supply chain via massive-scale production, Xi Jinping was able to realize the first step of the Chinese Dream in the industry by significantly marginalizing the influence of foreign companies. The massive government investment in semiconductor production showed how the industry could become a state-nurtured business under domestic developmental policies. Indeed, persistent government support ultimately prompted even critics to admit that the gap between consumption and domestic production had begun to narrow over the last few years. [5] However, China's efforts to foster the domestic semiconductor industry aimed not only to magnify domestic production but also to reduce Chinese dependence on foreign semiconductor companies, especially those of the U.S. It is undeniable that China's economic ties with the U.S. in the semiconductor market had been deepening before China adopted its strong national semiconductor policy: China was the largest export market for U.S. manufacturing equipment, and according to U.S. statistics on Chinese customs, the majority of such imports by China, 62 percent, came from the U.S. [6] Additionally, in the early 2000s, the U.S. foreign policy stance, which aimed to make China a responsible stakeholder rather than hinder its technological advancement, allowed China to develop a stable chip industry while taking relatively few risks. [7] Nevertheless, despite the economic interdependence between the two nations and their limited historical accord, China under Xi Jinping aimed to substantially reduce the influence of American businesses on its semiconductor market. The Chinese government consistently pursued a more ambitious semiconductor strategy by reversing its overreliance on the U.S. and other global suppliers and by purchasing foreign companies with advanced semiconductor technology. [8] These acquisition policies were not particularly successful, however, as amplified support for advanced foreign companies intensified competition within the Chinese domestic semiconductor industry. After learning that importing foreign advanced technologies did not translate into gains for the domestic industry, the Chinese government shifted its goal from developing the domestic market alone to extending China's influence over the global semiconductor market. As China's economic significance in the global semiconductor industry rose to 29% in 2014, and the country accounted for 58.5% of global demand as the world's largest importer [9], Xi Jinping expected China to marginalize the influence of foreign companies in its market by exercising economic dominance over foreign semiconductor industries, especially that of the U.S. Although China was never part of the American "Axis of Evil," the export control policies implemented by the U.S. provoked a shift in China's diplomatic posture as the rising hegemon of East Asia. [10]

Technological Development in China
Previously, China's developmental policies were fundamentally based on directly supporting the growth of the domestic semiconductor industry through state-led financial aid and investment in technological advancement.



As can be observed in Made in China 2025, the technology development plans for firmly sustainable semiconductor production aim to field globally competitive technologies by 2030. In 2014, Chinese authorities laid out a plan to promote a stable chain of domestic semiconductor production, announcing a $161 billion state-sponsored National IC Fund for financial aid. Moreover, in 2016, the Chinese government spent $6.6 billion to expand its domestic semiconductor market through M&A activities. [11] China's initial government policy in the semiconductor industry consistently bound its investment to the growth of the domestic market, implying the government's indifference to using diplomacy to preserve its standing in the regional semiconductor market. Before Xi Jinping's extended rule, China had focused on developing its own industry because its semiconductor technology lagged behind that of the U.S. According to a Department of Commerce market report, however, these government efforts were likely to yield sustained demand for semiconductor imports (International Trade Administration). [12] Xi Jinping's persistence in fostering rapid domestic growth through direct government support indicates how eager he was to succeed in a national project on which China's survival in the semiconductor market depends.

Chinese Abundance of Rare Earth Materials
China's economic strategy in the semiconductor market shifted with the practical application of rare earth materials in diplomacy. China currently produces nearly 95 percent of the global supply of rare earth elements. As the former Chinese leader Deng Xiaoping remarked, "The Middle East has oil. China has rare earths." [13] The quote captures the vast mining resources of rare earth materials in China and signals that China is the ultimate superpower when it comes to sourcing the most valuable materials needed for semiconductor production. According to the U.S. Geological Survey, global rare earth reserves are estimated at 140 million tons: China holds 55 million tons, Brazil 22 million tons, and the United States 13 million tons. Researchers have likewise found that both LREEs and HREEs (light and heavy rare earth elements) are almost exclusively sourced in China. [14] REEs (rare earth elements) are a "set of 17 closely-related metals" with various applications in semiconductor fabrication. REEs are abundant in the earth's crust, especially in mainland China, and they are also refined, converted, and processed in China. [15] By controlling not only the mining of these materials but also their processing on the mainland, the Chinese government was able to establish a stable monopoly over semiconductor production in its domestic market, efficiently decreasing foreign dependence on imported semiconductors.
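A quick calculation from the U.S. Geological Survey figures above helps separate reserves from production: China's 55 million tons amount to roughly two-fifths of estimated global reserves, so its near-95-percent share of production reflects mining and refining dominance rather than reserve share alone.

\[
\frac{55\ \text{million tons (China)}}{140\ \text{million tons (world)}} \approx 39.3\%
\]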

Case Analysis: Chinese Weaponization of Rare Earth Materials in the Senkaku Islands Conflict

China's abundance of rare earth materials led to the development of offensive diplomatic strategies that pressured nations heavily reliant on Chinese rare earths to yield to China under the threat of export restrictions. For instance, in 2010, China threatened to play the 'rare earth card' during a territorial conflict with Japan, and a unilateral Chinese embargo continued throughout the territorial dispute with Japan over the Senkaku Islands. This chain of events further fueled international suspicions that China was abusing its rare earth exports as a political tool, something the government has denied for decades. As seen in the 2010 Senkaku Islands dispute, rare earths escalated into a high-profile China-Japan conflict and then broadened into China's conflict with key importers of rare earth elements. [16] The case revealed that the Chinese government attempted to exercise economic dominance over other nations through rare earth materials, as had also been observed previously in history.



Precisely, China weaponized its rare earth exports in the past, most notably through a temporary and informal export ban on rare earth elements to Japan in 2010 in connection with the dispute over the contested Senkaku/Diaoyu Islands, as well as through export quotas imposed on its rare earths. [17] The dispute between China and major importers of rare earths revealed that China's strategic attempt to weaponize its abundant natural resources prompted diplomatic conflict with nations that depended heavily, even excessively, on them. Such disputes were treated as purely trade issues, with rulings based on short-term economic priorities rather than long-term sustainability (Esty and Ivanova 2002). [18] Thus, China's growing ambition inevitably led to economic tension with the U.S., not only as a challenger to American hegemony but also as a new rival seeking to expand the influence of Chinese rare earth materials in American domestic markets.

China's Economic Tension with the U.S. over Semiconductor Market Share

In 2014, Xi Jinping argued that Asia's problems ultimately must be resolved by Asians and Asia's security ultimately must be protected by Asians. [19] This nation-representative statement not only expressed China's practical dominance in exercising socioeconomic control in Asia but also signaled the emergence of a new geopolitical rivalry between two global hegemonic powers. China's indirect diplomatic maneuvering of neighboring countries encouraged them to condemn U.S. foreign policy in East Asia as arbitrary and high-handed. [20] The Chinese government's active promotion of the Chinese Dream as the centerpiece of Xi's regime foreshadowed China's increased involvement in exercising socioeconomic dominance in the East Asian region. [21] As the great power rivalry between the two nations heated up, semiconductor and critical raw material (CRM) value chains entered an early stage of weaponization, much as the Organization of Arab Petroleum Exporting Countries (OAPEC) used oil as a lever of power in 1973. [22]

American Dependence on Chinese Rare Earth Materials

Indeed, China is the largest market for U.S. manufacturing equipment exports, and most of China's related imports come from the U.S. However, the United States sourced 80% of its rare earth imports between 2014 and 2017 from China. The United States imported $160 million worth of rare earth compounds and metals in 2018, up about 17% from 2017, and around 60% of this was used as catalysts for oil refining and in vehicle engines. China is also the biggest importer of semiconductors, with a 58.5% share of global demand. Critics previously noted that even though China produced only 9 percent of the semiconductors it consumed, the gap between consumption and domestic production soon began to narrow. [23] China's growing role in the global semiconductor market triggered countermeasures from the U.S., which in turn prompted China's countermove of weaponizing rare earth materials. According to Professor Altheraus, an associate professor at the School of Social and Political Sciences, the cause was as follows: the U.S. government widely believed that rare earth mining and refinement was a tedious and dirty business, so the U.S. left it to China, which was happy to do it cheaply and supply U.S. manufacturers with inexpensive rare earths. [24] Strikingly, not only the U.S. semiconductor industry but also U.S. military technologies depended heavily on imports of Chinese raw materials. According to a report by the Congressional Research Service, the U.S. military is almost completely dependent on China for the rare earth elements that go into everything from batteries to precision-guided missiles. [25] In response to China's aggressive attempts to weaponize rare earth materials, the U.S. government sought to cut its use of Chinese rare earths by strengthening its semiconductor industry through the CHIPS Act.

Response of the U.S. to China's Weaponization of Rare Earth Materials: Establishment of the CHIPS Act



Heavy dependence on Chinese rare materials led the U.S. government to escalate tensions with China by laying down multiple regulations. Because semiconductors were critical to the U.S. economy, national security, and technology, the U.S. was forced to make a decisive choice to protect its domestic semiconductor market from the threat of weaponized rare earth materials. To compete with substantial foreign subsidies and achieve parity while addressing supply chain risks, Congress passed the CHIPS Act of 2022, providing $52 billion in manufacturing grants and research investments. [26] The CHIPS Act investments, combined with an investment tax credit, created strong incentives to construct and expand new and existing facilities. Together, these incentives aimed to revitalize U.S. semiconductor manufacturing and fortify the semiconductor supply chain. According to the Congressional Budget Office, the CHIPS Act of 2022 carried a total cost of $79.344 billion over a decade, meaning the American government made an economic decision of enormous scale to sponsor the semiconductor industry with massive federal support [27], a state intervention comparable in assertiveness to China's weaponization of rare earth materials. The CHIPS Act also aimed to facilitate cooperation between the U.S. and allied countries that share an interest in checking the advance of Chinese influence in the semiconductor market. Indeed, the CHIPS and Science Act was already becoming an important factor in corporate strategy, providing incentives for Taiwanese, Korean, and American companies to make big bets on new factories. The U.S.-led Chip 4 technology dialogue sought not only to informally engage the U.S., Japan, South Korea, and Taiwan in discussions on the global semiconductor industry but also to deepen the economic ties and security of the Indo-Pacific region. [28]

Conclusion

China's shift in foreign policy in the semiconductor industry, from developing semiconductor production with strong governmental support to weaponizing rare earth materials, provoked an aggressive U.S. response crystallized in the CHIPS Act, which aimed both to expand U.S. semiconductor production in the face of the Chinese rare earth threat and to build an anti-Chinese alliance in the global semiconductor industry. China's attempt to enhance its economic leverage over foreign markets by weaponizing rare earth materials, extending the influence of Chinese semiconductors through control of the supply chain, provoked other nations, especially the U.S., to develop survival strategies against such economic aggressiveness. The U.S. strengthened its ties with allied countries, particularly those involved in the Chip 4 dialogue.

Endnotes

1. He, Alex. "Case Study: From Paper Tiger to Real Tiger? The Development of China's Semiconductor Industry." China's Techno-Industrial Development: A Case Study of the Semiconductor Industry, Centre for International Governance Innovation, 2021, pp. 14-24. JSTOR, http://www.jstor.org/stable/resrep31646.9. Accessed 20 July 2024.
2. Gupta, Kirti, et al. "Collateral Damage: The Domestic Impact of U.S. Semiconductor Export Controls." CSIS, 9 July 2024, www.csis.org/analysis/collateral-damage-domestic-impact-us-semiconductor-export-controls.
3. Mark, Jeremy, and Dexter Tiff Roberts. "United States-China Semiconductor Standoff: A Supply Chain under Stress." Atlantic Council, 23 Feb. 2023, www.atlanticcouncil.org/in-depth-research-reports/issue-brief/united-states-china-semiconductor-standoff-a-supply-chain-under-stress/.
4. McCartney, Micah. "Charts Show World's Dependence on China for Critical Minerals." Newsweek, 19 June 2024, www.newsweek.com/chart-shows-world-dependence-china-rare-earth-minerals-1914704.
5. China's Impact on the Semiconductor Industry: 2016 Update, PwC, Jan. 2017, www.pwc.com/gx/en/technology/chinas-impact-on-semiconductor-industry/assets/china-impact-of-the-semiconductor-industry-2016-update.pdf.
6. Hammer, Alexander. "Exporting U.S. Innovative Capacity to China?: A Case Study of Semiconductor Manufacturing Equipment." China's Uneven High-Tech Drive: Implications for the United States, edited by Scott Kennedy, Center for Strategic and International Studies (CSIS), 2020, pp. 37-41. JSTOR, http://www.jstor.org/stable/resrep22605.13. Accessed 10 July 2024.
7. Miller, Chris. "The History of U.S. Industrial Policy toward Semiconductors." Rewire: Semiconductors and U.S. Industrial Policy, Center for a New American Security, 2022, pp. 4-8. JSTOR, http://www.jstor.org/stable/resrep43411.5. Accessed 1 August 2024.
8. Tadjdeh, Yasmin. "China on Quest for Semiconductor Independence." National Defense, vol. 103, no. 785, 2019, p. 7. JSTOR, https://www.jstor.org/stable/27022529. Accessed 3 August 2024.
9. Oh, Miyeon, et al. "Strategic Uncertainty and Shifting GVC Risks." Global Value Chains in an Era of Strategic Uncertainty: Prospects for US-ROK Cooperation, Atlantic Council, 2020, pp. 5-13. JSTOR, http://www.jstor.org/stable/resrep27620.5. Accessed 30 July 2024.
10. Miller, Chris. "The History of U.S. Industrial Policy toward Semiconductors." Rewire: Semiconductors and U.S. Industrial Policy, Center for a New American Security, 2022, pp. 4-8. JSTOR, http://www.jstor.org/stable/resrep43411.5. Accessed 1 August 2024.
11. China-U.S. Trade Issues, Congressional Research Service, p. 80, crsreports.congress.gov/. Accessed 13 Aug. 2024.
12. Semiconductors and Related Equipment, International Trade Administration, July 2016, legacy.trade.gov/topmarkets/pdf/Semiconductors_Top_Markets_Report.pdf.
13. Magnuson, Stew. "Special Report: Rare Earth Elements: China Maintains Dominance in Rare Earth Production." National Defense, vol. 106, no. 814, 2021, p. 30. JSTOR, https://www.jstor.org/stable/27092747. Accessed 6 Sept. 2024.
14. Zhang, Yiying, et al. The Geopolitics of China's Rare Earths: A Glimpse of Things to Come in a Resource-Scarce World? Stockholm Environment Institute, 2014. JSTOR, http://www.jstor.org/stable/resrep00363. Accessed 6 Sept. 2024.
15. Homans, Charles. "Are Rare Earth Elements Actually Rare?" Foreign Policy, 15 June 2010, foreignpolicy.com/2010/06/15/are-rare-earth-elements-actually-rare/.
16. Klinger, Julie Michelle. "Rude Awakenings." Rare Earth Frontiers: From Terrestrial Subsoils to Lunar Landscapes, Cornell University Press, 2017, pp. 137-64. JSTOR, http://www.jstor.org/stable/10.7591/j.ctt1w0dd6d.9. Accessed 6 Sept. 2024.
17. Teer, Joris, et al. "Fragile Balance: The Semiconductor and Critical Raw Material Ecosystem." Reaching Breaking Point: The Semiconductor and Critical Raw Material Ecosystem at a Time of Great Power Rivalry, Hague Centre for Strategic Studies, 2022, pp. 7-26. JSTOR, http://www.jstor.org/stable/resrep44057.6. Accessed 6 Sept. 2024.
18. Hao, Y., and Liu, W. (2011). Rare earth minerals and commodity resource nationalism. In Asia's Rising Energy and Resource Nationalism: Implications for the United States, China, and the Asia-Pacific Region. NBR Reports. National Bureau of Asian Research. http://www.nbr.org/publications/issue.aspx?id=236.
19. Blackwill, Robert D. "U.S. Grand Strategy Toward China." Implementing Grand Strategy Toward China: Twenty-Two U.S. Policy Prescriptions, Council on Foreign Relations, 2020, pp. 6-12. JSTOR, http://www.jstor.org/stable/resrep21426.7. Accessed 28 July 2024.
20. Kim, Hong Nack. "China's Policy Toward North Korea Under the Xi Jinping Leadership." North Korean Review, vol. 9, no. 2, 2013, pp. 83-98. JSTOR, http://www.jstor.org/stable/43908922. Accessed 15 Aug. 2024.
21. Mohanty, Manoranjan. "Xi Jinping and the 'Chinese Dream.'" Economic and Political Weekly, vol. 48, no. 38, 2013, pp. 34-40. JSTOR, http://www.jstor.org/stable/23528539. Accessed 19 July 2024.
22. Teer, Joris, et al. Reaching Breaking Point: The Semiconductor and Critical Raw Material Ecosystem at a Time of Great Power Rivalry. Hague Centre for Strategic Studies, 2022. JSTOR, http://www.jstor.org/stable/resrep44057. Accessed 6 Sept. 2024.
23. China's Impact on the Semiconductor Industry: 2016 Update, PwC, Jan. 2017, www.pwc.com/gx/en/technology/chinas-impact-on-semiconductor-industry/assets/china-impact-of-the-semiconductor-industry-2016-update.pdf.
24. Magnuson, Stew. "Special Report: Rare Earth Elements: China Maintains Dominance in Rare Earth Production." National Defense, vol. 106, no. 814, 2021, p. 30. JSTOR, https://www.jstor.org/stable/27092747. Accessed 11 August 2024.
25. Parsons, Dan. "U.S. Remains Dependent on China for Rare Earth Elements." National Defense, vol. 96, no. 703, 2012, pp. 24-25. JSTOR, https://www.jstor.org/stable/27019388. Accessed 17 July 2024.
26. Pass the CHIPS Act of 2022, Semiconductor Industry Association, www.semiconductors.org/wp-content/uploads/2022/07/Pass-the-CHIPS-Act-of-2022-Fact-Sheet.pdf. Accessed 6 July 2024.
27. Ravi, Sarah. "SIA Applauds House Passage of CHIPS Act, Urges President to Sign Bill into Law." Semiconductor Industry Association, 28 July 2022, www.semiconductors.org/sia-applauds-house-passage-of-chips-act-urges-president-to-sign-bill-into-law/.
28. Mark, Jeremy, and Dexter Tiff Roberts. "United States-China Semiconductor Standoff: A Supply Chain under Stress." Atlantic Council, 23 Feb. 2023, www.atlanticcouncil.org/in-depth-research-reports/issue-brief/united-states-china-semiconductor-standoff-a-supply-chain-under-stress/.



The Impact of Song Lyrics on Environmental Awareness

Author
Full Name (Last Name, First Name): Rim, Jaewook
School Name: Eaglebrook School

Abstract

With the increasing interest in using music to combat the environmental crisis, this study examines the impact of lyrics in rock and hip-hop music on environmental awareness. Participants (N = 118) were randomly assigned to listen to either Michael Jackson's "Earth Song" (Rock), Mos Def's "New World Water" (Hip-hop), or "Habanera" from Carmen (Control). They subsequently completed measures of pro-environmental attitudes and perceptions of environmental threats and were instructed to write down as many words as they remembered. The findings revealed that participants who listened to "Earth Song" (Rock) reported significantly higher perceptions of the fragility of nature's balance and the reality of limits to growth compared to those in the control condition. Interestingly, no significant differences were observed between the Hip-hop and Control conditions. The lyrical content with explicit references to "nature" and "Earth," and the lower syllable rate of "Earth Song" (Rock), may have contributed to its stronger impact on the pro-environmental attitude subscales. Future research can control other variables, for example by using songs with deliberative rhetoric and similar syllable rates, to better measure lyrics' impact on environmental awareness. This study is significant in that it provides empirical evidence that lyrics can influence environmental awareness, offering insight into more effective strategies for using music as a tool for environmentalism.

Keywords Environmental Awareness, Music, Lyrics, Protest Song



1. Introduction

The environmental crisis has become worse than ever. The atmospheric CO2 mole fraction was approximately 320 ppm in 1960, more than 360 ppm in 2000, and more than 420 ppm in 2020 (Global Monitoring Laboratory, n.d.). Moreover, according to the World Health Organization (WHO), at least 1.7 billion people use a drinking water source contaminated with feces, which poses the greatest risk to drinking water safety (2023). People have tried to deal with the environmental crisis by implementing environmental policies, creating advertisements, and participating in activism using social media and eco-friendly consumption. There have also been efforts to utilize music from different genres such as hip-hop, rock, and folk to confront environmental issues. Starting in the 1960s and 1970s, an increasing number of songs were released on ecological problems, such as folk artists Pete Seeger and Malvina Reynolds's "Cement Octopus" in 1971, hip-hop artist Mos Def's "New World Water" in 1999, and the rock band Radiohead's "The Numbers" in 2016, all of which protested environmental degradation and climate change. Such attempts through music are studied in the rising field of ecomusicology, an area of study that examines the relationship between music, environment, and culture (Allen & Dawe, 2015). Among the first publications on ecomusicology are Ecomusicology: Bridging the Sciences, Arts, and Humanity (2012) by Aaron S. Allen, The Jukebox in the Garden: Ecocriticism and American Popular Music Since 1960 (2010) by David Ingram, and Ecomusicology: Rock, Folk, and the Environment (2012) by Mark Pedelty, with the first ecomusicology gathering held in October 2012 (Allen & Dawe, 2015). Music is paired with environmentalism because it is part of human nature and ubiquitous in human life (Pedelty, 2012; Mills, 2016). With the increasing interest in the relationship between music and the environment, it is important to understand the actual impact of music, especially songs whose lyrics convey messages about the environment, on raising environmental awareness in order to effectively prevent further degradation of the environment. This study aims to measure the effectiveness of environmental song lyrics in raising environmental awareness.

Rock Music

Environmental concerns are evident within rock music, as seen in the continuous release of environmental songs and rock artists' involvement in environmental activism. In The Jukebox in the Garden: Ecocriticism and American Popular Music since 1960, David Ingram shows how rock and R&B artists have influenced the environmental activism scene since the 1960s: rock musician Country Joe McDonald became involved in animal rights and whale conservation, the band Jefferson Airplane sang about human indifference to nature despite the harm humans cause, the Beach Boys sang "A Day in the Life of a Tree" (1971), about trees dying from air pollution, and rock artists Frank Zappa and Captain Beefheart released the 1975 song "Debra Kadabra" on oceanic pollution (2010). In the 1980s and 1990s, the rock band Radiohead became very well known for environmental activism through its music and activities. Band member Thom Yorke actively posted messages on the Internet denouncing the British government's energy policy decision to install ten new nuclear power stations (Clément). With their concerns for the environment, Radiohead's songs voiced the band's mistrust of the government and the fear of being manipulated (Clément).

Hip-Hop

Like rock music, many hip-hop songs have reflected artists' opinions on political and social events, including environmental concerns. Mos Def's "New World Water" (1999) is one of the earliest examples of environmental hip-hop music (Ingram, 2010), exploring the issue of water scarcity and economic inequity, and it influenced more contemporary songs like "Trees" (2006) by Dr. Octagon, who sings about the killing of trees caused by humans' carbon dioxide emissions and their use of harmful chemicals (Ingram, 2010).
Music videos of hip-hop songs are also used to convey messages about the environmental crisis more effectively. For instance, in the music video for the song "Nothing Makes Sense Anymore" (2018), Mike Shinoda includes footage of the 2017 Skirball Fire that burned the Bel Air neighborhood in Los Angeles, displaying mountains burning in a fire that started from an illegal cooking fire (Mezquita Fernandez, 2022). The nature of hip-hop music in voicing opinions on political and social events traces back to its emergence: hip-hop culture emerged among Black and Hispanic youth in New York City in the late 1970s, as people of color suffered disproportionately from the urban renewal policies of Robert Moses (Ingram, 2010). Since then, hip-hop has often dealt with political and social issues, becoming a musical outlet for political resistance (Ingram, 2010; Müller, 2022). Müller even argues that hip-hop originated from concerns about the environment and the well-being of people (2022). As climate change and the environmental crisis have impacted marginalized people the most, hip-hop can offer a different perspective to environmental activism and scholarship by representing racially and socially marginalized groups (Müller, 2022). Hip-hop is also a great educational tool in the classroom, particularly in schools with racially and economically marginalized students. Cermak argues that hip-hop as Critical Environmental Literacy (CEL) can be used to create environmentalism curricula targeting urban areas and racially diverse learners, for example by asking students to compose their own hip-hop music and lyrics about environmental problems. CEL is a combination of Critical Literacy and Ecological Literacy that focuses on the environment, requiring teachers to adopt a style of education that allows students to face and learn about ecological issues (Cermak). CEL differs from teaching methods that rely on the conventional scientific method by combining education and creative expression (Cermak).

Protest Songs

Songs, like those from rock and hip-hop music, are used to express human emotions and concerns related to political and social problems. Protest songs are expressions of discontent or dissent that imply or assert a need for change (Kizer, 1983). They may represent the attitudes of one individual or a collection of individuals, such as members of a special interest group, and may be adapted and utilized as ideological statements of a social movement, whether originally written for that purpose or not (Kizer, 1983). They may also inspire the creation of other rhetorical messages, stimulate thought, and reinforce or modify attitudes (Kizer, 1983). Thus, songs that voice environmental concerns are a type of protest song. Protest songs comprise four components: tempo, entertainment, lyrics/words, and melody (Mills, 2016). While protest songs consist of four components, much of the scholarship has focused on analyzing the lyrics of protest songs or, failing that, the artists' activism. Kizer (1983) states that there are two types of rhetoric in protest songs: deliberative and epideictic. Deliberative rhetoric is concerned with the future, with what to do or not to do, while epideictic rhetoric censures or praises someone in the present (Kizer, 1983). For instance, "The Numbers" (2016) by Radiohead utilizes deliberative rhetoric as an environmental protest song, urging humanity to act on its impact on the environment before it is too late. On the other hand, Pete Seeger's "God Bless the Grass" (1966) uses epideictic rhetoric, praising the resilience of nature in the present. Despite the emphasis on lyrics, studies exploring the effect of these lyrics on people's awareness of the environment have been lacking. Thus, this study measures how the lyrics of rock and hip-hop music, as examples of protest songs about the environment, can impact people's environmental awareness. I hypothesize that rock and hip-hop songs with environmental lyrics will increase people's environmental awareness.

2. Method

Procedures



An online survey titled "Survey on Music Preferences" was distributed via CloudResearch to recruit participants. The participants first answered questions about their music preferences as a filler measure. Then, they were assigned to one of three conditions: Rock, Hip-hop, or Control (classical music). In the Rock condition, participants listened to Michael Jackson's "Earth Song" (N = 38). In the Hip-hop condition, participants listened to Mos Def's "New World Water" (N = 38). In the Control condition, participants listened to "Habanera" from the opera Carmen (N = 41). After listening to the assigned song, participants were asked to write down as many words as they remembered from the song and completed items measuring environmental attitudes. Lastly, participants answered questions about their demographic characteristics and were debriefed.

Participants

Initially, 120 participants were recruited, but 2 were excluded from the analysis due to an attention-check failure. This left 118 participants (mean age = 37.21, SD = 12.37), of whom 62 identified as men and 52 as women.

Measures

Pro-environmental attitudes. Participants' pro-environmental attitudes were measured using a scale validated by Dunlap et al. (2000). The revised New Ecological Paradigm (NEP) scale comprises 5 subscales: anti-anthropocentrism, rejection of exemptionalism, the fragility of nature's balance, the possibility of an eco-crisis, and the reality of limits to growth. Each subscale consists of 3 items rated from 1 (strongly disagree) to 5 (strongly agree). The reliability of the subscales was adequate, with the lowest Cronbach's α at .58 and the highest at .93.

Environmental threat. Additionally, I measured the extent to which participants recognized threats posed to the environment using a scale validated by Milfont & Duckitt (2010). The scale included 10 items rated from 1 (strongly disagree) to 5 (strongly agree). The reliability of the scale was high, with a Cronbach's α of .91.
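To make the reliability analysis above concrete, the following is a minimal sketch, not the authors' analysis code, of how Cronbach's α for a three-item NEP subscale can be computed. The response matrix is randomly generated stand-in data, not the study's actual responses.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: participants x items matrix of 1-5 Likert scores
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of participants' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
fake_subscale = rng.integers(1, 6, size=(118, 3))  # hypothetical: 118 participants, 3 items
print(round(cronbach_alpha(fake_subscale), 2))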

3. Results

A series of one-way ANOVA tests were conducted to test the main hypotheses. The results revealed a significant difference among groups in the fragility of nature's balance and the reality of limits to growth subscales of the revised NEP scale, F(2, 114) = 5.29, p = .006 and F(2, 114) = 3.51, p = .033, respectively. However, no significant differences among conditions were found for the other measures, Fs < 1.35, ps > .264. To delve deeper into the significant group differences, I conducted Tukey's HSD post-hoc tests. The post-hoc tests indicated that participants in the Rock condition reported a higher perception of the fragility of nature's balance (M = 4.13, SD = 0.65) than those in the control condition (M = 3.60, SD = 0.81), p = .004. Additionally, participants in the Rock condition reported a higher perception of the reality of limits to growth (M = 3.18, SD = 0.87) than those in the control condition (M = 2.66, SD = 0.96), p = .038. Of note, participants in the Rock condition showed a trend toward higher scores on all other NEP subscales compared to the control condition, despite statistical nonsignificance. Unexpectedly, no significant differences were found between the scores reported by participants in the Hip-Hop condition and the control condition, ps > .164. See Table 1 for the full descriptive statistics of observed scores of all measures used in the study, split by condition.

Table 1. The Observed Scores of Measures Used in the Study

Condition | Environmental Threat | Anti-anthropocentrism | Rejection of Exemptionalism | Fragility of Nature's Balance | Possibility of Eco-Crisis | Reality of Limits to Growth
Control | 3.88 (0.82) a | 3.59 (0.91) a | 3.54 (0.82) a | 3.60 (0.81) a | 3.97 (0.99) a | 2.66 (0.95) a
Hip-Hop | 3.87 (0.84) a | 3.66 (0.86) a | 3.65 (0.57) a | 3.86 (0.69) a | 3.91 (1.08) a | 3.04 (0.91) a
Rock | 4.11 (0.65) a | 3.89 (0.74) a | 3.66 (0.73) a | 4.13 (0.65) b | 4.23 (0.70) a | 3.18 (0.87) b

Note 1. The numbers in parentheses indicate standard deviations.
Note 2. Different subscripts within a column indicate a significant difference.
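As an illustration of the analysis pipeline reported above, the sketch below runs a one-way ANOVA followed by Tukey's HSD post-hoc test in Python with SciPy and statsmodels. The scores are simulated from the means and SDs in Table 1 for the fragility-of-nature's-balance subscale, so the exact statistics will differ from the study's; this is not the authors' code.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
control = rng.normal(3.60, 0.81, 41)  # simulated from Table 1 means/SDs
hiphop = rng.normal(3.86, 0.69, 38)
rock = rng.normal(4.13, 0.65, 38)

# Omnibus test: does at least one condition differ from the others?
f_stat, p_val = stats.f_oneway(control, hiphop, rock)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

# Pairwise comparisons with family-wise error control
scores = np.concatenate([control, hiphop, rock])
groups = ["Control"] * 41 + ["Hip-Hop"] * 38 + ["Rock"] * 38
print(pairwise_tukeyhsd(scores, groups))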

4. Discussion

Michael Jackson's "Earth Song" is a poignant and emotional plea for the protection of the planet, with lyrics that mourn the damage humanity has inflicted on nature. Mos Def's "New World Water" provides a stark narrative on water scarcity and environmental injustice, with lyrics painting a vivid picture of a world struggling with diminishing natural resources, particularly for marginalized communities. Participants who listened to Michael Jackson's "Earth Song" (Rock) reported higher scores on the fragility of nature's balance and the reality of limits to growth subscales of the revised NEP scale. Although there was a significant difference between the Rock and Hip-hop conditions within these two subscales, the wording of the revised NEP scale's questions might have affected the results. Questions were asked using specific words like "nature" and "Earth." This may have led participants to respond more strongly to Michael Jackson's "Earth Song," which explicitly uses the words "Earth" and "nature," as well as "rain," "fields," and "sea," words that depict nature. On the other hand, Mos Def's "New World Water" focused more specifically on the problem of water contamination and scarcity, with no explicit references to "Earth" or "nature." Thus, further research can control for this by using songs that refer to environmental problems more generally or songs that focus on the same ecological problem.

Various musical attributes like singing style, vocal tessitura, syllable rate, instrumentation, and audio mix influence the intelligibility of lyrics (Condit-Schultz & Huron, 2015). In terms of syllable rate, hip-hop generally has a higher syllable rate than rock music. Likewise, "New World Water" had a noticeably higher syllable rate than "Earth Song," with faster delivery of the lyrics and greater rhythmic complexity. This may have affected participants' ability to recall the words of the lyrics after listening, with more words remembered from "Earth Song," and, as a result, differently affected participants' environmental awareness. To control for this variable, future research can choose songs with similar or equal syllable rates to better measure the effect of lyrics across genres in raising environmental awareness.

Lastly, "New World Water" and "Earth Song" both use epideictic rhetoric, which is concerned with the present. Future-oriented thinking positively affects environmental proactivity (Calza et al., 2016), which refers to voluntary practices and initiatives aimed at improving the environment. Environmental proactivity is also highly related to environmental awareness, as people with higher environmental consciousness or knowledge tend to adopt proactive actions that reduce their environmental impact and boost their adaptability (Sayegh, 2023). Future research can use songs with the future-oriented characteristics of deliberative rhetoric to further enhance the effect of the stimuli.

While the hypothesis held true only for the rock music condition, this study remains significant because it measured the actual impact of the lyrics in environmental protest songs. Moreover, such differing effects prompt further thought about more effective ways of utilizing music as a tool to combat the environmental crisis.
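Since syllable rate is proposed above as a key control variable, the following minimal sketch (my own illustration, with a made-up lyric line and timing) shows one crude way it could be operationalized: counting vowel groups as approximate syllables and dividing by the line's duration.

import re

def estimate_syllables(word: str) -> int:
    # Count vowel groups as a rough proxy for syllables
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def syllable_rate(lyric: str, duration_seconds: float) -> float:
    total = sum(estimate_syllables(w) for w in lyric.split())
    return total / duration_seconds

# Hypothetical line and timing, not measured from either stimulus song
print(round(syllable_rate("a sample lyric line for a rate check", 3.0), 2))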



References

Calza, F., Cannavale, C., & Tutore, I. (2016). The important effects of national culture on the environmental proactivity of firms. Journal of Management Development, 35(8), 1011-1030.
Cermak, M. J. (2012). Hip-hop, social justice, and environmental education: Toward a critical ecological literacy. The Journal of Environmental Education, 43(3), 192-203.
Chia, H. W., & Sharon, K. P. (2013). Thinking and acting in anticipation: A review of research on proactive behavior. Advances in Psychological Science, 21(4), 679.
Clément, G. (2017). Activism and environmentalism in British rock music: The case of Radiohead. Revue Française de Civilisation Britannique / French Journal of British Studies, 22(XXII-3).
Condit-Schultz, N., & Huron, D. (2015). Catching the lyrics: Intelligibility in twelve song genres. Music Perception: An Interdisciplinary Journal, 32(5), 470-483.
Dawe, K., & Allen, A. S. (2017). Current directions in ecomusicology: Music, culture, nature. Routledge.
Dunlap, R. E., Van Liere, K. D., Mertig, A. G., & Jones, R. E. (2000). New trends in measuring environmental attitudes: Measuring endorsement of the new ecological paradigm: A revised NEP scale. Journal of Social Issues, 56(3), 425-442.
Global Monitoring Laboratory. (n.d.). Trends in atmospheric carbon dioxide (CO2). Retrieved August 23, 2024, from https://gml.noaa.gov/ccgg/trends/global.html
Ingram, D. (2010). The jukebox in the garden: Ecocriticism and American popular music since 1960. Rodopi.
Kizer, E. J. (1983). Protest song lyrics as rhetoric. Popular Music & Society, 9(1), 3-11.
Mezquita Fernandez, M. A. (2022). Social and environmental awareness in the lyrics of Mike Shinoda: 'Kenji' and 'Nothing Makes Sense Anymore'.
Milfont, T. L., & Duckitt, J. (2010). The environmental attitudes inventory: A valid and reliable measure to assess the structure of environmental attitudes. Journal of Environmental Psychology, 30(1), 80-94.
Mills, L. (2016). The use of song in social movements: Where are songs for the environment? 2016 NCUR.
Müller, T., & Durand, A. P. (2022). Hip hop ecologies: Mapping the field(s): An introduction.
Pedelty, M. (2012). Ecomusicology: Rock, folk, and the environment. Temple University Press.
Sayegh, F. (2023). Proactive sustainable decision-making and climate change awareness: A Canadian study. GeoJournal, 88(6), 6407-6433.
World Health Organization. (2023, September 13). Drinking-water. https://www.who.int/news-room/fact-sheets/detail/drinking-water



A Comparative Study of Korean and Japanese Policies on Low Fertility Rates and Aging Societies: Unveiling Distinctive Approaches

Author
Full Name (Last Name, First Name): Sohn, Olivia Chaeri
School Name: HPrep Academy

Abstract

In recent years, South Korea has been experiencing a rapid demographic transformation, with nearly 20% of its population now aged 65 and older. This shift is due in large part to its ever-increasing life expectancy, now estimated at 83 years, and to having the lowest fertility rate, and hence population replacement rate, in the world (Our World in Data). This shift will place significant strain on the country's future economy, as its workforce will be insufficient to support a disproportionately large retired population. This could jeopardize economic productivity and the sustainability of retirement benefits, as predictions continue to paint a dire picture of a pension system bankruptcy looming in the near future. This study therefore investigates current measures and strategies implemented to sustain the pension system's solvency and ensure long-term economic stability while maintaining and even increasing the country's life expectancy and fertility rate. The consequences of inaction are dire, which keeps this study and its results highly significant. By comparing South Korea with Japan, a neighboring country with similar historical and present-day demographic challenges, this study provides valuable insights into managing an aging economy. The existing research focuses on the low fertility rates and increased life expectancies of the aging populations in both countries, the impact of reduced population growth, and the broader impact on economic stability and social welfare. This study reviews relevant literature and compares current policies in South Korea and Japan, as well as alternative policies implemented in Finland, Sweden, and Denmark. The findings emphasize the importance of proactive measures and international collaboration in addressing the multifaceted issues posed by an aging population.



Introduction

South Korea has one of the fastest-aging populations in the world. As of recent estimates, nearly 20% of the population is 65 years of age or older, and this share is expected to increase significantly in the coming decades. People are living longer due to better healthcare standards, with average life expectancy exceeding 83 years (Our World in Data). This increase in longevity is a positive development in terms of health. However, higher life expectancy combined with the world's lowest fertility rate puts significant strain on the future of South Korea's economy, especially if little is done to reverse the trend. If these problems are not addressed, this generation's low fertility rate will leave the next generation's workforce without the number of people needed to sustain a productive economy while sufficiently supporting the country's largest demographic: those who are healthy, retired, and entitled to retirement benefits (Choi 52-65). This study examines the steps being taken and the prospective tactics that may be used to keep the pension system solvent and guarantee a steady economy for years to come. Given that South Korea and Japan have comparable demographic trends, albeit with Japan's population difficulties emerging sooner, Japan's experience with an aging economy is a valuable point of comparison. This examination aims to understand the difficulties and consider various options for preserving economic stability in the face of changing demographic patterns. To do so, this research considers two aspects of an aging population: low fertility and increased life expectancy. It also addresses two aspects of the low fertility rate: statistics documenting its decline and its impact on population growth. This information is used to explain the current state of South Korea's aging society. Overall, I focus on the current challenges that South Korea's aging society faces and how to balance the needs of different population segments to ensure intergenerational equity. By examining the statistics on aging in Korea and Japan and reviewing relevant literature taken from government websites, this paper discusses the benefits and drawbacks of these policies and the problems they hope to address, while introducing effective alternative policies implemented in three Scandinavian countries.

Discussion Background and Literature Review

1. Discussion Background

Figure 1. Timeline of Population Count for South Korea and Japan. Data Commons



[Table 1. Total population of South Korea and Japan (unit: million, M)]
Year | Japan | South Korea
1990 | 123M | 42.9M
1995 | 125M | 45.1M
2000 | 127M | 47M
2005 | 128M | 48.2M
2010 | 128M | 49.6M
2015 | 127M | 51M
2020 | 126M | 51.8M
2024 | 126M | 51.7M
source: Worldometer (2024), World Population Review (2024)

Up to the year 2000, the total populations of both South Korea and Japan were increasing. From the 2000s onward, however, Japan's total population began to decline, whereas South Korea's total population did not start to decrease until 2020. This indicates that Japan experienced the issue of a decreasing total population much earlier than South Korea. The absence of a large change in South Korea's total population, despite low fertility rates, can be attributed to increased life expectancy and a corresponding decrease in the mortality rate; as a result, South Korea's overall population remained relatively stable. To fully understand the severity of the current demographic situation, it is essential to consider the timeline of fertility rates in both countries, which highlights the critical role fertility rates play in shaping population trends and underscores the importance of addressing this issue to ensure sustainable population levels ("Population Trends: South Korea and Japan").

Figure 2. Timeline of Fertility Rate for South Korea and Japan. Data Commons

[Table 2. Fertility rate of South Korea and Japan (unit: births per woman)]
Year | Japan | South Korea
1990 | 1.54 | 1.57
1995 | 1.42 | 1.63
2000 | 1.36 | 1.48
2005 | 1.26 | 1.09
2010 | 1.39 | 1.23
2015 | 1.45 | 1.24
2020 | 1.33 | 0.83
2024 | 1.3 | 0.68
source: The Chosun Daily (2024), CNN World (2024), by author



These figures show that the fertility rate dropped significantly for both South Korea and Japan. As previously mentioned, Japan faced this issue much earlier than South Korea, with its fertility rate beginning to decline even before the 1970s. In contrast, South Korea's fertility rate started to decrease sharply in the 2000s. As of 2024, the fertility rates of both countries are well below the replacement level of roughly 2.1 births per woman, underscoring the severity of the demographic challenges they confront. This situation highlights the urgent need for effective policies to address declining fertility rates, which are critical to ensuring future population stability ("Population Trends: South Korea and Japan").
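The severity of the decline in Table 2 can be quantified with a short calculation; the sketch below (my own illustration, using the table's values) computes each country's percent change since 1990 and its gap to the roughly 2.1 replacement level.

REPLACEMENT = 2.1
tfr = {  # total fertility rate, births per woman, from Table 2
    "Japan": {1990: 1.54, 2024: 1.30},
    "South Korea": {1990: 1.57, 2024: 0.68},
}

for country, series in tfr.items():
    change = 100 * (series[2024] - series[1990]) / series[1990]
    gap = REPLACEMENT - series[2024]
    print(f"{country}: {change:+.1f}% since 1990, {gap:.2f} below replacement")
# Japan: -15.6% since 1990, 0.80 below replacement
# South Korea: -56.7% since 1990, 1.42 below replacement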

Korea

Figure 3. Population Pyramid of South Korea in 2024. PopulationPyramid.net, 2024

The 2024 population pyramid of South Korea shows the country's aging population and low birth rate trends. The narrow base of the pyramid, which represents the younger age groups (0-14 years), reflects the drop in the number of children born and persistently low birth rates. The working-age population (15-64 years old) is the largest cohort, indicating that a sizable fraction of South Korea's population is of employment age. The pyramid's broad upper section, representing older age groups (65 and above), indicates a large senior population and highlights the difficulties an aging society faces as more people reach retirement age. In addition, there is a clear gender gap among the elderly, with more women than men. This is typical of aging populations because women often live longer than men. This gender gap has significant implications for designing and implementing aging-society policies, particularly in healthcare and social security; for instance, it may necessitate a gender-sensitive approach to healthcare and pension systems ("Republic of Korea 2024").



[Table 3. Age-specific population of South Korea (in thousands)]
Year | 0-14 | 15-24 | 25-54 | 55-64 | 65+
1990 | 11,135 | 8,060 | 17,928 | 3,016 | 2,219
1995 | 10,349 | 7,730 | 21,582 | 3,399 | 2,928
2000 | 9,490 | 7,190 | 23,449 | 4,380 | 3,370
2005 | 8,456 | 6,610 | 24,171 | 5,320 | 4,414
2010 | 7,383 | 6,590 | 24,339 | 6,235 | 5,379
2015 | 6,409 | 6,245 | 23,472 | 7,451 | 6,678
2020 | 5,134 | 5,567 | 22,434 | 8,215 | 8,125
2024 | 4,800 | 5,200 | 21,500 | 8,500 | 9,300

[Table 4. Gender-specific population of South Korea (in thousands)]
Year | Women | Men
1990 | 22,989 | 22,549
1995 | 23,789 | 23,429
2000 | 24,486 | 24,253
2005 | 24,788 | 24,701
2010 | 25,030 | 24,939
2015 | 25,393 | 25,337
2020 | 25,760 | 25,829
2024 | 25,800 | 26,200

South Korea's youngest population bracket, aged 0-14 years, has steadily declined from 1990 to the present, raising questions about the implications of an aging society (Statistics Korea). The decline in this crucial age group has prompted current governmental policies to increase the birth rate. The young adult population, aged 15-24, has remained relatively stable despite a slight decline in recent years. However, the working-age population, aged 25-54, has significantly declined, indicating a shrinking workforce (World Bank Population Data). This trend contrasts with the pre-retirement population (55-64 years), which has increased significantly over the past few years. The most notable change is in the elderly population, aged 65 and older, which has grown substantially due to increased life expectancy and declining birth rates (OECD Data). Gender statistics reveal steady growth for both men and women since 1990, with the women's population long increasing faster than the men's (UNDESA). As a result, fewer resources from the younger, more productive population will be available to support the older, less productive population in the future.
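The shrinking support base described above can be made concrete with the old-age dependency ratio, the number of people 65 and over per 100 people of working age. The sketch below (my own illustration; the helper function is hypothetical, and the counts, in thousands, are copied from Table 3) computes it for 1990 and 2024.

sk_age_counts = {  # from Table 3, in thousands
    1990: {"15-24": 8060, "25-54": 17928, "55-64": 3016, "65+": 2219},
    2024: {"15-24": 5200, "25-54": 21500, "55-64": 8500, "65+": 9300},
}

def old_age_dependency_ratio(row: dict) -> float:
    working_age = row["15-24"] + row["25-54"] + row["55-64"]
    return 100 * row["65+"] / working_age

for year, row in sk_age_counts.items():
    print(year, round(old_age_dependency_ratio(row), 1))
# 1990 -> about 7.7 elderly per 100 working-age people; 2024 -> about 26.4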



Japan

Figure 4. Population Pyramid of Japan in 2024. PopulationPyramid.net, 2024

The 2024 population pyramid for Japan shows an advanced stage of demographic transition, a vital consideration for aging-society policies. This transition is characterized by several factors, including a small youth population and very low birth rates, as shown by the pyramid's narrow base. The shape is nearly inverted, with a declining younger population and a somewhat constant but reduced working-age population. Given that Japan has one of the highest percentages of senior citizens worldwide, the broad apex draws attention to a sizable elderly population (65 years of age and above). The employment and social security systems are under great strain because of this generation. Furthermore, because women in Japan statistically live longer than men, there are more older women than older men, and this gender gap is especially noticeable in Japan ("Japan 2024").

[Table 5. Age-specific population of Japan (in thousands)]
Year | 0-14 | 15-24 | 25-54 | 55-64 | 65+
1990 | 21,750 | 17,274 | 50,000 | 13,555 | 14,346
1995 | 20,351 | 16,348 | 52,244 | 16,029 | 18,478
2000 | 18,633 | 14,264 | 53,302 | 17,127 | 22,005
2005 | 17,127 | 13,266 | 51,710 | 17,320 | 25,787
2010 | 16,174 | 11,334 | 48,374 | 15,783 | 29,245
2015 | 15,858 | 10,194 | 43,401 | 16,823 | 33,748
2020 | 14,933 | 9,243 | 39,238 | 16,876 | 36,180
2024 | 14,200 | 8,800 | 36,000 | 16,700 | 37,500

[Table 6. Gender-specific population of Japan (in thousands)]
Year | Women | Men
1990 | 62,806 | 65,085
1995 | 63,760 | 65,782
2000 | 64,097 | 66,212
2005 | 63,876 | 66,270
2010 | 62,711 | 65,979
2015 | 61,505 | 65,379
2020 | 60,264 | 64,532
2024 | 59,000 | 63,500

In Japan, population statistics reflect many similarities to those of its neighbor, South Korea. However, Japan's overall population decline and efforts to increase birth rates began years earlier than in Korea (Statistics Bureau of Japan). The youngest population bracket, aged 0-14 years, has been steadily declining since 1990, causing concern for the current workforce, who hope to retire when this younger population reaches adulthood (World Population Prospects). The young adult population, aged 15-24, has decreased drastically, raising concerns about whether enough people will be employed and paying taxes to support the elderly (World Bank Population Data). The working-age population, aged 25-54, generally remained steady before declining in recent years (OECD Data). The pre-retirement population decreased rapidly until around 2015, when it began to stabilize. Meanwhile, the elderly population has risen rapidly due to increased life expectancy (Statistics Bureau of Japan). Gender statistics in Japan show a steady decrease in both the men's and women's populations, reflecting the country's low birth rate and aging population. As in Korea, women tend to outnumber men in the older age cohorts, likely due to women's longer life expectancy (World Population Prospects).

2. Comparative Analysis of Population Trends in South Korea and Japan

An analysis of population statistics focusing on age and gender in South Korea and Japan reveals notable differences amid significant similarities. Both countries face significant challenges due to their aging populations, though Japan's situation is more extreme, with a higher proportion of elderly citizens. The low birth rates in both countries have resulted in narrow bases of their population pyramids, raising concerns about future labor shortages and the sustainability of social welfare systems. The large working-age populations are crucial for economic stability, but the increasing number of retirees presents challenges for economic growth and pension systems (UNDESA). Both countries also exhibit gender disparities in the older age cohorts, with more older women than men, typically due to women's longer life expectancy.



These population structures indicate the need for policies to address the challenges of aging populations. To mitigate these issues, both countries should consider boosting birth rates, encouraging higher labor force participation, particularly among women and older workers, and adapting social security systems to the changing demographic realities.

3. Literature Review

Due to its low birth rates and increasingly aging population, South Korea is facing severe demographic issues. As seen in a Korea Development Institute (KDI) study by Kyungsoo Choi and associates, the nation's fertility rate fell to 0.78 births per woman in 2022, far below the replacement level of 2.1 required for a stable population. This drop, together with longer life expectancies, has significantly expanded the aging population, which accounted for 15.7% of the total population by 2020. The South Korean government has implemented several measures to combat these tendencies, including increased parental leave, better daycare options, and financial incentives for having children. However, their efficacy is frequently limited by societal and professional conventions such as traditional gender roles, long working hours, and the stigma surrounding parental leave and elderly care. Improving healthcare financing and community support networks for older people is also essential to lessen the financial strain on future generations. The KDI paper emphasizes the necessity of more extensive and sustained policy interventions, drawing comparisons with Japan's integrated child welfare programs, which may provide helpful guidance for South Korea's management of its demographic challenges (Choi et al. 60-67).

Ki-Soo Eun (2007) reported that changes in labor market conditions, cultural norms, and economic shifts following the 1997 financial crisis are the primary reasons for South Korea's low birth rate. Economic factors such as high living costs and job instability hinder family formation and childbearing. Despite women's equal access to education and employment, they still face the dilemma of choosing between a career and a family, exacerbating the fertility decline. In response, the South Korean government launched the 'Saeromaji Plan 2010' to improve work-life balance and provide comprehensive family support in order to address low birth rates and the aging population. However, a significant obstacle remains in the tension between the rising status of women and deeply rooted traditional beliefs, necessitating more comprehensive and long-term governmental measures to increase the birth rate (Eun).

According to Sunho Cho's (2023) research, Japan has protected children's rights and welfare for many years. These initiatives culminated in the passage of the Basic Act on the Rights of the Child, a significant milestone. This created the legal foundation for implementing multiple child policy programs in an integrated manner rather than separately, as in the past. The Children and Family Agency, founded by the Japanese government, was given general authority to enforce child-related laws, including those that attempt to increase the birthrate. Concerned that the next six to seven years may be the final opportunity to address the low-fertility crisis, the government has insisted on continuing policy initiatives through 2030 in ways that differ from previous ones. This article examines the Basic Act on the Rights of the Child, the role of the Children and Family Agency, and the Japanese government's policies on low fertility (Cho 36-39).

The findings of Paul S. Hewitt, John Creighton Campbell, and Chikako Usui (2003) in "The Demographic Dilemma: Japan's Aging Society" focus on the consequences of Japan's aging society. Three perspectives from three different authors are analyzed: first, that Japan's aging society will cause its economy to collapse; second, that Japan's demographics present a crisis rather than a problem that can be managed gradually; and third, that a more productive economy would lead to greater efficiency and thus significantly boost Japan's economy (Hewitt et al. 3-6).



Chapter 8 of the book Global Political Demography, "The Oldest Societies in Asia: The Politics of Aging in South Korea and Japan," written by Axel Klein and Hannes Mosler (2021), focuses on how South Korea and Japan are using public policy to combat labor shortages resulting from their aging populations. Both countries' populations are aging quickly due to falling birth rates and rising life expectancy, which significantly strains their social welfare systems, healthcare infrastructure, and labor markets. Japan's population has been aging at a faster rate than South Korea's, with a higher proportion of elderly residents. In contrast, South Korea's fertility rate has dropped sharply in recent decades (50.4%), leading to a more rapid aging process. Japan has a longer history of addressing the challenges of an aging society, implementing various policies earlier than South Korea. Japan's policies have focused on expanding eldercare services and promoting automation and robotics to compensate for labor shortages. Conversely, South Korea's policies have emphasized raising the retirement age and incentivizing higher fertility, though with limited success. The politics surrounding population aging are complex in both countries, involving debates about intergenerational equity and the sustainability of social programs. However, the political urgency of the issue may be higher in Japan, where the aging demographic trends are more severe. Both South Korea and Japan face the common challenge of maintaining economic competitiveness and social cohesion as their populations continue to age, and policymakers in both countries are grappling with balancing the needs of younger generations against the requirements of the aging population (Klein and Mosler).

Policy Comparison

1. Fertility Rate Policies of South Korea and Japan

[Table 7. Maternity and Childcare Leave Policy]
Policy Names | South Korea | Japan
Maternity and Childcare Leave | 출산 및 육아 휴직 - Maternity leave: 90 days of rest in total. Childcare leave: up to one year for each parent, with financial support from employment insurance. | 育児休業 (Ikuji Kyūgyō) - Maternity leave: up to 14 weeks. Childcare leave: both parents can take up to one year off work to care for their child, with financial support from employment insurance.
Benefits | 1) Health Benefits 2) Work-Life Balance 3) Increased Gender Equality | 1) Comprehensive Support 2) Gender Inclusivity 3) Child Health
Drawbacks | 1) Workplace Stigma 2) Economic Impact 3) Low Uptake | 1) Cultural Barriers 2) Financial Strain 3) Inequality in Access

In South Korea, the Maternity and Childcare Leave policy (출산 및 육아 휴직) grants mothers a total of ninety days of maternity leave, divided before and after childbirth. In addition, both parents are each entitled to up to one year of childcare leave, with financial support from employment insurance available for up to one year. The policy aims to provide health benefits, enhance work-life balance, and promote gender equality by ensuring mothers have adequate recovery and bonding time while encouraging both parents to participate in childcare. However, cultural norms often produce negative perceptions of those taking leave, particularly fathers, and smaller businesses may struggle with the financial burden of extended leave, which can also have a negative economic impact.

Given its low uptake, many parents evidently do not find this policy attractive enough to persuade them to become parents (Ministry of Employment and Labor). In Japan, the Maternity and Childcare Leave policy (育児休業, Ikuji Kyūgyō) provides maternity leave of up to fourteen weeks (98 days), split between six weeks before and eight weeks after childbirth, with up to one year of childcare leave supported by employment insurance for both the mother and the father. The policy ensures significant time off for newborn care, encourages fathers' participation in childcare, and fosters gender equality in the workplace and at home; it also improves health outcomes for mothers and children through extended bonding time. These features aim to encourage Japanese couples to have children. However, the policy faces cultural barriers and does not grant fully equal access, and the financial compensation does not eliminate all the financial concerns of parenthood (Ministry of Health, Labour and Welfare). Both countries' maternity and childcare policies encourage couples to become parents by offering similar benefits that let parents focus on their children rather than on the financial burden that typically accompanies parenthood. As the drawbacks above suggest, however, these policies need revision to better persuade married couples to become parents.

[Table 8. Parental Leave Benefits]

Policy name:
- South Korea: 육아휴직 급여 (Parental Leave Benefits). Financial support during parental leave.
- Japan: 育児休業給付金 (Ikuji Kyūgyō Kyūfukin). Financial support during parental leave.
Benefits:
- South Korea: 1) Financial Security 2) Encourages Leave 3) Workforce Participation
- Japan: 1) Financial Support 2) Workforce Retention 3) Family Well-being
Drawbacks:
- South Korea: 1) Insufficient Compensation 2) Administrative Complexity 3) Stigma
- Japan: 1) Incomplete Coverage 2) Employer Reluctance 3) Cultural Resistance

South Korea's Parental Leave Benefit policy (육아휴직 급여) provides financial support during parental leave, covering around eighty percent of the parent's salary for the first three months and fifty percent thereafter. It aims to reduce financial stress for families, encourage more parents to take leave, and help retain employees in the workforce after leave. Nonetheless, the salary percentage covered may be too low to fully support families, the process for claiming benefits is often seen as complex and unwieldy, and societal and workplace stigma further discourages uptake (Ministry of Employment and Labor). Japan's Parental Leave Benefit policy (育児休業給付金, Ikuji Kyūgyō Kyūfukin) provides financial support during parental leave, covering about sixty-seven percent of the parent's salary for the first six months and fifty percent thereafter. It encourages both parents to take leave, maintains workforce connections, and potentially reduces turnover while supporting family bonding and child development. However, the benefits may not fully cover high living costs, leaving some parents under financial strain; employers may hesitate to support long-term leave, which can affect career progression; and persistent cultural norms may discourage leave-taking, especially among men (Ministry of Health, Labour and Welfare).

Overall, there are only a few subtle differences between the Korean and Japanese parental leave benefit policies. Both countries allow parents a year off work, but in Korea parents receive 80% of their monthly pay for the first three months, while in Japan parents receive 67% for the first six months; both then pay 50% of monthly pay for the remainder of the leave. This reduced pay during the latter part of the leave may not be enough, potentially discouraging couples from parenthood, as financial strain is likely to be highest during these months. (A simple worked comparison of the two schemes follows.)
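To make the comparison concrete, the sketch below tallies what each scheme pays over a twelve-month leave for a hypothetical salary. The 3,000,000-unit monthly salary and the assumption that Korea's 50% rate applies for the remaining nine months are our own illustrative choices, not figures taken from the policies themselves.

# Hypothetical first-year parental leave benefit under each scheme,
# using the replacement rates cited above. SALARY and the 3+9 month
# split for Korea are illustrative assumptions, not policy data.
SALARY = 3_000_000  # hypothetical monthly salary

korea_total = 3 * 0.80 * SALARY + 9 * 0.50 * SALARY  # 80% x 3 mo, then 50% x 9 mo
japan_total = 6 * 0.67 * SALARY + 6 * 0.50 * SALARY  # 67% x 6 mo, then 50% x 6 mo

print(f"South Korea, 12-month benefit: {korea_total:,.0f}")  # 20,700,000
print(f"Japan, 12-month benefit:       {japan_total:,.0f}")  # 21,060,000

Under these assumptions the two schemes pay out nearly the same total; the difference lies mainly in when the money arrives.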

[Table 9. Childcare Services]
Policy name:
- South Korea: 보육 서비스 (Childcare Services). Expansion of public and private daycare facilities; financial support for daycare costs.
- Japan: 保育サービス (Hoiku Sābisu). Expansion of daycare facilities to reduce waiting lists; financial support for daycare costs.
Benefits:
- South Korea: 1) Accessibility 2) Quality Care 3) Economic Benefits
- Japan: 1) Widespread Access 2) Economic Participation 3) Early Childhood Development
Drawbacks:
- South Korea: 1) Limited Availability 2) Quality Variations 3) Financial Constraints
- Japan: 1) Shortages 2) Cost Issues 3) Quality Concerns

The Childcare Services policy (보육 서비스) in South Korea focuses on expanding public and private daycare facilities to reduce waiting lists and on providing financial support and subsidies for daycare costs. Free childcare is available for children under five from low-income families and beyond, enabling parents, especially mothers, to return to work and boost the economy. However, high demand and limited government funding may restrict service expansion and lead to waiting lists, while inconsistencies in care quality across providers have drawn numerous complaints (Ministry of Health and Welfare). The Childcare Services policy (保育サービス, Hoiku Sābisu) in Japan likewise aims to expand daycare facilities to reduce waiting lists, with services such as "hoikuen" for younger children and "yōchien" for preschool-aged children. Financial support and subsidies for daycare costs help working parents, and structured early education benefits child development. Despite efforts to reduce waiting lists and increase daycare availability, high demand and funding limitations can cause service variability and long waiting lists, leading to disparities in care quality and numerous complaints (Ministry of Health, Labour and Welfare). Overall, the benefits and drawbacks in Korea and Japan are very similar: the benefits reflect the good intentions of policymakers, but the drawbacks weigh heavily enough on prospective parents to make them think twice about raising children in these societies.

[Table 10. Free Early Childhood Education and Care]

Policy name:
- South Korea: 유아교육·보육비 지원. Free education and care for children aged 3-5 and support for early childhood education programs.
- Japan: 幼児教育・保育の無償化 (Yōji Kyōiku Hoiku no Mushōka). Free education and care for children aged 3-5 and for infants and toddlers from low-income families.
Benefits:
- South Korea: 1) Universal Access 2) Educational Benefits 3) Workforce Support
- Japan: 1) Equity 2) Developmental Gains 3) Economic Relief
Drawbacks:
- South Korea: 1) Funding Challenges 2) Implementation Gaps 3) Overcrowding
- Japan: 1) Resource Allocation 2) Quality Variability 3) Capacity Issues

The Early Childhood Education and Care Support policy (유아교육·보육비 지원) in South Korea offers free education and care for children aged three to five, supporting early childhood education programs and reducing financial burdens on families. It contributes to better long-term academic and social outcomes and helps parents balance work and family life through reliable childcare. Despite these benefits, high demand and the high cost of providing free services can lead to overcrowded facilities and strain government budgets, affecting the quality of care across regions (Ministry of Education). Japan's Free Early Childhood Education and Care policy (幼児教育・保育の無償化, Yōji Kyōiku Hoiku no Mushōka) ensures access to early education for all children, providing free education and care for children aged three to five and for infants and toddlers from low-income families. It reduces financial pressure on families and facilitates parents' return to work. Nonetheless, maintaining these programs requires significant government expenditure and oversight (Ministry of Education, Culture, Sports, Science and Technology).

2. Aging Society Policies of South Korea and Japan

[Table 11. Employment Programs for Older Adults]

Policy name:
- South Korea: 노인 일자리 사업. Provides incentives to companies that hire senior workers and offers training programs to help older individuals update their skills.
- Japan: シルバー人材センター (Shirubā Jinzai Sentā). Provides part-time and short-term employment opportunities for older adults in various fields.
Benefits:
- South Korea: 1) Extended Workforce Participation 2) Financial Support 3) Skill Utilization
- Japan: 1) Active Engagement 2) Income Supplementation 3) Labor Market Support
Drawbacks:
- South Korea: 1) Adaptation Challenges 2) Job Availability
- Japan: 1) Job Match 2) Income Sufficiency

South Korea's Employment Programs for Older Adults (노인 일자리 사업) provide incentives to companies that hire senior workers and offer training programs that help older individuals update their skills, keeping older adults engaged in the workforce longer and leveraging their experience. This also eases the financial burden on the pension system. Unfortunately, older workers may have difficulty adapting to new technologies or job roles, and may only be able to access a limited range of job opportunities that match their skills (Ministry of Employment and Labor).

This suggests that they may remain alienated from the labor force. Japan's Employment Programs for Older Adults, the シルバー人材センター (Shirubā Jinzai Sentā), provide part-time and short-term employment opportunities for older adults in various fields, helping seniors stay active and engaged. Nevertheless, the availability and type of jobs offered may not meet all seniors' needs, and part-time work may not provide sufficient income for every elderly individual (Ministry of Health, Labour and Welfare). The employment programs for older adults are very similar in both countries, though Korea notably offers training so that older workers can update their skills. Even so, this shared policy needs to offer stronger encouragement for people already entitled to monthly pension payouts to return to work and voluntarily undergo additional training at their age. Given the drawbacks of both nations' policies, more incentives must be provided to this age bracket so that they are willing to trade free time and retirement entitlements for a return to the workplace.

[Table 12. Pension System Reforms]

Policy name:
- South Korea: 국민연금제도 개혁. Increases the retirement age and adjusts pension benefits to align with contributions and life expectancy.
- Japan: 年金制度改革 (Nenkin Seido Kaikaku). Implements changes such as raising the pensionable age and recalculating benefits to ensure long-term sustainability.
Benefits:
- South Korea: 1) System Sustainability 2) Financial Security 3) Budget Relief
- Japan: 1) Financial Viability 2) Adequate Support 3) Demographic Adjustment
Drawbacks:
- South Korea: 1) Public Resistance 2) Transitional Difficulties
- Japan: 1) Public Opposition 2) Transition Challenges

South Korea's Pension System Reforms (국민연금제도 개혁) have recently increased the retirement age and adjusted pension benefits to better align with contributions and life expectancy. The reforms reduce the financial strain on the government's budget while providing financial security for the elderly and helping sustain the pension system. Despite these benefits, they create a period of adjustment and potential financial hardship for some retirees, and they face public resistance because of the higher retirement age and potentially reduced benefits (Ministry of Employment and Labor). Japan's 年金制度改革 (Nenkin Seido Kaikaku), by comparison, implemented changes such as raising the pensionable age and recalculating benefits to ensure long-term sustainability. The policy aims to maintain the financial viability of the pension system, ensure that retirees receive adequate support, and adjust to demographic change. Nonetheless, the Japanese public opposes raising the retirement age, which makes the transition to retirement more difficult for future retirees (Ministry of Health, Labour and Welfare). Overall, despite its intentions, this type of reform has contentious drawbacks: readjusting or extending the retirement age comes as a surprise to those who began paying into the pension system at a young age with the intention of retiring at a certain time.

In both countries, those who still plan to retire at what was the initial retirement age may receive smaller payouts than expected, and may therefore be forced to return to work to meet the requirements of a "readjusted" or "recalculated" policy.

[Table 13. Long-Term Care and Integrated Care Systems]

Policy name:
- South Korea: 장기요양보험. Covers the costs of home care services, nursing facilities, and other forms of support for the elderly.
- Japan: 地域包括ケアシステム (Chiiki Hōkatsu Kea Shisutemu). Provides a combination of medical care, nursing care, preventive services, and daily living support within the community.
Benefits:
- South Korea: 1) Care Access 2) Financial Relief 3) Quality of Life
- Japan: 1) Independent Living 2) Burden Reduction 3) Community Support
Drawbacks:
- South Korea: 1) Funding Challenges 2) Care Quality Variability
- Japan: 1) Coordination Needs 2) Rural Disparities

Moreover, Korea's Long-Term Care and Integrated Care System policy (장기요양보험) covers the costs of home care services, nursing facilities, and other forms of support for the elderly. This ensures that the elderly receive the care they need, reduces financial stress on families, and improves seniors' quality of life. The drawbacks are that funding the system can be challenging and the quality of care can vary (National Health Insurance Service). Japan's 地域包括ケアシステム (Chiiki Hōkatsu Kea Shisutemu) provides a combination of medical care, nursing care, preventive services, and daily living support within the community. It enables older adults to live independently in their own homes for as long as possible, reduces the burden on centralized healthcare facilities, and fosters a sense of community. Even so, effective implementation requires significant coordination and resources, and rural areas may struggle to provide the same level of integrated care as urban areas (Ministry of Health, Labour and Welfare). Policies for the elderly are prone to poor execution, because longer life expectancy multiplies the resources needed to meet elderly needs in countries facing both low fertility and longer life spans. Nonetheless, the governments of both countries must take the steps necessary to care for their elderly populations, which means building more facilities staffed by trained, knowledgeable professionals to guarantee universal access and quality care.

Conclusion
The policies related to maternity leave and parental responsibilities in Korea and Japan are often undermined by prevailing societal attitudes that discourage childbearing. It is crucial to shift public perceptions so that maternity leave is recognized not as a burden but as an essential period for mothers to recuperate before and after childbirth. In addition, building a culture that normalizes fathers taking on childcare and household duties can help working mothers manage stress and can advance gender parity. Public awareness campaigns highlighting the significance of these policies for family well-being and social advancement should be the primary focus of government efforts to change these views.

Governments must act with greater urgency to address the issues these policies raise. First, streamlining the implementation process and making it more transparent and efficient can mitigate bureaucratic hurdles and guarantee that support reaches those who need it most. The success of childcare and parental leave policies could be improved by increasing financing for these initiatives and diverting funds from less impactful expenditures. To prevent the misallocation of tax revenues, governments should also enforce adherence to current legislation and guarantee the efficient use of resources. By addressing these issues systematically and promoting a cultural shift in how maternity leave and parental duties are perceived, Korea and Japan can better support families and foster a more equitable society.

In understanding the situations faced by South Korea and Japan, and in envisioning strategic interventions that might alleviate their problems, it is helpful to look at countries that have avoided similar challenges. In contrast to Korea and Japan, several European nations have established effective maternity and parental programs. Successful EU cases such as Sweden, Denmark, and Finland should be studied to understand how comprehensive support systems can improve family well-being and gender equality; these examples showcase proven methods for improving childcare and parental leave laws alongside aging-society policies.

Firstly, Finland provides substantial, well-paid maternity leave, ensuring mothers have sufficient time to recover and bond with their newborns. The Äitiysloma (Maternity Leave) policy offers 105 working days (approximately four months) of leave, which mothers can begin 30-50 days before the expected due date; the leave is paid, with benefits calculated from the mother's income (Finnish Institute for Health and Welfare). Secondly, Sweden's parental leave system is among the most generous in the world, with extensive benefits, high flexibility, and rules that encourage both parents to participate. The Föräldraledighet (Parental Leave) policy offers 480 days of parental leave per child, which can be shared between parents; ninety days are reserved for each parent and are non-transferable, ensuring that fathers take leave. For the first 390 days, parents receive 80% of their salary, with a flat rate for the remaining days, and the leave can be taken until the child turns eight or completes the first grade of school (Swedish Social Insurance Agency). Thirdly, Denmark's Dagtilbud (Daycare/Childcare Services) policy provides high-quality, accessible, and affordable childcare that supports parents in balancing work and family life. Childcare is provided from six months of age until school age; the state subsidizes the system, with parents paying 25-30% of the actual cost, and the high availability of these services encourages parents to entrust their children to care that also offers early learning and development opportunities (Ministry of Children and Education). Next, Sweden's Förskola (Preschool) policy guarantees all children free preschool, promoting early development and easing the financial burden on families: children aged 3-6 receive up to 525 free hours per year.
Beyond the free hours, additional childcare is heavily subsidized based on family income, and the preschool curriculum emphasizes play-based learning, social skills, and early education (Swedish National Agency for Education). The Scandinavian countries have not only succeeded in boosting their fertility rates but have also managed to support their aged populations. In Sweden, the approach to employment programs for older adults mirrors the initiatives seen in South Korea and Japan: Sweden's Senior Job Program provides financial incentives for employers to hire and retain older workers, offering subsidies and training opportunities that help older adults remain active in the workforce, enhancing their financial independence and reducing the burden on social security systems (Swedish Public Employment Service). Like South Korea's elderly employment programs and Japan's Silver Human Resource Centers, Sweden's program aims to extend the working lives of seniors and support their economic stability.

In addition, Denmark has undertaken substantial pension system reforms comparable to those in South Korea and Japan. The Danish ATP (Labour Market Supplementary Pension) system augments the public pension, providing additional benefits based on individual contributions over a working life; it ensures a stable income for retirees, addressing the economic challenges of an aging population and promoting long-term savings (Danish Ministry of Finance). Much like South Korea's National Pension System reform and Japan's pension reforms, Denmark's system is designed to secure the financial future of its aging population. Finally, Finland supports its aging population with a robust long-term care system built on comprehensive care models. The Finnish model integrates public and private services, focusing on home care and municipal care centers to meet the diverse needs of older adults (Finnish Institute for Health and Welfare). This integrated care system parallels the efforts of South Korea and Japan, providing high-quality care while managing the growing demand for long-term care services.

As this brief summary indicates, the Scandinavian countries have implemented policies that both utilize and support the aging segments of their populations in ways similar to South Korea and Japan. The difference lies at the other end of the population pyramid: raising the fertility rate to ensure a healthy replacement rate. Scandinavian countries, including Finland, Denmark, and Sweden, have stabilized their fertility rates through effective policies, and their population structures show a wide base, indicating the efficacy of these strategies. In contrast, the policies implemented by South Korea and Japan, while theoretically beneficial, have yet to produce significant improvements; consequently, their population pyramids display a narrow base and a wide middle. Based on this comparison, South Korea and Japan should consider implementing Scandinavian-style strategies to boost fertility and address the issues associated with an aging population.

References

Choi, Kyungsoo. "Population Aging in Korea: Economic Impacts and Policy Issues." Korea Development Institute (KDI) Research Policy Seminar, 2006, pp. 1-400. Accessed 18 July 2024.

Cho, Sunho. "Recent Changes in Policy Directions for Low Fertility in Japan." Global Social Security Review, vol. 2023, no. Fall, 2023, pp. 35-45. Accessed 18 July 2024.

"Childcare Services." Ministry of Children and Education, www.bm.dk/english/. Accessed 31 July 2024.

"Comprehensive Care Models." Finnish Institute for Health and Welfare, www.thl.fi/en/web/thlfi-en. Accessed 31 July 2024.

Data Commons. "Fertility Rate Trends: South Korea and Japan." Data Commons, https://datacommons.org/tools/timeline#place=country%2FKOR%2Ccountry%2FJPN&statsVar=FertilityRate_Person_women. Accessed 31 July 2024.

Data Commons. "Population Trends: South Korea and Japan." Data Commons, https://datacommons.org/tools/timeline#place=country%2FKOR%2Ccountry%2FJPN&statsVar=Count_Person&chart=%7B%22count-none%22%3A%7B%22pc%22%3Afalse%2C%22delta%22%3Afalse%7D%2C%22count-area%22%3A%7B%22pc%22%3Afalse%2C%22delta%22%3Afalse%7D%7D. Accessed 31 July 2024.

European Commission. "Your Social Security Rights in Finland." European Commission, https://ec.europa.eu/social/main.jsp?catId=1117&langId=en. Accessed 27 July 2024.

Eun, Ki-Soo. "Lowest-Low Fertility in the Republic of Korea: Causes, Consequences and Policy Responses." Asia-Pacific Population Journal, vol. 22, no. 2, 2007. Accessed 18 July 2024.

Goerres, Achim, and Pieter Vanhuysse. Global Political Demography: The Politics of Population Change. Springer Nature, 2021. Accessed 19 July 2024.

Hewitt, Paul S. "The Demographic Dilemma: Japan's Aging Society." Asia Program Special Report, vol. 107, Jan. 2003, pp. 1-24. Accessed 19 July 2024.

"Japan Population 2023." World Population Review, https://worldpopulationreview.com/countries/japan-population. Accessed 31 July 2024.

Kim, Jeong-min. "Births in South Korea at All-Time Low Despite Government Efforts." The Chosun Ilbo, 30 May 2024, https://www.chosun.com/english/national-en/2024/05/30/B54BJJW6SZBJHEUQDNZRKDXMSE/. Accessed 31 July 2024.

Kim, Wook. "고령화(高齡化)." 한국민족문화대백과사전 (Encyclopedia of Korean Culture), 2024, https://encykorea.aks.ac.kr/Article/E0067748. Accessed 27 July 2024.

Kwon, Jake. "Japan's Population Crisis: The Country Faces a 'Demographic Time Bomb.'" CNN, 1 March 2024, https://edition.cnn.com/2024/03/01/asia/japan-demographic-crisis-population-intl-hnk-dst/index.html. Accessed 31 July 2024.

"Maternity Leave Policy." Finnish Institute for Health and Welfare, www.thl.fi/en/web/thlfi-en. Accessed 31 July 2024.

OECD (Organisation for Economic Co-operation and Development). "OECD Data." OECD, https://data.oecd.org/. Accessed 19 July 2024.

"Parental Leave Policy." Swedish Social Insurance Agency, www.forsakringskassan.se/. Accessed 31 July 2024.

PopulationPyramid.net. Japan 2024. PopulationPyramid.net, 2024, https://www.populationpyramid.net/japan/2024/. Accessed 15 July 2024.

PopulationPyramid.net. Republic of Korea 2024. PopulationPyramid.net, 2024, https://www.populationpyramid.net/republic-of-korea/2024/. Accessed 15 July 2024.

"South Korea Population." Worldometers, https://www.worldometers.info/world-population/south-korea-population/. Accessed 31 July 2024.

Statistics Bureau of Japan. "Population Statistics." Statistics Bureau of Japan, https://www.stat.go.jp/english/data/index.html. Accessed 15 July 2024.

Statistics Korea. "Population Statistics." Statistics Korea, http://kostat.go.kr/portal/eng/index.action. Accessed 15 July 2024.

Swedish National Agency for Education. "Free Early Childhood Education." Swedish National Agency for Education, www.skolverket.se/. Accessed 31 July 2024.

Swedish National Agency for Education. "Preschool." Swedish National Agency for Education, https://www.skolverket.se/education-in-sweden/the-swedish-education-system/preschool. Accessed 27 July 2024.

Swedish Public Employment Service. "Senior Job Program." Swedish Public Employment Service, www.arbetsformedlingen.se/. Accessed 31 July 2024.

Swedish Social Insurance Agency. "Parental Benefit." Försäkringskassan, https://www.forsakringskassan.se/privatpers/foralder/foraldrapenning. Accessed 27 July 2024.

Thompson, Derek. "South Korea's Doctors Are on Strike Over a Criminal Complaint Against Their Colleague." Time, 21 July 2023, https://time.com/6835879/south-korea-doctors-strike-criminal-complaint/. Accessed 30 July 2024.

UNICEF. "Reports." UNICEF, https://www.unicef.org/reports. Accessed 23 July 2024.

United Nations Department of Economic and Social Affairs (UNDESA), Population Division. World Population Prospects. United Nations, https://population.un.org/wpp/. Accessed 23 July 2024.

World Bank. "World Bank Population Data." World Bank, https://data.worldbank.org/indicator/SP.POP.TOTL. Accessed 23 July 2024.

Figure References

Data Commons. "Population Trends: South Korea and Japan." Data Commons, https://datacommons.org/tools/timeline#place=country%2FKOR%2Ccountry%2FJPN&statsVar=Count_Person&chart=%7B%22count-none%22%3A%7B%22pc%22%3Afalse%2C%22delta%22%3Afalse%7D%2C%22count-area%22%3A%7B%22pc%22%3Afalse%2C%22delta%22%3Afalse%7D%7D. Accessed 17 July 2024.

Data Commons. "Fertility Rate Trends: South Korea and Japan." Data Commons, https://datacommons.org/tools/timeline#place=country%2FKOR%2Ccountry%2FJPN&statsVar=FertilityRate_Person_Female. Accessed 17 July 2024.

PopulationPyramid.net. Japan 2024. PopulationPyramid.net, 2024, https://www.populationpyramid.net/japan/2024/. Accessed 15 July 2024.

PopulationPyramid.net. Republic of Korea 2024. PopulationPyramid.net, 2024, https://www.populationpyramid.net/republic-of-korea/2024/. Accessed 15 July 2024.

Investigating the Effect of Piezoelectricity on Spirodela polyrhiza

Author
Full Name (Last Name, First Name): Ahn, Lugh
School Name: Chadwick International

Abstract
Piezoelectricity has garnered significant attention in recent years, particularly for its effects on animal cells, including human cells. Despite this potential, research on its influence on plant systems remains scarce. This study addresses that gap by investigating the effects of piezoelectric stimulation on Spirodela polyrhiza, a model aquatic plant. The experimental design focused on maximizing the efficiency of piezoelectric stimulation by using an aquatic rather than a terrestrial plant model. The results revealed that piezoelectric stimulation effectively enhances several growth parameters in S. polyrhiza: it promotes root development, activates Photosystem II, increases chlorophyll concentration, and boosts both dry weight and nutrient uptake. These findings underscore the positive effects of piezoelectricity on plant growth and physiological processes, and they suggest that piezoelectric stimulation could serve as a promising tool for boosting plant productivity, potentially applicable to other photosynthetic microorganisms. Future research could further investigate its use as a converging technology across a wide range of biological and industrial sectors.

Keywords Piezoelectricity, Chlorophyll, Photosynthetic Activity, Nutrient, Growth Promotion

1. Introduction
Piezoelectricity refers to the property of materials that generate an electrical charge when subjected to mechanical pressure. The phenomenon occurs when a material with a specific crystalline structure undergoes external mechanical stress or deformation, resulting in electrical changes. For instance, materials like quartz crystals and certain ceramics exhibit piezoelectric properties, producing electrical charges or voltage when exposed to mechanical pressure. These properties have been extensively utilized in various technological applications, including sensors, actuators, and ultrasonic devices.

The applications of piezoelectric materials can be broadly categorized. Firstly, piezoelectric sensors are crucial for converting physical changes into electrical signals. These sensors enable precise measurement of variables such as temperature, pressure, and acceleration, playing a key role in industrial automation and environmental monitoring; in automotive collision detection systems, for example, piezoelectric sensors detect impacts and convert them into electrical signals, thereby activating safety mechanisms. Secondly, piezoelectric actuators convert electrical signals into mechanical deformation, allowing precise control of movement and vibration. This capability is essential in precision machining, optical device adjustment, and medical instruments; notably, in ultrasonic devices, piezoelectric elements generate acoustic waves used to capture images or data. Lastly, the application of piezoelectric materials in ultrasonic equipment is particularly significant: these materials are critical for generating and receiving ultrasonic signals, making them indispensable in medical diagnostics, non-destructive testing, and various industrial applications. Ultrasonic devices, by enabling non-invasive examination of internal structures, improve diagnostic accuracy and contribute significantly to advances in medical services. Given their unique physical properties and wide range of applications, the use of piezoelectric materials is expected to continue evolving, driving further technological and industrial progress.

Piezoelectric technology has also led to innovative developments in medical treatment. Its applications are particularly notable in ultrasound diagnostics, implantable devices, and physical therapy equipment, significantly enhancing the precision of clinical diagnosis and treatment. In particular, piezoelectric-based ultrasound therapeutic devices use high-frequency sound waves to alleviate pain and promote tissue healing in physical therapy; such devices are effective in treating musculoskeletal disorders and enhance the efficiency of rehabilitation, thereby accelerating patient recovery.

However, there is a significant gap in the application of piezoelectric devices to plant systems, with limited understanding of their effects. To address this, the study investigates the impact of piezoelectric stimulation on plant growth, using Spirodela polyrhiza as the model organism. Various parameters, including root length, chlorophyll concentration, dry weight, photosynthetic rate, and nutrient content, were measured in response to piezoelectric stimulation. The results indicate that piezoelectric stimulation positively influences plant growth. These findings suggest that piezoelectric technology could be integrated into strategies to enhance plant growth and optimize the production of primary and secondary metabolites in plants.
Such applications hold potential for industrial utilization, offering promising avenues for future research and development in agricultural and biotechnological fields.

2. Background
2.1. Piezoelectricity
Piezoelectricity is a phenomenon in which materials generate electrical polarization in response to mechanical strain or, conversely, produce mechanical strain when subjected to an external electric field (Ref). This property, generating an electric potential under applied stress and mechanical strain under an external voltage, is widely utilized in applications including sensing, actuation, transduction, and energy harvesting and conversion, such as nanoscale sensors and actuators (Ref).
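For reference, the coupling described above is conventionally written in the strain-charge form of the linear piezoelectric constitutive relations; this is a standard textbook formulation, not taken from the cited sources:

\[
\begin{aligned}
\mathbf{S} &= s^{E}\,\mathbf{T} + d^{\,t}\,\mathbf{E} \\
\mathbf{D} &= d\,\mathbf{T} + \varepsilon^{T}\,\mathbf{E}
\end{aligned}
\]

where S is strain, T is stress, E is the electric field, D is the electric displacement, s^E is the elastic compliance at constant electric field, d is the matrix of piezoelectric coefficients, and ε^T is the permittivity at constant stress.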

2.2. Factors Related to Plant Growth
Plant growth is influenced by a multitude of physiological and environmental factors. Key aspects affecting plant development include the chlorophyll content of leaves, sensitivity to growth conditions, and photosynthetic efficiency. Additionally, analysis of growth-related gene and protein expression provides insight into the mechanisms driving these processes. Understanding these factors helps elucidate how plants adapt and respond to their environment, and comprehensive studies integrating these elements are essential for advancing our knowledge of plant growth and optimizing agricultural practices.

3. Materials and Methods
3.1. Materials
Spirodela polyrhiza was obtained from the National Institute of Biological Resources. Sodium nitrate (NaNO3) and potassium phosphate (K2HPO4), acquired from Sigma-Aldrich, were used as the nitrogen and phosphorus sources, respectively, for the growth of S. polyrhiza. Ethanol, also purchased from Sigma-Aldrich, was employed for chlorophyll extraction.

3.2. Cultivation of S. polyrhiza
Prior to the experiment, 0.15 g of sodium nitrate and 0.37 g of potassium phosphate were dissolved in 1 L of triple-distilled water and sterilized to prepare the culture medium for S. polyrhiza. S. polyrhiza was incubated under a 16:8 (light:dark) cycle at a light intensity of 100 µmol m⁻² s⁻¹ and a temperature of 23 °C. To maintain the cultures continuously, subculturing was performed every 3 days.

3.3. Effects of Piezoelectric Stimulation on S. polyrhiza Growth and Development
Piezoelectric stimulation was introduced using a piezoelectric stimulation device (KR-JYY-T03, YESKAMO). The stimulation was conducted in 100 mL beakers with an 80 mL working volume, each containing 100 individuals of S. polyrhiza, and was applied for 30 minutes. The experimental conditions were otherwise identical to those of the parent culture, and both the control and experimental groups were tested in triplicate.
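As a back-of-envelope check on the medium described in Section 3.2 (our own calculation, using standard molar masses rather than values from the paper), the elemental nitrogen and phosphorus supplied per litre can be estimated as follows:

# Elemental N and P supplied by 0.15 g NaNO3 and 0.37 g K2HPO4 per litre.
# Molar masses are standard values; the mg/L results are our own estimates,
# not figures reported in the study.
M_NaNO3, M_N = 84.99, 14.01    # g/mol
M_K2HPO4, M_P = 174.18, 30.97  # g/mol

n_mg_per_L = 0.15 * (M_N / M_NaNO3) * 1000    # ~24.7 mg N/L
p_mg_per_L = 0.37 * (M_P / M_K2HPO4) * 1000   # ~65.8 mg P/L

print(f"N supplied: {n_mg_per_L:.1f} mg/L, P supplied: {p_mg_per_L:.1f} mg/L")

These figures are consistent in magnitude with the T-N consumption values reported later (up to roughly 13 mg/L).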

3.4. Analysis of Plant Growth-Related Factors
To assess the effects of piezoelectric stimulation on S. polyrhiza, changes in growth were measured in terms of root length, photosynthetic activity, chlorophyll concentration, dry weight, and nutrient (nitrogen and phosphorus) concentrations. Detailed descriptions of each measurement are provided below.

3.4.1. Comparison of Plant Root Development with the Introduction of Piezoelectricity
Root length, which serves as an indicator of the growth of S. polyrhiza, was measured. For each treatment group, control and experimental, root lengths were recorded from three randomly selected individuals. Measurements were then compiled to present the distribution of root lengths, along with the mean and standard error.

3.4.2. Changes in Chlorophyll Concentration with the Introduction of Piezoelectricity
To evaluate chlorophyll concentration as a growth indicator of S. polyrhiza, five individual leaves were collected daily from each experimental group. The leaves were treated with 95% ethanol (EtOH) to extract chlorophyll, with the extraction process involving three cycles of freezing and thawing at -80 °C. Following extraction, the chlorophyll extract was centrifuged at 13,000 rpm for 10 minutes, and the supernatant was analyzed for chlorophyll concentration using a UV-VIS spectrophotometer. Chlorophyll concentration was determined following a previously reported methodology, using the formulas below, where A664 and A648 denote absorbances at 664 nm and 648 nm:

Chlorophyll a = 13.36 × A664 − 5.19 × A648
Chlorophyll b = 27.43 × A648 − 8.12 × A664
Total chlorophyll = 5.24 × A664 + 22.24 × A648
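A minimal sketch of how these equations translate into code; the absorbance readings in the example are hypothetical, not measured values from this study:

def chlorophyll_ug_per_ml(a664: float, a648: float) -> dict:
    """Chlorophyll a, b, and total from 95% EtOH extract absorbances,
    using the coefficients given above."""
    chl_a = 13.36 * a664 - 5.19 * a648
    chl_b = 27.43 * a648 - 8.12 * a664
    # The total-chlorophyll coefficients (5.24, 22.24) are the sums of the
    # a and b coefficients, so chl_a + chl_b is equivalent to the formula.
    return {"chl_a": chl_a, "chl_b": chl_b, "total": chl_a + chl_b}

print(chlorophyll_ug_per_ml(a664=0.52, a648=0.18))  # example readings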

3.4.3. Changes in Dry Weight with the Introduction of Piezoelectricity
To assess changes in dry weight as a growth indicator for S. polyrhiza, ten samples from each experimental group were collected daily on Whatman GF/C glass microfiber filters and dried in a 60 °C oven for 24 hours. After drying, the samples were weighed to determine their dry weight.

3.4.4. Changes in Photosynthetic Activity with the Introduction of Piezoelectricity
To evaluate photosynthesis-related factors as indicators of growth in S. polyrhiza, photosynthesis-related parameters were assessed using an AquaPen AP 110-C (Photon Systems Instruments, Czech Republic).

3.4.5. Changes in Nutrient Consumption with the Introduction of Piezoelectricity
To evaluate nutrient uptake as a growth indicator for S. polyrhiza, the concentrations of nitrogen and phosphorus, the key nutrients, were measured. Changes in their concentrations in the culture medium were assessed through total nitrogen (T-N) and total phosphorus (T-P) measurements, conducted using a water quality meter manufactured by Humas (Humas Co. Ltd., Korea).
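The consumption bookkeeping implied in Section 3.4.5 can be sketched as follows: uptake on a given day is the initial medium concentration minus the measured residual, averaged over the triplicates. The residual values below are placeholders, not data from this study.

from math import sqrt
from statistics import mean, stdev

def consumption(initial: float, residuals: list[float]) -> tuple[float, float]:
    """Mean consumed concentration (initial - residual) and its standard error."""
    consumed = [initial - r for r in residuals]
    se = stdev(consumed) / sqrt(len(consumed)) if len(consumed) > 1 else 0.0
    return mean(consumed), se

m, se = consumption(initial=24.7, residuals=[12.1, 11.6, 11.2])  # placeholder T-N readings
print(f"N consumed: {m:.2f} ± {se:.2f} mg/L")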

4. Experimental Results
4.1. Impact of Piezoelectric Stimulation on Root Growth in Spirodela polyrhiza
Root growth was evaluated through length measurements to assess the impact of piezoelectric stimulation on Spirodela polyrhiza. The introduction of piezoelectric stimulation led to a notable enhancement in root growth compared to the control group. On the first day, the average root length in the control group was 1.72 ± 0.73 cm, while the piezoelectric treatment group exhibited a root length of 2.04 ± 0.74 cm. By the final day of stimulation, the control group's root length averaged 1.79 ± 0.88 cm, whereas the piezoelectric treatment group reached 3.01 ± 0.56 cm. These measurements are illustrated in Figure 1, and the mean and standard error of the root length changes are detailed in Table 1. The results indicate that piezoelectric stimulation significantly promotes root growth. To further explore the effects of this stimulation, additional measurements were conducted on chlorophyll concentration, photosynthetic activity, dry weight, and nutrient consumption. The positive influence of piezoelectricity on root development suggests its potential for enhancing plant growth, warranting further investigation into its broader applications in plant biology and agriculture.

Figure 1. Measurement of Spirodela polyrhiza root length

Table 1. Summary of changes in root length of Spirodela polyrhiza in response to piezoelectric stimulation

4.2. Assessment of Chlorophyll Concentration and Photosynthetic Activity in S. polyrhiza Under Piezoelectric Stimulation
Following the analysis of root growth, additional investigations examined the effect of piezoelectric stimulation on chlorophyll concentration and photosynthetic activity. Measurements indicated that, in contrast to the control group, the piezoelectric treatment group exhibited an increase in chlorophyll concentration (Figure 2). Significant changes were particularly observed in chlorophyll-b concentration (Figure 2B). Chlorophyll-b, found predominantly in plants and algae, functions in conjunction with chlorophyll-a: it captures light energy and transfers it to chlorophyll-a, thereby increasing the plant's capacity to utilize available light and enhancing photosynthetic efficiency. These findings suggest an improvement in the photosynthetic and energy transfer efficiency of S. polyrhiza. To further validate the observed changes, additional measurements of key photosynthesis-related parameters were conducted.

Figure 2. Changes in chlorophyll concentration in S. polyrhiza in response to piezoelectric stimulation: (A) chlorophyll-a concentration; (B) chlorophyll-b concentration; (C) total chlorophyll concentration.

Subsequent measurements of key photosynthesis-related factors revealed significant differences, particularly in indicators associated with Photosystem II. Specifically, the treatment group exhibited consistently higher values in Photosystem II-related factors than the control (Figure 3). From day 5, these factors converged toward zero, likely because plant growth ceased once the initial nutrient supply was exhausted without replenishment.

Figure 3. Changes in the efficiency of Photosystem II in S. polyrhiza induced by piezoelectric stimulation.

Photosystem II (PSII) is a crucial complex in plant photosynthesis. Located primarily in the thylakoid membranes of chloroplasts, PSII facilitates the absorption and conversion of light energy, the splitting of water molecules, electron transfer, and the production of ATP and NADPH. The activation of PSII is therefore likely to have enhanced the photosynthetic capacity of the plants, and this increased photosynthetic activity is expected to correlate with improved growth, greater root development, and higher chlorophyll levels. It can be inferred that the introduction of piezoelectric stimulation activated Photosystem II, leading to increased chlorophyll concentration and enhanced plant growth. Additionally, to assess growth factors beyond chlorophyll, nutrient intake and dry weight were measured, providing a broader picture of the growth dynamics influenced by the stimulation.

4.3. Measurement of Changes in Dry Weight and Nutrient Content in S. polyrhiza Under Piezoelectric Stimulation
To investigate the effects of increased chlorophyll and enhanced photosynthesis on other plant parameters, changes in dry weight and nutrient uptake were measured. The results indicated an increase in dry weight per individual plant (Figure 4). This increase suggests that under favorable growth conditions, plants develop more robust leaves, stems, and roots, contributing to greater dry weight. Thus, piezoelectric stimulation can be inferred to positively influence plant growth conditions.

Figure 4. Changes in dry weight of S. polyrhiza in response to piezoelectric stimulation

Additionally, changes in the nutrient composition of the culture medium were assessed. Nitrogen and phosphorus, both essential for plant growth, were measured to evaluate nutrient uptake. The results (Figure 5A) indicated that nutrient intake increased in samples subjected to piezoelectric stimulation. For nitrogen, the experimental group exhibited an initial consumption of 3.20 ± 0.27 mg/L on Day 1, rising steadily to 13.33 ± 0.33 mg/L by Day 7. In contrast, the control group showed lower nitrogen consumption, starting at 0.39 ± 0.26 mg/L on Day 1 and increasing to 6.36 ± 0.19 mg/L by Day 7. For phosphorus, the experimental group's consumption began at 0.22 ± 0.05 mg/L on Day 1 and increased to 1.37 ± 0.08 mg/L by Day 7, while the control group's uptake rose from 0.009 ± 0.002 mg/L to 1.07 ± 0.05 mg/L over the same period (Figure 5B). These findings confirm that piezoelectric stimulation enhances the plant's nutrient uptake, which is likely correlated with increased growth. Similar effects have been observed in microalgae, where ultrasonic stimulation produced microcracks in the cell wall, enhancing nutrient absorption and biomass productivity [13]. Therefore, piezoelectric stimulation appears to increase chlorophyll concentration, enhance photosynthesis, and boost nutrient intake in plants. The detailed mechanisms by which piezoelectricity affects plant physiology are discussed in the following section.

4.4. Discussion of the Mechanisms Underlying the Effects of Piezoelectric Stimulation
This study investigated the effects of piezoelectric stimulation on plant growth, and the results confirm that the stimulation positively influences plant development. Firstly, it promotes root development and growth, leading to increased uptake of nutrients, including nitrogen and phosphorus.

Figure 5. Changes in nitrogen and phosphorus consumption concentrations in S. polyrhiza due to piezoelectric stimulation: (A) nitrogen consumption concentration; (B) phosphorus consumption concentration.

Additionally, stimulation activates Photosystem II in the leaves, resulting in higher chlorophyll concentrations and enhanced growth conditions for the plants. The observed effects can be summarized as follows, with related details illustrated schematically in Figure 6.

Figure 6. Schematic diagram of the primary effects of piezoelectric stimulation on plant cells.

Conclusively, the results suggest that piezoelectric stimulation influences plant growth through several mechanisms. Firstly, the primary mechanism appears to be modulation of the photosynthetic process: the stimulation likely affects electron transfer within Photosystem II, enhancing photosynthetic efficiency. This increased efficiency yields higher production of ATP and NADPH, which in turn supports improved plant growth and development. Additionally, the piezoelectric effect may induce stress responses in plants through physical vibrations. These vibrations likely cause microscopic physical changes in plant cells, potentially leading to structural modifications or disruptions in cell walls; such changes may facilitate enhanced nutrient transfer and absorption, consistent with the observed increase in nutrient uptake following stimulation. Moreover, piezoelectricity may influence the cell membrane's electrical potential, which could affect the uptake of key ions such as calcium; this change in ion absorption is expected to activate signaling pathways that promote cell growth and differentiation. Lastly, piezoelectric stimulation may alter gene expression by inducing or inhibiting specific genes involved in plant physiological responses, likely affecting hormone levels and other related genes and further influencing plant growth. These hypotheses will be tested thoroughly in future studies, which aim to demonstrate that piezoelectricity can serve as a beneficial stimulus for plant growth.

5. Conclusion This study explored the effects of piezoelectric stimulation on plant growth, addressing an underexplored area of research. Using Spirodela polyrhiza, a model aquatic plant, the research demonstrated that piezoelectric stimulation promotes root growth, activates Photosystem II, increases chlorophyll concentration, enhances dry weight, and improves nutrient uptake. These results confirm that piezoelectricity has a positive impact on plant growth by stimulating root development and optimizing nutrient absorption. Furthermore, the activation of Photosystem II supports enhanced photosynthesis and increased chlorophyll levels. Future studies should aim to optimize the parameters of piezoelectric stimulation, such as intensity and frequency, while investigating its integration with other growth-promoting technologies to improve plant growth stability. In addition to plants, the potential application of piezoelectric stimulation in photosynthetic microorganisms opens broader possibilities for future convergence technologies.

References
[1] Hooper, T. E., Roscow, J. I., Mathieson, A., Khanbareh, H., Goetzee-Barral, A. J., & Bell, A. J. (2021). High voltage coefficient piezoelectric materials and their applications. Journal of the European Ceramic Society, 41(13), 6115-6129.
[2] Qian, W., Yang, W., Zhang, Y., Bowen, C. R., & Yang, Y. (2020). Piezoelectric materials for controlling electrochemical processes. Nano-Micro Letters, 12, 1-39.
[3] Wu, Y., Ma, Y., Zheng, H., & Ramakrishna, S. (2021). Piezoelectric materials for flexible and wearable electronics: A review. Materials & Design, 211, 110164.
[4] Lyu, Z., & Xu, Q. (2021). Recent design and development of piezoelectric-actuated compliant microgrippers: A review. Sensors and Actuators A: Physical, 331, 113002.
[5] Turner, B. L., Senevirathne, S., Kilgour, K., McArt, D., Biggs, M., Menegatti, S., & Daniele, M. A. (2021). Ultrasound-powered implants: a critical review of piezoelectric material selection and applications. Advanced Healthcare Materials, 10(17), 2100986.
[6] Augustine, R., Al Mamun, A., Hasan, A., Salam, S. A., Chandrasekaran, R., Ahmed, R., & Thakor, A. S. (2021). Imaging cancer cells with nanostructures: Prospects of nanotechnology driven noninvasive cancer diagnosis. Advances in Colloid and Interface Science, 294, 102457.
[7] Wu, Y., Ma, Y., Zheng, H., & Ramakrishna, S. (2021). Piezoelectric materials for flexible and wearable electronics: A review. Materials & Design, 211, 110164.
[8] Deng, W., Zhou, Y., Libanori, A., Chen, G., Yang, W., & Chen, J. (2022). Piezoelectric nanogenerators for personalized healthcare. Chemical Society Reviews, 51(9), 3380-3435.
[9] Chen, S., Zhu, P., Mao, L., Wu, W., Lin, H., Xu, D., ... & Shi, J. (2023). Piezocatalytic medicine: an emerging frontier using piezoelectric materials for biomedical applications. Advanced Materials, 35(25), 2208256.
[10] Chen, S., Tong, X., Huo, Y., Liu, S., Yin, Y., Tan, M. L., ... & Ji, W. (2024). Piezoelectric Biomaterials Inspired by Nature for Applications in Biomedicine and Nanotechnology. Advanced Materials, 2406192.
[11] Mousavi, S. A., Dubin, A. E., Zeng, W. Z., Coombs, A. M., Do, K., Ghadiri, D. A., ... & Patapoutian, A. (2021). PIEZO ion channel is required for root mechanotransduction in Arabidopsis thaliana. Proceedings of the National Academy of Sciences, 118(20), e2102188118.
[12] Tan, W. H., Ibrahim, H., & Chan, D. J. C. (2021). Estimation of mass, chlorophylls, and anthocyanins of Spirodela polyrhiza with smartphone acquired images. Computers and Electronics in Agriculture, 190, 106449.
[13] Han, S. I., Jeon, M. S., Ahn, J. W., & Choi, Y. E. (2022). Establishment of ultrasonic stimulation to enhance growth of Haematococcus lacustris. Bioresource Technology, 360, 127525.

ICA-DNN: Novel Neural Network Architecture for Prediction of Efficient Power Output in Large-Scale Wave Energy Farms

Author
Full Name (Last Name, First Name): Aussarbekov, Adilet
School Name: Nazarbayev Intellectual School International Baccalaureate

Abstract
Wave energy is a highly promising yet nascent field of research. Large wave energy converter (WEC) farms face significant computational costs because predicting total power output with physics-based models involves strong non-linearities. This study therefore developed a novel neural network architecture, ICA-DNN, fusing Independent Component Analysis (ICA) for dimensionality reduction and feature extraction with Deep Neural Networks (DNN) for forecasting total power output. ICA was chosen, firstly, for its ability to capture complex patterns in non-Gaussian data distributions, since wave dynamics and energy conversion are complicated; secondly, because it can effectively separate independent factors, such as wave direction, height, period, and WEC efficiency, that affect power output. The study utilized the "Large-scale Wave Energy Farm" dataset with 36,000 instances from the UC Irvine Machine Learning Repository. ICA-DNN demonstrated high effectiveness in identifying underlying patterns in spatial features (the power of individual WECs, the q-factor, and many coordinate points) and performed accurately in forecasting the total power output of 49 WECs. The proposed ICA-DNN model outperformed existing neural network models by 99.05% in MSE, 91.08% in MAE, and 17.64% in R² in predicting total power output. The study thus contributes to increasing the efficiency of WEC farms through accurate total power prediction, which can support high-performing real-time prediction models that detect WEC inefficiencies by comparing actual and expected power output, useful for scheduling and operating maintenance.

Keywords Wave Energy, Wave Energy Converter (WEC), Deep Neural Networks, Independent Component Analysis, Machine Learning, Total Power, Wave Dynamics

I. INTRODUCTION
Wave energy is one of the most promising renewable energy sources because of its huge potential and minimal environmental impact, but it is still a nascent field of research [1] [2]. Theoretically, wave energy could generate up to 2.64 trillion kWh, equivalent to 64% of U.S. utility-scale electricity usage in 2021 [3]. A Wave Energy Converter (WEC) is a device that converts the kinetic and potential energy of waves into electrical energy [4]. However, predicting the total power output of WEC farms remains a significant challenge due to non-linearity in wave dynamics and in the energy conversion process [5]. Moreover, predicting total power output with physics-based models is computationally expensive [6]. To resolve these issues, various innovative approaches have been proposed. A considerable number of studies have focused on physics-based modeling and on the hydrodynamic interactions between complex wave dynamics and WECs [7] [8] [9]. Extending these physics-based studies, researchers have developed various machine learning and deep learning algorithms to forecast WEC power output, including Multi-Layer Perceptrons [10], Support Vector Machines, Radial Basis Function Neural Networks [11], and four Long Short-Term Memory neural network architectures [12]. However, no ultimate solution maximizing model performance is available yet; performance could still be improved by fusing unsupervised feature extraction with deep learning algorithms. Therefore, the study aims to:

• Develop a novel neural network architecture, fusing ICA (Independent Component Analysis) and DNN (Deep Neural Network), to enhance performance in predicting total power output.
• Evaluate the fused ICA-DNN model against standard benchmarks: MSE (Mean Squared Error), MAE (Mean Absolute Error), and R² (coefficient of determination); their standard definitions are given below.
• Compare the performance of ICA-DNN with existing state-of-the-art models in MSE, MAE, and R².

The motivation behind the combined ICA and DNN architecture rests on the assumption that ICA is likely the best unsupervised learning algorithm for extracting features from the WEC farm data, for the following reasons:

• Complex wave interactions typically yield non-Gaussian data distributions. Because ICA assumes a non-Gaussian data distribution and works with higher-order statistics, it could capture nonlinear patterns more effectively than PCA (Principal Component Analysis), which works primarily with Gaussian data distributions and low-order statistics. These nonlinear patterns could arise from the energy conversion process due to intricate wave dynamics.
• Complex wave energy dynamics introduce many independent factors affecting the power output of an individual WEC, such as wave direction, wave height, and wave period, along with WEC efficiency and the q-factor, which measures the effectiveness of WEC arrangements. Because ICA is good at separating mixed signals, it could effectively separate these independent factors into distinct independent components for a highly accurate model.
• ICA can effectively reduce the dimensions of the WEC farm dataset to prepare data for subsequent DNN regression by separating independent components and reducing noise from inaccurate measurements, environmental factors, and WEC inefficiency.

Therefore, the fusion of ICA for unsupervised independent feature extraction and DNN for regression, specifically to predict a large-scale WEC farm's total power output, holds the potential to achieve high model accuracy. Although ICA seems well suited for capturing patterns in complex wave energy dynamics, it was not considered in previous studies. The approach presented in this study promises to address computational challenges and opens opportunities for future research on the optimization of wave energy farms. Understanding total power output is crucial for large-scale WEC farms, as it would allow them to detect operational inefficiencies based on substantial deviations between the actual and predicted power output of WECs. Thus, ICA-DNN can be particularly effective for the future development of adaptive real-time power monitoring systems that would guide maintenance schedules. The remaining parts of the paper are structured as follows: Section II provides an in-depth review of related work, Section III describes the methodology in detail, Section IV discusses the dataset and evaluation metrics for the experimental setup, Section V presents the obtained results, discusses the findings, and compares ICA-DNN performance with state-of-the-art models, and Section VI concludes the work.

II. RELATED WORK In recent years, a considerable amount of research has been done on the prediction of total power output in renewable energy systems, with a particular focus on WEC farms. The approaches presented in these studies can be broadly classified into two main directions: physics-based models and machine learning models.

Extensive research in physics-based models has produced a diversified set of techniques for WEC power prediction. A variety of approaches rely on simulations of wave dynamics and WEC mechanics. For instance, Khalil et al. [13] developed a WEC power prediction model with a real-time data assimilation technique using an ensemble Kalman filter. The approach integrates physics-based models and data-driven error corrections to achieve better performance. Meek [7] conducted comprehensive hydrodynamic modeling of different WEC configurations, including single-tether, three-tether, four-tether, and five-tether, to provide insights into their performance under different conditions. David et al. [8] give valuable insights into physics-based simulations of arrays of submerged three-tethered CETO-6 wave energy converters, focusing on nonlinear modeling of complex wave dynamics. Despite these advances, physics-based models remain computationally expensive and are therefore mostly impractical for long-term real-time applications.

To address those limitations, much attention has been given to the development of machine learning and deep learning models, as they can capture complex non-linear patterns without explicit physical modeling. For instance, Burramukku [10] proposed the Multi-Layer Perceptron (MLP) framework as a baseline for understanding the data and the performance of neural networks on data collected from a fully submerged WEC. Wang et al. [11] conducted a comparative analysis of Long Short-Term Memory (LSTM), Support Vector Machines (SVM), a Radial Basis Neural Network, and a Backpropagation Neural Network, relying on data collected from a two-body hinge-barge WEC. Nalamati [12] employed and tested four LSTM models for total power forecasting of a CETO wave energy farm: Vanilla, Stacked, and Bidirectional LSTM as well as CNN-LSTM. Numerous studies have developed physics-informed machine learning techniques to understand WEC hydrodynamics and improve model performance. For instance, Li et al. [9] presented a physics-constrained Gaussian Process model to predict the hydrodynamic characteristics of all array layouts of WECs. Chen and Yu [14], on the other hand, developed a Physics-Informed Neural Network (PINN) based on the governing motion equations of a floating-sphere WEC and time-series segmentation.

The literature reveals a growing interest in understanding the physics behind hydrodynamic interactions between wave dynamics and WECs, as well as in predicting WEC power using state-of-the-art deep learning techniques, with a specific focus on LSTM. However, to the best of our knowledge, there is no prior research on fusing unsupervised machine learning with neural network architectures. To address this gap, the current study combines Independent Component Analysis (ICA), which fits complex wave dynamics well, with a Deep Neural Network (DNN). The study demonstrates significant improvement in MSE and MAE compared to the LSTM models presented by Nalamati [12] on the same dataset. Specifically, ICA-DNN outperformed the best-performing Bi-LSTM by 99.47% in MSE and by 93.31% in MAE.

III. METHODOLOGY The study proposes a novel architecture combining Independent Component Analysis (ICA) for dimensionality reduction and feature extraction with a Deep Neural Network (DNN) for regression. ICA is known for its effectiveness in separating different sources of data; it is particularly effective at separating multivariate non-Gaussian components into additive signals [15]. Therefore, it is effective for capturing underlying patterns in spatial features such as the many coordinate points, the powers of individual WECs, and the q-factor, while a DNN with 3 hidden layers and a linear activation function at the last layer is effective for predicting the exact value of Total Power.

3.1 ICA Layer
3.1.1 Structure of the ICA Layer
1) Centering shifts the data to zero mean to make it easier to handle.
2) Whitening makes the data points uncorrelated and establishes a variance of 1, simplifying the calculation of independent components.
3) The FastICA Update Rule updates weight vectors iteratively to maximize the independence of components by finding the correct directions for the weight vectors.
4) Normalization stabilizes ICA, keeping the weight vectors at unit length.
5) Deflation Orthogonalization ensures the independence of components by making new vectors orthogonal to previously found vectors.
6) Recovering Independent Components transforms the final weight vectors into independent components.

3.1.2 Centering
As the first layer of the proposed neural network, ICA was set to reconstruct the input data into 128 components over 10 iterations and pass them to the DNN. In the fused architecture, ICA itself consists of several sub-steps. The first step is preprocessing, consisting of centering and whitening. The data is centered by subtracting the mean, ensuring that the average is zero:

$$X_{\text{centered}} = X - E[X]$$

The formula simplifies the data matrix by centering the data around zero for each feature (the columns of the data matrix X) to prepare for further whitening. Figure 1 represents the data distribution after centering.

Figure 1. Centered Data

3.1.3 Whitening
Then, the data is whitened: it is transformed so that the data points are uncorrelated and the variance equals 1. The first step of whitening in FastICA is calculating the covariance matrix using the following formula:

$$C = \frac{1}{m} X_{\text{centered}}^{T} X_{\text{centered}}$$

which expands to

$$C = \frac{1}{m}\begin{pmatrix} x_{11} & \cdots & x_{m1} \\ \vdots & \ddots & \vdots \\ x_{1n} & \cdots & x_{mn} \end{pmatrix}\begin{pmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{pmatrix}$$

The covariance matrix gives insight into how two random variables vary together. The product of the transposed data matrix and the data matrix gives how each feature in the dataset varies with respect to the other features; dividing by the m data points then yields the covariance matrix.

Figure 2. Heatmap of Covariance Matrix of Whitened Data

As shown in Figure 2, the diagonal values are all close to 1, while the off-diagonal values are close to 0. This indicates that all features were scaled successfully, having unit variance and no correlation. Therefore, the whitening was successful, with the covariance matrix being very close to the identity matrix I. Next, the covariance matrix goes through eigenvalue decomposition, where the matrix is factored into its eigenvalues and eigenvectors. This is an important step, as it identifies patterns in the data. The following formula is applied:

$$C = E D E^{T}$$

where E is the eigenvector matrix, D is the diagonal matrix of eigenvalues, and $E^{T}$ is the transpose of E:

$$E = \begin{pmatrix} e_{11} & \cdots & e_{1n} \\ \vdots & \ddots & \vdots \\ e_{n1} & \cdots & e_{nn} \end{pmatrix},\quad D = \begin{pmatrix} \lambda_{1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_{n} \end{pmatrix}$$

Such eigenvalue decomposition reveals the main directions along which the values in the dataset vary.



The whitening transformation matrix is calculated using the following formula:

$$W_{\text{white}} = E D^{-\frac{1}{2}} E^{T}$$

To give each direction unit variance (a variance of 1), the eigenvalues are scaled by taking the reciprocal of the square root of each entry of D. Multiplying by E and $E^{T}$ then yields the whitening matrix. Finally, the whitened, uncorrelated data matrix with unit variance is derived by applying this transformation to the centered data:

$$X_{\text{white}} = W_{\text{white}} X_{\text{centered}}$$
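The following is a minimal NumPy sketch of the centering and whitening steps above; it assumes X is an (m, n) data matrix with instances as rows, and is an illustration rather than the study's actual implementation.

```python
# Minimal sketch of centering and whitening (row-per-instance convention).
import numpy as np

def center_and_whiten(X):
    X_centered = X - X.mean(axis=0)                       # zero mean per feature
    C = (X_centered.T @ X_centered) / X.shape[0]          # covariance C = (1/m) X^T X
    eigvals, E = np.linalg.eigh(C)                        # decomposition C = E D E^T
    D_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals + 1e-10))  # D^{-1/2}, small eps for stability
    W_white = E @ D_inv_sqrt @ E.T                        # whitening transformation matrix
    X_white = X_centered @ W_white                        # uncorrelated data, unit variance
    return X_centered, X_white
```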

3.1.4 FastICA Update Rule
Weights are initialized randomly, and the following formula is used to update them:

$$w \leftarrow E[x\, g(w^{T} x)] - E[g'(w^{T} x)]\, w$$

where g(·) is the non-linear hyperbolic tangent function (tanh), which measures to what extent the data is non-Gaussian. $E[x\, g(w^{T} x)]$ is the mean of the data points weighted by the transformed projection; it can be rewritten as $\frac{1}{m}\sum_{i=1}^{m} x_{i}\, g(w^{T} x_{i})$. This step ensures that the weights are updated in a way that increases non-Gaussianity. $E[g'(w^{T} x)]$ is the expected value of the derivative of the transformed projection; it can be expanded as $\frac{1}{m}\sum_{i=1}^{m} g'(w^{T} x_{i})$ and ensures proper scaling of the update.

3.1.5 Normalization
After the update stage, the weight vectors are normalized to ensure that they remain unit vectors. Normalization is done by dividing each weight vector by its norm, scaling it to a length of 1:

$$w \leftarrow \frac{w}{\|w\|}$$

3.1.6 Deflation Orthogonalization

$$w_{i} \leftarrow w_{i} - \sum_{j=1}^{i-1} (w_{i}^{T} w_{j})\, w_{j}$$

where $w_{i}$ is the current weight vector and the $w_{j}$ are the previously found weight vectors. Their dot product $w_{i}^{T} w_{j}$ gives the projection. By subtracting the projections, the current weight vector $w_{i}$ is orthogonalized, making it perpendicular to the previously found vectors. Thus, the independence of the components is maintained.

3.1.7 Recovering Independent Components

$$S = W X_{\text{white}}$$

Finally, multiplying the weight matrix W by the whitened data $X_{\text{white}}$ gives the 128 independent components.
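Below is a condensed, illustrative sketch of the FastICA loop described in 3.1.4 to 3.1.7 (tanh nonlinearity, unit-norm weights, deflation). The component and iteration counts follow the values stated above, but the code is a simplification under those assumptions, not the study's exact implementation.

```python
# Sketch of FastICA with deflation; X_white is (m, d) whitened data.
import numpy as np

def fast_ica(X_white, n_components=128, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    m, d = X_white.shape
    W = np.zeros((n_components, d))
    for i in range(n_components):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)                               # start from a random unit vector
        for _ in range(n_iter):
            wx = X_white @ w                                 # projections w^T x
            w_new = (X_white * np.tanh(wx)[:, None]).mean(0) \
                    - (1 - np.tanh(wx) ** 2).mean() * w      # FastICA update rule
            w_new -= W[:i].T @ (W[:i] @ w_new)               # deflation orthogonalization
            w = w_new / np.linalg.norm(w_new)                # normalization to unit length
        W[i] = w
    S = X_white @ W.T                                        # recovered independent components
    return S, W
```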

3.2 DNN Layer
The second layer of the proposed architecture is a Deep Neural Network (DNN) regression model aimed at predicting the total power output of WEC farms, taking the independent components from ICA as input. The DNN relies on the Rectified Linear Unit (ReLU), which allows it to learn non-linear patterns in the data; therefore, the 3 hidden layers employ ReLU activation. The forward pass for neuron i with inputs indexed by j is defined as:

$$y_{i} = \mathrm{ReLU}\left(\sum_{j=1}^{n} w_{ij} x_{j} + b_{i}\right)$$

The formula applies the ReLU activation to the weighted sum plus bias, providing the non-linearity and model capacity needed to detect hidden patterns in the independent components. Weights are updated every batch according to:

$$w_{ij} \leftarrow w_{ij} - \alpha \frac{\partial E}{\partial w_{ij}}$$

where the product of the learning rate $\alpha$ and the derivative of the error with respect to the weight, $\frac{\partial E}{\partial w_{ij}}$, is subtracted from the given weight $w_{ij}$. Thus, the DNN compares the predicted output with the actual output and updates the weights, improving ICA-DNN's performance. Finally, the output layer employs linear activation to predict the continuous value of the Total Power Output of the 49 WECs.
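A minimal PyTorch sketch of such a DNN head is shown below; the layer widths follow the best grid-search configuration reported in Section V, while the framework choice is an assumption made here for illustration.

```python
# Sketch of the regression head: three ReLU hidden layers, linear output.
import torch.nn as nn

class PowerDNN(nn.Module):
    def __init__(self, n_inputs=128):                # 128 independent components from ICA
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),     # hidden layer 1
            nn.Linear(128, 32), nn.ReLU(),           # hidden layer 2
            nn.Linear(32, 32), nn.ReLU(),            # hidden layer 3
            nn.Linear(32, 1),                        # linear output: Total Power
        )

    def forward(self, x):
        return self.net(x)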

3.3 Hyperparameter Optimization Techniques
To optimize the model's performance, the study used a grid search for hyperparameter tuning. Specifically, the grid search looked for the best combination of:

• ICA components: 64 and 128
• Neurons in each of the 3 hidden layers: 32, 64, and 128
• Learning rates: 0.001 and 0.01
• Batch sizes: 32 and 64

Overall, 216 combinations were evaluated during hyperparameter optimization to maximize the accuracy of the proposed model architecture, as sketched below.
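The search space above can be enumerated as in the following sketch, where build_and_train and evaluate_mse are hypothetical helpers standing in for the ICA-DNN training and evaluation pipeline; the grid yields 2 x 3^3 x 2 x 2 = 216 combinations, matching the count stated above.

```python
# Sketch of the grid search over the stated hyperparameter grid.
from itertools import product

ica_components = [64, 128]
neurons = [32, 64, 128]
learning_rates = [0.001, 0.01]
batch_sizes = [32, 64]

best = (float("inf"), None)
for n_ica, n1, n2, n3, lr, bs in product(
        ica_components, neurons, neurons, neurons, learning_rates, batch_sizes):
    model = build_and_train(n_ica, (n1, n2, n3), lr, bs)   # hypothetical helper
    mse = evaluate_mse(model)                              # hypothetical helper
    if mse < best[0]:
        best = (mse, (n_ica, n1, n2, n3, lr, bs))
```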

IV. DATASET
4.1 Dataset
The study utilized the dataset "Large-scale Wave Energy Farm" from the UC Irvine Machine Learning Repository [16]. The dataset originally derives from a study conducted on the University of Adelaide's Phoenix HPC and published at the GECCO conference, where it received the "Best Paper" award [17]. The data were collected for two locations, Perth and Sydney, each with layouts of 49 CETOs (fully submerged three-tether WECs). For the sake of research quality and data consistency, the study focuses on the data collected from Perth, as there are no meaningful differences in data distribution; nevertheless, the study will also test ICA-DNN performance on data collected from Sydney. The dataset describes the 49 WECs through 148 features and 36,000 instances. The dataset features include:

• Coordinates of the 49 WECs, which are crucial for understanding hydrodynamic interactions and spatial dynamics among WECs. They are denoted from X1 and Y1 to X49 and Y49.
• Power generated by each individual WEC, denoted Power1 to Power49.
• The q-factor, a measure of the efficiency of wave energy conversion across the 49 WECs, denoted qW.
• Total Power Output, the target variable that the study aims to predict.

4.2 Data Preprocessing
The chosen dataset had no missing values, so the only preprocessing step was scaling. The study used StandardScaler to standardize all input features (zero mean, unit variance) to ensure that all features contribute equally during the learning process. For the target column "Total_Power", MinMaxScaler was used to map values into [0, 1] for stable training and to reduce loss magnitudes caused by large raw values. Furthermore, the dataset was split into training and test sets, 80% and 20% respectively, to assess model performance on unseen data. Thus, 28,800 data instances were used to train ICA-DNN, whereas 7,200 instances were used to evaluate the model's performance, as in the sketch below.
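A scikit-learn sketch of this preprocessing, assuming for illustration that the dataset has been loaded into a pandas DataFrame named df:

```python
# Sketch: scale features and target, then split 80/20.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = df.drop(columns=["Total_Power"]).values   # the 148 input features (df is assumed)
y = df[["Total_Power"]].values                # the regression target

X = StandardScaler().fit_transform(X)         # standardize inputs: zero mean, unit variance
y = MinMaxScaler().fit_transform(y)           # scale the target to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```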

4.3 Evaluation Metrics
To evaluate the performance of ICA-DNN, the study used standard benchmarks: Mean Squared Error (MSE), Mean Absolute Error (MAE), and the Coefficient of Determination (R²).
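These three benchmarks can be computed with scikit-learn as follows, given arrays y_true and y_pred of actual and predicted total power (names assumed for illustration):

```python
# Sketch: the three evaluation metrics used throughout the paper.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

mse = mean_squared_error(y_true, y_pred)   # lower is better
mae = mean_absolute_error(y_true, y_pred)  # lower is better
r2 = r2_score(y_true, y_pred)              # closer to 1 is better
```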

V. RESULTS Figure 3 displays the correlation coefficients of the ICA components, the first layer of the proposed algorithm (ICA-DNN). According to the figure, the 128 ICA components have a mean correlation coefficient of 0.0385 with the target variable, Total Power Output, indicating only slight correlation overall. The highest correlation coefficients were demonstrated by independent components 78 and 81: component 78 showed the highest positive correlation coefficient of 0.3325. Such a comparatively strong correlation suggests that this component captures important aspects of WEC behavior, potentially connected with hydrodynamic interactions or optimal WEC conditions. Component 81, on the other hand, demonstrated a moderate negative correlation of -0.2193, which could correspond to mechanical losses or suboptimal wave conditions. A standard deviation of 0.0614 indicates that only a few components have substantially stronger correlation with the target variable. Figure 4 visualizes the change in data distribution after applying ICA. Overall, ICA effectively isolated 128 independent components, reducing the dimensionality of the dataset and minimizing noise in the data. Thus, ICA's unsupervised feature extraction provided a cleaner input for the DNN, which enabled a more accurate DNN model with reduced complexity for predicting the Total Power of large-scale WEC farms. It enhances the robustness of the ICA-DNN model in tackling the challenge of non-linearity in wave energy prediction.



Figure 3. Correlation Coefficients of Independent Components

Figure 4. Data Distribution of Centered Data and ICA-Transformed Data

Hyperparameter tuning using grid search identified the best parameters: {'layer1_neurons': 128, 'layer2_neurons': 32, 'layer3_neurons': 32, 'learning_rate': 0.001, 'batch_size': 64, 'n_independent_components': 128}, with an MSE of 0.000016, MAE of 0.0029, and R² of 0.9993. Such results after 10 epochs indicate that the fused architecture of ICA and DNN is a near-perfect model for predicting the total power output of large-scale WEC farms. To validate the high performance of ICA-DNN, it was compared with existing state-of-the-art models: DNN without ICA achieved an MSE of 0.0080, MAE of 0.0735, and R² of 0.6622; DNN with a Bernoulli RBM as the first layer achieved an MSE of 0.0021, MAE of 0.0328, and R² of 0.9095; DNN with PCA as the first layer achieved an MSE of 0.0010, MAE of 0.0249, and R² of 0.9599; and DNN with an Autoencoder as the first layer achieved an MSE of 0.0013, MAE of 0.0258, and R² of 0.9457.



Table 5 displays the performance of the named models in structured form with MSE, MAE, and R². Furthermore, Table 6 shows the improvement, in percent, of the proposed neural network model over the existing state-of-the-art models. To calculate to what extent the proposed architecture is better, the following formula was used:

$$\text{Improvement (\%)} = \frac{\text{Performance} - \text{Performance of ICA-DNN}}{\text{Performance}} \times 100$$

Model                  | MSE      | MAE      | R²
ICA-DNN                | 0.000016 | 0.0029   | 0.9993
DNN without ICA        | 0.0080   | 0.0735   | 0.6622
DNN with Bernoulli RBM | 0.0021   | 0.0328   | 0.9095
DNN with PCA           | 0.0010   | 0.0249   | 0.9599
DNN with Autoencoder   | 0.0013   | 0.0258   | 0.9457
Vanilla LSTM           | 0.003286 | 0.045399 | —
Stacked LSTM           | 0.003121 | 0.044115 | —
Bi-LSTM                | 0.003041 | 0.043342 | —
CNN-LSTM               | 0.003994 | 0.049092 | —

Table 5: MSE, MAE, and R² metrics for ICA-DNN and state-of-the-art models

Model                  | MSE, ICA-DNN excels by (%) | MAE, ICA-DNN excels by (%) | R², ICA-DNN excels by (%)
DNN without ICA        | 99.80 | 96.06 | 50.87
DNN with Bernoulli RBM | 99.24 | 91.46 | 9.87
DNN with PCA           | 98.40 | 88.35 | 4.10
DNN with Autoencoder   | 98.77 | 88.76 | 5.66
Vanilla LSTM           | 99.51 | 93.61 | —
Stacked LSTM           | 99.49 | 93.43 | —
Bi-LSTM                | 99.47 | 93.31 | —
CNN-LSTM               | 99.60 | 94.09 | —

Table 6: Percentage by which ICA-DNN outperforms state-of-the-art models in MSE, MAE, and R²



According to Table 6, the minimum improvement of ICA-DNN over the compared DNN models is 98.40% in MSE, 88.35% in MAE, and 4.10% in R². Such improvements demonstrate that ICA-DNN substantially outperformed the existing models. Overall, the mean MSE improvement is 99.05%, the mean MAE improvement is 91.08%, and the mean R² improvement is 17.64%, which are signs of the effective performance of the fused ICA and DNN architecture. Additionally, Table 5 lists the MSE and MAE values of the LSTM models presented by Nalamati [12]. According to these values, ICA-DNN outperformed those models by at least 99.47% in MSE and 93.31% in MAE. Therefore, ICA-DNN is superior to the existing state-of-the-art models.

VI. CONCLUSION The proposed fused neural network architecture combining ICA and DNN represents a significant step forward in predicting the output of large-scale WEC farms. Using ICA as the first layer of the DNN isolated 128 crucial independent components, which enabled the development of a more accurate predictive model. The effective combination of two sophisticated machine learning algorithms allowed the study to achieve significant performance, outperforming existing neural network models by 99.05% in MSE, 91.08% in MAE, and 17.64% in R². The success of ICA-DNN not only sets a benchmark for future research on its application to wave energy, but also holds potential for minimizing operational costs and maximizing energy output from large WEC farms. That is, predicting the exact amount of power output would allow WEC farms to detect operational inefficiencies by comparing actual and predicted total power output. Thus, future studies can develop real-time power monitoring systems based on ICA-DNN to better schedule and carry out the maintenance of WECs.

VII. REFERENCES
[1] Rehman, S., Alhems, L. M., Alam, M. M., Wang, L., & Toor, Z. (2023). A review of energy extraction from wind and ocean: Technologies, merits, efficiencies, and cost. Ocean Engineering, 267, 113192.
[2] Kimmel, M., Collins, B., Cheung, A., Becker, L., Boyle, R., Strahan, D., et al. (2019). Global trends in renewable energy investment 2019. United Nations Environment Programme.
[3] Asif, M. (2024). Renewable Energy: Technologies, Applications and Trends. In Handbook of Energy and Environment in the 21st Century (pp. 41-65). CRC Press.
[4] Gao, Q., Ertugrul, N., Ding, B., Negnevitsky, M., & Soong, W. L. (2023). Analysis of wave energy conversion for optimal generator sizing and hybrid system integration. IEEE Transactions on Sustainable Energy.
[5] Penalba, M., Giorgi, G., & Ringwood, J. V. (2017). Mathematical modeling of wave energy converters: A review of nonlinear approaches. Renewable and Sustainable Energy Reviews, 78, 1188-1207.
[6] Deberneh, H., & Kim, I. (2018). Predicting output power for nearshore wave energy harvesting. Applied Sciences, 8(4), 566.
[7] Meek, M. H. (2023). Hydrodynamic Modeling of Submerged Wave Energy Converters: Power Take-Off Mooring Configuration Effect on Power Performance.
[8] David, D. R., Wolgamot, H., Kurniawan, A., Hansen, J., & Rijnsdorp, D. (2024). Nonlinear modelling of arrays of submerged wave energy converters. Ocean Engineering, 310, 118669.



[9] Li, M., Jia, G., Mahmoud, H., Yu, Y. H., & Tom, N. (2023). Physics-constrained Gaussian process model for prediction of hydrodynamic interactions between wave energy converters in an array. Applied Mathematical Modelling, 119, 465-485.
[10] Burramukku, B. (2020). Estimator model for prediction of power output of wave farms using machine learning methods. arXiv preprint arXiv:2011.13130.
[11] Wang, L., Wen, C., Wu, S., & Wu, S. (2024). Electric power prediction of a two-body hinge-barge wave energy converter using machine learning techniques. Ocean Engineering, 305, 117935.
[12] Nalamati, D. (2021). Forecasting power output of wave farm using machine learning: LSTM model.
[13] Khalil, M., Ströfer, C. M., Raghukumar, K., & Dallman, A. (2022). Wave sequential data assimilation in support of wave energy converter power prediction. arXiv preprint arXiv:2209.15115.
[14] Chen, B. C., & Yu, Y. H. (2023, June). A Preliminary Study of Learning a Wave Energy Converter System Using Physics-Informed Neural Network Method. In International Conference on Offshore Mechanics and Arctic Engineering (Vol. 86908, p. V008T09A077). American Society of Mechanical Engineers.
[15] Tharwat, A. (2021). Independent component analysis: An introduction. Applied Computing and Informatics, 17(2), 222-249.
[16] Sergiienko, N. Y., Neshat, M., Da Silva, L. S., Alexander, B., & Wagner, M. (2020, August). Design optimisation of a multi-mode wave energy converter. In International Conference on Offshore Mechanics and Arctic Engineering (Vol. 84416, p. V009T09A039). American Society of Mechanical Engineers.
[17] Neshat, M., Alexander, B., Sergiienko, N., & Wagner, M. (2023). Large-scale Wave Energy Farm. UCI Machine Learning Repository.
[18] Neshat, M., Alexander, B., Sergiienko, N. Y., & Wagner, M. (2020, June). Optimisation of large wave farms using a multi-strategy evolutionary framework. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference (pp. 1150-1158).
[19] Naik, G. R., & Kumar, D. K. (2011). An overview of independent component analysis and its applications. Informatica, 35(1).
[20] Palla, G. L. P., & Pani, A. K. (2023). Independent component analysis application for fault detection in process industries: Literature review and an application case study for fault detection in multiphase flow systems. Measurement, 209, 112504.
[21] Hyvärinen, A., Khemakhem, I., & Morioka, H. (2023). Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning. Patterns, 4(10).
[22] Wang, H., Sun, J., Xi, Z., Dai, S., Xing, F., & Xu, M. (2024). Recent Progress on Built-in Wave Energy Converters: A Review. Journal of Marine Science and Engineering, 12(7), 1176.



Correlation between Time of Light Exposure and Plant Growth

Author Full Name (Last Name, First Name): Bang, Seoyun
School Name: Saint Paul Academy Daechi

Abstract Differences in the amount of time a plant is exposed to Light Emitting Diode (LED) lighting can influence the plant's growth rate and leaf changes. The aim of this paper is to investigate how the growth speed and health of sweet basil in an incubator environment are affected by the amount of time the plants are exposed to LED lighting, while humidity and temperature are held constant. The characteristics of basil that respond to the amount of lighting include not only the growth speed of stems and leaves but also color changes; because color changes are difficult to measure quantitatively, this report focuses on growth speed. Designing an optimized experimental environment and performing the experiment and observation requires attention to several factors, such as clarifying experimental standards and conditions through repeated experiments grounded in an understanding of plant biology, and using reliable tools such as MATLAB for accurate statistical analysis. In this experiment, basil plants were divided into three groups according to LED light exposure time: 24 hours, 12 hours, and 6 hours. The basil groups were cultivated and observed at a constant temperature of 26.5 degrees Celsius and humidity of 50.2%. As a result of the experiment, the group exposed to light for 24 hours had the highest growth rate, and the group exposed to light for 12 hours had the lowest. The growth rate of the group exposed to light for 6 hours was intermediate, but its stem thickness was 30% thinner than the other groups. As urban population density intensifies and water-scarce areas increase, research is needed on ways to secure crops efficiently even in urban areas, breaking away from traditional crop cultivation that requires vast plains and enormous amounts of water. As part of this effort, we can prepare for future food competition across the globe by improving production yield through studying how plant growth is affected by various factors in artificial environments using low-power devices such as LED lighting.

Keywords Ocimum basilicum, Hydroponic System, LED Light Stimulation, Light Duration, Controlled Environment Agriculture (CEA)



Introduction
The effect of illuminance on plant growth has been a long-standing area of biological investigation, particularly with regard to agriculture and indoor horticulture. This experiment was conducted to determine how different durations of LED lighting influence basil. The study varies the period of light from LED sources to determine how artificial light affects the development rate of basil plants, focusing on changes in key characteristics, namely stem length and leaf size, while other factors such as humidity and temperature are kept constant. This research not only widens limited knowledge but also contributes to the understanding of light physiology in plants grown under artificial conditions that approximate the natural habitat.

Light stimulation, exposure to photons, is essential for plant growth (Rosenbusch et al., 2021). Light, especially natural sunlight, drives photosynthesis, which transforms light energy into the chemical energy plants use while growing. According to research, light-emitting diodes, also known as LED lights, can imitate sunlight to some extent by providing a similar light wavelength and color rendition. Hence, LED lights often replace sunlight in indoor environments, such as greenhouses or laboratories, when plant experiments are conducted. LED light influences the performance of enzymes, gene expression, cell wall formation, plant defense, and postharvest quality (Saikat et al., 2024).

Ocimum basilicum, more commonly known as basil, one of the main herbs used for its fragrant leaves for over 200 years, was an ideal choice for the simple reason that it grows relatively fast and easily adjusts to various environmental conditions. Basil is native to tropical and subtropical regions, mainly in Asia, such as India, Pakistan, Iran, and Thailand. The species requires a temperature of at least about 75 °F to germinate and does not tolerate relatively cold temperatures (Rindels, 1997). The market size of basil in 2023 was US$1,515.1 million, and the global growth rate from 2023 to 2033 was predicted to continue at 4.1%, which indicates that investigating the most suitable indoor environment for basil growth is crucial (Choudhury, 2023).

Since the focus was basil's growth in indoor conditions, extensive background research was done beforehand. According to several experimental results (MD Momtazur et al., 2021; Sipios et al., 2021), the most suitable LED light exposure time for basil plants is 12 hours. Over-lighting would overheat the leaves and begin to dry them out; it would also cause color changes in the basil leaves, from green to brown. Conversely, basil seedlings exposed to darkness for longer than to light were not capable of creating enough energy through photosynthesis and showed a slower growth rate (Mohammed et al., 2019).

In this study, the hypothesis was that plant growth depends on light duration. I investigated how different amounts of LED light exposure affect the growth of basil, and designed an experiment to control the amount of LED light the plants receive while monitoring plant height.

Materials and Methods
a) Basil
The sweet basil seedlings were purchased from the website Zaram Home. Three basil seedlings were used for each group, so a total of nine seedlings were used for the experiment: three in a control group and three in each of two experimental groups.

b) Hydroponic system
To match the typical natural and indoor environment of basil, a Plant Encyclopedia Hydroponic System (R-R-SWY-SGS-37, Shenzhen IGS Electronics CO.LTD, China, July 2022) was utilized, which provides an appropriate amount of LED light and water based on pre-set times. The system provides red, blue, and white light for 10 hours a day. The water was refilled every 5 days for the basil seedlings to absorb. Additional materials were two black boxes, an extra LED light that emits the same light quantity, and the nine basil seedlings. Seedlings were used instead of seeds to measure the growth of the basil better, since natural basil seeds take about 2 weeks to start germinating. Before the experiment began, the prediction was set: the basil exposed to light for the most time, Group 1, would grow at the quickest rate, and the basil exposed to light for the least amount of time, Group 3, would grow at the slowest rate.

c) Experimental method
Nine basil seedlings were planted in the hydroponic plant-growing system. The basil seedlings in the control group, named Group 1, were exposed to the LED lights provided by the hydroponic system for 24 hours every day, from 16:00 to 16:00. Since the hydroponic system had its light turned on for only 16 hours, the extra LED light was used for the remaining lighting. The basil seedlings in the experimental group, Group 2, were exposed to the LED light 12 hours a day, from 20:00 to 8:00, and were covered with a black box that blocked light exposure for 12 hours, from 8:00 to 20:00. The basil seedlings in the other experimental group, Group 3, were exposed to LED light for 6 hours, from 24:00 to 6:00, and were covered with a black box for 18 hours, from 6:00 to 24:00. The temperature and the humidity of the environment were the constant variables. A visual representation is shown in the schematic diagram below (Figure 1). The length of the stems and the length of the biggest leaf of each of the nine plants were measured every two days, since the basil did not show enough growth to measure every day. Then, the average stem length (cm) and the average length of the biggest leaf (cm) were calculated and recorded for each group. The experiment ran for 21 days, from July 9th, 2024 to July 30th, 2024.

d) Measurement
Basil seedlings were measured with both a regular plastic ruler and an Arduino-based measuring setup. The measurements were recorded in tables, and the average lengths were calculated for each measurement day. The average temperature over the measured days was 26.3 degrees Celsius, and the average humidity was 50.2%.

Figure 1. Schematic of experimental setup

Result
Over the 21 days of the experiment, the change in the average stem length was recorded. As shown in the graph in Figure 2, the average stem length differed between the groups, which means that the amount of time the basil was exposed to LED light (and covered from it) did affect the basil's growth rate. From the graph, the average stem length of Group 1 (Light 24 hrs) increased the most rapidly, meaning its growth rate was the fastest. In contrast, the basil seedlings in Group 2 (Light 12 hrs) appeared to have the slowest growth rate.

Figure 2. The graph of the average length of stem for three different groups

The change in the average length of the biggest leaf was also recorded. The average length of the biggest leaves of the basil seedlings differed as well, as shown in Figure 3; the three groups showed distinct growth rates. From the graph, Group 1 (Light 24 hrs) appeared to have the fastest growth rate, and Group 2 (Light 12 hrs) the slowest. The values for Group 1 (Light 24 hrs) began increasing rapidly from day 9.

Figure 3. The graph of the average length of the biggest leaves for three different groups



Discussion
To compare the growth rates of the groups with different LED light durations, MATLAB was used to calculate the fitted equations and slopes of each group's graph (Figure 5, Figure 6). Blue represents Group 1 (Light 24 hrs), red represents Group 2 (Light 12 hrs), and light blue represents Group 3 (Light 6 hrs). As shown below (Figure 4), the data set and the code were entered in the Command Window, which calculated the quadratic equation for each group. The coefficient of the linear term approximates each group's initial growth rate. Group 1 (Light 24 hrs) had the equation y = 0.01026x² + 0.1553x + 4.65, giving a growth rate of about 0.1553. Group 2 (Light 12 hrs) had the equation y = 0.01122x² + 0.1091x + 4.485, giving a growth rate of about 0.1091. Group 3 (Light 6 hrs) had the equation y = 0.009819x² + 0.139x + 3.854, giving a growth rate of about 0.139. Therefore, Group 1 (Light 24 hrs) had the fastest growth rate, followed by Group 3 (Light 6 hrs), with Group 2 (Light 12 hrs) the slowest.

Figure 4: The code entered into MATLAB to calculate the growth rate of the plants
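For readers without MATLAB, an equivalent quadratic fit can be reproduced in Python with numpy.polyfit; this illustrative sketch uses the Group 1 stem data from the Appendix and should closely recover the coefficients reported above.

```python
# Quadratic fit of average stem length vs. day (Group 1, Light 24 hrs),
# mirroring the MATLAB polyfit shown in Figure 4.
import numpy as np

days = np.array([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21])
group1_stem = np.array([4.8, 5.2, 5.8, 6.4, 6.9, 7.2, 7.9, 9.4, 10.9, 11.7, 11.9])

a, b, c = np.polyfit(days, group1_stem, 2)        # y = a*x^2 + b*x + c
print(f"y = {a:.5f}x^2 + {b:.4f}x + {c:.3f}")     # b approximates the initial growth rate
```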

Figure 5. Change in value for the Average Length of Stem (cm) per day



Figure 6. Change in value for the Average Length of Leaf (cm) per day

Conclusions
The group exposed to the LED light for the longest duration, 24 hours, showed the most rapid growth compared to the groups exposed for shorter durations, 12 hours and 6 hours. Hence, the hypothesis that plant growth depends on light duration was supported. Moreover, although color changes and stem thickness were not measured systematically, the leaves exposed to less LED light clearly turned progressively pale, while the leaves that received more LED light turned a darker green; the 6-hour light exposure group also showed thinner stems than the other groups. Future work on this subject could measure color and stem thickness changes in detail and extend the findings toward related applications, including human medicine.

Reference
1) Rosenbusch, I., Matsubara, S., Ulbrich, A., & Rath, T. (2021). Influence of light stimuli on the habitus of basil (Ocimum basilicum L.) under daylight conditions. ISHS Acta Horticulturae, 1327. https://doi.org/10.17660/ActaHortic.2021.1327.60
2) Sena, S., Kumari, S., Kumar, V., & Husen, A. (2024). Light emitting diode (LED) lights for the improvement of plant performance and production. Current Research in Biotechnology, 7, 100184. https://doi.org/10.1016/j.crbiot.2024.100184
3) Rindels, S. (1997). News article, Iowa State University Extension and Outreach, pp. 25-26. https://yardandgarden.extension.iastate.edu/article/1997/3-211997/basil.html
4) Choudhury, N. (2023). Basil Leaves Market Outlook. Future Market Insights, REP-GB-6073. https://www.futuremarketinsights.com/reports/basil-leaves-market
5) Rahman, M. M., Vasiliev, M., & Alameh, K. (2021). LED Illumination Spectrum Manipulation for Increasing the Yield of Sweet Basil (Ocimum basilicum L.). Plants, 10(2), 344. https://www.mdpi.com/2223-7747/10/2/344
6) Sipios, L., Balázs, L., Székely, G., Jung, A., Sárosi, S., Radácsi, P., & Csambalik, L. (2021). Optimization of basil (Ocimum basilicum L.) production in LED light environments. Volume 289, 110486. https://www.sciencedirect.com/science/article/pii/S0304423821005938
7) Aldarkazali, M., Rihan, H. Z., Carne, D., & Fuller, M. P. (2019). The Growth and Development of Sweet Basil (Ocimum basilicum) and Bush Basil (Ocimum minimum) Grown under Three Light Regimes in a Controlled Environment. Agronomy, 9(11), 743. https://www.mdpi.com/2073-4395/9/11/743

Appendix
Tables and raw data:

Day                        | 1   | 3   | 5   | 7   | 9   | 11  | 13  | 15  | 17   | 19   | 21
Group 1 stem length (cm)   | 4.8 | 5.2 | 5.8 | 6.4 | 6.9 | 7.2 | 7.9 | 9.4 | 10.9 | 11.7 | 11.9
Group 2 stem length (cm)   | 4.6 | 5.0 | 5.4 | 5.8 | 6.1 | 6.7 | 8.1 | 8.8 | 9.4  | 11.2 | 11.3
Group 3 stem length (cm)   | 4.1 | 4.5 | 4.8 | 5.2 | 5.6 | 6.0 | 7.6 | 8.4 | 9.5  | 10.4 | 10.5

Figure 7. Table of Average Length of Stem (cm) per day for each group

Day                          | 1   | 3   | 5   | 7   | 9   | 11  | 13  | 15  | 17  | 19  | 21
Group 1 biggest leaf (cm)    | 2.0 | 2.2 | 2.3 | 2.5 | 3.2 | 3.6 | 4.0 | 4.0 | 4.1 | 4.3 | 4.4
Group 2 biggest leaf (cm)    | 2.3 | 2.3 | 2.6 | 2.9 | 3.1 | 3.3 | 3.3 | 3.6 | 3.8 | 4.0 | 4.0
Group 3 biggest leaf (cm)    | 2.1 | 2.2 | 2.5 | 2.5 | 2.5 | 2.8 | 3.0 | 3.2 | 3.5 | 3.8 | 3.8

Figure 8. Table of Average Length of the Biggest Leaves (cm) per day for each group



Enhanced Adversarial Attack on Voice Conversion Using Layer-Wise Relevance Propagation

Author Full Name (Last Name, First Name): Hong, Ryan
School Name: Korean Minjok Leadership Academy
Abstract Voice conversion, the technology that changes a speaker's vocal qualities to match another speaker while maintaining the original content, has made significant strides in recent years. This has led to concerns regarding the misuse of voice conversion in crime and has thus accelerated research on adversarial attacks on voice conversion, which can be used to prevent such criminal misuse. However, current attack methods still lack imperceptibility and attack performance. This research focused on enhancing adversarial attacks on voice conversion by applying Layer-wise Relevance Propagation (LRP), a tool that can identify the critical components of a voice. Using LRP, we generated different types of binary or non-binary masks and applied perturbations to the masked areas instead of the entire voice. As a result, we overcame the weaknesses of current methods, achieving up to a 14% increase in imperceptibility while maintaining attack performance.



Introduction
Voice conversion is the technology of converting one person's voice into another's while preserving the content. As one of the key generative AI technologies, voice conversion has brought innovation to the music and media industries. In recent years, voice conversion models have advanced to the point where they sound natural even with short utterances as input [1]. Some research has even proposed zero-shot voice conversion, which can convert a voice using only a single sample from a speaker, without prior training on other samples from that speaker [2], [3].

The advance of voice conversion has raised concerns about its misuse, particularly in crimes such as voice phishing, where criminals could mimic the voices of people the victim knows, such as friends or family members, to deceive them. This has led to more research on adversarial attacks on voice conversion. An adversarial attack is the process of creating inputs that deliberately mislead a machine learning model into making an incorrect prediction, even though the input appears valid to a human observer. Applied to voice conversion, this means adding minimal perturbations to the input voice so that the voice conversion model cannot imitate it. There are two conditions for an adversarial attack on a voice conversion model to be successful. First, the change in the input voice should be unnoticeable to the human ear. Second, when the model is applied to this changed input voice, the output voice should sound like it comes from a different speaker. We refer to these two conditions as imperceptibility and attack performance, respectively.

Improving both imperceptibility and attack performance at the same time is challenging, and a great deal of research has attempted to overcome this limitation using different approaches. Huang et al. proposed the basic process of applying an adversarial attack to voice conversion [4]. Other papers added features such as factoring in human hearing or real-time masking of the voice [5], [6]. While all of these showed significant performance, we use the basic version of Huang et al. as we examine how applying Layer-wise Relevance Propagation (LRP) to the adversarial attack could enhance its performance. LRP is a method used to explain how a model's output was generated. This is done by redistributing the model's output backward through the layers to identify which input features contributed most to the final decision. In this research, we apply LRP to an Automatic Speech Recognition (ASR) model to identify which parts of the input voice are important for recognizing the speech. We hypothesize that focusing perturbations on the important parts will improve the adversarial attack.

To examine how using LRP in an adversarial attack could improve its performance, we compare the adversarial attack with and without LRP. For the adversarial attack without LRP, we use the embedding attack from [4]. Then, we propose a new attack method called the "LRP-based attack". The LRP-based attack uses the LRP outputs to create different types of masks, either binary or non-binary, and applies the LRP mask to limit the perturbation areas during the attack. We compare the two attacks with three experiments: the imperceptibility experiment, the attack performance experiment, and the experiment on perturbation scale. We included the final experiment after noticing that varying perturbation scales produced different outcomes. Overall, while the LRP-based attack showed only a slight improvement in attack performance, it demonstrated a significant enhancement in imperceptibility at higher perturbation scales.

2. Methodologies
2.1. Layer-wise Relevance Propagation
Layer-wise Relevance Propagation (LRP) is a method that helps us understand the output of a neural network through explanation by decomposition. If the input x consists of d dimensions, LRP assumes that each feature in these d dimensions contributes differently to the final output. The method calculates and analyzes each of these contributions, known as relevance scores. When using LRP, we first calculate the output by running the machine learning model. Then, LRP uses a top-down approach, applying a Taylor expansion to derive the relevance scores of the nodes in each preceding layer, starting from the output. The specific process is defined in [7]. LRP is frequently used in the audio domain, especially for classification tasks such as ASR and gender classification. In those cases, LRP is applied to the spectrogram of an audio signal, a visual representation showing how the frequency content of the signal changes over time. A spectrogram is plotted with time on the x-axis and frequency on the y-axis, resulting in a matrix with dimensions corresponding to time and frequency. LRP calculates a relevance score for each input node at every time-frequency point in the spectrogram. The total number of relevance scores produced is d, the number of dimensions of the spectrogram. The output is a matrix of relevance scores that matches the size of the original spectrogram, with each value directly corresponding to a value in the spectrogram. This allows the use of LRP masks in the adversarial attack process, which is discussed further in 2.2 and 2.3.

2.2. Voice Conversion and Embedding Attack

Fig. 1: The encoder-decoder based voice conversion model, adapted from [4]. Perturbations are updated based on speaker embeddings, as the blue dotted line indicates. t is the source voice, which contains the content for the output voice; x is the target voice, which contains the vocal timbre for the output voice; F(t, x) is the output voice generated from t and x. Note that audios are processed as spectrograms, not waveforms.

This paper adopts the encoder-decoder structure for the voice conversion model; Fig. 1 shows the basic structure as a diagram. t is the source voice providing the content, and x is the target voice providing the vocal timbre; both are spectrograms. The content encoder $E_c$ extracts content information from t, and the speaker encoder $E_s$ extracts speaker characteristics from x. The extracted information forms vectors, referred to as the content embedding $E_c(t)$ and the speaker embedding $E_s(x)$. The decoder D takes these two vectors as input and outputs F(t, x), a spectrogram with the content of t and the speaker characteristics of x.

The embedding attack is a type of adversarial attack on voice conversion proposed in [4]. We adopted the embedding attack because it performed best among the methods introduced in that paper. To understand the embedding attack, it is important to know how an adversarial attack on a voice is done. The purpose of the attack is to alter the target voice x in Fig. 1 so that the speaker embedding $E_s(x)$ changes, which consequently changes the decoder output F(t, x) as well. However, the alteration of the target voice should be minimal while the change in the output is maximized. We achieve this by using a loss function to optimize the alteration, which in this case is called the perturbation. While diverse loss functions exist, the loss function of the embedding attack is as follows:

Minimize: $L(E_s(x + \delta), E_s(y)) - \lambda L(E_s(x + \delta), E_s(x))$  (1)
Subject to: $\delta = \varepsilon \cdot \tanh(\omega * M)$  (2)

where $\delta$ is the perturbation and $x + \delta$ is the perturbed target voice. $L(\cdot, \cdot)$ is the distance between two embedding vectors, and y is the adversarial target voice, a third voice that is only used during the adversarial attack. $\omega$ is the actual variable being optimized. $\varepsilon$ is the perturbation scale that controls the strength of the perturbation; since the strength of the perturbation decides the imperceptibility of the perturbed voice, the perturbation scale $\varepsilon$ is an important factor to experiment with. Lastly, M represents the LRP mask.

The role of equation (1) is to maximize the change in the output. It aims to optimize $\delta$ so that $E_s(x + \delta)$ becomes similar to $E_s(y)$ and dissimilar to $E_s(x)$. Note that y is usually a voice of the opposite gender, so that the loss function can acquire bigger differences. While the loss function does not directly target the output, it uses the speaker embeddings to maximize the change in the output. The role of equation (2) is to minimize the perturbation strength. The optimization of $\delta$ is done by changing the value of $\omega$ rather than changing $\delta$ directly. Since $\omega$ is the value that changes during optimization, passing $\omega$ through a tanh function and multiplying it by $\varepsilon$ keeps $\delta$ (the transformed version of $\omega$) small and stable throughout the process. While the traditional embedding attack uses $\omega$ alone, we use $\omega * M$, the element-wise product of the perturbation variable and the LRP mask. This is possible because $\omega$ and M have the same size, as mentioned in 2.1. A minimal sketch of this optimization appears below.
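The following PyTorch sketch illustrates the masked optimization under stated assumptions: a given speaker_encoder and spectrogram tensors x, y, and mask M; mean squared distance standing in for L(·, ·); and the optimizer choice, learning rate, and λ value assumed for illustration.

```python
# Sketch of the masked embedding attack (eqs. (1) and (2)).
import torch

def embedding_attack(speaker_encoder, x, y, M, eps=0.1, lam=0.1, iters=1500, lr=1e-3):
    omega = torch.zeros_like(x, requires_grad=True)   # the variable actually optimized
    opt = torch.optim.Adam([omega], lr=lr)
    emb_x = speaker_encoder(x).detach()
    emb_y = speaker_encoder(y).detach()
    for _ in range(iters):
        delta = eps * torch.tanh(omega * M)           # eq. (2): masked, bounded perturbation
        emb_adv = speaker_encoder(x + delta)
        # eq. (1): move toward E_s(y), away from E_s(x)
        loss = ((emb_adv - emb_y) ** 2).mean() - lam * ((emb_adv - emb_x) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + eps * torch.tanh(omega * M)).detach() # the perturbed target voice
```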

2.3. LRP-based Attack

Fig. 2: The adversarial attack process using the LRP mask. We calculate LRP relevance scores by inputting the spectrogram of the target voice into an ASRCNN model. We create different masks based on the LRP scores; then, we optimize the perturbation using the loss function in 2.2.

In this research, we propose a new attack method that involves LRP relevance scores. Fig. 2 shows the entire process as a diagram. We first convert the target voice x into a spectrogram. Then, we input the spectrogram into an ASRCNN model and calculate the relevance scores using gradients. ASRCNN is a model for ASR and is further discussed in 3.2. After the relevance scores are computed, we replace all negative values of the matrix with zero; this improves interpretability by focusing only on the positive relevance scores. We then normalize the scores between 0 and 1 to scale down the large values. The resulting matrix is referred to as G. Using the normalized LRP scores, we test 4 different masks in this paper:

1) No-threshold mask: Without using a threshold, we directly multiply G by the perturbation; thus M = G in this case. This makes the perturbation strength proportional to the relevance score, perturbing the important parts of the voice.
2) 0.0-threshold mask: We create and multiply a binary mask such that, for every entry of G, M = 1 where the normalized relevance score exceeds 0.0 and M = 0 otherwise.
3) 0.2-threshold mask: We create and multiply a binary mask such that, for every entry of G, M = 1 where the normalized relevance score exceeds 0.2 and M = 0 otherwise.
4) 0.4-threshold mask: We create and multiply a binary mask such that, for every entry of G, M = 1 where the normalized relevance score exceeds 0.4 and M = 0 otherwise.

The threshold values were chosen because most LRP scores were below 0.4, and we wanted to try multiple thresholds between the minimum and maximum scores. As discussed in 2.2, we used the masks for perturbation optimization by including them in the loss function. We used element-wise multiplication with M to reset the initial values and gradients to zero wherever M = 0; as a result, perturbations occur only in areas where M = 1. Note that, unlike the threshold masks, the no-threshold mask can perturb the entire area of the spectrogram, as it is not binary. A sketch of the mask construction follows.

2.4. Evaluation Metrics
(a) Voice Similarity
The Voice Similarity metric is the probability, as a percentage, that two voices came from the same speaker. While diverse methods exist to obtain this value, we calculate the cosine similarity between the embeddings of the two voices:

$$\text{similarity} = \frac{E_1 \cdot E_2}{|E_1| \cdot |E_2|}$$

where $E_1$ and $E_2$ are the voice embeddings.

(b) PESQ (Perceptual Evaluation of Speech Quality)
The Perceptual Evaluation of Speech Quality (PESQ) is an objective quality measurement for speech signals. It compares a reference signal (the original, undistorted speech) with a degraded signal (the distorted version of the reference) to assess how much the quality has been reduced, using a model of human auditory perception to evaluate the perceptual differences between the two signals. PESQ scores speech quality on a range of -0.5 to 4.5, with higher scores indicating better audio quality.
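A sketch of both metrics in Python, assuming embedding vectors e1, e2 and 16 kHz reference/degraded waveforms as NumPy arrays; PESQ here uses the PyPI pesq package, which may differ from the exact tool used in the study:

```python
# Sketch: cosine voice similarity and PESQ.
import numpy as np
from pesq import pesq

def voice_similarity(e1, e2):
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

def pesq_score(ref_wav, deg_wav, sr=16000):
    return pesq(sr, ref_wav, deg_wav, 'wb')   # wideband mode, score roughly -0.5..4.5
```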

3. Experimental Settings
3.1. Dataset
We used the VCTK-Corpus dataset, which contains 109 native English speakers and about 400 sentences per speaker. The audio clips are generally under 10 seconds long, which allows faster processing of LRP and voice conversion.



3.2. Models
For the voice conversion model, we used the AdaIN-VC model proposed in [3], which is capable of zero-shot voice conversion. To calculate the LRP relevance scores, we used the ASRCNN model proposed in StarGANv2-VC. We considered ASRCNN, a model for ASR, suitable for this task because ASR is not only one of the most widely used tasks in the audio domain but also a popular choice for applying LRP [8], [9]. We adjusted and trained the model to be compatible with the preprocessed spectrograms used in [4].

3.3. LRP Library and Settings
We used Zennit as the Python library for LRP. Compared to other LRP libraries such as Captum and DeepExplain, Zennit was more compatible with PyTorch and generated results fastest. For the LRP rules, we chose EpsilonAlpha2Beta1, which uses the flat rule for any linear first layer, the alpha2-beta1 rule for all other convolutional layers, and the epsilon rule for all other fully connected layers. This decision was based on the "current best practice for LRP" suggested in [10].
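A minimal Zennit sketch of this relevance computation, assuming an asr_model and a batched spectrogram tensor spec; the gradient seed of ones is an illustrative assumption rather than the study's exact setting:

```python
# Sketch: relevance scores for a spectrogram via Zennit's Gradient attributor.
import torch
from zennit.attribution import Gradient
from zennit.composites import EpsilonAlpha2Beta1

with torch.no_grad():
    out_shape = asr_model(spec).shape        # shape of the model output

composite = EpsilonAlpha2Beta1()             # the rule composite named in the text
with Gradient(model=asr_model, composite=composite) as attributor:
    output, relevance = attributor(spec, torch.ones(out_shape))

# `relevance` matches the spectrogram's shape; negative values are later
# zeroed and the scores normalized to [0, 1] to form the mask basis G.
```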

3.4. Experiment Procedure
To measure the evaluation metric values, we experimented with each of the 109 speakers in the VCTK-Corpus dataset. For each speaker, we randomly selected a set of voices that includes:

1) an audio file for the target voice;
2) an audio file from a different speaker of the opposite gender for the adversarial target voice;
3) an audio file from a different speaker of the opposite gender for the source voice in voice conversion.

Note that different voices were selected for 2) and 3). This random selection generates a single sample with 109 sets, each containing 3 voices. Each set was used for a process in which we first attack a voice and then apply voice conversion. Consequently, for one sample, we repeated this process 109 times and calculated the evaluation metric by averaging the results of all 109 attempts. For each attack in the experiments, we used three samples and then averaged across all samples.

Fig. 3: Experiment Procedure Diagram. The imperceptibility experiment (red dotted line) is a comparison between the perturbed voices A and B, while the attack performance experiment (blue dotted line) is a comparison between voice conversion results C and D. The perturbation scale experiment (black dotted line) involves both the imperceptibility and attack performance experiments.



Fig. 3 shows the experimental procedure for comparing the LRP-based attack with the original embedding attack described in [4]. Both attacks use the same set of voices and the same voice conversion model; the only variable is the type of attack applied. Due to this variation, the resulting perturbed voice and voice conversion output differ, and our experiments focus on comparing these results. We conducted three types of experiments: the imperceptibility experiment, the attack performance experiment, and the perturbation scale experiment. Experiments were run with 1500 iterations and $\varepsilon$ = 0.1, except for the perturbation scale experiment, which tests different values of $\varepsilon$.

3.4.1 Imperceptibility Experiment
This experiment measured the imperceptibility of the perturbed voice (the adversarial output). As in Fig. 3, A is the perturbed voice without the LRP mask, and B is the perturbed voice with the LRP mask applied. First, we compared the Voice Similarity with the target voice for A and B; a higher voice similarity indicates better imperceptibility. Second, we compared the PESQ for A and B, with the original voice as the reference and A and B as degraded versions; a higher PESQ indicates better voice quality and hence better imperceptibility.

3.4.2 Attack Performance Experiment
This experiment measured attack performance using the voice conversion output of the perturbed voice. As in Fig. 3, C is the voice conversion output of A, and D is the voice conversion output of B. First, we compared the Voice Similarity with the target voice for C and D; a lower voice similarity indicates a stronger attack, as the purpose of voice conversion is to produce a voice similar to the target voice. Second, we compared the PESQ for C and D, with the target voice as the reference and C and D as degraded versions; a lower PESQ indicates worse voice quality, which shows that the attack was more effective.

3.4.3 Perturbation Scale Experiment
This experiment investigated how different perturbation scales impact both imperceptibility and attack performance. We compared the effects of these scales on two types of attacks: the embedding attack and the threshold-0.0 attack. The threshold-0.0 attack was selected for this comparison due to its superior performance among all tested attacks, as indicated by our results. We evaluated these attacks at perturbation scales from 0.2 to 1.

4. Results

4.1. Perturbation Output Results

| | Embedding Attack | No threshold | Threshold 0.0 | Threshold 0.2 | Threshold 0.4 |
|---|---|---|---|---|---|
| Average perturbation area (%) | 100 | 50.7 | 49.5 | 1.2 | 0.1 |
| Average absolute perturbation value | 0.074 | 0.002 | 0.037 | 0.001 | 0.0001 |

Fig. 4: Table of the average perturbation area and the average absolute perturbation value for each attack type. Both values decrease as the threshold increases.



Fig. 4 presents a table quantifying the average perturbation area and the average absolute perturbation value for each type of attack. While the original embedding attack perturbs the entire spectrogram, the LRP-based attacks perturb smaller areas. Specifically, the no-threshold and threshold-0.0 attacks affect about 50% of the spectrogram, whereas attacks with higher thresholds perturb less than 1% of the spectrogram. We observed that as the threshold increases from 0.0 to 0.2 to 0.4, both the perturbation value and the perturbation area decrease significantly.
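One plausible way to compute the two quantities in Fig. 4, assuming the perturbation is available as a spectrogram-shaped array; the exact tolerance used to count a bin as "perturbed" is an assumption here, not taken from the paper.

```python
import numpy as np

def perturbation_stats(perturbation, eps=1e-8):
    """Summarize a perturbation over a spectrogram, as in Fig. 4.

    `perturbation` is the (freq x time) difference between the perturbed and
    original spectrograms. Area counts bins whose perturbation is effectively
    non-zero; value is the mean absolute perturbation over all bins.
    """
    p = np.abs(np.asarray(perturbation))
    area_pct = 100.0 * np.count_nonzero(p > eps) / p.size
    mean_abs = float(p.mean())
    return area_pct, mean_abs
```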

4.2. Imperceptibility Experiment Results

Fig. 5: Speaker verification accuracies (left) and PESQ values (right) of the perturbed voices for the embedding attack and LRP-based attacks. The whiskers for each bar show the approximate standard deviation calculated using the range rule with 3 samples.

Fig. 5 shows the bar plots of speaker verification accuracy and PESQ used to compare the imperceptibility of the perturbed voices. For both evaluation metrics, all LRP-based attacks scored higher than the original embedding attack. The LRP-based attacks showed a 0.775% increase in speaker verification accuracy on average, with approximate standard deviations below 0.002%, and a 0.0675 increase in PESQ on average, with a maximum standard deviation of 0.06. While the standard deviations were small, the differences in evaluation metric values were small as well: there was a slight improvement with LRP-based attacks, but it was not statistically significant. Additionally, the LRP-based attacks performed similarly to one another.

4.3. Attack Performance Experiment Results

Fig. 6: Speaker verification accuracies (left) and PESQ values (right) of the voice conversion outputs for the embedding attack and LRP-based attacks. The whiskers for each bar show the approximate standard deviation calculated using the range rule with 3 samples.

Fig. 6 shows the bar plots of speaker verification accuracy and PESQ used to compare the attack performance of the different attack methods. The speaker verification accuracy was similar between the embedding attack and the LRP-based attacks, with an average difference of just 0.1%; the standard deviation reached a maximum of 0.065%. We also observed a trend in which attacks with lower thresholds generally resulted in lower speaker verification accuracy. Notably, when considering the standard deviations, the threshold-0.0 attack (the one with the lowest threshold) was the only attack that led to an actual decrease in speaker verification accuracy compared to the embedding attack. PESQ values were practically equal for the embedding attack and the LRP-based attacks, as the average difference is below 0.01 while the maximum standard deviation is 0.026.

4.4. Experiment on Perturbation Scale

Fig. 7: The imperceptibility experiment for different perturbation scales ε, comparing the original embedding attack and the threshold-0.0 attack. The difference in speaker verification accuracy and PESQ between the two attacks is more significant for higher values of ε.

Due to time constraints, we selected one type of LRP-based attack for the perturbation scale experiment. Considering the results in sections 4.2 and 4.3, the threshold-0.0 attack was chosen as the most effective, and we compared it with the embedding attack. We tested perturbation scales below 1.0, as these scales reduce the perturbation by multiplying it by a value less than 1.0. Fig. 7 shows the imperceptibility of the two attacks at different perturbation scales. The graphs reveal that the threshold-0.0 attack exhibits higher imperceptibility than the embedding attack, and the difference becomes more pronounced as ε increases. For ε = 1.0, we obtain a maximum difference of 14% in speaker verification accuracy and 0.11 in PESQ. The attack performance results are not visualized here because no significant differences were observed between the two attacks: for both attacks and at all perturbation scales, the speaker verification accuracy consistently remained at about 53%, and the PESQ score stayed around 1.0. These outcomes are similar to the findings reported in section 4.3, where the perturbation scale was set to 0.1.

5. Conclusion

Based on the experimental results, we concluded that LRP-based attacks show improved performance compared to the original embedding attack on voice conversion, although in limited situations. In sections 4.2 and 4.3, we used ε = 0.1 and observed only slight differences between the embedding attack and the LRP-based attacks. However, for higher values of ε, the LRP-based attack demonstrated significantly better performance than the embedding attack. This held for both of the important conditions of adversarial attacks: while the attack performance of the threshold-0.0 attack remained consistent, its imperceptibility actually increased, especially at higher perturbation scales. Although it is not safe to conclude that all LRP-based attacks are superior to the embedding attack, the threshold-0.0 attack demonstrates notably better performance. We attribute the improvement in imperceptibility of LRP-based attacks to the reduced perturbation area and perturbation value, as shown in Fig. 4. High perturbation scales resulted in greater improvements in imperceptibility because, as described by the loss function in section 2.2, larger ε values caused the LRP-based attacks to apply fewer perturbations than the original embedding attack, leading to a more pronounced difference in imperceptibility. However, lessening the perturbation did not directly lead to better performance, as the threshold-0.0 attack, which used the lowest threshold, showed the highest performance. Therefore, while reducing perturbation enhances imperceptibility, this improvement is only effective up to a certain extent.

6. References

[1] M. Masood, M. Nawaz, K. M. Malik, A. Javed, and A. Irtaza, "Deepfakes Generation and Detection: State-of-the-art, Open Challenges, Countermeasures, and Way Forward," Nov. 22, 2021, arXiv: arXiv:2103.00484. Accessed: Aug. 01, 2024. [Online]. Available: http://arxiv.org/abs/2103.00484
[2] K. Qian, Y. Zhang, S. Chang, X. Yang, and M. Hasegawa-Johnson, "AUTOVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss," Jun. 06, 2019, arXiv: arXiv:1905.05879. Accessed: Aug. 01, 2024. [Online]. Available: http://arxiv.org/abs/1905.05879
[3] J. Chou, C. Yeh, and H. Lee, "One-shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization," Aug. 22, 2019, arXiv: arXiv:1904.05742. Accessed: Jul. 27, 2024. [Online]. Available: http://arxiv.org/abs/1904.05742
[4] C. Huang, Y. Y. Lin, H. Lee, and L. Lee, "Defending Your Voice: Adversarial Attack on Voice Conversion," May 04, 2021, arXiv: arXiv:2005.08781. Accessed: Jul. 26, 2024. [Online]. Available: http://arxiv.org/abs/2005.08781
[5] Y. Wang, H. Guo, G. Wang, B. Chen, and Q. Yan, "VSMask: Defending Against Voice Synthesis Attack via Real-Time Predictive Perturbation," in Proceedings of the 16th ACM Conference on Security and Privacy in Wireless and Mobile Networks, May 2023, pp. 239–250. doi: 10.1145/3558482.3590189.
[6] Z. Yu, S. Zhai, and N. Zhang, "AntiFake: Using Adversarial Audio to Prevent Unauthorized Speech Synthesis," in Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark: ACM, Nov. 2023, pp. 460–474. doi: 10.1145/3576915.3623209.
[7] G. Montavon, S. Bach, A. Binder, W. Samek, and K.-R. Müller, "Explaining NonLinear Classification Decisions with Deep Taylor Decomposition," Pattern Recognit., vol. 65, pp. 211–222, May 2017, doi: 10.1016/j.patcog.2016.11.008.
[8] K. Markert, R. Parracone, M. Kulakov, P. Sperl, C.-Y. Kao, and K. Böttinger, "Visualizing Automatic Speech Recognition -- Means for a Better Understanding?," in 2021 ISCA Symposium on Security and Privacy in Speech Communication, Nov. 2021, pp. 14–20. doi: 10.21437/SPSC.2021-4.
[9] X. Wu, P. Bell, and A. Rajan, "Explanations for Automatic Speech Recognition," Feb. 27, 2023, arXiv: arXiv:2302.14062. Accessed: Aug. 01, 2024. [Online]. Available: http://arxiv.org/abs/2302.14062
[10] M. Kohlbrenner, A. Bauer, S. Nakajima, A. Binder, W. Samek, and S. Lapuschkin, "Towards Best Practice in Explaining Neural Network Decisions with LRP," Jul. 13, 2020, arXiv: arXiv:1910.09840. Accessed: Jul. 27, 2024. [Online]. Available: http://arxiv.org/abs/1910.09840

Appendix A. Perturbation Area Plot for Each Attack Type

Below is the plot of the perturbation area for each attack type. The embedding attack is the original attack proposed in [4]; the other attacks are proposed in section 2.3. As shown in the plots, the perturbation area is smaller for higher thresholds. The perturbation area is determined by the number of vertical lines, as the y-axis is the perturbation value.

Appendix B. Attack Performance Results for the Perturbation Scale Experiment

Below are the attack performance results for the perturbation scale experiment. We observed that, regardless of the perturbation scale, the threshold-0.0 attack performed similarly to the embedding attack.



Exploring the Impact of Water Temperature on the Electrical Resistance of Submerged Copper and Aluminum Wires: An Investigation Suitable for Aiding High School Physics Education

Author Full Name (Last Name, First Name): Im, Youngchan
School Name: Suzhou Singapore International School

Abstract

During the early 2000s, the semiconductor industry replaced aluminum with copper as the interconnect material in the manufacturing process. Copper conducts electricity with much less resistance than aluminum, but it has a higher temperature coefficient. Because semiconductor manufacturing involves a wide range of temperatures, this investigation examined the changes in the resistance of copper and aluminum wires with temperature. The methodology was designed to be suitable for students and teachers to use in class when exploring the topic of resistance. Easily accessible apparatus was used, and the procedure was kept simple enough for students to carry out independently. Copper and aluminum wires were bent and submerged in water at varying temperatures, and their resistance was measured using a multimeter. Results showed that the copper wires had less resistance than the aluminum wires at all temperatures. This experiment is an excellent opportunity for enthusiastic physics students who want to investigate the industrial applications of copper and aluminum.

Keywords Semiconductor, Copper Coil, Aluminum Coil, Electrical Resistance, Do It Yourself (DIY), High School Education



I. Introduction

Copper (Cu) and aluminum (Al) are commonly used metals in many areas of our lives. Copper, characterized by its reddish-brown color, boasts excellent electrical conductivity. Aluminum is a lightweight, silver-gray metal noted for its high corrosion resistance (VMT). The malleability and ductility of both metals make them versatile for many industries. In the semiconductor industry, copper and aluminum hold significant importance. Historically, because of its favorable conductivity and ease of manipulation, aluminum was the material for wiring in most semiconductor circuits. In the early 2000s, the industry shifted to copper, which improved electrical conductivity and built better reliability into chips under new process advances such as the Damascene process (IBM).

Particularly in semiconductor applications, a critical consideration for these metals is how their electrical resistance varies with temperature. Because the resistance of both copper and aluminum changes under different thermal conditions, so does the performance of the electronic devices built from them. This experiment examines how the resistance of copper and aluminum coils is affected when submerged in water at varying temperatures, providing insight into their behavior under thermal stress. Despite the importance and prevalent use of Cu and Al in electronics, throughout K–12 education students rarely get hands-on experience with these metals and their electrical properties. This paper aims to bridge that gap by showing how to build a low-cost, easy-to-handle experimental setup.

1.1 Research Question

How do different water temperatures affect the resistance of submerged copper and aluminum wires?

1.2 Hypothesis

The hypothesis of this research posits that as the temperature of the water increases, the electrical resistance of both submerged copper and aluminum wires will also increase due to the thermal agitation of their atomic structures. Given that copper has approximately 40% lower resistance than aluminum, it is expected to exhibit less overall resistance at all temperatures. However, since copper has a slightly larger temperature coefficient (0.0043 /°C) compared to aluminum (0.0038 /°C), the rate of increase in resistance with temperature may be marginally higher for copper (SONYG).

This paper explores an investigation into resistance that students can easily conduct at home. The apparatus required for the experiment mostly consists of household objects, with the exception of copper and aluminum wires and a multimeter. The experiment involves heating water in a vessel with the copper and aluminum wires submerged and then tracking the resistance changes as the water cools. Students can evaluate the accuracy of their experiment by calculating the temperature coefficient of each metal from the resistance values they obtain and comparing them to theoretical values. Additionally, students can gain practical knowledge in the field of semiconductors by applying the concepts of electrical resistance and temperature coefficients.
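A short sketch of the linear model behind this hypothesis, R(T) = R₀(1 + α(T − T₀)). The α values are the theoretical coefficients quoted above; the R₀ values are the 30 °C mean resistances reported later in this paper, used here only to illustrate the predicted trend.

```python
def resistance(r0, alpha, t, t0=30.0):
    """Resistance (ohms) at temperature t (deg C), given resistance r0 at t0."""
    return r0 * (1 + alpha * (t - t0))

for t in (30, 50, 70, 100):
    cu = resistance(4.76, 0.0043, t)   # copper:   R0 from this study's 30 C mean
    al = resistance(7.27, 0.0038, t)   # aluminum: R0 from this study's 30 C mean
    print(f"{t:>3} C  Cu {cu:.2f} ohm  Al {al:.2f} ohm")
```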

II. Protocols and Apparatus

2.1 Apparatus

We used copper and aluminum wires and a cooking pot, all commonly available in stores, so that students can obtain them and conduct the experiment themselves. Each wire must be bent to fit the shape of the pot; there must be enough water in the pot to submerge the wires; and students need access to a heating method, such as a gas stove, to heat the water and consequently raise the temperature of the wire itself.



Table 1 lists the apparatus used in the experiment and its technical details.

| Item | Quantity | Technical Details |
|---|---|---|
| Copper Wire | 1 | 99%+ uncoated, pure copper wire, sold for science experiments; 55 cm in length and 2 mm in diameter. |
| Aluminum Wire | 1 | O-ON aluminum wire for science experimentation purposes; 55 cm in length and 2 mm in diameter. |
| Tap Water | 1 | Tap water from a household sink; at least 600 ml of water required for each trial. |
| A vessel to hold water | 1 | Made of stainless steel, with dimensions that the copper and aluminum wires can be bent into. |
| Stove | 1 | Household gas or induction stove. |
| Multimeter | 1 | Capable of DCV, ACV, and resistance measurements with 55 cm-long lead wires; able to measure over 100 degrees Celsius. |
| Alligator Clip | 1 | An 85-mm-long clip with the same hole shape as the multimeter lead rod. |
| Food Thermometer | 1 | A 25.4-cm thermometer made of stainless steel with a measurement range of −50 °C to +300 °C. |

Table 1: Table of Apparatus

2.2 Experimental Protocols

This experimental protocol is divided into three parts: set-up, temperature measurement, and trials, to standardize the design and implementation of the experiment in a procedural way.

Set-up Schematic Diagram

Figure 1. Schematic diagram of apparatus for (a) copper and (b) aluminum.



We placed the copper coil and aluminum coil in different pots, as shown in the diagram. Each coil should be bent to follow the shape of the pot, and the coils should be stiff and free of kinks before being bent. An insulator, such as clay, should be installed to prevent direct contact between the steel pot and the coils. (For this experiment, the wire sticking out of the water is 5 cm long, and the part submerged in water is 15 cm across.)

Temperature Measurement
① Fill a pot with 600 ml of water.
② Place the pot on the stove and heat the water to 100 °C.
③ Turn off the heat and allow the water to cool gradually, checking the set temperature with a food thermometer before measuring the resistance value.

Reason for Choosing a Temperature Range of 30 °C to 100 °C for Measurement
Considering the conventional CMOS (Complementary Metal Oxide Semiconductor) temperature test range of approximately −55 °C to +125 °C, we measured from room temperature (30 °C), which is easily accessible for students, up to 100 °C, the boiling point of water. This range was chosen to cover as much of the CMOS test range as possible, allowing for an effective analysis within a practical and observable temperature spectrum (Leng et al.).

Resistance (Ω) Measurement: Photograph of the Experimental Setup

Figure 2. Photographs of the set-up

This image shows the overall structure of the experiment. Insert the alligator clip into the end of the multimeter lead that measures resistance (Ω) and secure it to both wires. When the temperature reaches the set temperature, use the multimeter to measure the resistance at that moment.



How a Multimeter Functions: Its Relation to Ohm's Law

A multimeter measures resistance by applying a small, known voltage from its internal battery across the tested component, causing a current to flow. It then measures this current and, using Ohm's law (which states that resistance R equals voltage V divided by current I), calculates the resistance by dividing the applied voltage by the measured current, displaying the result on the screen (ELECTRICIAN U). A short worked example follows the Trials description below.

Trials

For each metal, six temperatures (50, 60, 70, 80, 90, and 100 °C) were set at 10 °C intervals, in addition to room temperature (30 °C) and ice water (0 °C). Five trials must be run for each set temperature, and not consecutively: only after the first trial has been measured at all set temperatures should the second trial be measured.
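As a quick worked example of the Ohm's-law relation above (illustrative numbers only, not values from this experiment): if the meter applies

V = 0.5 V and measures I = 0.1 A, then R = V / I = 0.5 V / 0.1 A = 5 Ω.

An unexpectedly low reading at a given temperature would therefore suggest extra current leaking through a parallel path, a point revisited in section 4.1.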

2.3 Safety Concerns

Students should be careful not to touch the hot wires or pot directly with their hands due to the heat from the water. It is recommended that they use gloves for handling or ensure the setup is securely fixed before the water heats up, so that no adjustments are needed during the experiment. The wires that students can easily obtain are typically sold coiled in lengths of 1 to 4 meters. For this experiment, students need to straighten the wire and cut it to size with scissors. They should be cautious of the sharp, unfinished edges of the cut wire to avoid injury.

III. Experiment

3.1 Raw Data

Below is the raw data without any calculations performed. A food thermometer was used to confirm the given temperatures, and a multimeter was used to measure the resistance directly at each given temperature. Therefore, the raw data shows the given temperatures and the corresponding resistance values for each trial. All resistance values are in ohms (Ω).

Copper Coil

| Temperature (°C) ± 0.1 °C | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 |
|---|---|---|---|---|---|
| 30 | 4.72 | 4.78 | 4.72 | 4.75 | 4.76 |
| 50 | 5.11 | 5.32 | 5.24 | 5.31 | 5.32 |
| 60 | 5.36 | 5.59 | 5.47 | 5.54 | 5.57 |
| 70 | 5.58 | 5.81 | 5.68 | 5.74 | 5.76 |
| 80 | 5.82 | 6.01 | 5.93 | 5.95 | 5.98 |
| 90 | 6.03 | 6.18 | 6.13 | 6.16 | 6.21 |
| 100 | 6.21 | 6.37 | 6.34 | 6.34 | 6.39 |

Table 2. Raw data table for the copper coil experiment



Aluminum Coil

| Temperature (°C) ± 0.1 °C | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 |
|---|---|---|---|---|---|
| 30 | 7.25 | 7.23 | 7.33 | 7.29 | 7.26 |
| 50 | 7.88 | 7.90 | 7.83 | 7.91 | 7.86 |
| 60 | 8.21 | 8.23 | 8.02 | 8.22 | 8.03 |
| 70 | 8.52 | 8.51 | 8.46 | 8.54 | 8.45 |
| 80 | 8.69 | 8.72 | 8.66 | 8.72 | 8.67 |
| 90 | 9.04 | 9.10 | 8.99 | 9.08 | 9.02 |
| 100 | 9.29 | 9.31 | 9.18 | 9.33 | 9.26 |

Table 3. Raw data table for the aluminum coil experiment

3.2 Processed Data

The processed data below shows the average resistance for each temperature as well as the measurement uncertainty. There are some variations and slight outliers in the data set, but the averages still show an increasing trend as temperature increases, confirming that the outliers have no significant impact. All resistance values are in ohms (Ω).

Copper Coil

| Temperature (°C) ± 0.1 °C | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 | Mean | Uncertainty |
|---|---|---|---|---|---|---|---|
| 30 | 4.72 | 4.78 | 4.72 | 4.75 | 4.76 | 4.76 | 0.08 |
| 50 | 5.11 | 5.32 | 5.24 | 5.31 | 5.32 | 5.26 | 0.1 |
| 60 | 5.36 | 5.59 | 5.47 | 5.54 | 5.57 | 5.51 | 0.11 |
| 70 | 5.58 | 5.81 | 5.68 | 5.74 | 5.76 | 5.71 | 0.11 |
| 80 | 5.82 | 6.01 | 5.93 | 5.95 | 5.98 | 5.94 | 0.09 |
| 90 | 6.03 | 6.18 | 6.13 | 6.16 | 6.21 | 6.14 | 0.09 |
| 100 | 6.21 | 6.37 | 6.34 | 6.34 | 6.39 | 6.33 | 0.09 |

Table 4. Processed data table for the copper coil experiment



Aluminum Coil

| Temperature (°C) ± 0.1 °C | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 | Mean | Uncertainty |
|---|---|---|---|---|---|---|---|
| 30 | 7.25 | 7.23 | 7.33 | 7.29 | 7.26 | 7.27 | 0.05 |
| 50 | 7.88 | 7.90 | 7.83 | 7.91 | 7.86 | 7.88 | 0.04 |
| 60 | 8.21 | 8.23 | 8.02 | 8.22 | 8.03 | 8.14 | 0.11 |
| 70 | 8.52 | 8.51 | 8.46 | 8.54 | 8.45 | 8.5 | 0.04 |
| 80 | 8.69 | 8.72 | 8.66 | 8.72 | 8.67 | 8.69 | 0.03 |
| 90 | 9.04 | 9.10 | 8.99 | 9.08 | 9.02 | 9.05 | 0.05 |
| 100 | 9.29 | 9.31 | 9.18 | 9.33 | 9.26 | 9.27 | 0.08 |

Table 5. Processed data table for the aluminum coil experiment

3.3 Graphs

Copper Coil

Figure 3. Graph for the copper coil experiment

Aluminum Coil

Figure 4. Graph for the aluminum coil experiment

3.4 Sample Calculations

Table 6 lists the sample calculations for the average resistance, the uncertainty, and the gradient of the best-fit line. (The formulas and worked examples appear as images in the original and are not reproduced here.)

Table 6. Table for the Sample Calculations
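Since the formulas of Table 6 survive only as images, the sketch below reproduces the three sample calculations for the copper data. It assumes the uncertainty is the half-range (max − min)/2 of the five trials, which matches most rows of the processed tables, and that the gradient is the slope of a least-squares best-fit line through the mean resistances.

```python
import numpy as np

temps = np.array([30, 50, 60, 70, 80, 90, 100])        # deg C
trials = np.array([                                     # copper data from Table 2 (ohms)
    [4.72, 4.78, 4.72, 4.75, 4.76],
    [5.11, 5.32, 5.24, 5.31, 5.32],
    [5.36, 5.59, 5.47, 5.54, 5.57],
    [5.58, 5.81, 5.68, 5.74, 5.76],
    [5.82, 6.01, 5.93, 5.95, 5.98],
    [6.03, 6.18, 6.13, 6.16, 6.21],
    [6.21, 6.37, 6.34, 6.34, 6.39],
])

means = trials.mean(axis=1)                             # average resistance per temperature
uncertainties = (trials.max(axis=1) - trials.min(axis=1)) / 2   # half-range rule (assumed)
gradient, intercept = np.polyfit(temps, means, 1)       # slope of the best-fit line

print(np.round(means, 2), np.round(uncertainties, 2), round(gradient, 4))
```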



IV. Discussion

It is clear from Figures 3 and 4 that the graphs for the two metals, copper and aluminum, show an increasing linear trend. This indicates that the resistance values for both the copper and aluminum coils increase with temperature, supporting the initial hypothesis. The linear trend in the data demonstrates that the relationship between resistance and temperature is consistent throughout the tested temperature range, as expected for these metals under thermal conditions.

The first observation is that comparing the resistance values of copper and aluminum at room temperature (30 °C) reveals that the resistance of copper is approximately 65% of that of aluminum:

(4.76 Ω (copper) / 7.27 Ω (aluminum)) × 100 ≈ 65%

This ratio remains consistent across the other temperature points, confirming the theoretical prediction that copper has about 40% lower resistance than aluminum. The steady ratio between the resistance values of the two metals across temperatures suggests that the inherent properties of copper and aluminum are reliably reflected in the experimental data, lending further credibility to the theoretical expectations.

The second observation is that, by comparing the experimentally obtained temperature coefficient values (α) for each metal with the theoretical α values, it is evident that the experiment was conducted with minimal error, despite using low-cost experimental tools at home.

Using the α expression from the derivation above (shown as an image in the original), we obtain empirical α values for each metal and their percentage differences from the theoretical values.
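The derivation itself is not reproduced in this text; a standard reconstruction from the linear model R_T = R_{T0}(1 + α(T − T0)) is

α = (R_T − R_{T0}) / (R_{T0} (T − T0)).

Plugging in the 30 °C and 100 °C means from Tables 4 and 5: for copper, α = (6.33 − 4.76)/(4.76 × 70) ≈ 0.00471 /°C, and for aluminum, α = (9.27 − 7.27)/(7.27 × 70) ≈ 0.00393 /°C, which match the empirical values quoted in the conclusion and give percentage differences of roughly 9.6% and 3.4% against the theoretical 0.0043 /°C and 0.0038 /°C.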

The percentage difference for the temperature coefficient is thus 9.6% for copper and 3.4% for aluminum; both fall within 10%. This indicates that, despite some degree of error, the experiment was conducted with a fairly high level of accuracy.

4.1 Additional Experiment

Nevertheless, we investigated the reason for the observed errors and identified tap water as a potential source of the discrepancy. To assess the impact of tap water on the experimental data, we conducted an additional experiment using a copper coil of the same thickness and length, encased in silicone rubber.



Processed Data

| Temperature (°C) ± 0.1 °C | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 | Mean | Uncertainty |
|---|---|---|---|---|---|---|---|
| 30 | 4.74 | 4.73 | 4.75 | 4.74 | 4.72 | 4.74 | 0.02 |
| 50 | 5.14 | 5.15 | 5.12 | 5.15 | 5.16 | 5.14 | 0.02 |
| 60 | 5.35 | 5.36 | 5.34 | 5.36 | 5.37 | 5.36 | 0.02 |
| 70 | 5.54 | 5.56 | 5.55 | 5.57 | 5.56 | 5.56 | 0.02 |
| 80 | 5.77 | 5.78 | 5.76 | 5.77 | 5.76 | 5.77 | 0.01 |
| 90 | 5.96 | 5.95 | 5.97 | 5.98 | 5.97 | 5.97 | 0.02 |
| 100 | 6.18 | 6.17 | 6.16 | 6.16 | 6.17 | 6.17 | 0.01 |

All resistance values are in ohms (Ω).

Table 7. Processed data table for the copper coil with silicone rubber coating

Graph

Figure 5. Graph for the Copper with Rubber Coating



We can compare the resistance values of the raw copper wire and the rubber-insulated copper wire at the same temperatures. The results show that the resistance values of the insulated copper wire are more precise than those of the uninsulated wire: the raw copper wire data span a wider range at each temperature. This is evidenced by the smaller spacing between data points at each temperature when the two graphs are compared. The most plausible explanation is that tap water itself conducts electricity due to dissolved ions such as calcium (Ca²⁺), chloride (Cl⁻), magnesium (Mg²⁺), and bicarbonate (HCO₃⁻). Other impurities in tap water, such as organic matter and small particles, may also have affected the resistance of the water (UCSB).
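One way to see the size of this effect (illustrative numbers only; the water-path resistance was not measured in this study): conductive water forms a parallel path with the submerged wire, so the meter reads

1/R_meas = 1/R_wire + 1/R_water.

For example, a 5 Ω wire in parallel with a hypothetical 500 Ω water path reads R_meas = 1/(1/5 + 1/500) ≈ 4.95 Ω. Because the ion concentration, and hence R_water, fluctuates with temperature and mixing, the uninsulated readings scatter more from trial to trial.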

V. Conclusion

It was observed that the copper wire has approximately 40% less resistance than the aluminum wire. Copper's mean resistance was 4.76 Ω at 30 °C and 6.33 Ω at 100 °C, whereas aluminum's was 7.27 Ω at 30 °C and 9.27 Ω at 100 °C. As the calculated temperature coefficient of copper (0.004712) is higher than that of aluminum (0.00393), the resistance of copper increases proportionally more than that of aluminum as the temperature rises. However, the results of this experiment showed that the copper wire's resistance at all temperatures was approximately 40% lower than aluminum's, making the difference in their temperature coefficients negligible, especially in the context of the semiconductor industry, where most semiconductors are manufactured in varying temperature grades such as 0 °C to 70 °C and −40 °C to 85 °C. Therefore, copper's lower resistance makes it more suitable than aluminum for application in the semiconductor industry. This investigation has demonstrated that the experiment is suitable for teachers to use as educational material and for students to follow easily on their own. Students can learn about the application of metals' properties in the semiconductor industry and conduct an experiment that showcases the different resistance values of copper and aluminum at different temperatures.



References

Christensen, Douglas A. "Ohm's Law: Current, Voltage and Resistance." Synthesis Lectures on Biomedical Engineering, Morgan & Claypool Publishers, Jan. 2009, pp. 1–11, https://doi.org/10.1007/978-3-031-01638-7_1. Accessed 11 Aug. 2024.
CIRRIS. "Temperature Coefficient of Copper - Cirris Inc." Cirris Inc, 11 June 2023, cirris.com/temperature-coefficient-of-copper/. Accessed 22 Aug. 2024.
ELECTRICIAN U. "How Multimeters Measure Resistance: A Detailed Explanation – Electrician U." Electricianu2.com, 2023, electricianu2.com/how-multimeters-measure-resistance-a-detailed-explanation/. Accessed 12 Aug. 2024.
Erol, Mustafa, and Muhammed Emre Kuzucu. "Measuring Thermal Conductivity via Basic Home Equipment." Momentum: Physics Education Journal, Kanjuruhan University, Jan. 2022, pp. 19–28, https://doi.org/10.21067/mpej.v6i1.6306. Accessed 11 Aug. 2024.
Herman, Rhett. "An Introduction to Electrical Resistivity in Geophysics." American Journal of Physics, vol. 69, no. 9, American Institute of Physics, Sept. 2001, pp. 943–52, https://doi.org/10.1119/1.1378013. Accessed 14 Aug. 2024.
IBM. "Copper Interconnects | IBM." Ibm.com, 2024, www.ibm.com/history/copper-interconnects. Accessed 11 Aug. 2024.
Leng, Hongyan, et al. Research on Electrical Characteristics of CMOS Device at Cryogenic Temperature. Oct. 2018, https://doi.org/10.1109/icrms.2018.00056. Accessed 15 Aug. 2024.
Mavromihales, Michael, and K. Sherwin. "Design, Fabrication and Testing a Heat Exchanger as a Student Project." ASEE Annual Conference Proceedings, vol. 1999, ASEE, June 1999, pp. 1581–88, pure.hud.ac.uk/en/publications/design-fabrication-and-testing-a-heat-exchanger-as-a-student-proj. Accessed 11 Aug. 2024.
SONYG. "저항온도계수 [Temperature Coefficient of Resistance]." Naver Blog, 2022, m.blog.naver.com/tutto/222829935965. Accessed 23 Aug. 2024.
UCSB. "64.16 -- Conduction in Water." Ucsb.edu, 2024, web.physics.ucsb.edu/~lecturedemonstrations/Composer/Pages/64.16.html. Accessed 23 Aug. 2024.
Székely, V., et al. "CMOS Temperature Sensors and Built-in Test Circuitry for Thermal Testing of ICs." Sensors and Actuators A: Physical, vol. 71, no. 1-2, Elsevier BV, Nov. 1998, pp. 10–18, https://doi.org/10.1016/s0924-4247(98)00165-4. Accessed 11 Aug. 2024.
VMT. "Copper vs Aluminum: What's the Difference between the Two Metallic Materials?" Machining-Custom.com, 2023, www.machining-custom.com/blog/copper-vs-aluminum.html. Accessed 15 Aug. 2024.



A Study on the Space Use Status of High School Students and Implications of Future-oriented School Space Using Space Syntax

Author Full Name (Last Name, First Name): Jo, Ajin
School Name: Cheongshim International Academy

Abstract

This study aims to analyze high school students' spatial usage patterns using the Space Syntax methodology and to propose implications for future-oriented school spaces. The research subject is the classroom space of Cheongshim International Middle and High School, and the research methods combine Space Syntax analysis with surveys of students' spatial usage patterns. The Space Syntax analysis evaluates the school's floor plan based on indicators such as integration, connectivity, and visibility, while the survey of students' spatial usage patterns is conducted through observational studies and interviews. The analysis results reveal the blind spots in the school space and the students' usage patterns, confirming that spaces with low integration negatively impact students' safety and learning. In particular, a distinct difference was observed between frequently used spaces and those that are not, indicating a correlation between the physical structure of the space and students' behavior patterns. Additionally, it was noted that higher visibility of spaces tends to increase the frequency of student usage, emphasizing the need to consider visibility in space design. Based on these results, this study presents a vision for future-oriented school spaces and derives the following implications. First, it is essential to identify and manage spaces with low integration to ensure student safety. Second, improving the physical structure of spaces is necessary to create an environment that students can naturally utilize. Third, space design should reflect students' usage patterns, providing significant insights for educational policies and spatial design. This study can serve as foundational data for student-centered school space design and is expected to contribute to future research.

Keywords Future-oriented School Space, Educational Environment Innovation, Space Syntax, Convex Map, Students' Space Use Patterns



1. INTRODUCTION

1.1. Background and objectives of the study

The modern educational environment is rapidly evolving. Advances in technology and social changes demand new learning spaces that support future-oriented education. A future-oriented learning environment is not just a place for delivering knowledge but an innovative space where students can develop creativity, collaboration, and problem-solving skills. In 2017, the OECD published "The OECD Handbook for Innovative Learning Environments," emphasizing the importance of innovative school environments. In South Korea, the Gyeonggi Provincial Office of Education is implementing the 'Gyeonggi-type (School) Restructuring Project (2024-2028)' to transform existing school spaces into future-oriented learning environments. This change signifies that school spaces must be flexible and sustainable, supporting student-centered learning and accommodating various learning styles and activities.

Building and remodeling innovative, future-oriented schools is important. However, such policies require significant time and financial resources. It is therefore essential to analyze the current spatial structure of schools and students' usage patterns. Space Syntax is a spatial analysis technique that quantifies spatial structures and helps explain the interactions between spaces and their users. Applying Space Syntax to school environments allows for a deeper understanding of how current school spaces are utilized and helps identify areas that need improvement, providing crucial insights when schools aim to enhance educational environments and create future-oriented learning spaces.

It is also important to understand how students actually use the spaces. Students' actual use may differ from the intended use, and the quantitative results of Space Syntax may not always align with students' convenience or the goals of future-oriented school spaces. Therefore, surveys on space utilization are necessary: an accurate analysis of school spaces requires a combined study using Space Syntax spatial analysis and surveys on space usage. In particular, research should be conducted from the perspective of the students who use the school spaces, rather than solely from the viewpoints of architects or school policymakers. This study aims to propose improvements for future-oriented school spaces based on a vision of such environments. It analyzes current school spaces using the Space Syntax methodology and investigates students' actual space usage patterns to identify areas for enhancement toward creating future-oriented learning environments.

1.2. Research Scope and Methods

This study focuses on the classroom spaces of Cheongshim International Middle and High School. The research methodology combines Space Syntax analysis with an investigation of students' space utilization patterns. The Space Syntax analysis is conducted on the school's floor plans to evaluate indicators such as integration, connectivity, and visibility, using the DepthmapX software. The investigation of students' space utilization patterns is carried out through observational studies and interviews. By comparing the quantitative results from the Space Syntax analysis and the qualitative findings from the student utilization survey against future-oriented school space criteria, the study derives insights for improving school spaces.

2. THEORETICAL CONSIDERATIONS

2.1. Future-Oriented School Spaces

The Learning Policy Institute (2021) in the United States has proposed design principles for schools. These principles, developed with input from educators, practitioners, scientists, and parents, outline five key design principles for K-12 schools: ① spaces that foster positive developmental relationships between students and educators; ② environments filled with safety and a sense of belonging; ③ spaces that support rich learning experiences and knowledge development; ④ environments that promote the development of technology, habits, and mindsets; and ⑤ spaces with integrated support systems that reflect students' needs.[2] These principles aim to provide a comprehensive learning environment that supports students' learning styles, interactions, creativity, and collaboration, making them crucial indicators for evaluating future-oriented school spaces.

"Architect Magazine" provides various case studies and articles related to school architecture.[3] Key characteristics of future school spaces highlighted in the magazine include ① STEAM-centered multipurpose spaces, including maker spaces, collaborative areas, and project zones; ② outdoor classroom spaces; ③ hygiene facilities, with increased attention to hygiene post-COVID-19; ④ creative designs that reflect the school's identity and stimulate students' curiosity; ⑤ personalized learning spaces that allow students to explore their interests and passions; and ⑥ diverse learning areas, including spaces for individual study, team projects, and experimentation with new knowledge.

Dal-Hyo Kim (2021) analyzed teachers' space requirements based on the five characteristics of future-oriented school spaces proposed by Deed and Dwyer (2018): tradition, complexity, democracy, individuality, and technology.[4][5] According to the study, teachers emphasized the importance of an appropriate class size for performing their traditional roles of evaluating and teaching students, and prioritized the convenience and flexibility of desk arrangements. To meet the complexity of diverse student interactions, teachers deemed an open and active atmosphere in design most important. For realizing individuality in student-centered and independent learning, optimal lighting, noise control, and a comfortable, calm atmosphere were considered necessary. This research is significant in examining the direction of future school spaces in Korea from the teachers' perspective.

Ji-Yu Lee and Jong-Kook Lee (2019) specifically categorized future-oriented school spaces.[6] By analyzing prior research on future schools and examining six schools known for their future-oriented design, such as Pine Jog Elementary School (California, USA), the study identified common deficiencies in these schools, including adaptable spaces, counseling/consulting areas, lobbies/parent lounges, and community engagement spaces. These spatial categorizations of future-oriented school spaces are useful for understanding and interpreting school spaces in Korea.

| Category | Subcategory | Description |
|---|---|---|
| Learning Spaces | Flexible Spaces | Spaces adaptable to future teaching methods and changes in group sizes. |
| | Open Plan | Open spaces without dividing walls, allowing for flexible use. |
| | Common Spaces | Multipurpose areas including lockers, multifunctional rooms, and rest areas. |
| | ICT Integration Classrooms | Spaces equipped with ICT devices for ubiquitous learning. |
| Learning Community Spaces | Small Group/Project Spaces; Multipurpose and Individual Learning Spaces; Performance and Exhibition Spaces; Atrium | Areas that promote communication and interaction. |
| Support Spaces | Counseling and Consulting Spaces; Cafeteria and Dining Areas | Spaces for the convenience of school members, moving away from authoritarian management. |
| Community Integration-Convenience Spaces | Lobby, parent lounge, and shared community activity spaces | Spaces that open up the school to the community, serving as a link with the local community and providing community functions. |
| Eco-Friendly School | Ecological environment creation | Ecological spaces utilizing idle areas within the school, serving as observation and experiential learning environments. |

Table 1. Characteristics of Future-Oriented School Spaces. Source: Lee, Ji-You & Lee, Jong-Kuk (2019)

The discussion on future-oriented schools affects not only the construction of new schools but also the remodeling of existing school spaces. In Gyeonggi Province, where the target school is located, the Gyeonggi Provincial Office of Education is implementing the 'Gyeonggi-type (School) Restructuring Project (2024-2028).' This initiative aims to transform 154 aging schools, which are over 40 years old, into future-oriented teaching and learning environments over the next five years. The space restructuring project focuses on remodeling schools into future-oriented spaces with an emphasis on space innovation, smart classrooms, green schools, and school multifunctionality.[7]

| Category | Description |
|---|---|
| Space Innovation | Flexible spaces in size and usage: promotes elective learning and creative teaching. |
| Smart Classrooms | Digital-based personalized learning spaces: expands on/offline blended learning and student-centered activities. |
| Green Schools | Carbon-neutral and non-toxic environments: expands experiential environmental education. |
| School Multifunctionality | Sharing of school and community facilities: strengthens the school's role as a community hub. |
| Safety | Safe environments during construction: provides a safe and healthy learning environment. |

Table 2. Key Elements of Gyeonggi Provincial Office of Education's School Space Restructuring Project (2024-2028). Source: Gyeonggi Provincial Office of Education Press Release, January 24, 2024, "Gyeonggi-type Space Restructuring Project 5-Year Plan"

The school space restructuring project is generally evaluated positively by teachers, students, staff, and parents. This is because they view the project as not only a physical transformation of the school but also a shift towards a future-oriented learning environment and school community culture. The school space restructuring initiative reflects the policy direction for future-oriented school spaces in Korea. The OECD’s "The OECD Handbook for Innovative Learning Environments" (2017) can be seen as a comprehensive summary of the principles for future-oriented school spaces discussed earlier. Over the past decade, the OECD has conducted extensive research on future education through its Innovative Learning Environments (ILE) project. The result, "The OECD Handbook for Innovative Learning Environments (2017)," presents seven learning principles: Learner-Centeredness, Social Nature of Learning, Emotional Engagement, Individual Differences, Demanding Yet Manageable Learning Environments, Broad Assessments and Feedback, and Horizontal Connectedness. This handbook aims to assist in the innovation of schools and educational systems, focusing on learning principles, innovative learning environments, and change and innovation. The concept of future-oriented school spaces becomes a crucial criterion for evaluating existing school spaces. These criteria should be established based on existing research and principles of learning environments. This study, considering Korean characteristics, classifies learning spaces and creates criteria for analyzing existing school spaces using concepts from related research, policies, and OECD’s innovative learning environments.



| School Space Classification | School Space Names | OECD Innovative Learning Environments Principles | Characteristics of Future-Oriented School Space |
|---|---|---|---|
| Learning Spaces | Classrooms, special classrooms, multipurpose rooms, gymnasiums, individual study areas, club rooms, etc. | Learner-Centeredness; Reflection of Individual Differences; Challenge for All Learners; Promotion of Horizontal Connectedness | The learning environment should place learners and their engagement at the center: adequately sized learning spaces for the traditional roles of teachers. It should reflect and respect the individual differences of learners: dedicated individual study spaces. It should promote horizontal connections among various elements: flexible, adaptable learning environments accommodating diverse student interactions. All learners should be given challenging tasks without excessive overload: STEM-focused learning spaces integrating digital technologies. |
| Educational Community Spaces | Small group project spaces, performance and exhibition spaces, atriums, etc. | Social Nature of Learning | Learning is understood as a social process in which interaction is crucial: spaces where students can naturally gather for discussion and collaboration; democratic spaces that teach cooperation, decision-making, and mutual respect. |
| Support Spaces | Counseling rooms, health rooms, cafeterias, etc. | Emotional Engagement; Use of Broad Assessment and Feedback | The learning environment should be sensitive to learners' emotions and promote positive feelings: spaces for the convenience of school members, moving away from authoritarian management; spaces equipped for health and hygiene. Various assessment methods and feedback should be used to support learning. |
| Community Integration and Shared Convenience Spaces | Lobbies, parent lounges, community communication spaces, etc. | Partnerships | Highlights the importance of collaboration and relationship-building with various partners inside and outside the learning environment: spaces that facilitate community integration and perform community functions; areas where parents and others can participate and interact. |
| Other | Ecological environment creation, etc. | | Carbon-neutral and non-toxic eco-friendly spaces; ecological spaces within the school used for observation and experiential learning. |

Table 3. Criteria for Analyzing Future-Oriented School Spaces

2.2. Space Syntax Concept and Analysis Methods

The Space Syntax Network introduces Space Syntax as a scientifically based, human-centered approach that examines the relationships between spatial layouts and various social, economic, and environmental phenomena. These phenomena include movement, perception and interaction patterns, density, land use and value, urban growth, social differentiation, and the distribution of safety and crime. Space Syntax is a spatial analysis technique developed in the 1970s by Professor Bill Hillier and his colleagues at University College London (UCL).[9] Hillier and his colleagues believed that by observing and quantifying people's movements, interactions, and the density, distribution, and behavior within an area, they could understand how spatial configurations affect social phenomena.[10] In essence, Space Syntax focuses on analyzing the connections between individual spaces rather than the characteristics of each space. Its goal is to understand or predict the patterns and movements of people using the space. This makes Space Syntax a useful tool for analyzing school spaces from the perspectives of students and teachers.

Space Syntax analysis methods include convex maps and axial maps. A convex map divides spaces into convex areas for analysis and is primarily used to evaluate visual accessibility and spatial connectivity. For example, a building can be represented as a convex map, where rooms and entrances are treated as convex spaces and adjacent connections are treated as links. In a convex map, a floor plan can be transformed so that convex areas are outlined in red and entrances are marked in blue. This map can be converted into a graph where rooms are nodes and entrances are links; the resulting graph (J-graph) represents the depth of specific rooms.[11] An axial map divides spaces along the longest straight lines (axes) for analysis and is mainly used to assess movement paths and network efficiency. Axial maps are often used for analyzing regional and urban spaces, focusing on movement and connectivity within larger spatial networks. The tool is chosen based on the research objective: convex maps are primarily used for internal building space analysis, while axial maps are used for analyzing regional and urban spaces.[12]

Picture 1. Representations of space (convex map): a farmhouse plan (A), its convex map (B), and the corresponding graph (C). Source: UCL Space Syntax: https://www.spacesyntax.online

The terms used in Space Syntax analysis include depth, connectivity, integration, and intelligibility. Depth is the most basic unit in Space Syntax and is interpreted as the depth of a space. Depth is not the same as distance as we usually know it: it represents the number of spaces one must travel through to get from one space to another. Connectivity is the number of other spaces that can be reached directly from a given space. Integration refers to the accessibility of a space: high integration means good accessibility from the analyzed space to all other spaces, while low integration means poor accessibility. Intelligibility indicates the position of an individual space within the overall space, that is, whether a space is easy or difficult for humans to understand. A plaza, for example, is easy to recognize, while an amusement park maze is difficult; in terms of intelligibility, a plaza is a clear space, while a maze is a less clear one. Space Syntax analysis results are also displayed in color: the lowest value is purple, followed by blue, light blue, green, light green, yellow, pink, and red.
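As a didactic illustration of these terms, the toy graph below treats rooms as nodes and doorways as edges (J-graph style); the room names are hypothetical. Note that tools such as DepthmapX normalize integration further (via RA/RRA), so the simple inverse-mean-depth value here is only a teaching approximation, not the exact measure used in this study.

```python
import networkx as nx

# Toy convex-map graph: rooms are nodes, doorways (links) are edges.
g = nx.Graph()
g.add_edges_from([
    ("corridor", "room_a"), ("corridor", "room_b"),
    ("corridor", "hall"), ("hall", "room_c"),
])

for node in g:
    depths = nx.shortest_path_length(g, source=node)   # depth = steps between spaces
    mean_depth = sum(depths.values()) / (len(g) - 1)
    connectivity = g.degree(node)                      # directly reachable spaces
    integration = 1 / mean_depth                       # simplified: higher = more accessible
    print(f"{node}: connectivity={connectivity}, "
          f"mean depth={mean_depth:.2f}, integration~{integration:.2f}")
```

Running this shows the corridor with the highest connectivity and lowest mean depth, mirroring the corridor-centered results reported in section 3.3.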



Picture 2. Convex map (J-graph) analysis: a building layout with its graph representation and the integration thematic map. Source: UCL Space Syntax: https://www.spacesyntax.online

The Space Syntax analysis method for schools mainly utilizes convex maps, combining spatial analysis through convex maps with surveys of school staff. Tatsuya Kishimoto and Mayuko Taguchi analyzed 76 Japanese elementary schools with Space Syntax and compared the results with teachers' evaluations. The results showed that teachers preferred schools with high intelligibility and integration, and rated schools with higher levels of spatial integration as more interactive. A similar study of elementary schools argued that high integration promotes interaction between students and staff and improves the functionality of the educational environment.[13][14] Together, these two studies show that, for schools, higher integration and intelligibility lead to more interaction among students and staff. However, it is difficult to conclude that spaces with high integration and intelligibility are necessarily future-oriented school spaces. Lee, Ji-You, and Lee, Jong-Kuk (2019) applied Space Syntax analysis to schools in Denmark, the United States, and Dubai that are known as future-oriented schools, and summarized the spatial characteristics of future-oriented schools as follows.[15] First, the shapes of the rooms are diverse, atypical, and open, and the character of each room depends on whether it is connected to adjacent spaces. Second, the overall structure is not simple, owing to the variety of independent and open rooms in the space. Third, in the type with a curved corridor, the relationship with neighboring spaces is not as uniform as with a straight corridor because of the curved sections. As such, the spatial characteristics of schools known to be future-oriented may score lower in intelligibility and integration than straight corridors or simple school layouts. Therefore, it is important to be cautious about analyzing school spaces based on the results of Space Syntax analysis alone.

Despite these limitations, it is clear that the spatial characteristics of future-oriented schools are oriented toward increasing integration and intelligibility. Integration refers to the accessibility and centrality of a space, and intelligibility to the degree to which its structure can be understood; when both are high, students can easily understand and effectively use the space. This aligns with the principles of future-oriented learning spaces. Therefore, schools with complex spatial structures or curved corridors should take various measures to improve integration and intelligibility.

3. SPACE SYNTAX ANALYSIS OF SCHOOL SPACE CHARACTERISTICS

3.1. School Space Characteristics



Cheongshim International Academy is an international middle and high school located in Seorak-myeon, Gapyeong-gun, Gyeonggi Province, Republic of Korea. It is one of the high schools with an international program and is a global member of the Round Square Conference. The school foundation is Seonhak Academy. Opened in 2006, the school was modeled after the UK's Eton College and Harrow School. It operates six grades in total, three in middle school and three in high school, with four classes per grade, for a total of 24 classes. Annual enrollment is around 100 students per grade, making the total number of middle and high school students approximately 600. All students live in dormitories. The school's founding principles are "Loving Heaven, Loving People, Loving Country," and its motto is "Altruistic Mind, Creative Knowledge, Global Leadership." Accordingly, it implements international and American-style education.

The school is situated on a hillside, away from the town center, and consists of the main school building and two dormitory buildings. The school building has a basement level and five above-ground floors. Because of the sloped terrain, the first and third floors are both at ground level. The dormitories are located behind the school building and are connected through the third floor. The internal spaces of the school include not only middle and high school classrooms but also a large auditorium, gymnasium, library, dining hall, science laboratories, art rooms, music rooms, a home economics room, a health room, and a counseling room. The school's corridors are curved and connected by stairs and elevators. The basement level and first two floors primarily contain support facilities such as the dining hall, gymnasium, dance studio, library, home economics room, science laboratories, counseling room, and health room. The third to fifth floors mainly house classrooms and learning spaces.

Picture 3. View of Cheongshim International Middle and High School and floor plan



3.2. School Space Characteristics The Space Syntax analysis method for the target school follows these steps: First, identify the areas to be analyzed using the school's floor plan or CAD drawings and set the analysis scope. For this study, the "floor-by-floor guide" of the school’s first floor was utilized. However, spaces not indicated on the floor-by-floor guide, such as the area under the stairs on the basement level and the central hall cabinets on the second floor, were identified through on-site surveys. Second, draw a Convex Space: This involves segmenting the actual space into Convex Space units. Even if two parts of a space are technically the same, they are divided if parts of the space are not visually connected. For instance, although the dance room on the basement level is one space, it was divided into two blocks due to its angular layout. On the basement, first, and second floors, long and curved corridors were divided into three blocks considering visibility. In contrast, corridors on the third, fourth, and fifth floors, while still curved, were shorter and thus divided into two spaces. The gymnasium, with no blind spots, was drawn as one block, while the dining hall, despite being a single space, was divided into two due to its entrance and spatial layout. The playground, being a learning space, was drawn as a single space on the front of the first floor. The school’s location on a slope means that the first and third floors are connected to ground level. The corridor on the third floor, connecting to the dormitory outside and featuring seating and rest areas along the slope, was marked as a separate space. The third-floor outdoor space, with a recently constructed snack bar, the playground at the front of the school building, and the snack bar on the sloped rear (third floor) were connected by roads and walking paths (rest areas) and marked as separate spaces. Spaces not indicated on the floor-by-floor guide, such as the area next to the stairs on the basement level, the restrooms next to the auditorium on the first floor, the central hall cabinets and adjacent corridors in the library on the second floor, the stairs and corridor leading to the dormitory on the third-floor middle school classrooms, and the stairs connecting the third, fourth, and fifth floors, were also drawn separately. Third, Create a Convex Map: Import the convex spaces into the DepthmapX program and convert them into a convex map. Draw connection lines (links) for each space. Connect all spaces from the basement level to the fifth floor by drawing connection lines through elevators and stairs for each floor. The Convex Space used for Cheongshim International Academy in this study does not reflect the actual size of the school's spaces. The school's space was drawn in DepthmapX without using CAD programs. This allows for analysis of connectivity and integration even if the actual sizes of the spaces differ. Finally, check the analysis results and represent them visually. Interpret the findings based on the visual and quantitative data. This method provides a structured approach to analyzing spatial configurations and connectivity within the school, using Space Syntax tools to understand how different spaces relate to each other and how they impact movement and interaction. <B1 floor plane>

<B1 convex space>

Picture 4. Comparison of the convex spaces with the actual floor plan
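Although DepthmapX was used for the actual analysis, the underlying representation is simple to sketch. Below is a minimal illustration, in Python with the networkx library, of how a convex map reduces to a graph whose node degrees are the connectivity values reported in the next section. All space names and links here are hypothetical stand-ins, not the school's real drawing.

```python
# A minimal sketch of a convex map as a graph, assuming illustrative space
# names; the real map built in DepthmapX has far more convex spaces.
import networkx as nx

G = nx.Graph()
# One node per convex space; one edge per direct link (door, stair, lift).
G.add_edges_from([
    ("B1 corridor (left)",     "B1 corridor (center)"),
    ("B1 corridor (center)",   "B1 corridor (right)"),
    ("B1 corridor (center)",   "dance room (front)"),
    ("dance room (front)",     "dance room (rear)"),   # angular room split in two
    ("B1 corridor (left)",     "cafeteria waiting area"),
    ("cafeteria waiting area", "cafeteria"),
    ("B1 corridor (right)",    "stair B1-1F"),
    ("stair B1-1F",            "1F corridor (center)"),
])

# Connectivity of a space = how many spaces it is directly linked to.
for space, degree in sorted(G.degree, key=lambda kv: -kv[1]):
    print(f"{space}: connectivity {degree}")
```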



3.3. Space Syntax Analysis Results

The average connectivity of Cheongshim International Academy is 2.75; on average, each space is connected to 2.75 other spaces. The highest connectivity was found in the central corridors on the basement level (B1), the first floor (1F), and the second floor (2F), each connected to 10 spaces. In particular, the connectivity of the corridors on the basement, first, and second floors was higher than that of the corridors on the third, fourth, and fifth floors. Most of the learning spaces, such as classrooms, art rooms, and music rooms, are connected only to the corridors, resulting in a connectivity of 1. The connectivity map in Picture 5 shows that most classrooms connected to the corridors are marked in purple. Since classrooms have two doors as well as windows, low connectivity alone does not imply isolation. Given the nature of school spaces, corridors are expected to have higher connectivity than other learning spaces. High connectivity indicates frequent use by students, which should be considered when planning spaces so that movement is not obstructed while the spaces still serve as learning environments.

The average integration of the target school is 1.00. The area with the lowest integration in the school is the interior side of the dance room on the basement level (B1), with an integration value of 0.65; this is the only area marked in dark purple. The next-lowest areas are the space under the stairs on the basement level (B1) and the cafeteria, with integration values of 0.71. The cafeteria was analyzed separately from the waiting area in front of it (0.83). The highest integration was found in the left staircase of the central hall on the third floor, with a value of 1.46. Most staircases showed higher integration than other spaces, though the staircases on the basement level and the fifth floor had relatively lower integration. Among the corridors, the highest integration was found in the central corridor near the entrance on the first floor (1.43).

Picture 5. Space Syntax analysis results: connectivity and integration maps by floor (B1-5F)
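For readers who want to reproduce these measures, integration can be computed from the same graph representation. The following is a sketch under the standard Hillier-Hanson normalisation (integration = 1/RRA), which is the quantity DepthmapX reports; the example graph is a plain seven-space path, not the school's map, and the function assumes a connected graph with at least three nodes where some pair of spaces is more than one step apart.

```python
# Sketch of the integration measure on a convex-map graph, following the
# Hillier-Hanson 1/RRA formulation. Assumes a connected graph, k >= 3,
# and at least one pair of nodes more than one step apart (else RA = 0).
import math
import networkx as nx

def integration(G: nx.Graph) -> dict:
    k = G.number_of_nodes()
    # "Diamond" D-value that normalises RA across systems of different size.
    d_k = 2 * (k * (math.log2((k + 2) / 3) - 1) + 1) / ((k - 1) * (k - 2))
    values = {}
    for node in G:
        depths = nx.single_source_shortest_path_length(G, node)
        mean_depth = sum(depths.values()) / (k - 1)   # node itself adds depth 0
        ra = 2 * (mean_depth - 1) / (k - 2)           # relative asymmetry
        values[node] = d_k / ra                       # integration = 1/RRA
    return values

# Usage: on a corridor-like path of 7 spaces, the middle space integrates best.
print(integration(nx.path_graph(7)))
```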



A high integration value means that a space has good accessibility to all other spaces, while a low integration value indicates poor accessibility. In other words, areas with low integration are considered to be less frequented or less easily found by people. The detailed analysis of the school spaces is as follows: Firstly, most of the learning spaces, including middle and high school classrooms, science labs, and art rooms, did not exceed the overall average integration value of 1.00. However, the integration values for outdoor sports areas and the library were above 1.00. Middle school classrooms had higher integration values compared to high school classrooms, and the integration value was higher for the senior (Grade 3) classrooms compared to the freshman (Grade 1) classrooms. The integration values for the dance room, art room, and high school study room were relatively lower compared to other learning spaces.

Table 4. Space Syntax Analysis Results by School Space (*integration values are rounded to two decimal places)

Category | Space name | Connectivity | Integration*
Study space | High school classroom HS1 (5F) | 1 | 0.76
Study space | High school classroom HS2 (5F) | 1 | 0.76
Study space | High school classroom HS3 (5F) | 1 | 0.77
Study space | Middle school classroom MS1 (4F) | 1 | 0.95
Study space | Middle school classroom MS2 (4F) | 1 | 0.96
Study space | Middle school classroom MS3 (4F) | 1 | 0.90
Study space | High school study room (2F) | 2~3 | 0.81~0.93
Study space | Middle school study room (3F) | 1~2 | 0.89~1.03
Study space | Library (2F) | 1 | 1.02
Study space | Science lab (4F) | 1 | 0.90
Study space | Science lab (2F) | 1 | 0.92
Study space | Math classroom (4F) | 1 | 0.96
Study space | Music classroom (1F) | 1 | 0.94
Study space | Technology room (1F) | 1 | 0.94
Study space | Art classroom (2F) | 1 | 0.88
Study space | Dance practice room (B1) | 1~2 | 0.65~0.75
Study space | Indoor exercise room (1F) | 1 | 0.84
Study space | Outdoor stadium (1F) | 4 | 1.09
Study community space | International conference room (4F) | 1 | 0.90
Study community space | Sharing classroom (4F) | 1 | 0.86
Study community space | Seminar room (2F) | 1 | 0.75
Study community space | Medium-sized auditorium (1F) | 1 | 1.09
Study community space | Club activity room (B1) | 1 | 0.75
Study community space | Study room (B1) | 1 | 0.85
Study community space | Research room (B1) | 2 | 0.80
Support spaces for students | Counseling room (5F) | 2 | 0.89
Support spaces for students | High school teachers' room (left / right) | 2 / 2 | 0.86 / 0.85
Support spaces for students | Evaluation room (4F) | 2 | 0.96
Support spaces for students | Middle school teachers' room (left / right) | 2 / 1 | 1.09 / 1.02
Support spaces for students | Broadcasting department (2F) | 1 | 0.92
Support spaces for students | Nurse's office (1F) | 1 | 0.94
Support spaces for students | Administrator office (1F) | 1 | 0.94
Support spaces for students | Convenience store | 3 | 1.14
Support spaces for students | Cafeteria / waiting area (B1) | 2 / 7 | 0.71 / 0.83
Community and shared convenience spaces | Auditorium (1F) | 1 | 0.94
Community and shared convenience spaces | Vision Hall (1F) | 2 | 1.11
Community and shared convenience spaces | Counseling room (1F) | 1 | 0.94
Other | Staff resting room (4F) | 2 | 1.02
Other | Staff room (4F) | 1 | 0.86
Other | Walkway space (1F) | 3 | 1.14
Total mean | | 2.75 | 1.00

The learning community spaces include the international conference room, shared classrooms (G411, G412, G413), seminar rooms, Club Activity Room, Study Room, Research Room, and a medium-sized auditorium. Excluding the medium-sized auditorium, the integration values of the learning community spaces were below the average (1.00). Due to the nature of integration, corridors and stairs tend to have higher integration values, while classroom spaces may have lower values; still, it was unexpected that the integration values for community spaces were generally lower than those for learning spaces. Notably, the seminar room on the 2nd floor had a lower integration value than other community spaces on the same floor. The support spaces include the career counseling room, administrative office, evaluation room, health room, and convenience store. The integration values for support facilities were generally higher than those of learning and community spaces. In particular, the convenience store had a higher integration value than other support facilities. Conversely, the cafeteria had a lower integration value than other support facilities, which can be attributed to the generally low integration of the basement level. The regional linkage and public convenience spaces are designed for parents and local residents and include the large auditorium, Vision Hall, and counseling rooms. Although the large auditorium (1st floor) is used by students, it is primarily utilized for interactions with parents and external community members, so it is categorized as a community space. Vision Hall, located in the school's 1st-floor lobby, had a higher integration value than other interaction spaces; it is primarily used for events like bazaars. Most events involving residents and parents are held in the large auditorium, while ceremonies such as graduation and entrance ceremonies are conducted in the indoor gymnasium. Overall, spaces for interaction with residents and parents had higher integration values than learning and learning community spaces. These spaces are placed in areas with high external accessibility but away from learning spaces to avoid disrupting students' studies.

3.4. Comparison and Evaluation of Space Syntax Analysis and Space Utilization Patterns

The results of the Space Syntax analysis provide insights into the spatial layout of the school. For a more accurate analysis, it is essential to understand and interpret how students actually use the space. Comparing the Space Syntax results with students' space utilization patterns yields the following observations. Basement Level 1 (B1): This level contains many learning community spaces. Due to its location, most spaces in B1 have low integration values. In particular, the dance room has the lowest integration value in the entire school, and students also perceive the dance room and the space under the B1 stairs as significant blind spots; consequently, the school manages the dance room rigorously. The space in front of the cafeteria and next to the stairs also has a low integration value. Until last year, a convenience store was located here, which led to frequent student use; however, since early this year the convenience store has moved to an outdoor space on the 3rd floor, and students rarely use this area except during lunch hours. The corridor on B1 likewise has a lower integration value than the corridors on other floors. In practice, students use this level only for specific purposes, such as club activities or using the Study Room. In summary, the integration values of B1 generally align with students' space utilization patterns. First Floor (1F): This floor mainly contains regional linkage and public convenience spaces. The central lobby is visited by parents and residents and is also a frequently used area for students. The stairs and corridor near the large auditorium are used for events like on-demand lectures or briefings, making space utilization irregular.



The counseling room and gymnasium are used by parents, while the indoor gym also serves as the venue for graduation ceremonies, entrance ceremonies, and school festivals, and is thus used by students and community members alike. Second Floor (2F): This floor contains learning spaces and learning community spaces. The integration of 2F is lower than that of the 1st and 3rd floors. In particular, the integration values of the high school study room and seminar room are low. In terms of access, the seminar room is not connected directly to the corridor and must be reached through the 3rd-year high school study room, making it harder to reach than other learning community spaces. Additionally, doors leading to the high school study room are located on both sides of the central corridor, but the door facing the central hall is closed and currently used as cabinet space. Third Floor (3F): The third floor generally exhibits high integration values. The spaces used by middle school students, such as the middle school classrooms, have corridors connected to the outside, making it convenient for students to move to the dormitory; the recently relocated convenience store is also conveniently situated. The spatial analysis shows that the integration values for middle school classrooms and corridors are higher than those for high school classrooms and corridors. The middle school study room on the same floor is easily accessible, and its integration value is higher than that of the high school study room. This higher integration for middle school classrooms and study rooms likely reflects an intention to make school life more convenient for middle school students. Fourth Floor (4F): The fourth floor primarily contains learning community spaces, such as the faculty office, science labs, and international conference room. The central corridor on this floor has a high integration value, second on this floor only to the middle school classrooms. This corridor is frequently used by teachers before and after classes, and students often visit the science labs, international conference room, and evaluation room. The space with the lowest integration on the fourth floor is the restroom near the classrooms, while the central corridor has the highest. Overall, students' space utilization patterns align well with the integration values. Fifth Floor (5F)

The fifth floor consists of high school classrooms, generally characterized by lower integration values. Space utilization here tends to be quieter. The integration value of the senior year (Grade 3) classrooms is slightly higher compared to the first and second-year classrooms. This is likely because the senior students, who are preparing for university entrance exams, are placed in more accessible areas near stairs and elevators to facilitate easier movement. Does the Space Syntax analysis accurately describe current student behaviors? To explore this, we can examine specific cases. For example, the optimal step depth from the 1st and 2nd-year classrooms to the lobby in front of the cafeteria is 11 steps. According to Space Syntax analysis, it is most efficient to pass through 11 spaces to reach the cafeteria. However, this is not how students utilize the space.



Picture 6. Comparison of Step Depth Values by Route from 1st-Year Classrooms to the Cafeteria

High school 1st and 2nd-year students use three distinct routes to reach the cafeteria on the basement level for lunch. As illustrated in <Picture 6>, these routes are:
1. Taking the nearest central left staircase (on the left side of the image) down to the basement.
2. Going down from the 5th-floor left staircase to the 1st floor, and then using the right staircase to reach the cafeteria on the basement level.
3. Descending from the 5th-floor left staircase to the 2nd-floor left staircase, and then using the right-end staircase on the 2nd floor to reach the stairs leading to the cafeteria.
Among these routes, 1st-year students most frequently use route 3, followed by route 2, and then route 1. In contrast, 2nd-year students predominantly use route 2, followed by route 3, and then route 1. The step depth values are as follows: route 3 has a step depth of 12, route 2 a step depth of 13, and route 1 a step depth of 11. Although route 1 has the lowest step depth, suggesting it should be the most efficient route, actual student usage patterns differ; a sketch of this comparison follows below. This can be interpreted from the perspective of corridor integration: students tend to use the most integrated corridors even if they involve slightly more steps. For 1st-year students, who have less experience with the high school layout, route 3 is most commonly used, whereas 2nd-year students, who are more familiar with the space, prefer the more integrated route 2. Thus, as students become more accustomed to the space, they tend to choose routes with higher integration, even if these routes involve slightly more travel. This illustrates how students' spatial cognition and behavior are influenced by the integration of school spaces.
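Under illustrative assumptions, the comparison between the fixed routes and the optimal step depth can be sketched as follows. The chains of intermediate spaces are hypothetical placeholders tuned to reproduce the reported step depths (11, 13, 12), not the school's actual corridor and stair names.

```python
# Sketch of the route comparison with hypothetical chains of convex spaces.
import networkx as nx

def chain(prefix: str, n: int) -> list[str]:
    return [f"{prefix}{i}" for i in range(1, n + 1)]

route1 = ["5F classroom"] + chain("r1-", 10) + ["cafeteria"]  # 11 moves
route2 = ["5F classroom"] + chain("r2-", 12) + ["cafeteria"]  # 13 moves
route3 = ["5F classroom"] + chain("r3-", 11) + ["cafeteria"]  # 12 moves

G = nx.Graph()
for route in (route1, route2, route3):
    nx.add_path(G, route)

# Step depth along a fixed route = number of spaces crossed along it.
for name, route in (("route 1", route1), ("route 2", route2), ("route 3", route3)):
    print(name, "step depth:", len(route) - 1)

# The optimal step depth (as DepthmapX reports) is the BFS shortest path.
print("optimal step depth:", nx.shortest_path_length(G, "5F classroom", "cafeteria"))
```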

4. IMPLICATIONS FOR FUTURE-ORIENTED SCHOOL SPACES

4.1 Evaluation from a Future-Oriented School Space Perspective



The criteria for analyzing future-oriented school spaces involve comprehensive standards for school environments. This study examines the Space Syntax analysis results and student space utilization patterns to evaluate the school's spaces from a future-oriented perspective. Firstly, regarding learning spaces, the presence of individual study rooms and a variety of learning environments such as science labs, music rooms, and art rooms indicates that the school meets the requirements for future-oriented learning spaces. Although the integration of learning spaces was lower than average in the Space Syntax analysis, this is positive from the perspective of creating a quiet learning environment. Notably, the separation of noisy areas, such as the music room and the learning community spaces, from individual study rooms and academic classrooms is advantageous. However, assessments of the adequacy of learning space size or of STEM-focused learning environments were difficult to perform with Space Syntax analysis. Nevertheless, adaptable learning spaces should be considered in future spatial improvements. Secondly, learning community spaces are distributed in various forms. Unlike typical schools, Cheongshim International Academy, being a special-purpose school, has relatively many learning community spaces, and there are various spaces where students can naturally gather, discuss, and collaborate. However, the accessibility of these spaces, measured by integration, was lower than the school-wide average. Thus, measures to enhance the accessibility of learning community spaces are needed.

Current connectivity and integration

Connectivity and integration after improvement

Picture 7. Measures to improve accessibility to the seminar room and high school reading room (2F)

For instance, consider the seminar room and high school reading room on the 2nd floor. Currently, the seminar room is accessed through the high school reading room. If the seminar room were connected directly to the corridor, its integration would increase from 0.749 to 0.889, improving accessibility. In the case of the high school reading room, if the closed door towards the central hall were put back into use, the integration of the central reading room would increase from 0.809 to 0.892, improving its connection with the corridor; a toy illustration of this effect follows below. Although improving integration might align with future-oriented school spaces, it is essential to understand current student usage patterns for effective space improvement. Adding a door to the seminar room would reduce its internal space, and creating a passage from the central hall into the high school reading room might increase accessibility but could disrupt the reading room's study atmosphere. Therefore, depending on the characteristics of learning or community spaces, lower integration may sometimes be more beneficial for the learning environment. Thus, when improving existing school spaces, it is crucial to combine spatial analysis with user feedback and school policies.
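The direction of the first proposal can be checked on a toy graph: connecting a dead-end space directly to a corridor lowers its mean depth, and integration rises as mean depth falls. The fragment below uses hypothetical node names; the paper's 0.749 to 0.889 figures come from the full school map, so only the direction of change is reproduced here.

```python
# Illustrative check of the proposed door change on a toy fragment of the
# 2F layout (hypothetical node names, not the school's real convex map).
import networkx as nx

G = nx.Graph([
    ("corridor L", "corridor C"), ("corridor C", "corridor R"),
    ("corridor C", "HS reading room"), ("HS reading room", "seminar room"),
    ("corridor L", "stair"), ("corridor R", "central hall"),
])

def mean_depth(G: nx.Graph, node: str) -> float:
    depths = nx.single_source_shortest_path_length(G, node)
    return sum(depths.values()) / (len(G) - 1)

print("before:", mean_depth(G, "seminar room"))   # ~2.83
G.add_edge("corridor C", "seminar room")          # proposed direct door
print("after:", mean_depth(G, "seminar room"))    # 2.00; lower depth = higher integration
```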



Thirdly, the student support spaces such as the health room and the school store generally had good accessibility. The recent relocation of the store from the basement to the outdoor space on the 3rd floor is positive in terms of providing a relaxation area for students. Additionally, covering the sloped path from the dormitory to the school with a roof to protect against rain and snow is considered an improvement from a safety perspective. However, further consideration is needed regarding what constitutes an ideal relaxation space where students can relieve stress and recharge. Cheongshim International Academy has implemented facilities and programs for students' health and hygiene in response to the COVID-19 pandemic; given the nature of a boarding school, student health is managed through regular health check-ups, disinfection measures, and food hygiene management. Fourthly, regional linkage and shared convenience spaces are located on the 1st floor, which has relatively good connectivity and integration. Lobby areas, parent lounges, and regional linkage spaces are often lacking in schools both domestically and internationally, and Cheongshim International Academy also appears to have a shortage of regional linkage spaces. However, given the characteristics and location of a boarding school, interactions with the local community are challenging; nonetheless, spatial considerations for visitors, such as parent lounges, are necessary. Finally, regarding eco-friendly space creation and the development of green environments, this study did not specifically analyze these aspects. However, the school's location on a mountainside, surrounded by forest and featuring a grassy sports field and walking trails, indicates a higher affinity with the natural environment than urban schools have.

4.2 Improving School Spaces from a Future-Oriented Perspective

Future-oriented schools are characterized by high spatial connectivity and integration. Based on previous research, the spatial analysis of Cheongshim International Academy, and the analysis of students' space usage patterns, the following improvements to the current school spaces are suggested. Firstly, it is important to identify spaces with low integration. Low integration marks areas that are essentially blind spots; such spaces, which are not frequented or supervised, can pose safety risks. Therefore, to ensure student safety, it is essential to identify and manage low-integration areas, assessing integration in learning spaces and community areas alike. To improve school spaces from the students' perspective, understanding the structural characteristics of the school is crucial. Secondly, various alternatives should be devised and implemented to increase integration. As previously mentioned, a future-oriented school space does not necessarily mean high integration everywhere: spaces with curvilinear corridors or creative layouts, which are typical of future-oriented designs, may actually have lower integration. Therefore, even when a space is necessarily complex, various measures are needed to enhance integration and accessibility for students. At Cheongshim International Academy, the basement-level dance studio and the space next to the stairs are the school's most significant blind spots. To compensate for their low integration, measures such as installing CCTV or conducting regular inspections could be employed; however, it is also important to implement strategies that encourage frequent student use of these spaces.
For example, installing personal lockers in the corridor, designing the space with bright and cheerful colors, or using lighting and windows to create a more open environment are potential solutions. Cheongshim International Academy features curvilinear corridors, which can make spaces less legible and less integrated than straight corridors. While familiar to students, these corridors might be challenging for parents or prospective students. Solutions that would help users navigate the curvilinear corridors more easily include marking directions with colors at key junctions, such as near the lobby entrance, the elevator, and the points where stairs meet the corridor, and indicating the facilities that lie on each floor in each direction. Thirdly, it is necessary to increase the integration of learning community spaces. As previously suggested, improving the integration of the seminar room and high school reading room at Cheongshim



International Academy could enhance their functionality. Learning spaces, by their nature, may favor lower integration to maintain a quiet study environment. However, learning community spaces should be more open to facilitate student communication. Therefore, seminar rooms and small discussion rooms should be placed in areas with higher integration. Fourthly, it is important to utilize spaces with high integration. The areas with the highest integration and clarity in schools are typically corridors. Consider using corridors for functions like learning communication spaces or exhibition areas. Although expanding these spaces may be challenging, corridors should be thought of as part of the space for learning and community, not just as passageways. Finally, securing spaces connected to parents and the local community is necessary. For schools located in urban areas, communication with the community is vital. The Gyeonggi Provincial Office of Education’s "School Space Restructuring Project" also emphasizes the importance of community interaction through school integration. Spaces that facilitate interaction with the community without disrupting the learning environment are a key requirement for future-oriented schools. However, given the varying nature of schools such as boarding schools, it is important to consider the specific context when planning common convenience spaces.

5. CONCLUSION

5.1 Significance of the Study

Space Syntax was developed by Professor Bill Hillier, who borrowed the notion of "syntax" from linguistics and turned it into an engineering analysis method for interpreting building spaces and urban areas. Space Syntax is therefore not just a technique for analyzing spaces but can be considered an interdisciplinary field. Through this study, it became clear that Space Syntax provides a precise understanding of school spaces and, most importantly, offers insights into students' spatial usage patterns. The greatest significance of this research lies in its focus on future-oriented school spaces from the student's perspective and in its use of spatial analysis programs for this purpose. It is particularly meaningful that the study presents new perspectives on school spaces from the viewpoint of students' spatial cognition and suggests improvements. From a policy perspective, this research highlights the necessity of spatial analysis for the effective implementation of the Gyeonggi Provincial Office of Education's "School Space Restructuring Project." It also aids in understanding the spatial usage patterns of students, which is crucial for designing student-centered school spaces as advocated by the OECD. While people use spaces, spaces also change through human intervention; this study showed that adding new doors to existing spaces alters their spatial value. Space Syntax is thus a valuable analytical tool for creating future-oriented school spaces. The analysis of Cheongshim International Academy through Space Syntax clarified previously vague blind spots in the school and provided new insights into the placement of learning and community spaces and the recent relocation of the snack bar. Most importantly, the spatial analysis facilitated an understanding of the relationships between space and people and of students' behavioral patterns.

5.2 Limitations and Future Research

This study focuses on a case study of a single school, which limits the generalizability of the results; without comparison to other schools, the conclusions are confined to this particular case. Secondly, while Space Syntax is strong in quantitatively analyzing the physical structure of spaces, it was not combined with systematic qualitative analyses such as student surveys. The interpretation of usage patterns relied on first-hand experience, interviews, and field observations, which may reflect the



researcher's personal and subjective opinions. Thirdly, a comprehensive analysis using Space Syntax should include both Convex maps and Axial maps. Analysis of Intelligibility requires spatial analysis through Axial maps, which was not conducted in this study. Additionally, the analysis did not reflect the actual space sizes through CAD drawings, which is a limitation. Various research and policies are underway to develop future-oriented school spaces. Space Syntax is a useful analytical tool for planning and designing such spaces. Future research should aim for more specialized studies to contribute to creating more future-oriented and student-centered school environments.

REFERENCES

[1] OECD (2017). The OECD Handbook for Innovative Learning Environments. OECD Publishing, Paris. https://doi.org/10.1787/9789264277274-en
[2] Darling-Hammond, Linda; Cantor, Pamela; Hernández, Laura E.; Theokas, Christina; Schachner, Abby; Tijerina, Elizabeth; Plasencia, Sara (2021). Design Principles for Schools: Putting the Science of Learning and Development into Action. Learning Policy Institute. https://eric.ed.gov/?id=ED614438
[3] Architect Magazine. Future-Forward Middle-High School Design. https://www.architectmagazine.com/design/future-forward-middle-high-school-design
[4] Deed, C., & Dwyer, M. (2018). Five propositions: Representing design in action. In S. Alterator & C. Deed (Eds.), School Space and Its Occupation: Conceptualizing and Evaluating Innovative Learning Environments. Boston: Brill Sense. https://brill.com/display/book/9789004379664/BP000010.xml
[5] Kim, Dal-Hyo (2021). A Study on the Needs of Teachers for Future School Space. The Journal of Fisheries and Marine Sciences Education, 33(4), 894-902. https://doi.org/10.13000/JFMSE.2021.8.33.4.894
[6] Lee, Ji-You & Lee, Jong-Kuk (2019). A Study on the Architectural Design Feature for Future School: Focusing on the Space Composition of Educational Space. Journal of Education Green Environment Research, 18(2), 12-21.
[7] Gyeonggi Provincial Office of Education Press Release (2024, January 24). Five-Year Plan for Gyeonggi-type Space Restructuring Project.
[8] Yoo, Myoung-Hee (2023). A Study on the Effectiveness of Spatial Restructuring in School Space Innovation Projects. Journal of Korean Space Design, 18(1), 135-148.
[9] Space Syntax Network. https://www.spacesyntax.net/
[10] UCL Space Syntax. https://www.ucl.ac.uk/bartlett/ideas/space-syntax-human-dimension-architectural-space
[11] van Nes, Akkelies & Yamu, Claudia (2021). Introduction to Space Syntax in Urban Studies. Springer. https://link.springer.com/book/10.1007/978-3-030-59140-3
[12] Space Syntax Online. https://www.spacesyntax.online/overview-2/
[13] Kishimoto, Tatsuya & Taguchi, Mayuko (2014). Spatial Configuration of Japanese Elementary Schools: Analyses by Space Syntax and Evaluation by School Teachers. Journal of Asian Architecture and Building Engineering, 13(2), 373-380. https://doi.org/10.3130/jaabe.13.373
[14] Mustafa, Faris Ali & Rafeeq, Dalia Ali (2019). Assessment of Elementary School Buildings in Erbil City Using Space Syntax Analysis and School Teachers' Feedback. Alexandria Engineering Journal. https://doi.org/10.1016/j.aej.2019.09.007
[15] Lee, Ji-You & Lee, Jong-Kuk (2019, October 24). A Study on the Characteristics of Architectural Planning Space of the Future School by Space Syntax. Proceedings of the Korean Institute of Architects Conference, Chungnam.



Management of Fine Dust in School Life Focusing on Cheongshim International High School

Author
Full Name (Last Name, First Name): Jo, Ajin
School Name: Cheongshim International Academy

Table of Contents
1. Introduction
1.1 Problem Statement
1.2 Objectives of the Study
1.3 Scope and Methodology of the Study
2. Hazards of Fine Dust
2.1 Air Pollution Issues
2.2 Health Impacts of Fine Dust
2.3 Fine Dust and Adolescent Brain Health
3. Experiment on Fine Dust in Cheongshim International High School Life
3.1 Current Air Quality in Gapyeong County
3.2 Fine Dust Investigation at Cheongshim High School
4. Discussion Points
4.1 Fine Dust Management at Schools
4.2 Management of Fine Dust in Cheongshim International High School Life
5. Conclusion



1. Introduction

1.1 Problem Statement

Air pollution refers to the contamination of indoor or outdoor environments by chemical, physical, or biological factors that alter the natural characteristics of the atmosphere. It is linked to increased all-cause mortality and a higher prevalence of non-communicable diseases, such as respiratory illnesses, cardiometabolic disorders, cognitive impairment, and early childhood developmental issues. Among the various air pollutants, fine particulate matter presents a particularly serious threat. Numerous policies are being implemented to mitigate fine dust pollution, as these very small particles (PM10 and PM2.5) can easily enter the body and have a more profound impact on health than other pollutants. Adolescents, still in their developmental stage, may be especially vulnerable to the harmful effects of fine dust, which can affect both physical health and brain function.

1.2 Objectives of the Study

This study aims to examine the risks associated with fine dust and its impact on adolescent health, and to propose strategies for managing fine dust within the school environment.

1.3 Scope and Methodology of the Study

The study focuses on Cheongshim International High School in Gapyeong County, Gyeonggi Province. Research methods include a literature review and on-site experiments using a dust sensor and a simple air monitor to assess fine dust levels.

2. Hazards of Fine Dust

2.1 Air Pollution Issues

According to the World Health Organization (WHO), nearly the entire global population (99%) is exposed to levels of air pollution that increase the risk of diseases such as heart disease, stroke, chronic obstructive pulmonary disease (COPD), cancer, and pneumonia. For 2019, WHO reported that 6.7 million deaths occurred due to outdoor and household air pollution combined. Indoor air pollution is as significant as outdoor air pollution: research by the United States Environmental Protection Agency (EPA) indicates that indoor air pollution can be 2 to 5 times more severe than its outdoor counterpart. In 2020, WHO estimated that around 3.2 million deaths were attributable to household air pollution, with over 237,000 of these deaths occurring in children under the age of five. This burden falls primarily on women and children, who often handle household tasks such as cooking and gathering firewood. To mitigate the health impacts of air pollution, WHO has established Air Quality Guidelines (AQG). These guidelines set limits on specific air pollutants to help countries achieve air quality that protects public health, and they have been continuously updated since their initial release in 1987.

Table 3. WHO air quality guidelines



μg = microgram
a. 99th percentile (i.e., 3-4 exceedance days per year).
b. Average of daily maximum 8-hour mean O3 concentration in the six consecutive months with the highest six-month running-average O3 concentration.
Note: Annual and peak-season values refer to long-term exposure, while 24-hour and 8-hour values refer to short-term exposure.
Source: WHO (https://www.who.int/news-room/feature-stories/detail/what-are-the-who-air-quality-guidelines)

In 2020, air pollution caused significant numbers of premature deaths across the 27 European Union (EU-27) member states. Measured against the WHO's 2021 guideline levels, exposure to fine particulate matter above those levels resulted in 238,000 premature deaths, exposure to nitrogen dioxide above the guidelines led to 49,000 premature deaths, and acute exposure to ozone caused 24,000 premature deaths [3]. Despite continuous improvements in overall air quality, as reported by the European Environment Agency (EEA), current EU standards are still not met throughout Europe. In Korea, official reports on deaths due to air pollution are not disclosed. However, the seriousness of air pollution is recognized, and policies to reduce it are actively pursued. Korea operates a nationwide air quality monitoring network to measure and publicly disclose air pollution levels. The air quality criteria pollutants include sulfur dioxide (SO2), nitrogen dioxide (NO2), ozone (O3), carbon monoxide (CO), particulate matter (PM10 and PM2.5), and lead (Pb) [4].

Table 4. Air Quality Standards for Air Pollutants in Korea

Substance | Standard | Measurement Method
Sulfur dioxide (SO2) | Annual average: 0.02 ppm or less; 24-hour average: 0.05 ppm or less; 1-hour average: 0.15 ppm or less | Pulse U.V. Fluorescence Method
Carbon monoxide (CO) | 8-hour average: 9 ppm or less; 1-hour average: 25 ppm or less | Non-Dispersive Infrared Method
Nitrogen dioxide (NO2) | Annual average: 0.03 ppm or less; 24-hour average: 0.06 ppm or less; 1-hour average: 0.10 ppm or less | Chemiluminescent Method
Particulate matter (PM10) | Annual average: 50 μg/m³ or less; 24-hour average: 100 μg/m³ or less | β-Ray Absorption Method
Fine particulate matter (PM2.5) | Annual average: 15 μg/m³ or less; 24-hour average: 35 μg/m³ or less | Gravimetric Method or equivalent automatic measurement method
Ozone (O3) | 8-hour average: 0.06 ppm or less; 1-hour average: 0.1 ppm or less | U.V. Photometric Method
Lead (Pb) | Annual average: 0.5 μg/m³ or less | Atomic Absorption Spectrophotometry
Benzene (C6H6) | Annual average: 5 μg/m³ or less | Gas Chromatography
Source: https://www.me.go.kr/mamo/web/index.do?menuId=586

2.2 Health Impacts of Fine Dust

Common expressions for dust concentrations in the atmosphere include Total Suspended Particulate (TSP), PM10, and PM2.5. TSP represents the total amount of suspended particles in the atmosphere, while PM10 and PM2.5 refer to particles smaller than 10 micrometers and 2.5 micrometers in diameter, respectively. Fine dust does not consist of simple dust particles alone but also includes various harmful chemical components such as black carbon (BC), organic matter (OM), sulfate (SO₄²⁻), and ammonium (NH₄⁺). In particular, fine particulate matter (PM2.5), because it can penetrate deep into the lungs, is more detrimental to health than other air pollutants [5].

Figure 6. Size-dependent regional deposition of inhaled particulate matter

Source: Serafin et al. (2023), https://www.mdpi.com/2227-9059/11/5/1477

Fine dust not only causes respiratory diseases such as asthma, bronchitis, and pulmonary fibrosis but also triggers various other health problems, including cardiovascular diseases, allergic conditions, and even cancer. In particular, it has been found that as fine dust concentrations increase, mortality rates also rise: epidemiological studies indicate that long-term exposure to PM2.5 increases mortality by approximately 7% for every 5 μg/m³ increase in exposure. Beyond respiratory and cardiovascular diseases, the carcinogenic effects of fine dust are well documented. Moreover, exposure to elevated levels of fine dust is associated with metabolic disorders; studies report a correlation between increased fine dust concentrations and the incidence of type 2 diabetes. Recent research suggests that these health risks persist even at levels below recommended thresholds [6]. Recently, numerous studies have shown that fine dust is associated with brain diseases such as dementia. According to the Korean Dementia Association, fine dust can directly cause brain inflammation and neurodegenerative changes, leading to the onset of Alzheimer's disease, and can also promote arteriosclerosis, thereby inducing strokes and vascular dementia [7].
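As a back-of-envelope reading of the figure quoted above (~7% extra mortality per 5 μg/m³ of long-term PM2.5), the following sketch assumes the risk compounds multiplicatively across 5 μg/m³ increments; real exposure-response curves are estimated from cohort data and need not be exactly log-linear.

```python
# Rough relative-risk arithmetic for the ~7% per 5 ug/m3 estimate quoted
# above, assuming multiplicative compounding across increments.
def relative_risk(delta_pm25_ug_m3: float, rr_per_5: float = 1.07) -> float:
    return rr_per_5 ** (delta_pm25_ug_m3 / 5)

print(relative_risk(5))    # 1.07  -> about  7% higher mortality
print(relative_risk(20))   # ~1.31 -> about 31% higher mortality
```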



Research by Yonsei University College of Medicine and Gachon University College of Medicine has revealed that as concentrations of atmospheric pollutants (PM10, PM2.5, NO2) increase, the thickness of the cerebral cortex, which plays a crucial role in cognitive function, decreases, along with the volumes of brain structures such as the hippocampus, basal ganglia, and thalamus [8]. Professor Roh of Gachon University stated in an interview with a medical newspaper that the areas thinned by exposure to atmospheric pollutants are the brain regions responsible for learning and memory, and that even healthy elderly individuals without underlying conditions face an increased risk of accelerated brain aging and dementia from long-term exposure to air pollution [9]. Fine dust is particularly dangerous to children, pregnant women, and adolescents, as well as to individuals with respiratory and cardiovascular diseases. Importantly, because atmospheric environments are globally interconnected, fine dust pollution can have long-term and far-reaching impacts.

2.3 Fine Dust and Adolescent Brain Health

According to the European Environment Agency (EEA), more than 1,200 children under 18 years of age die annually in EEA member and collaborating countries due to air pollution. The EEA warns that air pollution triggers low birth weight, asthma, reduced lung function, respiratory infections, and allergies in children and adolescents, while increasing the risk of chronic diseases in adulthood. Furthermore, it suggests that air pollution influences children's brain development, contributes to cognitive impairments, and may play a role in the development of certain types of autism spectrum disorders [10]. There are two plausible pathways through which fine dust affects the brain and nervous system. First, atmospheric pollutants, primarily extremely fine particles, can enter the brain directly via the olfactory nerve or pass through the lungs and reach the brain through the bloodstream, causing direct damage (path 1). Second, atmospheric pollutants can enter the lungs through inhalation and impair lung function or cause lung inflammation (path 2). Impaired lung function can lead to decreased blood oxygen levels (hypoxia), systemic inflammation, oxidative stress, stiffening of the brain's arteries, and damage to small blood vessels. Additionally, atmospheric pollutants can trigger or exacerbate systemic inflammation through immune cells in the lungs, such as pulmonary macrophages, contributing to the presence or increase of inflammatory mediators [11].




According to a study published in the Journal of the American Medical Association Psychiatry, adolescents living in urban areas in the UK are about twice as likely to experience psychiatric symptoms compared to those living in rural areas. Researchers indicated that urban air pollution, particularly concentrations of pollutants such as nitrogen dioxide, nitrogen oxides, and particulate matter, increased the likelihood of experiencing psychiatric symptoms by 27% to 72% [2]. While it cannot be definitively concluded that air pollution causes adolescent psychiatric disorders, it is clear that air pollution does impact the lives and mental health of adolescents. The Adolescent Brain Cognitive Development (ABCD) Study in the United States, conducted by 21 research institutions involving approximately 12,000 adolescents, suggested that even levels of air pollutants considered safe could have negative effects on brain development and cognitive function in adolescents [12]. A review of 31 global studies on air pollution and developmental health indicated that high exposure to pollution in early childhood is inversely related to academic achievement and cognitive outcomes in children. A study by the US Environmental Protection Agency (EPA) involving about 4,500 children suggested that air toxics may increase the prevalence of Autism Spectrum Disorder (ASD) [13]. Research from various countries is ongoing regarding the impact of exposure to air pollution, including fine dust, during fetal, infancy, and adolescent stages on brain development. Most studies suggest that air pollution negatively affects cognitive abilities in infants and adolescents. While age-related dementia may seem a distant concern for adolescents, dementia caused by environmental pollution is a disease that develops over long periods of exposure in everyday life. Therefore, caution is warranted from adolescence onwards. Particularly, fine dust can penetrate directly into the brain, potentially leading to cognitive impairments. In fact, the Korean Dementia Association warns that even short-term exposure to fine dust such as PM2.5 can cause changes in brain blood vessels.

3. Cheongshim International High School Air Pollution Experiment

3.1 Air Pollution Status at Cheongshim International High School

The air monitoring network in Gapyeong County consists of stations at the Seorak-myeon Administrative Welfare Center and the Gapyeong County Council Building. The Environmental Information Service measures the pollution levels of the environmental standard substances and publishes them on its website [14]. In 2023, the air pollutant measurements for Gapyeong County (Seorak-myeon Administrative Welfare Center) showed that the concentration of particulate matter 10 μm or smaller (PM10) ranged from 33 μg/m³ to 57 μg/m³, and fine particulate matter 2.5 μm or smaller (PM2.5) ranged from 11 μg/m³ to 21 μg/m³. Considering that the annual-average air quality standard is 50 μg/m³ or less for PM10 and 15 μg/m³ or less for PM2.5, the air quality in Gapyeong County is not satisfactory [15]. Particularly high levels of fine dust were observed in spring and autumn.

Table 5. Measurement Results of Air Pollutants in Gapyeong County

Month | SO₂ (ppm) | NO₂ (ppm) | CO (ppm) | O₃ (ppm) | PM2.5 (㎍/㎥) | PM10 (㎍/㎥)
2023.04 | 0.001 | 0.010 | 0.367 | 0.040 | 21 | 57
2023.05 | 0.001 | 0.007 | 0.339 | 0.040 | 16 | 35
2023.06 | 0.001 | 0.006 | 0.361 | 0.040 | 14 | 26
2023.07 | 0.001 | 0.005 | 0.372 | 0.030 | 13 | 25
2023.08 | 0.001 | 0.004 | 0.330 | 0.025 | 11 | 21
2023.09 | 0.001 | 0.004 | 0.336 | 0.022 | 11 | 21
2023.10 | 0.001 | 0.008 | 0.379 | 0.017 | 15 | 26
2023.11 | 0.002 | 0.011 | 0.487 | 0.020 | 21 | 33

Source: https://www.gp.go.kr/portal/contents.do?key=653
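As a quick sanity check on these figures, the monthly values above can be averaged and compared against the annual standards from Table 4. Note that only eight months are published, so the mean below is a rough proxy for a true annual average.

```python
# Quick check of Table 5 against the annual standards
# (PM10 <= 50, PM2.5 <= 15 ug/m3), using the eight published months.
pm25 = [21, 16, 14, 13, 11, 11, 15, 21]   # Apr-Nov 2023, ug/m3
pm10 = [57, 35, 26, 25, 21, 21, 26, 33]

print(sum(pm25) / len(pm25))   # 15.25 -> at/above the PM2.5 standard of 15
print(sum(pm10) / len(pm10))   # 30.5  -> within the PM10 standard of 50
```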



The Korea Environment Corporation operates the Air Korea website, which provides real-time air quality information [16]. On June 30, 2024, the PM2.5 level in Gapyeong County was in the moderate band (16-35 ㎍/㎥) at the Gapyeong County Council Building and in the good band (0-15 ㎍/㎥) at the Seorak-myeon Cultural Center. In Seorak-myeon, Gapyeong County, where Cheongshim International High School is located, PM2.5 and PM10 levels on June 30, 2024 were reported as good, and ozone levels as moderate. Since the school does not have its own particulate matter monitoring equipment, fine dust levels at Cheongshim International High School are expected to be similar to those in Gapyeong County as a whole.

Figure 8. PM2.5 Status in Gyeonggi Province (2024.06.30.)
Figure 9. Atmospheric Pollution Status in Seorak-myeon, Gapyeong County (2024.06.30.)

Source: https://air.gg.go.kr/

There are no official statistics available on indoor fine dust levels at Cheongshim International High School. Gapyeong County conducts indoor air quality measurements for multi-use facilities such as hospitals, libraries, theaters, nursing homes, kindergartens, indoor parking lots, internet game facilities, and academies; however, whether Cheongshim International High School has ever been surveyed for fine dust is unknown.
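The qualitative bands used above can be expressed as a small helper. A hedged sketch: the "good" and "moderate" cut-offs below are the ones quoted in this section, while the "bad" (36-75) and "very bad" (76+) cut-offs follow the commonly published Korean index and are assumptions here, not values taken from this paper.

```python
# Helper mirroring the Air Korea PM2.5 bands; the 36-75 ("bad") and 76+
# ("very bad") cut-offs are assumed from the published Korean index.
def pm25_grade(concentration_ug_m3: float) -> str:
    if concentration_ug_m3 <= 15:
        return "good"
    if concentration_ug_m3 <= 35:
        return "moderate"
    if concentration_ug_m3 <= 75:
        return "bad"
    return "very bad"

print(pm25_grade(12))   # "good", e.g. Seorak-myeon on 2024-06-30
print(pm25_grade(21))   # "moderate", e.g. Gapyeong in April/November 2023
```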



3.2 Fine Dust Investigation at Cheongshim International High School

(1) Methodology

Fine dust measurements were conducted using an Arduino board with a dust sensor (waveshare.com - Dust Sensor). While absolute fine dust concentrations are difficult to measure with this device, the sensor makes it possible to identify relative differences in concentration and to locate areas with higher fine dust levels. The Dust Sensor reports PM2.5 concentrations (μg/㎥) and monitors them continuously in real time; once the repeated readings stabilize, that value can be taken as the local fine dust level in the experimental area.

(2) Target of Investigation

The investigation targeted classrooms and dormitory rooms at Cheongshim International High School.

(3) Results of Investigation

On July 11th and 12th, second- and third-year classrooms and dormitory rooms at Cheongshim International High School were investigated over two days. The investigation revealed implausibly high readings in the classrooms: the most frequent figure reported by the Dust Sensor was 6,504.96 μg/㎥, a concentration far beyond anything realistic. Despite reviewing the program and checking the device connections, no method of correction was found. Therefore, rather than evaluating absolute concentrations, the study opted for a relative evaluation across the varying conditions that could affect fine dust levels in classrooms and dormitories.

Figure 10. Dust Sensor

Table 6. Example of the results

Source: https://www.waveshare.com/dust-sensor.htm

To measure relative concentrations, in classrooms, we set the fine dust level to 100 when arriving in the morning without opening windows, and compared it to the level when windows were opened. For dormitory rooms, we set the fine dust level to 100 upon entry without air conditioning, and compared it to levels after turning on the air conditioner and after opening windows for ventilation.
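A short sketch of this normalisation follows, with hypothetical raw readings (as noted above, the sensor's absolute values are uncalibrated, so only the ratios are meaningful).

```python
# Sketch of the relative-concentration normalisation described above: the
# first stable reading under the baseline condition is fixed at 100 and
# later readings are scaled against it. The raw numbers are hypothetical.
def relative_levels(readings: list[float]) -> list[int]:
    baseline = readings[0]
    return [round(100 * r / baseline) for r in readings]

# Closed windows at morning arrival, then after ventilation.
print(relative_levels([6504.96, 5659.3]))   # -> [100, 87]
```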



Table 7. Fine Dust Concentration Measurement Results at Cheongshim International High School

Category | Condition | Fine dust concentration (relative)
Classroom | After morning arrival with closed windows | 100
Classroom | After ventilation | 87
Dorm room | Without air conditioning | 100
Dorm room | After 1 hour with air conditioning | 92
Dorm room | After opening windows for ventilation | 79

Note: The value of 100 for classrooms and for dormitory rooms does not indicate the same absolute fine dust concentration. It serves as a baseline for comparing conditions before and after ventilation, and before and after air-conditioner operation.

4. Discussion Points

4.1 Fine Dust Management at Schools

(1) Fine Dust Countermeasures

The government is implementing various policies to prevent fine dust pollution. Considering Korea's geographical situation, in which fine dust concentrations are high in spring and fall, the government enforces seasonal management measures: from December to March each year, operations at factories and power plants are reduced, the use of old diesel vehicles is restricted, and traffic demand is managed. Additionally, indoor air quality in heavily used multi-use facilities, accurate fine dust observation, and information disclosure are managed to control fine dust pollution. Gapyeong County executes measures similar to the national fine dust policies, including restrictions and enforcement on old diesel vehicle operation, emergency fine dust reduction orders, seasonal management (December to March), special inspections in preparation for spring fine dust and yellow dust, dust suppression and prevention facilities at construction sites, and management of air pollution from emitting businesses. Outdoor and indoor air pollutant measurements are also conducted to manage air pollution. Encouraging resident participation is essential in agricultural areas like Gapyeong County, where fine dust is generated by agricultural activities. In addition to the responses of the government and Gapyeong County, I believe fine dust measures specifically tailored to youth are needed: adolescents are the generation most vulnerable not only to the current health effects of fine dust pollution but also to its future health impacts. The directions for such measures are as follows. First, fine dust environmental standards targeting adolescents should be strengthened. Environmental organizations argue that Korea's current fine dust standards should be tightened to align with WHO recommendations [17]; currently, Korea's fine dust (PM10, PM2.5) standards allow higher concentrations than the WHO recommends. If immediate strengthening across the board is challenging, priority should be given to facilities used by sensitive groups, such as facilities for pregnant women, childcare centers, indoor playgrounds, and middle and high schools. Second, the installation of fine dust alert devices in schools and facilities used by adolescents should be made mandatory. The government should ensure that fine dust alert devices are provided in classrooms, corridors, indoor gymnasiums, and playgrounds frequented by infants and adolescents, as well as in private facilities such as indoor



playgrounds and study rooms. Third, more research is needed on the impact of fine dust on infants and adolescents, particularly on brain health. The ABCD Study in the United States is a large-scale study of adolescent mental health, investigating topics related to adolescent brain health including drug addiction, living environment, air pollution, and brain development. Such research is conducted at the national level out of the understanding that adolescent brain health is critical to the nation's future. The European Environment Agency (EEA) likewise conducts various studies on how children and adolescents are affected by air pollution. These findings are publicly available on the institutions' websites and are promoted through press and broadcast media to encourage adolescents to live in healthier environments. Given the burden of spring and fall fine dust pollution in Korea, more extensive research on the effects of fine dust on exposed infants and adolescents is crucial.

4.2 Fine Dust Management in Daily Life at Cheongshim International High School

(1) Reducing Fine Dust in Daily Life

The Ministry of Environment suggests eight ways to reduce fine dust in daily life. First, four daily practices: ① walk short distances and maintain eco-friendly driving habits (avoid rapid acceleration, speeding, and excessive idling); ② reduce waste disposal to decrease incineration and fine dust emissions; ③ maintain appropriate indoor temperatures (20℃) during winter to reduce wasted energy; ④ report illegal incineration or dumping immediately rather than turning a blind eye. Second, four family practices to protect family health: ① ventilate for at least 30 minutes after cooking, even on days with bad fine dust levels; ② regularly inspect the filters in air purifiers and ventilation systems; ③ wash hands and face and brush teeth after going out to remove fine dust particles; ④ avoid vigorous exercise on days with very bad fine dust levels [17]. From the adolescent's perspective, awareness of fine dust information deserves emphasis: since fine dust affects not only respiratory health but also adolescents' mental health, understanding the difference that responding to fine dust makes, compared with ignoring it, can significantly shape behavior. Furthermore, habits such as preparing a mask on days with high fine dust levels, much like carrying an umbrella on rainy days, should become normal, and those with respiratory diseases in particular should pay closer attention to fine dust. Additionally, maintaining a diet rich in water, fruits, and vegetables, which can help the body eliminate waste materials, can also contribute to responding effectively to fine dust.

(2) Fine Dust Response Strategies at Cheongshim International High School

Based on the results of the fine dust experiments, the following response strategies are proposed. Firstly, classrooms and dormitory rooms should be ventilated regularly. There was a significant difference in fine dust levels before and after ventilation, so students should ventilate classrooms immediately after arrival to reduce fine dust levels, and the school should treat classroom ventilation as a priority activity. Appointing a "Clear Air Keeper" at the school could also be beneficial.
The Clear Air Keeper would be responsible for ventilating classrooms and managing activities that generate dust. Secondly, dormitory rooms should also be ventilated like classrooms. Regular ventilation in the morning and evening is necessary to reduce fine dust. Cheongshim High School is located on a mountainside with good air circulation and relatively good air quality, so frequent ventilation is advisable. Even on days with high fine dust levels, it is recommended to ventilate briefly to prevent the accumulation of pollutants such as carbon dioxide and formaldehyde, which degrade indoor air quality.



Thirdly, regular cleaning of the air conditioner filters in classrooms and dormitory rooms is essential. As the fine dust investigation indicated, fine dust levels decreased when the air conditioner was running, presumably because its filter removed particles from the air. However, if dust accumulates on and contaminates the filter, the air conditioner could become a device that generates rather than reduces fine dust, so special attention should be given to filter maintenance. Fourthly, students should take an interest in, and actively participate in, activities aimed at reducing air pollution in Gapyeong County. Although air pollution in Gapyeong County is not as severe as in Seoul or other major cities, it can arise from wildfires or agricultural activities, so students should participate in local community activities aimed at preventing air pollution, such as wildfire prevention and reporting emission violations. Lastly, since fine dust is related to brain health, attention should also be paid to stress management on days with high fine dust levels: techniques such as deep breathing and mindfulness can reduce stress, and managing emotions together with friends can help prevent excessive emotional reactions. In conclusion, considering the impact of fine dust on adolescent brain health, it is recommended that fine dust measurement devices be installed indoors and outdoors at the school. Cheongshim International High School, located on a mountainside, has better air quality and is less affected by fine dust than urban high schools; even so, installing measurement devices would raise students' awareness and sharpen their perception of the risks associated with fine dust.

5. Conclusion Fine dust is a critical air pollutant that poses serious health risks to adolescents. Particularly affecting respiratory health, it also impacts cognitive functions such as learning and memory, underscoring the need for effective management. Considering the impact of fine dust on adolescent health, efforts to prevent fine dust pollution require collaboration not only from adolescents themselves but also from schools, local communities, and national authorities. This includes establishing stricter fine dust environmental standards. Specifically, standards for facilities and schools targeting adolescents need to be strengthened. There are limitations to this study. Firstly, although fine dust was measured using Arduino boards, the study was limited by the researcher's technical capabilities and a short survey period, which may have hindered comprehensive field research. Particularly due to measurement errors, absolute values could not be accurately determined. Nonetheless, the study holds significance in its relative comparisons of fine dust levels and the interpretation of results.



References
1. World Health Organization. (n.d.). Dementia. https://www.who.int/news-room/fact-sheets/detail/dementia
2. BBC Korea. (2018, November 5). Diseases caused by fine dust: "500,000 deaths" (WHO report) [in Korean]. https://www.bbc.com/korean/news-46093494
3. European Environment Agency. (2022). Air quality in Europe 2022. https://www.eea.europa.eu/publications/air-quality-in-europe-2022/health-impacts-of-air-pollution
4. Serafin, P., Zaremba, M., Sulejczak, D., & Kleczkowska, P. (2023). Air pollution: A silent key driver of dementia. Biomedicines, 11(5), 1477. https://doi.org/10.3390/biomedicines11051477
5. Ritz, B., Hoffmann, B., & Peters, A. (2019). The effects of fine dust, ozone, and nitrogen dioxide on health. Deutsches Ärzteblatt International, 116(51-52), 881-886. https://doi.org/10.3238/arztebl.2019.0881
6. Korean Dementia Association (대한치매학회). (n.d.). http://www.dementia.or.kr
7. Cho, J., Jang, H., Park, H., Noh, Y., Son, J., Koh, S., ... Kim, C. (2022). Mechanisms linking particulate matter exposures and cognitive impairment: Epidemiological evidence. Korean Society of Toxicology Symposium and Annual Meeting 2022, 21.
8. Medical Newspaper (의학신문). (2021, February 15). http://www.bosa.co.kr/news/articleView.html?idxno=2144223
9. European Environment Agency. (n.d.). Air pollution and children's health. https://www.eea.europa.eu/publications/air-pollution-and-childrens-health
10. Aretz, B., Janssen, F., Vonk, J. M., Heneka, M. T., Boezen, H. M., & Doblhammer, G. (2021). Long-term exposure to fine particulate matter, lung function and cognitive performance: A prospective Dutch cohort study on the underlying routes. Environmental Research, 201, 111533. https://doi.org/10.1016/j.envres.2021.111533
11. National Institute of Mental Health. (n.d.). What is the ABCD Study? https://www.nimh.nih.gov/research/research-funded-by-nimh/research-initiatives/adolescent-brain-cognitive-developmentsm-study-abcd-studyr
12. Ha, S. (2021). Air pollution and neurological development in children. Developmental Medicine & Child Neurology, 63(4), 374-381. https://doi.org/10.1111/dmcn.14758
13. Air Environment Information Service, Gyeonggi Province (대기환경정보서비스). (n.d.). https://air.gg.go.kr/
14. Gapyeong County Office. (n.d.). https://www.gp.go.kr
15. Air Korea. (n.d.). https://www.airkorea.or.kr
16. Policy Briefing Korea. (n.d.). https://www.korea.kr



Optimizing Cultured Meat Production: Development of Affordable Conditioned Media Enriched with Afamin and Wnt3a Proteins

Author 1: Jung, Yae Joon (Phillips Academy Andover)
Author 2: Kim, Siyoon (Cheongna Dalton School)
Author 3: Kim, Eyoung (Branksome Hall Asia)
Author 4: Son, Yubin (Concordia International School Shanghai)
Author 5: Cho, Nayoon (Korea International School Jeju)

Keywords: Afamin and Wnt3a, cultured meat, carbon emission, food scarcity, animal rights, livestock, conditioned media, FBS



Problem Statement

Beef and its production processes now cause more problems for society than they solve in supplying food. These problems can be narrowed down to excessive carbon emissions, water shortage, potential food scarcity, and the violation of animal rights. Acknowledging this, cultured meat has been introduced to the market; however, it too has failed to become a major food supply for humanity. Aiming to overcome these challenges, we present a novel solution: using Afamin and Wnt3a to accelerate the cultivation of cell-based meat. This solution is projected to address the problems above and to become an alternative to conventional meat products.

The first challenge stems from the immense environmental damage caused by excessive carbon dioxide emissions from farmland and livestock. During cropping, grazing, and raising livestock, farms burn manure and biomass, releasing toxic waste chemicals into the atmosphere. Total emissions from global livestock amount to about 7.1 gigatonnes of CO2-equivalent per year, approximately 14.5 percent of all anthropogenic greenhouse gas emissions, and cattle are the primary contributor, responsible for around 65 percent of emissions within the livestock sector. The devastating impact of farmland expansion on the environment, combined with the unceasing demand for beef, suggests that cell-based meat production could be a fitting solution, since it occupies far less land than farms raising livestock, especially cattle. To reduce livestock carbon emissions, we must devise a method to lessen the population of beef cattle, which our conditioned media can support by promoting the growth of cell-based meat.

The CO2 and methane generated by livestock are critical because they are major drivers of climate change. Forty-four percent of livestock emissions are methane, and methane can trap up to 21 times more heat than carbon dioxide, so the accumulation of these gases in the atmosphere intensifies the greenhouse effect and can lead to a serious climate crisis. Environmental groups predict that the world needs to produce 50 percent more food without expanding the food system's carbon footprint. To minimize that footprint, we must develop an immediate alternative to conventional meat: cell-based meat. However, it currently takes over three weeks to fully cultivate cell-based meat, so a more efficient way to grow and sell it is needed. Thus, we need conditioned media that accelerates the proliferation of cells.
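To make the emissions arithmetic above concrete, the short Python sketch below recombines the figures cited in this section (7.1 Gt CO2-eq, a 44% methane share, and the heat-trapping factor of 21); it is a back-of-the-envelope illustration, not data from our experiments.

# Back-of-the-envelope split of livestock emissions, using the figures cited above
total_co2eq_gt = 7.1      # global livestock emissions, Gt CO2-equivalent per year
methane_share = 0.44      # fraction of livestock emissions attributed to methane
gwp_methane = 21          # heat-trapping potential of methane relative to CO2 (value used in the text)

methane_co2eq_gt = total_co2eq_gt * methane_share   # ~3.1 Gt CO2-eq attributable to methane
methane_mass_gt = methane_co2eq_gt / gwp_methane    # ~0.15 Gt of actual CH4 emitted

print(f"Methane: {methane_co2eq_gt:.1f} Gt CO2-eq/yr, i.e. {methane_mass_gt:.2f} Gt CH4/yr")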

Figure 1. Amount of water required to produce one pound of each food: vegetables, fruits, milk, cereal, eggs, chicken, pork, nuts, and beef (adapted from Water Footprint Network)



The second challenge stems from the overconsumption of water needed to raise livestock and operate agricultural technologies. Producing one pound of beef requires approximately 1,847 gallons of water (Figure 1); to put that in perspective, this amount would completely fill about 39 bathtubs. The water consumption of a single cow becomes overwhelming when multiplied across the roughly 1 billion domesticated cows worldwide. Partly as a result of such excessive water usage, about 1.6 billion people currently suffer from water scarcity according to the UN, and by 2030, under the existing climate change scenario, almost half the world's population will be living in areas of high water stress. This highlights the significant impact that water-intensive practices have on global water resources and the urgency of reducing the water required to raise livestock.

Figure 2. The growing global meat consumption from 1961 to 2022 (adapted from Sustainable Food Source)

The third challenge is potential food scarcity driven by urbanization. By 2050, more than 70 percent of the world's population will be urban, and urbanization will change consumption patterns. For example, global meat consumption has risen steadily since 1961 as the world has urbanized (Figure 2), and it is estimated that we may need 70% more animal products by 2050 to feed the world. With escalating global demand, the supply of meat may fall short, leaving a significant portion of the population impoverished and malnourished. In response to the rising demand for semi-processed and ready-to-eat foods, we should promote the consumption of cell-based meat as a future protein source.

The final challenge is the violation of animal rights on farms. As a capitalistic society pursues pure profit over other considerations, farms that raise and sell livestock objectify animals and use any means necessary to increase the efficiency of their sales. This system has two major aspects: raising and slaughtering. Over 70% of hens raised solely for their eggs are crammed into cages for their entire lives, and a female cow's offspring are "taken away" less than 24 hours after birth to secure milk from the mother. These methods of treating animals are unethical, encompassing neither respect nor compassion toward living beings. The other aspect is the slaughtering process. Typical methods of slaughtering involve the penetrating



captive bolt (firing a bolt gun at the brain of the animal), electrical shock (electrocuting the livestock), and gas killing (exposing animals to high concentrations of gas); all of these result in the immediate death of the animal. These are some of the cruel ways in which livestock are handled without dignity and are purely objectified. Implementing our cell-based meat production method as an alternative to natural meat would decrease, and possibly eliminate, these inhumane actions against livestock.

Figure 3. Price comparison between two types of meat-producing methods (adapted from bonappetit.com)

Besides the challenges concerning the environment and a sustainable food supply, there are problems within cultured meat production itself. The three major challenges are the high price of production, technical hurdles, and scale-up difficulties.

The first issue is the high manufacturing cost of lab-cultured meat, which is priced almost 50% higher than naturally farmed meat (Figure 3). Experts project this price to increase further as the product passes through various retailers, ultimately reaching at least 100 US dollars per pound while other meat remains far more affordable. This leap in price contradicts the common expectation that lab-grown beef would be cheaper, leading to a sharp decline in customers. Since this price reflects the various specifications of the production process, developing a new production method could be a solution to the excessive cost of lab-cultured meat.

The second issue concerns the sophisticated requirements and technical processes. Cultured beef production requires specialized knowledge and technical expertise to develop a suitable growth medium for the meat cells, optimize the conditions for cell growth and differentiation, and ensure the safety and quality of the final product. For example, growing 3D cultures requires a specific stimulus mechanism, a waste removal system, specified starting points, scaffolds, and even artificial blood vessel networks. Successfully making 3D-structured meat also demands a sufficient nutrient and oxygen supply and carefully constructed scaffolds for the proliferation of bovine muscle cells, which requires enormous time and professionalism.

Lastly, cultured beef production is currently limited to small-scale laboratory settings. Scaling up the process to meet commercial demand presents significant challenges, including ensuring the consistency and quality of the final product, optimizing the production process for efficiency and cost-



effectiveness, and navigating regulatory and public acceptance issues.

Objectives and Aims

The primary objective of our research is to enhance cultured beef production by developing an affordable media enriched with Afamin and Wnt3a proteins. This study aims to address the current limitations in cultured beef production and contribute to progress on the global challenges of poverty and malnutrition. We hypothesize that integrating Wnt3a and Afamin proteins will improve differentiation within the conditioned media, enhancing the growth of bovine muscle cells. With this objective in focus, our research seeks to elucidate how developing cost-effective conditioned media will optimize cultured beef production, leading to a more sustainable and economical approach. This overarching objective structures the research through three aims: the development of affordable conditioned media, the optimization of cultured beef production, and the contribution to a more cost-effective and sustainable production method.

The first aim revolves around developing a cost-efficient conditioned media. This goal is the cornerstone for our study of diverse approaches to creating an economically viable growth media, thereby reducing dependency on costly elements such as fetal bovine serum (FBS). With current technology, FBS remains a prevalent ingredient in beef cell cultures; however, its widespread use presents notable obstacles due to its high price and the ethical concerns associated with animal-derived components. Our research will strive to overcome these hurdles by devising affordable alternatives that maintain cell growth and differentiation, addressing substantial challenges the cultured meat industry faces. By exploring innovative techniques and resourceful formulations, our study aims to improve beef culturing methods, ultimately providing a more sustainable and accessible solution for cultured meat production that is both environmentally responsible and economically viable at larger scale.

The second aim revolves around optimizing cultured beef production. Through the development of a conditioned medium enriched with Afamin and Wnt3a proteins, we will strive to increase the efficiency, quality, and scalability of beef production in a cultured environment. To assess the impact of the conditioned media on bovine muscle cell proliferation, our research will employ various experimental methods, including transfection, cell culture, and protein analysis. The experimental design encompasses selecting appropriate cell lines, determining transfection methods, and establishing optimal culture conditions, with systematic testing and iteration to find the most effective approach for promoting the growth and differentiation of bovine muscle cells. We will also analyze the specific effects of Afamin and Wnt3a proteins on cell growth and differentiation, hypothesizing that integrating these proteins into the alternative conditioned medium will enhance the efficiency and efficacy of cell growth. Through this work, we aim to develop beef cells with desirable properties, including flavor, texture, and nutritional composition, closely resembling conventional meat.
By continually assessing and analyzing cultured beef cells, our research will aim to make informed judgments and implement necessary adjustments to enhance the quality and production yield of the final product. This comprehensive understanding will enable us to devise strategies for precise control over growth conditions, nutrient supply, and the cell environment, which are crucial for achieving the desired texture, flavor, and nutritional content. The optimization of cultured beef production will influence our experimental choices, data analysis, and decision-making process. Ultimately, we aim to identify an efficient, sustainable, and commercially viable technique for producing cultured beef in the future. Our objective’s final focus will be contributing to a more cost-effective and sustainable production method. This will highlight the broader significance of our research and its influence. It will recognize the global challenges in beef production and the need for sustainable alternatives. Our experiments will aim to develop an optimal cultured beef manufacturing technique that is both cost-effective and



sustainable, contributing to an environmentally conscious and ethical food system. The meat industry, particularly cattle farming, faces significant sustainability issues related to greenhouse gas emissions, land usage, water consumption, and animal welfare. By focusing on cultured beef production, our study aims to address these challenges and reduce the environmental impact associated with traditional livestock farming: cultured beef can potentially minimize the need for large land areas, water resources, and greenhouse gas releases while also addressing animal welfare concerns. Our research aligns with the broader goals of addressing environmental concerns, reducing resource consumption, and promoting ethical and responsible food production. By maintaining a focus on cost-effectiveness and sustainability, this overarching objective ensures that the study's outcomes will have real-world implications for the future of the food industry and global sustainability efforts.

Background and Significance

To lower the cost of the culture medium, serum-free and reduced-serum media formulations have been investigated as possible solutions. Serum-free media minimize the need for FBS, a common supplement added to cell culture media to provide nutrients necessary for cell growth, and this minimization offers advantages in terms of both carcinogenic risk and cost. Serum-containing media can pose a risk of exposure to potential carcinogens and zoonotic agents from animal sources; serum-free media eliminate this risk. In addition, serum-free media may reduce overall costs relative to serum-containing counterparts while maintaining cell growth and viability.

For the proteins to be used in the serum-free media, the combination of Wnt3a and Afamin was suggested. Wnt3a activates the Wnt signaling pathway, regulating the transcription of a variety of genes involved in various cellular processes. Afamin is involved in various physiological functions, including modulation of cell differentiation and transport of vitamin E. When Wnt3a and Afamin are both active, they have been shown to stimulate the proliferation of bovine muscle cells and enhance the differentiation of these cells into muscle tissue. Hence, we hypothesize that utilizing Wnt3a and Afamin may promote the growth and differentiation of bovine muscle cells more efficiently.

Research Design and Methods

As previously mentioned, the primary objective of our research was to optimize cultured beef production by developing affordable conditioned media enriched with Afamin and Wnt3a proteins. To achieve this, we designed our experiment in three major parts: establishing Afamin-Wnt3a producer cells, generating Afamin-Wnt3a-enriched conditioned media, and analyzing the growth rate of bovine muscle cells for meat production.

The use of fetal bovine serum (FBS), the conventional supplement in cultured meat production, poses significant challenges. Firstly, FBS's high cost hinders scalability and affordability, limiting wider access to sustainable cultured meat. Additionally, the recycling and reuse of FBS present logistical and technical hurdles. Ethical concerns arise because FBS is extracted from bovine fetuses, which involves sacrificing animal lives. Furthermore, FBS's inconsistent composition complicates technical processes, affecting reproducibility and standardization. Lastly, FBS contributes to the carbon footprint of the cultured meat industry, necessitating attention to environmental implications. Thus, while cultured meat offers an alternative to conventional meat production, it is not yet without imperfections.

Our group focuses on creating Afamin-Wnt3a-enriched conditioned media to address the challenges posed by FBS. This solution offers numerous benefits: it enables efficient recycling and reuse, enhancing sustainability and overcoming waste issues; the use of non-animal-derived growth factors eliminates ethical concerns; precise control of the conditioned media ensures consistent outcomes, overcoming technical challenges; and our approach eliminates reliance on costly FBS,



addressing production cost constraints. These advancements hold great potential for significant contributions to the cultured meat industry.

To produce Afamin-Wnt3a-enriched conditioned media, we must first establish Afamin-Wnt3a producer cells through transfection. Before transfecting the vectors carrying Afamin and Wnt3a into HEK293T cells, however, we need to confirm that the vectors provided for this project are viable. Therefore, the first step of our research is to confirm the presence of the Afamin and Wnt3a sequences in the DNA plasmid vector using PCR and gel electrophoresis. The vector used in this research is BII-CMV-AfmW3A, a circular DNA of 10,434 base pairs; since the vector will be inserted into cells, its circular form is important for stability and compatibility. The sequence of the vector is as follows: CMV promoter, Afamin, T2A sequence, Wnt3a, puromycin resistance gene, and ampicillin resistance gene. Each element has a distinct role. The CMV promoter drives transgene expression in mammalian cells. The T2A sequence between Afamin and Wnt3a allows the two proteins to be produced independently from a single vector. The puromycin resistance gene is a selection marker for eukaryotic cells: cells carrying the plasmid can thrive in media containing puromycin, which is toxic to mammalian cells lacking the resistance gene, while non-transfected cells die in the process. Similarly, the ampicillin resistance gene enables selection in E. coli, allowing transformed bacteria to be detected by their ability to grow in media containing the antibiotic ampicillin. After PCR amplification of the vector to confirm the presence of the inserts, we will conduct agarose gel electrophoresis using 1x TAE buffer and agarose powder to complete the confirmation.

Once the presence of Afamin and Wnt3a in the vector is confirmed, the DNA plasmid and piggyBac vectors are transfected into HEK293T (Human Embryonic Kidney) cells at DNA plasmid vector : piggyBac vector ratios of 1:2.5 and 1:5. Afamin, Wnt3a, and the AW vector plasmid are needed for this process. To conduct the transfection effectively, we will combine liposomes with the DNA to allow it to enter the human cells; without liposomes, the DNA cannot be successfully delivered into HEK293T. Our research will use a stable transfection method, in which the transposon system integrates the foreign DNA into the host nuclear genome, sustaining long-term expression of the transgene. After transfection, puromycin, a selection antibiotic, is used to remove cells that lack the plasmid, so that only viable producer cells remain for the experiment. Afamin and Wnt3a are then secreted into the media, binding to each other in a 1:1 ratio. We will then check how much of the target proteins is produced using the western blot method.
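As a small illustration of how the 1:2.5 and 1:5 vector ratios can be translated into pipetting amounts, the Python sketch below splits a total DNA mass between the two vectors; the 2 µg total and the assumption that the stated ratios are mass ratios are ours, purely for illustration.

def split_dna(total_ug, piggybac_ratio):
    # Split a total DNA mass between the AfmW3A plasmid and the piggyBac vector,
    # assuming the stated ratio (e.g., 1:2.5) is a mass ratio (our assumption)
    plasmid = total_ug / (1 + piggybac_ratio)
    piggybac = total_ug - plasmid
    return plasmid, piggybac

for ratio in (2.5, 5.0):
    plasmid, piggybac = split_dna(2.0, ratio)   # hypothetical 2 µg total DNA
    print(f"1:{ratio} mix -> plasmid {plasmid:.2f} ug, piggyBac {piggybac:.2f} ug")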
However, before western blotting, we will use the Bradford assay to measure total protein concentration, ensuring that equal amounts of protein from the media samples prepared in the previous procedure are loaded. When Coomassie blue dye binds protein, its color shifts toward blue; accordingly, absorbance is read at 595 nm in the Bradford assay. Once satisfactory Bradford results are obtained, a western blot is performed to detect the proteins: using an SDS-PAGE gel, we will determine how much Afamin and Wnt3a protein is present by loading the prepared samples into the gel. The first part of the process is gel electrophoresis for the western blot. Then, a cell proliferation assay with cultured bovine cells will be conducted. These procedures lay the foundation for the final step of data analysis.
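To illustrate how a Bradford standard curve yields sample concentrations, a minimal Python sketch follows; the BSA standard values are hypothetical, and the linear fit is one common way to model the curve.

import numpy as np

# Hypothetical BSA standards (mg/mL) and their absorbance readings at 595 nm
std_conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5])
std_abs = np.array([0.00, 0.14, 0.27, 0.52, 0.75])

# Fit a linear standard curve: A595 = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def protein_conc(a595):
    # Invert the standard curve to estimate an unknown sample's concentration
    return (a595 - intercept) / slope

print(f"Sample at A595 = 0.40 -> {protein_conc(0.40):.2f} mg/mL")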

Anticipated Results

Foremost, the study is expected to validate the research hypothesis: Wnt3a and Afamin applied to cell culture media support the proliferation of bovine cells, creating a basis for a novel method of lab-cultured meat production. In the status quo, conventional cultured meat production relies on FBS (fetal bovine serum), a serum extracted from bovine blood. Despite enabling successful meat production, the high price of FBS limits its industrial application, contributing to higher market prices and causing producers to look



away from cultured meat. Contrasting with this problematic status quo, our study focuses on capturing the benefits of lab-cultured meat while using a different approach to produce it: a plausible alternative to FBS is a combination of Wnt3a and Afamin proteins. The distinct functions of Wnt3a and Afamin together enable the production of bovine cells. Wnt3a is a signaling protein integral to various cellular processes such as cell growth and differentiation, cell migration, and tissue development, while Afamin is involved in various physiological roles such as vitamin E transport, regulation of insulin secretion, and modulation of cell proliferation and differentiation. Combining the two is a potential alternative to FBS, as Wnt3a has shown evidence of stimulating bovine muscle cell growth and Afamin of enhancing cell differentiation. The two proteins are expected to promote proliferation in the cultures and accelerate cell production. Ultimately, the two proteins, each serving a different role in the cell culture media, allow for the proliferation of bovine cells, providing a basis for creating lab-cultured meat at a lower price than the conventional method.

The experiment tests three conditions: cell culture media without FBS, cell culture media with FBS, and cell culture media with Wnt3a and Afamin proteins. Culture media without FBS is projected to have the lowest cell proliferation rate, while media with FBS is expected to show the highest. The media with Wnt3a and Afamin proteins is anticipated to show an intermediate proliferation rate, falling between the other two conditions. Although the proliferation rate may differ slightly depending on the concentrations of Wnt3a and Afamin, the research projects the result to fall between the proliferation rates of the media with and without FBS.

Potential Contributions

The use of Wnt3a and Afamin proteins is projected to have multiple layers of positive influence on both consumers and the industry. First, at an academic level, this research provides a firm basis for future studies. As mentioned before, the great majority of lab-cultured meat today is produced with FBS. This study, in contrast, experiments with an alternative to FBS; as the first study to utilize Wnt3a and Afamin for bovine cell proliferation, this approach is projected to inform further studies of alternatives for lab-cultured meat production and steer the industry toward a more environmental and ethical method of cultured meat production.

Second, applying these novel proteins for cell proliferation eliminates inconsistencies and risks in lab-grown meat produced with FBS. In the status quo, virtually all cultured meat is produced with FBS, a formula derived from bovine blood, and its contents can vary just as every cow's health varies. For example, if a disease or catastrophic natural disaster strikes cattle, the resulting FBS can contain elevated stress hormones or unexpected contaminants. This leads to inconsistent-quality lab-cultured meat, damaging producers' businesses and increasing consumer doubt. The groundbreaking method of producing lab-cultured meat with Wnt3a and Afamin addresses this problem, as it eliminates this unnecessary inconsistency, allows researchers to precisely engineer the product, and gives the industry access to readily available, stable meat.

Third, utilizing Wnt3a and Afamin proteins instead of FBS could significantly reduce greenhouse gas emissions. Cattle raising is one of the largest contributors to global warming, largely due to the methane cattle release through enteric fermentation. Without a decrease in consumer demand, farms continue to raise cattle even while acknowledging the severity of these environmental problems. A viable method of producing lab-cultured meat would naturally lower demand for cattle, and a smaller cattle population would significantly reduce the emissions from raising livestock, contributing positively to the ever-worsening climate dilemma.



Moreover, conventionalizing lab-cultured meat by applying Wnt3a and Afamin proteins will lower its price and give more people access to it. Currently, the steep price of lab-cultured meat stands as the largest barrier for customers. With FBS (costing about $500 to $600 per 500 mL bottle) used to produce mainstream cultured meat, most products are priced significantly higher than organic meat, at approximately $15 to $20 more per pound. At this price gap, it is unreasonable for price-sensitive consumers to choose cultured meat over organic meat, which is more affordable and familiar to them. The study provides a solution by utilizing a cheaper medium for cell proliferation, ultimately yielding a reasonable price in the consumer market and further acting as a catalyst in conventionalizing lab-cultured meat.

Lastly, the research proposes a more ethical method of producing lab-cultured meat. FBS has been the subject of various ethical criticisms; most importantly, extracting blood from cattle has stigmatized cultured meat as "unethical." This research provides an alternative to the serum, Wnt3a and Afamin proteins, whose use allows meat to be produced in a sterile lab environment without slaughtering animals for blood.

Overall, bovine cell proliferation with Wnt3a and Afamin proteins is projected to have great impact in the world today. With the potential to eliminate inconsistencies in meat quality, reduce greenhouse gas emissions, lower the price of cultured meat, and produce lab-cultured meat more ethically, this study lays a solid foundation for further research and opens new possibilities for a sustainable and ethical food culture in the future.

References
Animal slaughter - methods used. RSPCA. (n.d.). https://www.rspca.org.uk/adviceandwelfare/farm/slaughter/factfile
Armstrong, M., & Richter, F. (2022, September 14). Infographic: The growing global hunger for meat. Statista Infographics. https://www.statista.com/chart/28251/global-meat-production/
COP26: Agricultural expansion drives almost 90 percent of global deforestation. FAO Newsroom. (n.d.). https://www.fao.org/newsroom/detail/cop26-agricultural-expansion-drives-almost-90-percent-of-global-deforestation/en
Driver, A. (2023, July 5). Opinion: Lab-grown meat is an expensive distraction from reality. CNN. https://edition.cnn.com/2023/07/05/opinions/lab-grown-meat-expensive-distraction-driver/index.html
Defendi-Cho, G., & Gould, T. M. (2023, February 8). In vitro culture of bovine fibroblasts using select serum-free media supplemented with Chlorella vulgaris extract. BMC Biotechnology. https://bmcbiotechnol.biomedcentral.com/articles/10.1186/s12896-023-00774-w
FBS still on sale. Neuromics. (n.d.). https://www.neuromics.com/fbs-still-on-sale
Farms and land in farms 2021 summary. (2022, February 18). USDA. https://www.nass.usda.gov/Publications/Todays_Reports/reports/fnlo0222.pdf
Delynko, K. (2019, February 14). What's the beef with water? Denver Water. https://www.denverwater.org/tap/whats-beef-water?size=n_21_n



Francis, A. (2023, January 20). Will I see lab-grown meat in supermarkets any time soon? Bon Appétit. https://www.bonappetit.com/story/lab-grown-meat
Guardian News and Media. (2015, September 25). Industrial farming is one of the worst crimes in history. The Guardian. https://www.theguardian.com/books/2015/sep/25/industrial-farming-oneworst-crimes-history-ethical-question
How to feed the world in 2050. Food and Agriculture Organization. (n.d.). https://www.fao.org/fileadmin/templates/wsfs/docs/expert_paper/How_to_Feed_the_World_in_2050.pdf
Hussain, G. (2022, November 17). How does agriculture affect deforestation? Sentient Media. https://sentientmedia.org/how-does-agriculture-cause-deforestation/
Iowa Farm Bureau. (2023, July 27). Do "cow farts" cause global warming? https://www.iowafarmbureau.com/Article/Question-Do-cow-farts-reallycontribute-to-global-warming
Key facts and findings. FAO. (n.d.). https://www.fao.org/news/story/en/item/197623/icode/
Livestock water use. U.S. Geological Survey. (n.d.). https://www.usgs.gov/mission-areas/water-resources/science/livestock-water-use
O'Neill, E. N., Neffling, M., Randall, N., Kwong, G., Ansel, J., Baar, K., & Block, D. E. (2023, March 25). The effect of serum-free media on the metabolic yields and growth rates of C2C12 cells in the context of cultivated meat production. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2666833523000126
O'Neill, A. (2023, June 8). Average prices for meat (beef) worldwide from 2014 to 2024. Statista. https://www.statista.com/statistics/675826/average-prices-meat-beef-worldwide/
United Nations. (n.d.). Water scarcity. International Decade for Action "Water for Life" 2005-2015. https://www.un.org/waterforlifedecade/scarcity.shtml
USDA National Agricultural Statistics Service. (n.d.). 2017 Census of Agriculture: Farms and farmland. https://www.nass.usda.gov/Publications/Highlights/2019/2017Census_Farms_Farmland.pdf
Animal Equality. (2023, April 19). 9 cruel yet legal farming practices. https://animalequality.org/blog/2022/10/14/9-cruel-yet-legal-farming-practices/



Investigation into the tumor-suppressive effects of the IFN-ε gene on glioblastoma proliferation and viability

Author: Kang, General Hyunwoo (UWCSEA Dover)

ABSTRACT
This study investigates the effect of IFN-ε gene overexpression on glioblastoma cell proliferation and viability. Glioblastoma cells were transfected with the IFN-ε gene, and results were analyzed across multiple trials. The findings reveal a significant reduction in cell confluency, live cell count, and cell viability in the IFN-ε overexpression group compared to the negative control. Statistical analysis confirmed the significance of these results, suggesting a potential tumor-suppressive role for IFN-ε in glioblastoma. This study provides a foundation for further exploration of IFN-ε as a therapeutic target in glioblastoma treatment.

KEYWORDS Cancer genomics, Bioinformatics, IFN-ε gene, Glioblastoma



INTRODUCTION

Context

Brain cancer is the abnormal growth of cells in or around the brain. Whether benign or malignant, a brain tumor poses a significant threat to the patient's life due to the changes and pressures it exerts on the vital structures of the brain. This fatal nature leads to high morbidity and mortality, making brain cancer a crucial global health issue. According to the Global Cancer Observatory (GLOBOCAN) 2020 estimates, brain and central nervous system cancers have a significant impact on the global burden of disease, ranking 19th among the most frequent malignancies (1.9% of all cancers) and 12th among the leading causes of cancer deaths (2.5% of all cancer deaths) [1]. The estimates show a consistent rise in new cases and deaths due to brain cancer, emphasizing an escalating public health concern.

Glioblastoma

Glioblastoma is a type of glioma that originates specifically from astrocytes, glial cells that comprise around 50% of all brain cells; astrocytes are therefore likely to establish direct contact with glioblastoma cells, promoting invasion into healthy tissue [2]. Glioblastoma is the most frequent primary brain tumor in adults and also the most aggressive, resulting in its assignment of grade IV, the highest grade in the World Health Organization (WHO) classification of brain tumors [3]. Its aggressiveness is reflected in a median survival time of approximately 15 months after diagnosis, a five-year survival rate of 10%, and a recurrence rate of nearly 90% [4].

The IFN-ε Gene and cBioPortal analysis

The Interferon-Epsilon (IFN-ε) gene encodes a type I interferon (IFN-I), a group of pro-inflammatory cytokines produced and recognized by all nucleated cells with the primary aim of blocking pathogen-driven functions [5]. Acute exposure of cancer cells to high concentrations of IFN-I has been shown to induce growth arrest and apoptosis, while chronic exposure to low concentrations offers notable survival advantages to the cancer cells [6].

Figure 1. Bar chart displaying the top 20 genes with the highest alteration frequency in two groups of patients: Living and Deceased.

On the cBioPortal, 23,698 genes were analyzed amongst 898 patients (637 living and 261 deceased) across 26 studies. These genes were then ordered according to their alteration event frequency, with the top 20 most frequently altered genes being displayed on the bar chart. Data showed that the deceased group consistently had higher alteration frequency across all genes presented, suggesting that the alteration of the genes may be linked with the survival outcome of brain cancer patients. Notably, of



the top 20 most frequently altered genes, 5 were IFN-I genes (IFN-ε, IFN-A1, IFN-A8, IFN-A2, and IFN-A6, respectively), accounting for 25%. Furthermore, of all the IFN-I genes, the IFN-ε gene had the highest alteration frequency and the most significant discrepancy in alteration frequency between the living and deceased groups. Within the human genome, a cluster of thirteen functional IFN genes, including IFN-ε, is located at the 9p21.3 cytoband [7]. Across 19 different cancer types, the IFN gene cluster frequently exhibited prevalent homozygous deletions (7-31%), suggesting that deletion of IFN-I genes can be associated with worse overall or disease-free survival [8]. For instance, copy number deletion of IFN genes triggers oncogenic pathways while suppressing immune signaling pathways, both promoting tumorigenesis and enabling tumor cells to evade immunosurveillance [9].

Figure 2. Bar chart showing the alteration frequency of the IFN-ε gene across 20 studies

In Figure 2, 20 studies totaling 7,390 patients were ordered according to the percentage alteration frequency of the IFN-ε gene: blue stands for deep deletion, red for amplification, and green for gene mutation. As can be seen, most alterations were deep deletions, confirming that deep deletion is the most frequent way the IFN-ε gene is altered in brain cancer patients.



Figure 3. Kaplan-Meier survival curve comparing the overall probability survival of brain cancer patients dependent on IFNE gene alteration status

Figure 3 displays the significant difference in survival between brain cancer patients with and without IFN-ε gene alteration. The altered group's median survival was 14.49 months, whilst the unaltered group's median survival was 31.10 months. This demonstrates that alteration of the IFN-ε gene leads to poorer survival outcomes, exhibited by a much faster decline in survival probability and a median survival nearly half that of the unaltered group. This further strengthens the argument that the absence of IFN-I genes, and specifically the IFN-ε gene, could contribute to the proliferation of brain cancer by removing tumor suppressors. That said, while the exact function and role of the IFN-ε gene within brain cancer are not fully known, it can be inferred from previous research and data that IFN-ε may similarly exhibit tumor-suppressive activities. A recent study discovered that IFN-ε activated anti-tumor T and natural killer cells whilst preventing the accumulation and activation of myeloid-derived suppressor cells and regulatory T cells, meaning that IFN-ε acted as a tumor suppressor that restricted ovarian cancer [10]. As of the time of writing, no research has examined the role of the IFN-ε gene in other cancer types; hence the novelty of this research.

Purpose of the research

The cBioPortal findings and recent studies reveal the following about the IFN-ε gene: indications of tumor-suppressive functions or activities, the vast majority of its alterations being deep deletions, and an evident correlation between IFN-ε gene alterations and poorer survival outcomes in brain cancer patients. Thus, the primary objective of this study is to determine the impact of IFN-ε gene overexpression on glioblastoma cell proliferation, with the broader goal of understanding whether it could act as a tumor suppressor in glioblastoma patients. Overexpressing the IFN-ε gene in glioblastoma cells means that its role, previously unavailable in brain cancer cells due to deep deletion, can be observed. If IFN-ε-overexpressing glioblastoma cells show a decreased proliferation rate, this will indicate that the IFN-ε gene may indeed have tumor-suppressive functions in brain cancer, potentially leading to novel therapeutic approaches targeting IFN-ε in brain cancer treatment.



Research hypothesis The glioblastoma cells with overexpressed IFN-ε genes will have a decreased cell proliferation rate.

METHOD

Cell culture and maintenance
A172 glioblastoma cells were purchased from the Korea Cell Line Bank. The cells were grown in Roswell Park Memorial Institute 1640 (RPMI1640) cell culture media supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin antibiotics, and were maintained at 37 degrees Celsius in a 5% CO2 incubator.

IFN-ε transfection on A172 cell line
The IFNE overexpression plasmid (HG22269-UT) was purchased from Sinobio. The IFN-ε plasmid was transfected with Lipofectamine 2000 (Invitrogen). Twelve hours after transfection, the cell culture media was replaced with fresh media.

Cell imaging
Cell images were captured using a Nikon inverted microscope with NIS software at 40X magnification, 36 hours after IFN-ε transfection. Triplicate samples were captured and analyzed to observe the effect of IFN-ε on brain cancer cell confluency.

ImageJ analysis to measure the confluency of cells
The ImageJ program was used to analyze each cell image and quantify cell confluency. After the images were converted into 8-bit images, the threshold function was used to indicate the area occupied by cells; once the threshold was adjusted to select only the cell portion of each image, the measurement function was used to compute the percent area covered by cells. Triplicate samples were analyzed for the negative control (without the IFN-ε gene) and IFN-ε overexpression (with the IFN-ε gene) groups.
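For readers who wish to reproduce this thresholding pipeline outside ImageJ, here is a minimal Python sketch using scikit-image; the Otsu threshold and the assumption that cells appear darker than the background are our illustrative choices, not part of the original ImageJ workflow.

from skimage import io, filters
from skimage.util import img_as_ubyte

def percent_confluency(path, cells_darker=True):
    # Load the image and convert it to 8-bit grayscale, mirroring the ImageJ step
    img = img_as_ubyte(io.imread(path, as_gray=True))
    # Automatic Otsu threshold to separate cell pixels from background
    # (our choice of method; ImageJ's default thresholding algorithm differs)
    thresh = filters.threshold_otsu(img)
    # Whether cells are darker or brighter than background depends on the imaging mode
    mask = img < thresh if cells_darker else img > thresh
    # Percent of the image area covered by cells
    return 100.0 * mask.mean()

print(percent_confluency("trial1_control_1.png"))  # hypothetical file name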

Cell viability analysis using Luna-FL cell counter Cell viability analysis was performed using the Luna-FL automated cell counter device (Logos Bio). The live cells were stained green, and the dead cells were stained red. The overall viability and live and dead cell numbers were analyzed using this counter device.

RT-qPCR with agarose gel
After IFN-ε transfection, RNA was extracted using an RNA extraction kit (Bioneer) following the manufacturer's protocol. The RNA was converted into cDNA using a reverse transcription kit (Enzynomics), and the synthesized cDNA was used to amplify IFN-ε and GAPDH by PCR. The amplified products were analyzed on an agarose gel, which was imaged using an image analyzer (Bio-Rad).



RESULTS AND DISCUSSION



Figure 4. Microscope images of three trials with four samples each for the negative control and IFN-ε overexpression groups (panels: first, second, and third trials), with bar charts displaying cell confluency quantified by ImageJ software for all three trials.

Figure 4 shows the three trials, each consisting of eight samples, used to evaluate the effect of IFN-ε gene overexpression on glioblastoma proliferation as indicated by cell confluency. The glioblastoma cells were divided into a negative control group and an IFN-ε overexpression group; in each trial, four samples were transfected with the IFN-ε gene. Both groups were then cultured and incubated for 36 hours, and cell confluency was subsequently measured using ImageJ to determine the impact of IFN-ε overexpression on proliferation. The images and data show an evident discrepancy between the two groups: the negative control exhibited higher confluency across all four samples in all three trials, averaging 25.14%, whilst the IFN-ε overexpression group averaged only 4.50%, an average difference of 20.64 percentage points across the three trials. These results suggest that overexpression of the IFN-ε gene significantly inhibits the ability of glioblastoma cells to proliferate, strongly suggesting a potential tumor-suppressive role for the IFN-ε gene in glioblastoma.

Figure 5. IFN-ε overexpression decreased the number of live cells and viability. The bar chart represents the mean and standard deviation of the number of live cells (×10^4 cells/mL) and cell viability (%). N=3



The four samples of the negative control group and the IFN-ε overexpression group were pooled into one sample each for cell count and viability analysis. Figure 5 illustrates the quantitative impact of IFN-ε overexpression on the glioblastoma cells. The image on the left shows a visual comparison between the negative control and IFN-ε overexpression groups, with the latter exhibiting a discernible reduction in live cells, marked by fewer green-highlighted cells (the live cells detected by the Luna-FL cell counter). The bar charts on the right show the mean values of the three trials: the live cell count decreased by almost half, from 9.12 ×10^4 cells/mL in the negative control group to 4.72 ×10^4 cells/mL in the IFN-ε overexpression group, and cell viability decreased from 99.9% to 63.4%. The evident discrepancy between the two groups, alongside the statistical significance (p < 0.0001), provides strong evidence that overexpression of the IFN-ε gene leads to a substantial reduction in both the number of live glioblastoma cells and their viability, further highlighting its profound impact on glioblastoma cell survival.
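The group comparison reported here is the kind of test that can be run with a few lines of Python; the sketch below uses hypothetical triplicate values chosen to match the reported means, so the resulting p-value is illustrative only.

import numpy as np
from scipy import stats

# Hypothetical triplicate live-cell counts (x10^4 cells/mL), consistent with the reported means
control = np.array([9.05, 9.10, 9.21])
overexpression = np.array([4.65, 4.72, 4.79])

# Two-sample t-test comparing the negative control and IFN-e overexpression groups
t_stat, p_value = stats.ttest_ind(control, overexpression)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")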

Figure 6. IFN-ε overexpression increased the IFN-ε mRNA expression level. The bar chart represents the mean and standard deviation of the normalized IFN-ε expression level; the GAPDH gene expression level was used for normalization. N=2

Figure 6 shows that overexpression of IFN-ε in glioblastoma cells significantly increased its mRNA expression level compared to the negative control (p = 0.0024). The normalized IFN-ε expression level was 1.12 in the overexpression sample versus 0.87 in the negative control, a 28.74% increase. This indicates that the IFN-ε transfection system used in this study effectively enhanced the IFN-ε mRNA expression level in A172 glioblastoma cells.
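The normalization and percent-change arithmetic behind Figure 6 can be reproduced directly from the reported values; the Python lines below simply restate that calculation (the underlying GAPDH band intensities are not given, so the already-normalized values are used).

# Reported GAPDH-normalized IFN-e expression levels
norm_control = 0.87
norm_overexpression = 1.12

# Percent increase relative to the negative control
percent_increase = 100.0 * (norm_overexpression - norm_control) / norm_control
print(f"{percent_increase:.2f}% increase")  # 28.74%, matching the value reported above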

CONCLUSION This study provides compelling evidence that the overexpression of the IFN-ε gene significantly reduces glioblastoma cell proliferation and viability. The experiments conducted across multiple trials consistently demonstrated a marked decrease in cell confluency, live cell count, and cell viability in the IFN-ε overexpression group compared to the negative control group. Specifically, the live cell count in the IFN-ε overexpression group was reduced by nearly half, and cell viability dropped from 99.9% to 63.4%, highlighting the impact of the IFN-ε gene on glioblastoma cells. The statistical significance of these results, with consistently low p-values, reinforces the reliability of the findings. Additionally, the increase in mRNA expression of the IFN-ε gene following transfection confirms the effectiveness and validity of the transfection system used in this study.



These results suggest that the IFN-ε gene plays a crucial role in modulating the behavior of glioblastoma cells, potentially acting as a tumor suppressor. The observed reduction in cell proliferation and viability highlights the importance of further research into IFN-ε as a potential therapeutic target for glioblastoma. The findings from this study offer opportunities for future investigations into the mechanisms through which IFN-ε exerts its effects and the potential development of IFN-ε based therapies for glioblastoma treatment.

REFERENCES

(1) Sung, H.; Ferlay, J.; Siegel, R. L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71 (3), 209–249. DOI: 10.3322/caac.21660.
(2) Brandao, M.; Simon, T.; Critchley, G.; Giamas, G. Astrocytes, the Rising Stars of the Glioblastoma Microenvironment. Glia 2019, 67 (5), 779–790. DOI: 10.1002/glia.23520.
(3) Wirsching, H.-G.; Galanis, E.; Weller, M. Glioblastoma. Handb. Clin. Neurol. 2016, 134, 381–397. DOI: 10.1016/B978-0-12-802997-8.00023-2.
(4) Park, J. H.; de Lomana, A. L. G.; Marzese, D. M.; Juarez, T.; Feroze, A.; Hothi, P.; Cobbs, C.; Patel, A. P.; Kesari, S.; Huang, S.; Baliga, N. S. A Systems Approach to Brain Tumor Treatment. Cancers (Basel) 2021, 13 (13). DOI: 10.3390/cancers13133152.
(5) Sprooten, J.; Garg, A. D. Type I Interferons and Endoplasmic Reticulum Stress in Health and Disease. Int. Rev. Cell Mol. Biol. 2020, 350, 63–118. DOI: 10.1016/bs.ircmb.2019.10.004.
(6) Cheon, H.; Wang, Y.; Wightman, S. M.; Jackson, M. W.; Stark, G. R. How Cancer Cells Make and Respond to Interferon-I. Trends Cancer 2023, 9 (1), 83–92. DOI: 10.1016/j.trecan.2022.09.003.
(7) Antonelli, G.; Scagnolari, C.; Moschella, F.; Proietti, E. Twenty-Five Years of Type I Interferon-Based Treatment: A Critical Analysis of Its Therapeutic Use. Cytokine Growth Factor Rev. 2015, 26 (2), 121–131. DOI: 10.1016/j.cytogfr.2014.12.006.
(8) Razaghi, A.; Brusselaers, N.; Björnstedt, M.; Durand-Dubief, M. Copy Number Alteration of the Interferon Gene Cluster in Cancer: Individual Patient Data Meta-Analysis Prospects to Personalized Immunotherapy. Neoplasia 2021, 23 (10), 1059–1068. DOI: 10.1016/j.neo.2021.08.004.
(9) Wang, L.; Ye, Z.; Dong, H.; Li, Y.; Ma, T.; Huang, H.; Leong, H. S.; Eckel-Passow, J. E.; Kocher, J.-P.; Liang, H. Prevalent Homozygous Deletions of Type I Interferon and Defensin Genes in Human Cancers Associate with Immunotherapy Resistance. Clin. Cancer Res. 2018. DOI: 10.1158/1078-0432.CCR-17-3008.
(10) Marks, Z. R. C.; Campbell, N. K.; Mangan, N. E.; Vandenberg, C. J.; Gearing, L. J.; Matthews, A. Y.; Gould, J. A.; Tate, M. D.; Wray-McCann, G.; Ying, L.; Rosli, S.; Brockwell, N.; Parker, B. S.; Lim, S. S.; Bilandzic, M.; Christie, E. L.; Stephens, A. N.; de Geus, E.; Wakefield, M. J.; Ho, G.-Y.; Hertzog, P. J. Interferon-ε Is a Tumour Suppressor and Restricts Ovarian Cancer. Nature 2023, 620 (7976), 1063–1070. DOI: 10.1038/s41586-023-06421-w.



Comparative Analysis of CT Scan Reconstruction Methods: Filtered Back Projection (FBP) vs. Simultaneous Algebraic Reconstruction Technique (SART)

Author: Kim, Aaron Junung (Hprep Academy)

Abstract
The CT scan is an essential technology in the medical field, as it examines the inside of the patient's body in detail without open surgery. In this research, a comparative analysis of different CT reconstruction methods is conducted, specifically between Filtered Back Projection (FBP) and the Simultaneous Algebraic Reconstruction Technique (SART). Using a Python simulation, the research evaluates the performance of each reconstruction method in terms of projection image quality and computational time. The simulation results reveal that while FBP provides a shorter computational time, SART improves the quality of the reconstructed image with every iteration.



Introduction

Within the past century, rapid development in medical imaging technology has allowed diagnosis and medical procedures to be performed with higher precision and accuracy. In particular, tomographic technology such as the Computed Tomography (CT) scan has become a necessity in the medical field since its invention in 1971 (Schulz et al., 2021). With its ability to show internal parts of the body without open surgery, CT has become invaluable for diagnosing diverse diseases across many body parts (CT Scan - Mayo Clinic, 2024), and it is widely used, with over 80 million CT scans performed annually (Harvard Health, 2021).

A CT scan utilizes an X-ray source that revolves around the patient and collects data on the attenuation ratio at every angle the X-rays have penetrated. These data are then used to compose a sinogram, an image of the CT projections corresponding to each rotation angle of the scanner. One of the most fundamental parts of the CT scan is the reconstruction method: once the CT scans through the patient's body, the raw data (the sinogram) must be reconstructed using methods such as Filtered Back Projection (FBP) or the Simultaneous Algebraic Reconstruction Technique (SART). Due to the different mathematical properties of these methods, each has benefits and drawbacks. Therefore, the main purpose of this paper is to compare and contrast two major reconstruction methods, FBP and SART (Image Reconstruction Techniques, 2016), both simulated on the computer.

Literature Review

2.1 CT Scan

A CT scan is composed of multiple complex steps that require computing and analyzing data to achieve the needed image. During a CT scan, an X-ray tube revolves around the patient, who may be injected with contrast agents for higher scan accuracy, as shown in Figure 1. The detector located opposite the X-ray tube records the attenuation ratio of the emitted X-rays from all angles around the patient, and the scanner captures images of the object from multiple angles (Medical & Health - Medical Imaging - ams-OSRAM). Depending on which part of the body the X-rays penetrate, the intensity at which the rays reach the detector varies: the intensity depends on the density of the object being imaged, as each material has its own coefficient, called the linear attenuation coefficient. Upon a CT scan, the linear attenuation coefficient is converted into Hounsfield units.

Figure 1. Angle of CT scan (adapted from "Generation of CT Scan (Computed Tomography)," Radiologystar, 2022, https://www.radiologystar.com/generation-of-ct-scan-computed-tomography/)



The formula to convert the linear attenuation coefficient μ to the Hounsfield Unit is (Reeves et al., 2012):

HU = 1000 × (μ − μ_water) / (μ_water − μ_air)

The Hounsfield Unit is a quantitative unit for the electron density of an object: the more radiation a tissue absorbs, the higher its Hounsfield unit. For instance, bone, which is nearly impenetrable by X-rays, has a Hounsfield unit of about 1000, while air, which X-rays penetrate with almost no interference, has a Hounsfield unit of about −1000 (the formula above yields −1000 when μ ≈ 0) (DenOtter & Schubert, 2023). Hounsfield units are expressed in grayscale to convey the density of different tissues in the body (Razi et al., 2014). As the Hounsfield unit increases, the pixel in the projection representing the scanned voxel becomes brighter. For example, the lung, composed of soft tissue and air, has low Hounsfield units ranging from −1,000 to 0, which are mostly presented as darker pixels, resulting in a dark gray area in the projection (Lindow et al., 2023). Meanwhile, an area composed of bone, with 1000 to 2000 Hounsfield units, is represented as brighter pixels in the projection (DenOtter & Schubert, 2023).

2.1.1 Sinogram
Data gathered from the scan is used to create a sinogram, a graphical representation of the projections plotted against the angle of the scan. The sinogram is then transformed into an image using the inverse Radon transform, which reconstructs the data into a coherent image by accounting for the various angles and penetration rates recorded during the scan. The Radon transform integrates along the straight lines that pass through the scanned object, so each point in the sinogram represents a ray integral of the object's density along a specific direction. These data are transmitted from the detector to the computer and converted into images using various reconstruction methods. The images are then stacked to form a comprehensive three-dimensional representation of the scanned area, allowing the data to be used for diagnostic purposes.

Figure 2. Sinogram of the Object Image
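To make the sinogram construction concrete, the following minimal Python sketch (hypothetical; the paper's own code is not reproduced here) computes a sinogram with scikit-image's radon function, using the Shepp-Logan phantom as a stand-in for the brain Object Image used later in this study:

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, resize

    # Stand-in object: the classic Shepp-Logan phantom, resized to 360x360
    # to match the image size used in the Methods section.
    object_image = resize(shepp_logan_phantom(), (360, 360))
    theta = np.linspace(0.0, 180.0, 360, endpoint=False)   # projection angles (degrees)
    sinogram = radon(object_image, theta=theta)            # each column: ray integrals at one angle
    print(sinogram.shape)                                  # (detector positions, number of angles)

Each column of the sinogram is one projection, so plotting the array reproduces the sinusoidal traces that give the sinogram its name.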



2.2 Effect of CT on Patient Health
Although CT is invaluable in medical procedures and disease diagnosis, it exposes patients to a considerable amount of radiation, since a CT scan delivers a substantial dose of ionizing radiation to scan through the patient's body. A high dose of ionizing radiation is strong enough to affect the patient's health, potentially leading to unfavorable medical conditions such as skin burns, hair loss, birth defects, and cancer. The average radiation exposure from a single cranial CT scan, which images the brain, head, skull, and sinuses, is about 2 millisieverts (mSv) (Agency for Toxic Substances and Disease Registry (US), 1999). Chest CT scans expose patients to about 7 mSv, and abdomen and pelvis CT scans to about 10 mSv. The average annual background radiation, the natural dose an ordinary individual receives in a year, is 6.2 mSv; compared with this, a CT scan delivers a remarkable amount of radiation per scan. Research by the Environmental Protection Agency revealed that around one-fourth of annual radiation exposure is caused by CT scans (Pietrangelo, 2018). CT may therefore impose health risks on patients. Berrington de González et al. estimated that, with more than 70 million scans done annually in the United States, more than 29,000 cases of cancer are related to the increased number of CT scans (2009). Additionally, according to a study by the Yale School of Medicine, the roughly 4 million CT scans performed annually on children in the U.S. have potentially led to approximately 5,000 cases of cancer (2014).

2.3 CT Reconstruction Techniques: Filtered Back Projection & Simultaneous Algebraic Reconstruction Technique
2.3.1 FBP - Filtered Back Projection
Filtered Back Projection (FBP) is one of the earliest and most widely used methods in CT image reconstruction due to its fast computational time (Willemink & Noël, 2018). FBP operates similarly to Simple Back Projection: by back-projecting the sinogram, a projection image of the scanned object is obtained. One problem with Simple Back Projection, however, is the blurring of the image caused by star-like artifacts (Schofield et al., 2020). To mitigate this blurring, FBP applies a filter that makes the image more refined (Xu et al., 2021).

2.3.1.1 Filter
The filter in FBP uses the Fourier transform and inverse Fourier transform to diminish the low-frequency components of the projection data, which reduces blurring in the projection image. At the same time, the filter emphasizes the high-frequency components, which correspond to the edges of the object, providing higher image detail. The detailed description is given in Table 1.



1. Fourier Transform: with the Fourier transform, the projection data P(θ, t) is transformed into the frequency domain, where it is expressed as a superposition of wave functions.
2. Filter: the filter function H(ω) is multiplied by the transformed projection data to reduce low frequencies and emphasize the high frequencies that correspond to the edges of the object.
3. Inverse Fourier Transform: the filtered projection data is converted back into spatial data using the inverse Fourier transform.
4. Back Projection: the spatial data is back-projected to reconstruct the image. Because the data has been modified by the filter function, the reconstructed image is sharper than one produced by simple back projection.

Table 1. Filtered Back Projection
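As a concrete illustration of the four steps in Table 1, the following NumPy sketch (hypothetical, not the paper's code) applies a ramp filter to each projection in the frequency domain; the filtered projections are then ready for back projection:

    import numpy as np

    def ramp_filter_projections(sinogram):
        n = sinogram.shape[0]                     # detector samples per projection
        freqs = np.fft.fftfreq(n)                 # frequency axis of the FFT
        ramp = np.abs(freqs)                      # H(w) = |w|: suppresses low, boosts high frequencies
        P = np.fft.fft(sinogram, axis=0)          # Fourier transform of each projection P(theta, t)
        filtered = np.fft.ifft(P * ramp[:, None], axis=0)  # apply filter, then inverse transform
        return np.real(filtered)                  # spatial-domain data, ready for back projection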

2.3.1.2 Noise in Filtered Back Projection
While FBP is fast, it produces noise in the resulting images (Schofield et al., 2020b). In judging the quality of a CT projection, the level of noise is a crucial consideration, because high noise can significantly degrade the overall quality of the projection. In addition, FBP is highly susceptible to increased noise under low-dose conditions, which makes high radiation doses inevitable when using this method (Y. Xu et al., 2019).

2.3.2 SART - Simultaneous Algebraic Reconstruction Technique
This method improves image quality by iteratively refining the image. The process starts with an initial guess and repeatedly adjusts it to minimize the difference between the measured and the calculated data. SART includes statistical modeling of factors like noise: it uses statistical and geometric models to variably weight the image data, allowing an iterative process that reduces noise while preserving resolution and image quality (Seibert, 2014). With this method, the initial image of the scan is estimated rather than derived from the actual data (Lee et al., 2017). The estimated image is processed through forward projection into the form of a sinogram, and the guessed projection is compared with the actual data. From this comparison, error projections are located, converted into error images through back projection, and used to improve the previously estimated image (Long et al., 2023). This comparison is repeated a given number of times, and at the end of the computation the algorithm provides images with enhanced quality and reduced noise (Li et al., 2020).

2.4 Noise
Degradation of image quality caused by noise has been an unavoidable problem. Noise in projection images is problematic because it distorts the quality of the image itself, and its presence makes structures more difficult to identify, which hinders accurate medical procedures and diagnosis. Noise is associated with several factors, including slice thickness, tube voltage, and tube current. Slice thickness is the distance between the slices in a CT scan. While it is not a direct cause of noise, slice thickness is indirectly related to the amount of noise present in the image: that is to say, the thinner the slice, the more noise reduction is possible. Abdulkareem and colleagues examined the relationship between different slice thicknesses (from 0.625 to 10 mm) and noise reduction (2023). They discovered a linear relationship between slice thickness and noise reduction, with the thinnest slice thickness, 0.625 mm, resulting in the greatest noise reduction.

Figure 3. Object Image

Besides slice thickness, tube current and tube voltage have a clear relationship with the noise present in the projection image. Tube current determines the number of X-rays produced, while tube voltage supplies the power for the X-rays (Gies et al., 1999). A higher tube current reduces noise in the projection image, increasing the image quality of the CT scan. Tube voltage is likewise related to noise, as lowering the tube voltage substantially increases image noise (Tang et al., 2012).

Methods
3.1 Computer Settings
To investigate the image quality and computational speed of each reconstruction process, the reconstruction methods were simulated in Python. Simulations of Filtered Back Projection and the Simultaneous Algebraic Reconstruction Technique were carried out on a 2020 MacBook Pro with a 1.4 GHz quad-core Intel Core i5 processor, Intel Iris Plus Graphics 645 (1536 MB), and 8 GB of LPDDR3 memory.

3.2 Reconstruction Methods
An X-ray image of a cross-sectional brain was selected as the Object Image, which provides fair conditions for both reconstruction methods, since both are used to create a projection from data collected by the X-ray detector of a CT scanner (Figure 3). The Object Image was converted into an image 360 pixels in both width and height, a size that is calculable in the given setting. To compare FBP and SART, the Object Image must be forward-projected through the Radon transform, which produces a sinogram of the image, as shown in Figure 2. The sinogram of the Object Image is obtained by integrating all pixels penetrated by a series of parallel rays crossing the Object Image. This sinogram is used for both FBP and SART.

3.2.1 Filtered Back Projection Steps
In FBP, the image is reconstructed from the sinogram after it has been modified in the Fourier domain, where the filter suppresses low-frequency components and gives greater weight to high-frequency components. Once the FBP process is finished, the original Object Image is subtracted from the final FBP projection, and the difference is recorded as the error. The error was recorded as the root mean square (RMS) of the pixel differences between the Object Image and the projection image. This error serves as the standard for comparing the quality of the FBP and SART projections.
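Continuing the earlier sinogram sketch, the FBP step and its RMS error might look like the following (a sketch assuming a recent scikit-image version, in which iradon performs filtered back projection with a ramp filter by default; this is not the paper's exact code):

    import numpy as np
    from time import perf_counter
    from skimage.transform import iradon

    t0 = perf_counter()
    fbp = iradon(sinogram, theta=theta, filter_name='ramp')   # filtered back projection
    fbp_time = perf_counter() - t0
    rms_fbp = np.sqrt(np.mean((fbp - object_image) ** 2))     # RMS pixel difference vs. Object Image
    print(f"FBP: RMS error {rms_fbp:.5f}, computed in {fbp_time:.2f} s")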



3.2.2 Simultaneous Algebraic Reconstruction Technique
During SART, a projection image is generated at every iteration and compared with the original image. The difference between the original image and the projection image created at each iteration is recorded as an RMS value, so the error range can be compared with that of FBP.
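A corresponding sketch of the SART loop (again hypothetical, using scikit-image's iradon_sart, whose image argument carries the previous estimate into the next iteration):

    import numpy as np
    from skimage.transform import iradon_sart

    estimate = None                          # initial guess (None starts from zeros)
    for k in range(1, 6):                    # 5 iterations, as in this study
        estimate = iradon_sart(sinogram, theta=theta, image=estimate)   # refine the estimate
        rms = np.sqrt(np.mean((estimate - object_image) ** 2))          # per-iteration RMS error
        print(f"SART iteration {k}: RMS error {rms:.5f}")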

Results
4.1 Sinogram
The sinogram created is shown in Figure 2.

Figure 2. Sinogram of the Object Image

The projection created by FBP and the error difference between the Object Image and the projection are shown in Figure 4.

4.2 Filtered Back Projection vs. Simultaneous Algebraic Reconstruction Technique

Figure 4. Simulation of Filtered Back Projection

The FBP method achieved an RMS reconstruction error of 0.00956, which measures the average difference between the Object Image and the projection created by FBP. The FBP computation took 1.60 seconds.



Meanwhile, the SART method, which involves multiple iterations, showed progressive improvement in accuracy with every iteration. SART achieved RMS errors of 0.0145, 0.00939, 0.00732, 0.00621, and 0.00552 for 1, 2, 3, 4, and 5 iterations, respectively. The projections and reconstruction errors for iterations 1, 2, and 3 are shown in Figure 5.

Figure 5. Simultaneous Algebraic Reconstruction Technique

The computation time up to the 3rd iteration was 11.53 seconds, while the entire 5 iterations took 14.44 seconds. The average time per iteration was therefore much longer for the first 3 iterations, at 11.53/3 ≈ 3.843 seconds, than for the last 2 iterations, at (14.44 − 11.53)/2 = 1.455 seconds.



Analysis
Under the given conditions, the total computational time of FBP was faster than SART, as predicted by the hypothesis. In addition, the RMS reconstruction error was higher for SART than for FBP until the second iteration, when SART's RMS error fell below FBP's 0.00956, reaching 0.00939. By the 5th iteration, SART reached an RMS error of 0.00552, comparably smaller than that of FBP. Overall, the average RMS error across the 5 iterations was 0.008588, while that of FBP was 0.00956. Notably, SART with two or more iterations surpassed FBP in reconstruction accuracy, although it required significantly more computational time.

Conclusion
In conclusion, this research paper investigated the quality of projections created by two different reconstruction methods: Filtered Back Projection and the Simultaneous Algebraic Reconstruction Technique. During a CT scan, the rate at which X-rays penetrate the patient's body is measured by the detector from all angles, and this data is used to create a sinogram through the Radon transform. As significant steps in the CT scan procedure, FBP and SART are two major reconstruction methods that operate with entirely different algorithms. Because the FBP algorithm is shorter than SART's, the time taken to generate the projection image with FBP was comparably shorter than with SART. In FBP, the data undergo the Fourier transform, filtering, the inverse Fourier transform, and back projection, which makes the projection image less blurred and more defined. In SART, by contrast, the projection image is generated through a course of iterations that compare the original data with conjectured data; the difference is applied at each comparison, so that as the number of iterations increases, the quality of the projection image gradually improves. The quality of the projection image thus increased with the number of iterations and the computation time. This research sheds light on the comparison of different reconstruction methods in CT scan technology, contributing to the broader advancement of medical tomography and improving diagnostic capabilities, ultimately leading to better patient outcomes.

Limitations
One limitation of the simulation was the difficulty of establishing a standard for projection quality. The quality of a CT projection depends on various factors, including noise and the resolution of low and high contrast (Elnour et al., 2017), which makes it difficult to evaluate a projection under an objective rather than a subjective standard (Tamura et al., 2021). While RMS is frequently used in research that deals with the magnitude of a varying quantity, it is insufficient to convey the comprehensive quality of a projection. Future research could introduce a standard that evaluates the overall quality of the projection. Beyond the evaluation standard, the simulations could also be performed under better conditions, particularly with higher-performance computers. While simulating SART, the MacBook Pro 2020 could only handle calculations for up to 5 iterations, so this research was limited to a maximum of 5 iterations. With greater computational capability, more detailed and precise research would be possible.



References
1. Abdulkareem, N. K., Hajee, S. I., Hassan, F. F., Ibrahim, I. K., Al-Khalidi, R. E. H., & Abdulqader, N. A. (2023). Investigating the slice thickness effect on noise and diagnostic content of single-source multi-slice computerized axial tomography. Journal of Medicine and Life, 16(6), 862-867. https://doi.org/10.25122/jml-2022-0188
2. Agency for Toxic Substances and Disease Registry (US). (1999, September 1). Summary of health effects of ionizing radiation. Toxicological Profile for Ionizing Radiation - NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK597567/
3. CT scan - Mayo Clinic. (2024, May 7). https://www.mayoclinic.org/tests-procedures/ct-scan/about/pac-20393675
4. De González, A. B. (2009). Projected cancer risks from computed tomographic scans performed in the United States in 2007. Archives of Internal Medicine, 169(22), 2071. https://doi.org/10.1001/archinternmed.2009.440
5. DenOtter, T. D., & Schubert, J. (2023, March 6). Hounsfield Unit. StatPearls - NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK547721/
6. Generation of CT scan (computed tomography). (2022). Radiologystar. https://www.radiologystar.com/generation-of-ct-scan-computed-tomography/
7. Elnour, H., Hassan, H. A., Mustafa, A., Osman, H., Alamri, S., & Yasen, A. (2017). Assessment of image quality parameters for computed tomography in Sudan. Open Journal of Radiology, 07(01), 75-84. https://doi.org/10.4236/ojrad.2017.71009
8. Gies, M., Kalender, W. A., Wolf, H., Suess, C., & Madsen, M. T. (1999). Dose reduction in CT by anatomically adapted tube current modulation. I. Simulation studies. Medical Physics, 26(11), 2235-2247. https://doi.org/10.1118/1.598779
9. Harvard Health. (2021, September 30). Radiation risk from medical imaging. https://www.health.harvard.edu/cancer/radiation-risk-from-medical-imaging
10. Image reconstruction techniques. (2016). Image Wisely. https://www.imagewisely.org/Imaging-Modalities/Computed-Tomography/Image-Reconstruction-Techniques
11. Lee, H. C., Song, B., Kim, J. S., Jung, J. J., Li, H. H., Mutic, S., & Park, J. C. (2017). Variable step size methods for solving simultaneous algebraic reconstruction technique (SART)-type CBCT reconstructions. Oncotarget, 8(20), 33827-33835. https://doi.org/10.18632/oncotarget.17385
12. Li, T., Sun, M., Qu, Y., Liu, Y., & Zhou, W. (2020). Simultaneous algebraic reconstruction technique based on total variation for photoacoustic image. Proceedings of SPIE. https://doi.org/10.1117/12.2575133
13. Lindow, T., Quadrelli, S., & Ugander, M. (2023). Noninvasive imaging methods for quantification of pulmonary edema and congestion. JACC: Cardiovascular Imaging, 16(11), 1469-1484. https://doi.org/10.1016/j.jcmg.2023.06.023
14. Long, Y., Huo, X., Liu, H., Li, Y., & Sun, W. (2023). An extended simultaneous algebraic reconstruction technique for imaging the ionosphere using GNSS data and its preliminary results. Remote Sensing, 15(11), 2939. https://doi.org/10.3390/rs15112939
15. Medical & Health - Medical Imaging - ams-osram. (n.d.). ams-osram. https://ams-osram.com/applications/medical-health/medical-imaging
16. Pietrangelo, A. (2018, August 22). Cranial CT scan. Healthline. https://www.healthline.com/health/cranial-ct-scan
17. Razi, T., Niknami, M., & Ghazani, F. A. (2014). Relationship between Hounsfield unit in CT scan and gray scale in CBCT. DOAJ (Directory of Open Access Journals). https://doi.org/10.5681/joddd.2014.019
18. Reeves, T., Mah, P., & McDavid, W. (2012). Deriving Hounsfield units using grey levels in cone beam CT: A clinical application. Dentomaxillofacial Radiology, 41(6), 500-508. https://doi.org/10.1259/dmfr/31640433
19. Schofield, R., King, L., Tayal, U., Castellano, I., Stirrup, J., Pontana, F., Earls, J., & Nicol, E. (2020a). Image reconstruction: Part 1 - understanding filtered back projection, noise and image acquisition. Journal of Cardiovascular Computed Tomography, 14(3), 219-225. https://doi.org/10.1016/j.jcct.2019.04.008
20. Schofield, R., King, L., Tayal, U., Castellano, I., Stirrup, J., Pontana, F., Earls, J., & Nicol, E. (2020b). Image reconstruction: Part 1 - understanding filtered back projection, noise and image acquisition. Journal of Cardiovascular Computed Tomography, 14(3), 219-225. https://doi.org/10.1016/j.jcct.2019.04.008
21. Schulz, R. A., Stein, J. A., & Pelc, N. J. (2021). How CT happened: The early development of medical computed tomography. Journal of Medical Imaging, 8(05). https://doi.org/10.1117/1.jmi.8.5.052110
22. Seibert, J. A. (2014). Iterative reconstruction: How it works, how to apply it. Pediatric Radiology, 44(S3), 431-439. https://doi.org/10.1007/s00247-014-3102-1
23. Tamura, A., Mukaida, E., Ota, Y., Kamata, M., Abe, S., & Yoshioka, K. (2021). Superior objective and subjective image quality of deep learning reconstruction for low-dose abdominal CT imaging in comparison with model-based iterative reconstruction and filtered back projection. British Journal of Radiology, 94(1123). https://doi.org/10.1259/bjr.20201357
24. Tang, K., Wang, L., Li, R., Lin, J., Zheng, X., & Cao, G. (2012). Effect of low tube voltage on image quality, radiation dose, and low-contrast detectability at abdominal multidetector CT: Phantom study. Journal of Biomedicine and Biotechnology, 2012, 1-6. https://doi.org/10.1155/2012/130169
25. Willemink, M. J., & Noël, P. B. (2018). The evolution of image reconstruction for CT—from filtered back projection to artificial intelligence. European Radiology, 29(5), 2185-2195. https://doi.org/10.1007/s00330-018-5810-7
26. Xu, J., Sun, C., Huang, Y., & Huang, X. (2021). Residual neural network for filter kernel design in filtered back-projection for CT image reconstruction. In Informatik aktuell (pp. 164-169). https://doi.org/10.1007/978-3-658-33198-6_39
27. Xu, Y., Zhang, T., Hu, Z., Li, J., Hou, H., Xu, Z., & He, W. (2019). Effect of iterative reconstruction techniques on image quality in low radiation dose chest CT: A phantom study. Diagnostic and Interventional Radiology, 25(6), 442-450. https://doi.org/10.5152/dir.2019.18539
28. Yale School of Medicine. (2014). Reducing the risk of CT scans. Yale School of Medicine. https://medicine.yale.edu/news/yale-medicine-magazine/article/reducing-the-risk-of-ct-scans/



16. Medical & Health - Medical Imaging - ams-osram - ams. (n.d.). Ams-osram. https://amsosram.com/applications/medical-health/medicalimaging#:~:text=A%20key%20component%20of%20a,them%20into%20a%20digital%20signal. 17. Pietrangelo, A. (2018, August 22). Cranial CT scan. Healthline. https://www.healthline.com/health/cranial-ctscan#:~:text=CT%20stands%20for%20computed%20tomography,it%20doesn%27t%20require% 20surgery. 18. Razi, T., Niknami, M., & Ghazani, F. A. (2014). Relationship between Hounsfield Unit in CT Scan and Gray Scale in CBCT. DOAJ (DOAJ: Directory of Open Access Journals). https://doi.org/10.5681/joddd.2014.019 19. Reeves, T., Mah, P., & McDavid, W. (2012). Deriving Hounsfield units using grey levels in cone beam CT: a clinical application. Dentomaxillofacial Radiology, 41(6), 500–508. https://doi.org/10.1259/dmfr/31640433 20. Schofield, R., King, L., Tayal, U., Castellano, I., Stirrup, J., Pontana, F., Earls, J., & Nicol, E. (2020a). Image reconstruction: Part 1 – understanding filtered back projection, noise and image acquisition. Journal of Cardiovascular Computed Tomography, 14(3), 219–225. https://doi.org/10.1016/j.jcct.2019.04.008 21. Schofield, R., King, L., Tayal, U., Castellano, I., Stirrup, J., Pontana, F., Earls, J., & Nicol, E. (2020b). Image reconstruction: Part 1 – understanding filtered back projection, noise and image acquisition. Journal of Cardiovascular Computed Tomography, 14(3), 219–225. https://doi.org/10.1016/j.jcct.2019.04.008 22. Schulz, R. A., Stein, J. A., & Pelc, N. J. (2021). How CT happened: the early development of medical computed tomography. Journal of Medical Imaging, 8(05). https://doi.org/10.1117/1.jmi.8.5.052110 23. Seibert, J. A. (2014). Iterative reconstruction: how it works, how to apply it. Pediatric Radiology, 44(S3), 431–439. https://doi.org/10.1007/s00247-014-3102-1 24. Tamura, A., Mukaida, E., Ota, Y., Kamata, M., Abe, S., & Yoshioka, K. (2021). Superior objective and subjective image quality of deep learning reconstruction for low-dose abdominal CT imaging in comparison with model-based iterative reconstruction and filtered back projection. British Journal of Radiology, 94(1123). https://doi.org/10.1259/bjr.20201357 25. Tang, K., Wang, L., Li, R., Lin, J., Zheng, X., & Cao, G. (2012). Effect of low tube voltage on image quality, radiation dose, and Low-Contrast detectability at abdominal multidetector CT: Phantom study. Journal of Biomedicine and Biotechnology, 2012, 1–6. https://doi.org/10.1155/2012/130169 26. Willemink, M. J., & Noël, P. B. (2018). The evolution of image reconstruction for CT—from filtered back projection to artificial intelligence. European Radiology, 29(5), 2185–2195. https://doi.org/10.1007/s00330-018-5810-7 27. Xu, J., Sun, C., Huang, Y., & Huang, X. (2021). Residual neural network for filter kernel design in filtered back-projection for CT image reconstruction. In Informatik aktuell (pp. 164–169). https://doi.org/10.1007/978-3-658-33198-6_39 28. Xu, Y., Zhang, T., Hu, Z., Li, J., Hou, H., Xu, Z., & He, W. (2019). Effect of iterative reconstruction techniques on image quality in low radiation dose chest CT: a phantom study. Diagnostic and Interventional Radiology, 25(6), 442–450. https://doi.org/10.5152/dir.2019.18539 29. Yale School of Medicine. (2014). Reducing the risk of CT scans. Yale School of Medicine. https://medicine.yale.edu/news/yale-medicine-magazine/article/reducing-the-risk-of-ct-scans/



Comparative Analysis of State Vector and Density Matrix Simulation on Hadamard's Gate

Author: Kim, Dohoon
School: The Hotchkiss School

Abstract
Quantum simulation is essential for developing and optimizing quantum algorithms solely on classical computers. This paper compares the efficiency and robustness of state vector and density matrix simulations when applied to the Hadamard gate. The state vector simulation was observed to run within a time frame in the range of microseconds, indicating the computational efficiency of state vector simulation in the absence of noise. However, to represent the application of noise to a state vector, numerous simulations were required to measure the purity of the ensemble density matrix computed from averaged noisy state vectors. In contrast, DM simulation is slower at performing the Hadamard gate without noise, yet it simulates quantum states in a noisy environment precisely and significantly faster than the state vector approach. This paper stresses the trade-offs between computational performance and thorough noise modeling; the choice between the two simulation methodologies, SV and DM, is determined by the individual demands of each quantum computing task. This comparative study is useful for optimizing quantum circuit simulations, improving quantum algorithm design, and moving from theoretical simulations to implementation on real quantum hardware.



Introduction
Although quantum computing is advancing at an unprecedented speed, most practitioners cannot yet run algorithms on quantum hardware, because large-scale quantum computers are not widely available. Researchers have therefore worked out ways to simulate quantum computing on classical computers. Quantum simulation refers to using either quantum or classical computers to simulate the behavior of quantum systems, such as molecules, materials, or complex quantum phenomena. Through such simulation, the operation of quantum algorithms can be understood, which is very important in developing and optimizing new algorithms, and quantum circuits can be studied in detail without physical quantum computers. Moreover, simulating quantum algorithms allows benchmarking that compares the performance of quantum computers, revealing the limitations and advantages of quantum computing over classical computing for specific tasks. Many simulators are available in a variety of programming languages, ranging from simulators packaged with open-source tools like Qiskit and Cirq, to standalone, hardware-optimized packages like Intel-QS and NVIDIA Quantum, to cloud-based simulators from most of the major quantum cloud providers, such as the 29-qubit cloud simulator included in the IonQ Quantum Cloud. These simulations are conducted through classical simulation, analog quantum simulation using controllable quantum systems, or digital quantum simulation using universal quantum computers. However, despite significant developments in quantum simulation, these simulators have distinct benefits and limitations, so choosing which simulation method to use for a specific task is essential. Thus, this paper compares the efficiency and robustness of two types of simulation, state vector and density matrix, when applied to the Hadamard gate. Understanding the efficiency and robustness of these methods guides the selection of simulation techniques in quantum algorithm design, in testing processes for Hadamard gate realization, and in other hardware-assisted simulations. It further informs quantum circuit simulation, optimization, and eventually implementation on real quantum hardware, including error correction and algorithm development. A developer can start with state vector simulations for early-stage algorithm design and then advance to density matrix simulations for proper testing against realistic noise. One of the main challenges in the development of quantum computing technologies is the balance between computational efficiency and realistic noise modeling.

Discussion
Background
This section defines the key terminology needed to understand this research paper.

A. State Vector (SV)
State vector (SV) simulation is a method for simulating the behavior of quantum systems on a classical processor. Since quantum computing is based on the principles of quantum mechanics, an n-qubit system has an SV containing 2^n complex values, the probability amplitudes of the various quantum states. For instance, a 2-qubit system can be in one of four possible states, or in a superposition of them: |00⟩, |01⟩, |10⟩, |11⟩. Accordingly, the SV for this system has four elements, each depicting the probability amplitude of the corresponding state. Quantum computation mathematically starts from the state |0⟩, a vector whose first component equals 1 and whose other components equal 0; in the 2-qubit space, the initial SV is [1, 0, 0, 0]. Quantum gates are operations that change the state of qubits and are represented by matrices. When a gate is applied to the qubits, it transforms the SV: the new SV is obtained through matrix multiplication of the gate matrix with the current SV. For instance, when the Hadamard gate is applied to the first qubit of a 2-qubit system, the new SV is obtained by multiplying the SV by the matrix representing the Hadamard operation on that qubit. In SV simulation, changes of the quantum state are introduced by applying such gate matrices to the SV successively.



Each application is a matrix-vector multiplication that adjusts the SV to the new state of the system. This process repeats for every gate operation in the quantum circuit, letting classical computers replicate the functionality of a quantum system by tracking how the SV changes over time (Matuschak and Nielsen, 2019).

B. Density Matrix (DM)
Density matrix (DM) simulation is a quantum computing technique used to model both pure and mixed states of qubits. While the SV is effective only for pure states, the DM can also describe quantum decoherence of the qubits in a system. For an n-qubit system, the DM has dimensions 2^n × 2^n. Quantum gates and other operations in DM simulation are implemented through matrix manipulations carried out on the DM: to implement the logic gates of a quantum circuit, the corresponding unitary matrices are applied to the DM, changing its state. During measurement, if necessary, the DM is transformed according to the probabilities of the measurement outcomes and reduced to the observed state (Patel, 2023). To include realistic conditions like quantum noise, DM simulations incorporate noise models expressed as superoperators; for instance, depolarizing noise and amplitude damping are each described by a particular superoperator that changes the DM to reflect the effect of the noise on the quantum system. Like SV simulations, DM simulations scale up in complexity with the number of qubits in use. For a practical example, consider a 2-qubit system: its DM is a 4×4 matrix, so its treatment is more complex than the corresponding SV representation. Computational methods like the partial trace and tensor products are employed to handle the quantum interactions and transformations.

C. Hadamard Gate
The Hadamard gate, named after the renowned French mathematician Jacques Hadamard, is an operation on a single qubit ("Jacques Hadamard," 2024) and is considered one of the most practical gates in quantum computing. This quantum gate is essential because it transforms the definite states of a qubit (the states represented as 0 and 1 in classical binary) into superposition states. This capability matters because superposition permits qubits to exist in the 0 and 1 states simultaneously, giving quantum computation more power than classical bits (Matuschak and Nielsen, 2019). When applied to a qubit in the state |0⟩ (the quantum equivalent of the binary 0), the Hadamard gate converts the qubit into an equal superposition of |0⟩ and |1⟩, expressed as (|0⟩ + |1⟩)/√2. Similarly, when applied to a qubit in the state |1⟩, the Hadamard gate converts the qubit into (|0⟩ − |1⟩)/√2. This characteristic transformation is represented mathematically by the Hadamard matrix (PennyLane, n.d.):

H = (1/√2) [ 1   1
             1  -1 ]
Shor's algorithm, named after Peter Shor, is a quantum algorithm that makes use of the Hadamard gate; it factorizes large integers, which is crucial for cracking certain technologies that rely on cryptographic security, especially RSA. The RSA (Rivest-Shamir-Adleman) algorithm is the foundation of a cryptosystem, a collection of cryptographic algorithms used for security services and objectives, whose strength rests on prime factorization (Cobb, n.d.). While classical algorithms can solve this task, Shor's algorithm offers an exponentially faster solution. In Shor's algorithm, the Hadamard gate puts the system into a superposition of states representing integers over an exponentially large range. This prepares for the Quantum Fourier Transform (QFT) step of the algorithm, which efficiently analyzes the periodic properties of a function. The Hadamard gate places all possible states in parallel, and thus, through quantum parallelism, the quantum system evaluates the function at all of these points at once.



This capability is the fundamental reason for the exponential speed-up of Shor's algorithm. Beyond state preparation in quantum algorithms like Shor's, the importance of the Hadamard gate to quantum computation extends to the practical question of when quantum computers are likely to outcompete present-day classical computers. It is thus apparent that the Hadamard gate plays a core role in the development of quantum computing.

Literature Review
This section reviews previous research, shows the progress of current research on the topic, identifies gaps where further research is needed, and distinguishes this research from prior work.

A. Innovation of DiaQ Applied to State Vector (SV) Simulation
According to Chundury et al. (2024), an innovative sparse matrix format, DiaQ, has been specifically designed to take advantage of the diagonal sparsity that quantum circuits exhibit and to enhance the memory efficiency of SV simulation. DiaQ was created by building the libdiaq library in C++ and parallelizing it with OpenMP, a combination of modern matrix computation techniques that yields high performance. By integrating DiaQ with an SV simulator, the researchers achieved performance improvements in simulated circuits with two to three times as many qubits as those simulated by earlier simulators. Experimental evaluations of DiaQ were conducted on Intel's Broadwell processors and the Frontier supercomputer, demonstrating that it "significantly reduces simulation runtimes." Noticeable speedups of up to 69% on Broadwell and 52% on Frontier were achieved when simulating quantum circuits that fit within cache capacity. One reason for the performance increase is that DiaQ's format gives the simulator memory access patterns that are closer to linear; other reasons include efficient sparse matrix-vector multiplication and matrix product operations (Chundury et al., 2024).

B. Advancements in Density Matrix (DM) Simulation
Viamontes et al. (2005) presented advancements in the simulation of quantum circuits, especially in handling errors. The paper highlights the development of an algorithm for DM simulation called Quantum Information Decision Diagrams (QuIDDs). This graph-based algorithm efficiently simulates quantum circuits in the DM representation and helps in handling errors and noise, improving computational efficiency, and performing outer product and partial trace operations. For instance, QuIDDs compress the redundancy in the matrices and vectors that arise in quantum computing, allowing polynomial time and memory resources for many important cases. Moreover, the paper demonstrates that QuIDDPro/D, the simulator based on these graph-based algorithms, significantly outperforms an optimized array-based simulator (QCSim) on a variety of quantum circuit benchmarks. Thus, the new graph-based QuIDD algorithm can efficiently simulate quantum circuits in the DM representation, enabling better handling of quantum errors and improving the scalability of simulations (Viamontes et al., 2005).

According to Ayral et al. (2023), a significant advance has occurred in the simulation of quantum circuits, tied directly to the development of a density-matrix renormalization group (DMRG) algorithm. The paper introduced a DMRG algorithm specifically created for simulating quantum circuits. At its heart, the algorithm retains the advantages of traditional time-dependent DMRG while extending its applicability to unitary operators, and hence to quantum circuits. In doing so, the algorithm achieved a circuit depth and simulation speed that compare favorably with a recent simulator running on a classical supercomputer.
Benchmarking the DMRG algorithm involved several different types of quantum circuits, including those used in Google's "quantum supremacy" experiment and the Quantum Approximate Optimization Algorithm (QAOA). Using these circuits, they found that the DMRG algorithm produced results (i.e., bitstrings) similar to those obtained



from Google's actual quantum circuits, with fewer resources and less run time. In addition, the DMRG algorithm scales polynomially, which enables it to simulate hundreds of qubits. The results thus suggest that the DMRG algorithm has a substantial advantage over actual quantum hardware in achieving high fidelity (Ayral et al., 2023).

C. Universality of the Hadamard Gate in Quantum Computation
Shepherd (2006) examined the universality of the Hadamard and Toffoli gates in quantum circuits, establishing that, up to trivial limitations, the Toffoli and Hadamard gates together form a universal set for quantum computation. In other words, any quantum computation can be built from these gates, emphasizing the importance of their role. Shepherd defines quantum depth as a new measure of complexity, calculated from the number of global Hadamard operations used in a circuit. Focusing on this measure, Shepherd explains how familiar algorithms, such as Shor's and Grover's, can be expressed with these operations. For instance, the paper demonstrates that Shor's algorithm can be performed with a quantum depth of 2, meaning the algorithm can be expressed as a sequence of processes that incorporate Hadamard transforms and Toffoli gates. Shepherd then explains how the structure of Shor's algorithm fits a model based on global Hadamard operations and Toffoli gates, with the algorithm divided into two parts: order finding and post-processing. These results help demonstrate where quantum algorithms are superior to classical computation. By expressing quantum computations as sequences of universal gates such as Hadamard and Toffoli, the complexity of quantum algorithms and the basic principles of quantum mechanics can be explained and understood more effectively, allowing one to create better classical simulations of quantum processes (Shepherd, 2006).

Methods
This section describes the method used to compare SV and DM simulation and gives a precise description of the code. Unlike previous research, which evaluated the efficiency of SV and DM simulation separately and sought ways to optimize each algorithm, this paper simulates both. Specifically, the simulation compares the efficiency of SV and DM simulation by measuring the purity level, the run time, and the effect of noise. The simulation mimics the Hadamard gate in both types of simulation: SV simulation and DM simulation. QuTiP creates a two-dimensional vector space initialized to |0⟩ and forms a superposition state from it.

A. Computer Specifications Used for the Simulation
The computer used for the comparative analysis has an Apple M1 processor, 16 GB of memory, 1 terabyte of storage, and macOS Sonoma 14.1. The Python code was run in Replit, an online development environment, with QuTiP and Matplotlib installed.

B. Procedure
The code was designed to replicate the behavior of a qubit passing through a Hadamard gate. To imitate this behavior effectively and evaluate the effectiveness of the two simulation methods, QuTiP was used to simulate the quantum computing environment, NumPy to execute mathematical operations, and the time module to measure run time. The qubit's starting state is represented as a two-dimensional vector in the base state |0⟩. The DM for this starting state is calculated by multiplying the original state by its conjugate transpose. The Hadamard gate is represented as a matrix stored in the variable H. The noise level is set to 0.1, and the simulation runs 300 times.



[Table 1: Functions and their purposes]

HSV(state): shows a qubit passing through the Hadamard gate using SV simulation
HDM(density): shows a qubit passing through the Hadamard gate using DM simulation
add_noise(state, noise_level): adds noise to the SV of the qubit and normalizes it, using the built-in function np.random.normal()
add_noise_density(density, noise_level): adds noise (0.1) to the DM of the qubit and normalizes it, using the built-in function np.random.normal()
calculate_purity(density): calculates the purity of the DM
SV_to_DM(state): converts an SV to a DM to calculate the purity of the SV after noise

* The table defines the 6 functions that I created to compare the state vector and density matrix simulations.

With these functions defined, the main part of the code executes 300 runs for both the SV and DM simulations. In every trial, the SV is converted to a DM, and the Hadamard gate is applied to both the initial SV and the DM. The code evaluates the run time, i.e., how long each simulation takes to apply the Hadamard gate to the initial state. The purity of the resulting DM and of the SV (transformed into a DM) is calculated. Noise is added to the resulting SV, and the purity of the noisy state is evaluated by calculating the ensemble DM from the averaged noisy SVs; noise is likewise added to the resulting DM, and the purity of that noisy state is calculated. The code then measures the run time each simulation needs to apply noise. Finally, it averages the results of the 300 runs and reports the run times, noisy purities, and purities. The final output prints the averaged results along with the resulting SV and DM after the Hadamard gate and the final states after the addition of noise (the SV in the form of a DM). This code compares the purity level, run time, and the effect of noise for the DM and SV simulations after applying the Hadamard gate, demonstrating which quantum simulation is more effective. A hedged sketch of this loop appears below.
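The following is a hedged reconstruction of that loop (the paper's original code is not reproduced; the Gaussian noise model and the exact normalization are assumptions, and the timing code is omitted for brevity):

    import numpy as np
    from qutip import basis, ket2dm, Qobj

    H = Qobj(np.array([[1, 1], [1, -1]]) / np.sqrt(2))
    psi0, rho0 = basis(2, 0), ket2dm(basis(2, 0))
    RUNS, NOISE = 300, 0.1                           # as stated in the Procedure

    def add_noise(state, lvl):                       # assumed SV noise model
        v = state.full().flatten() + np.random.normal(0, lvl, 2)
        return Qobj((v / np.linalg.norm(v)).reshape(2, 1))   # renormalize the state

    def add_noise_density(rho, lvl):                 # assumed DM noise model
        m = rho.full() + np.random.normal(0, lvl, (2, 2))
        m = (m + m.conj().T) / 2                     # keep the matrix Hermitian
        return Qobj(m / np.trace(m))                 # renormalize so Tr(rho) = 1

    def purity(rho):
        return float((rho * rho).tr().real)          # purity = Tr(rho^2)

    noisy_svs, dm_purities = [], []
    for _ in range(RUNS):
        psi = H * psi0                               # SV simulation of the Hadamard gate
        rho = H * rho0 * H.dag()                     # DM simulation of the Hadamard gate
        noisy_svs.append(ket2dm(add_noise(psi, NOISE)))
        dm_purities.append(purity(add_noise_density(rho, NOISE)))

    ensemble = sum(noisy_svs) / RUNS                 # ensemble DM from averaged noisy SVs
    print("SV ensemble purity:", purity(ensemble))
    print("DM average noisy purity:", float(np.mean(dm_purities)))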

Analytic Results
This section provides a detailed analysis of the results.



Figure 1: Results of the comparative analysis: run time, Qobj data, purity, and the effect of noise.

Figure 2: The graph depicts how the number of runs affects the purity attained after applying noise. For a small number of runs, the state vector purities (blue dots) have a wider dispersion, and as the number



of runs increases, they all tend to converge; meanwhile, the purities from the density matrix (orange crosses) are stable across all run numbers.

A. State Vector
As demonstrated in Figure 1, the qubit, initially in the state |0⟩, is converted into an equal superposition of |0⟩ and |1⟩ after the Hadamard gate, represented by coefficients of approximately 1/√2 in a 1×2 matrix. The SV simulation also demonstrated high efficiency, with an average run time of approximately 8.34×10^-6 seconds, placing it in the microsecond range. Moreover, the purity is essentially perfect, differing only trivially from 1, indicating that operating the Hadamard gate did not lead to significant mixing. Furthermore, the ensemble density matrix computed from the averaged noisy state vectors showed a purity of 0.905, demonstrating that the state is mixed. The average run time the state vector approach took to apply noise was around 0.000386 seconds.

B. Density Matrix
The DM shown in Figure 1 demonstrates a pure state corresponding to an equal superposition of |0⟩ and |1⟩. The off-diagonal elements (0.5) show the coherence between |0⟩ and |1⟩, as confirmed by the Qobj data. The average run time of the DM simulation is slightly slower than the SV simulation, yet still effective, with run times in the microsecond range. The purity of the DM remains the same as that of the SV. However, after noise is applied to the resulting DM, the purity drops significantly, demonstrating increased mixedness; the Qobj data shows a corresponding change in the values inside the matrix.

C. State Vector vs. Density Matrix
When SV and DM are compared directly, SV simulations process and mimic the Hadamard gate more efficiently. The average run time for a Hadamard gate is 8.34×10^-6 seconds in SV simulations and 1.84×10^-5 seconds in DM simulations, meaning SV simulations run approximately 54.67% faster for this operation. Noise, however, changes the picture. The findings indicate that adding depolarizing noise to the state vectors and normalizing the noisy state is substantially slower (0.000386 s in total), because normalization can only be performed after a random state is established, making the procedure time-consuming. On the other hand, adding noise directly to the density matrix is faster, with an average running time of 7.85×10^-5 seconds, as DM simulation changes the matrix elements directly without the need for normalization. This makes the DM approach roughly 80% faster than the SV approach at adding noise. A critical difference is that DM simulations can, by nature, show the effect of noise in a single run and accurately describe the mixed state the noise produces. SV simulations, by contrast, must be run many times to average out the effects of noise, essentially calculating the ensemble density matrix from a large number of noisy state vectors. Because the SV technique demands many simulations of the same process, the run-time gap between SV and DM grows, making the DM method significantly more time-efficient whenever any form of noise is present in the simulation. With respect to representing noise effects, the purity of the ensemble density matrix formed from the averaged noisy state vectors in the SV simulation is 0.9048. As demonstrated in Figure 2, this value does not adequately indicate the purity after noise is applied, because 0.9048 is merely an average over 300 noise-affected state vectors and cannot be taken as an exact purity. The final noisy density matrix in the DM simulation has a similar purity of 0.9050 and more clearly demonstrates the characteristics of a mixed state, with reduced off-diagonal coherence terms.

The SV simulation also demonstrated high efficiency with an average runtime of approximately 8.34e06 seconds. Such a minimal number shows that the runtime is in the microsecond range. Moreover, the purity is considered perfect with such trivial differences with 1, indicating that operating the Hadamard gate did not lead to a significant mixing. Furthermore, the ensemble density matrix from averaged noisy state vectors showed a purity of 0.905, demonstrating that the state is mixed. The average run time the state vector took to apply noise was around 0.000386 seconds. B. Density Matrix The DM shown in Figure 1 demonstrates a pure state corresponding to an equal superposition of |0⟩ and |1⟩. The off-diagonal elements (0.5) show the coherence between |0⟩ and |1⟩. The Qobj data proves the statement above. The average run time of the DM is slightly slower compared to the SV simulation. Yet, it is still effective with runtimes in the microsecond range. Moreover, the purity of the DM remains the same as the SV. However, after noise is applied to the resulting DM the purity drops significantly, demonstrating increased mixedness. This is shown by Qobj data as it shows a significant change in values inside the matrix. C. State vector vs. Density matrix When SV and DM are directly compared, SV simulations can process and mimic the Hadamard gate more computationally efficiently. The average run time for a Hadamard gate in SV simulations is 8.34×10^-6 seconds, while in DM simulations it is 1.84 × 10^-5 seconds. This means that SV simulations run approximately 54.67% faster than DM simulations for this operation. Noise causes a change in computational efficiency. The findings indicated that adding depolarizing noise to the state vectors or normalizing the noisy state, is substantially slower (in total, 0.000386 s). This is because normalization can only be performed after a random state is established, which makes the procedure time-consuming. On the other hand, adding the noise directly to the density matrix is faster. It has a running time of an average of 7.85 × 10^-5 seconds as DM simulation directly changes the elements in it without the need for normalization. This roughly makes the DM simulation 80% faster than the SV approach in adding noise. A critical difference is that the DM simulations, by nature, can show the effect of noise by simulating it only once. Furthermore, they are even accurate in describing the mixed state produced by the noise. On the other hand, SV simulations merely require to be conducted enough times to average out the effects of noise, essentially calculating the ensemble density matrix from a large number of noisy state vectors. As the SV technique demands carrying out many simulations of the same process, the difference in runtime between SV and DM increases, making the DM method significantly more time-efficient whenever there is any form of noise in the simulation. With respect to the representation of noise effects, the purity of the ensemble density matrix formed from averaged noisy state vectors in the SV simulation is 0.9048. As demonstrated in Figure 2, the purity value 0.9048 does not adequately indicate the purity after the noise is applied. This is because 0.9048 is merely an average of 300 state vectors affected by noise, which cannot be justified as an exact purity. The final noisy density matrix in the DM simulation has a similar purity of 0.9050 and it more clearly demonstrates the characteristics of a mixed state with reduced off-diagonal coherence terms. 
This indicates the sensitivity of the DM approach and its ability to represent mixed states explicitly. As a result, both SV and DM simulations are effective: SV simulations are substantially faster at simulating the Hadamard gate itself, while DM simulations are far more efficient



at adding noise. The SV simulations required numerous runs to calculate an accurate purity, which indicates inefficiency; the DM simulations, on the other hand, directly show how a mixed state should be represented under noise. This comparison underlines the trade-off between computational efficiency and a detailed representation of quantum states in a noisy environment.

Conclusion
This research explored the efficiency and robustness of state vector (SV) and density matrix (DM) simulations, focusing on mimicking the Hadamard gate in the two simulations. From this comparison, the paper found clear distinctions between SV and DM. SV simulations run notably faster when performing the Hadamard gate, with times in the microsecond range: the average run time for SV simulations was approximately 8.34×10^-6 seconds, displaying efficient performance on classical processors. By contrast, DM simulations were also effective but were 120.62% slower than SV simulations. Regarding purity and noise resistance, both maintained near-perfect purity in the absence of noise. When noise was added, the SV simulations appeared robust, keeping a high level of purity; yet, by calculating the purity and Qobj data of the ensemble DM from the averaged noisy SVs, the paper found that SV simulation requires multiple runs to average out the effects of noise, which can be computationally time-consuming. By contrast, the DM simulations showed a large drop in purity under noise, registering high degrees of mixedness within a comparatively short run time. In modeling quantum states, SV simulations are more speed-efficient and therefore best suited to representing quantum states in high-efficiency tasks that do not involve noise. DM simulations, in contrast, represent quantum states more accurately, particularly in modeling quantum decoherence, and are therefore crucial for situations requiring a view of the whole quantum state, including mixed states. Thus, the choice between SV and DM simulation depends on the requirements of the quantum computing task: SV simulations are used when one needs high speed and efficiency without noise, while DM simulations, although significantly slower, are better fitted for tasks requiring in-depth modeling of quantum states and decoherence. This comparative analysis provides evidence that a proper choice of simulation method is important for dealing with the specific challenges faced in quantum computing.



References
Ayral, T., Louvet, T., Zhou, Y., Lambert, C., Stoudenmire, E. M., & Waintal, X. (2023). Density-matrix renormalization group algorithm for simulating quantum circuits with a finite fidelity. PRX Quantum, 4(2), 020304.
Chundury, S., Li, J., Suh, I. S., & Mueller, F. (2024). DiaQ: Efficient state-vector quantum simulation. arXiv preprint arXiv:2405.01250.
Cobb, M. (n.d.). What is the RSA algorithm? Definition from SearchSecurity. TechTarget. Retrieved August 1, 2024, from https://www.techtarget.com/searchsecurity/definition/RSA
Jacques Hadamard. (2024). In Wikipedia. Retrieved August 1, 2024, from https://en.wikipedia.org/wiki/Jacques_Hadamard
Klymko, C., Sullivan, B. D., & Humble, T. S. (2014). Adiabatic quantum programming: Minor embedding with hard faults. Quantum Information Processing, 13, 709-729.
Matuschak, A., & Nielsen, M. (2019, March 18). Quantum computing for the very curious. Quantum Country. Retrieved August 1, 2024, from https://quantum.country/qcvc
Noh, K., Jiang, L., & Fefferman, B. (2020). Efficient classical simulation of noisy random quantum circuits in one dimension. Quantum, 4, 318.
Patel, A. D. (2023). The quantum density matrix and its many uses: From quantum structure to quantum chaos and noisy simulators. Journal of the Indian Institute of Science, 103(2), 401-417.
Quick start guide — Matplotlib 3.9.1 documentation. (n.d.). Matplotlib. Retrieved August 1, 2024, from https://matplotlib.org/stable/users/explain/quick_start.html
QuTiP: Quantum Toolbox in Python — QuTiP 4.7 documentation. (n.d.). https://qutip.readthedocs.io/en/qutip-4.7.x/
Shepherd, D. J. (2006). On the role of Hadamard gates in quantum circuits. Quantum Information Processing, 5, 161-177.
The value of classical quantum simulators. (2024, January 18). IonQ. Retrieved August 1, 2024, from https://ionq.com/resources/the-value-of-classical-quantum-simulators
What is a Hadamard gate? (n.d.). PennyLane. Retrieved August 1, 2024, from https://pennylane.ai/qml/glossary/what-is-a-hadamard-gate/



A Study on the Clinical Methodology for the Diagnosis of Neurodegenerative Diseases

Author: Kim, Dongeon
School: Concordia International School Hanoi

Abstract
The prevalence of neurodegenerative diseases is rising day by day. Unfortunately, there are no cures for these diseases, especially when they are diagnosed at later stages, which makes accurate early diagnosis crucial for effective treatment and management. This paper explores the most effective imaging technologies used in the diagnosis of neurodegenerative diseases, focusing on three main types of scanners: CT, MRI, and PET. The CT scanner fuses X-ray imaging with filtered back projection to create detailed images of the inside of the human body. The X-rays are weakened as they interact with different tissues, and their intensity is recorded at each angle; these intensity measurements are passed to a computer, where filtered back projection produces detailed images. Because of CT's lower sensitivity to changes in soft tissues like the brain, its role in diagnosing neurodegenerative diseases is limited. MRI uses the resonance of protons to create detailed images of the body's internal structures: radiofrequency waves disturb the alignment of protons, and when the protons realign they emit energy that is detected by sensors; this recorded energy is reconstructed into detailed images. MRI is useful for identifying structural changes in the brain associated with neurodegenerative diseases. PET scans are especially valuable for diagnosing neurodegenerative diseases because they provide functional imaging that can detect metabolic activity in the brain. PET mainly uses a radiotracer that binds to amyloid beta; the scanner detects the radiation emitted by the radiotracer and uses the data to create an image that can support early diagnosis of neurodegenerative diseases. Among these methods, PET is the most effective, as it is the only one that targets the main cause and the only one with which early diagnosis of neurodegenerative diseases is possible.

Keywords Diagnosis, Neurodegenerative diseases, CT scans, PET, MRI



Introduction
Neurodegenerative diseases caused by the accumulation of amyloid beta can be diagnosed through numerous alternative methods, each with its own benefits. Among these methods, this research paper focuses on the most productive and effective mechanisms for diagnosing certain neurodegenerative diseases before symptoms appear. Current research estimates that about 6.9 million people have been diagnosed with neurodegenerative diseases, up from about 5.9 million earlier in the 2020s [1]. Moreover, without early diagnosis, neurodegenerative diseases like Alzheimer's cannot be treated appropriately. Early detection is important for both patients and families, who need to make informed decisions about the condition and adjustments to their lifestyles. Early diagnosis of neurodegenerative diseases is therefore crucial, and the most productive and effective methods are needed. The purpose of this paper is to explore the most accurate technologies for diagnosing neurodegenerative diseases. As early diagnosis is crucial to treating such diseases, a precise methodology is needed to increase the percentage of people who can be treated effectively. The paper also explores the procedures of the different methodologies.

Discussion
Background
A. Computed Tomography (CT) Scans
1. Methodology of CT scans
Computed tomography (CT) scans are used in the diagnosis of numerous diseases. Using detailed 3D imaging, doctors can diagnose many types of symptoms and their causes, including blood clots, bone fractures, strokes, cancers, and tumors. The detailed images also support treatment planning, helping determine which therapies could lead to an effective cure [15].

Figure 1: Example of a full-body CT image



(Source: Chan, David. "How Much Does a Full-Body CT Scan Cost?" Quora, 2019, www.quora.com/How-much-does-a-full-body-CT-scan-cost. Accessed 31 July 2024.) The main methodology of CT scanning is the use of X-rays to produce detailed images. X-ray imaging exploits the differing atomic numbers of the materials in different tissues. As the X-rays pass through tissues, they are weakened, either by scattering off atoms or by absorption in certain tissues [16]. Contrast agents can be used to enhance the contrast of some tissues: body parts into which contrast material is injected become more visible relative to other tissues. CT contrast material has a high atomic number and accumulates in the bloodstream, producing a clearer image. However, contrast agents are not used in most scans [16]. Because of differences in atomic composition, varying proportions of the X-rays pass through different tissues, yielding a series of cross-sectional images of the body. Denser tissues such as bone absorb more X-rays and therefore appear lighter on the screen, whereas less dense tissues, including muscles and organs, attenuate less of the beam and appear darker. This process is repeated across the whole body at many angles to obtain a precise image of its interior [16]. Figure 2: Illustration of X-ray attenuation (the ratio of the X-ray intensity entering a body part to the intensity coming out)
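To make the attenuation idea concrete, the sketch below applies the Beer-Lambert relation, I = I0 * exp(-sum of mu_i * x_i), which underlies this description. The attenuation coefficients here are illustrative values chosen for the example, not clinical data.

```python
import numpy as np

# Beer-Lambert law: transmitted intensity I = I0 * exp(-sum(mu_i * x_i)),
# where mu_i is the attenuation coefficient of tissue i and x_i its thickness.
# Coefficient values below are illustrative only, not clinical data.

def transmitted_intensity(i0, segments):
    """segments: list of (mu per cm, thickness in cm) along one ray path."""
    total_attenuation = sum(mu * x for mu, x in segments)
    return i0 * np.exp(-total_attenuation)

ray_through_soft_tissue = [(0.2, 10.0)]        # ~10 cm of soft tissue
ray_through_bone = [(0.2, 8.0), (0.5, 2.0)]    # soft tissue plus 2 cm of bone

i0 = 1.0
print(transmitted_intensity(i0, ray_through_soft_tissue))  # more transmission -> darker pixel
print(transmitted_intensity(i0, ray_through_bone))         # more absorption  -> lighter pixel
```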

a. Advantages and disadvantages of CT scans CT scans are often an economical option compared to other imaging methods. They generally cost around $1,200, while MRIs cost approximately $2,000 [15]. Additionally, CT scans are quicker, usually taking about 10 minutes, whereas MRIs can take 30 to 60 minutes [10]. PET scans are the most expensive, averaging $7,275, and require about an hour for imaging [7]. Although these figures may change, CT scans are currently the most economical option. Despite their lower cost and shorter duration, CT scans have some drawbacks. They expose patients to a higher level of radiation, up to 1,000 times more than standard X-rays [7], which increases the risk of cancer and is particularly concerning for pregnant women. While CT scans provide detailed images quickly, they may not capture as much detail as MRIs or PET scans [7]. Another limitation is patient size: CT scanners may not be suitable for individuals over 450 pounds (about 204 kg) or with a body thickness exceeding 24 inches (60.96 cm) [7].



CT scans can also cause side effects from contrast materials, similar to those used in MRI [2]. CT is not typically used to diagnose neurodegenerative diseases directly; instead, it can show secondary effects such as brain shrinkage or displacement associated with these conditions [2].

B. Magnetic Resonance Imaging (MRI) 1. Methodology of MRI Magnetic Resonance Imaging (MRI) uses magnetic fields to take images and collect precise data about the human body [8]. It can be used to diagnose numerous conditions such as blood clots, tumors, and unusual brain changes that may indicate neurodegenerative diseases [8]. MRI can use contrast material for clearer imaging and brighter views. It mainly uses a strong magnetic field to align the body's hydrogen protons with the field; the protons precess rapidly about this stabilized axis. Radiofrequency pulses are then sent to these protons; when the radio waves and the protons in particular tissues share the same frequency, they are in resonance, and the pulses tip the protons by 90 to 180 degrees [8]. After the radiofrequency pulse sequence ends, the protons realign with the magnetic field, releasing energy. The MRI machine detects this emitted energy and uses the signal to differentiate tissues, repeating the process with magnetic field gradients to localize particular tissues [12]. The scanner can precisely detect the emitted energy and determine the tissue type from the different magnetic properties of tissues; as the protons realign they release a small amount of energy, and the scanner uses the timing of the detected signal to build a 2D image showing the exact location [8]. The main components of an MRI machine are a wide tube, a magnet, a radiofrequency (RF) coil, gradient coils, a patient table, and a computer system. The tube magnet performs the main function of MRI: producing a strong magnetic field for the protons to align with. The RF coil is crucial for tipping the protons and detecting the energy they emit. Gradient coils create spatial variations in the magnetic field so that the signal can be localized into an image [12]; without gradients, there would be only signals with no positional information. Figure 2: Labeled diagram of an MRI machine

(Source: Haynes, Heather. "Figure 1: Schematic Diagram of an MRI Machine Illustrating The..." ResearchGate, Dec. 2013, www.researchgate.net/figure/Schematic-diagram-of-an-MRI-machine-illustrating-the-concentric-arrangement-of-coils_fig1_266266309.)
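The resonance condition described above is the Larmor relation, f = gamma * B. The minimal sketch below uses the standard gyromagnetic ratio of hydrogen to compute the radiofrequency a scanner must transmit at typical clinical field strengths; the field values are illustrative.

```python
# Larmor relation: the radiofrequency pulse excites protons only when its
# frequency matches f = gamma * B, where gamma is the gyromagnetic ratio.
GAMMA_H_MHZ_PER_T = 42.58  # gyromagnetic ratio of hydrogen (1H), MHz per tesla

def larmor_frequency_mhz(field_strength_tesla):
    return GAMMA_H_MHZ_PER_T * field_strength_tesla

for b in (1.5, 3.0):  # typical clinical MRI field strengths
    print(f"{b} T scanner -> proton resonance at {larmor_frequency_mhz(b):.1f} MHz")
```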



a. Benefits and Drawbacks of MRI MRI scanners can create detailed images of any part of the body, allowing physicians to diagnose a wide range of diseases from a single full-body scan. These images help locate the exact position of disease and assist in therapy planning [6]. One significant advantage of MRI is its superior imaging of soft tissues compared to CT [6]. Additionally, MRI does not expose patients to ionizing radiation, reducing the risk of associated health problems. Among 504 patients studied, 70.6% were successfully diagnosed using MRI, highlighting its effectiveness in detecting neurodegenerative diseases through specific brain changes [9]. However, there are several downsides to MRI. The strong magnetic fields can damage the machine or pose risks to patients and doctors if small metallic objects are attracted to the device. The gadolinium-based agents (GBA) used as contrast material may cause allergic reactions [6]. The loud noise produced by MRI machines can aggravate claustrophobia, making some patients fear the procedure, which may result in unclear images. This is particularly problematic for patients who cannot remain still during the scan, as they might need to be restrained, causing anxiety and an increased heart rate [6]. Moreover, MRI is unsuitable for patients with magnetic implants, and the contrast material can cause side effects such as rash, nausea, and vomiting, with severe reactions potentially being fatal [3]. C. Amyloid PET 1. Methodology of Amyloid PET The main function of an amyloid PET scan is to image the accumulation of amyloid beta and thereby assess the likelihood of certain neurodegenerative diseases such as Alzheimer's. These scans have a high success rate, supported by the quantitative data they produce [5]. A PET scan starts with the injection of a tracer labeled with fluorine-18, a radioisotope whose emissions the PET scanner can detect. Fluorine-18 contains 9 protons and 9 neutrons and is often used in PET imaging to detect the abnormal accumulation of amyloid beta. In 2004, Klunk et al. used the radiotracer 11C Pittsburgh compound B (11C-PiB) to successfully image amyloid in Alzheimer's disease. 11C-PiB has high sensitivity and specificity for amyloid beta, but its short half-life limits its use to specialized centers. Due to these limitations, new tracers labeled with fluorine-18 (18F) were developed, including 18F-florbetapir, 18F-florbetaben, 18F-flutemetamol, and 18F-flutafuranol. Although these compounds have different chemical structures, their imaging characteristics are very similar, so they are grouped together as the 18F tracers [16]. These compounds are used to produce a digital image for diagnosing Alzheimer's disease and other neurodegenerative diseases. The tracer is injected into the bloodstream and carried to the brain, where it binds to amyloid beta so the PET scanner can detect it for diagnosis [16]. Detection is possible because, when the emitted positrons meet electrons, gamma rays are produced and picked up by the detectors. Figure 3: Normal PET image and PET image of Alzheimer's disease



(Source: Chapleau, Marianne, et al. "The Role of Amyloid PET in Imaging Neurodegenerative Disorders: A Review." Journal of Nuclear Medicine, vol. 63, no. Supplement 1, 1 June 2022, pp. 13S-19S, jnm.snmjournals.org/content/63/Supplement_1/13S, https://doi.org/10.2967/jnumed.121.263195.) a. Benefits and Drawbacks of Amyloid PET PET imaging has high sensitivity and image quality, which helps doctors identify diseases before symptoms develop. Unlike CT and MRI, PET scans provide a quantifiable "movie" of the body's physiological processes with high time resolution, rather than a single snapshot [14]. PET scans effectively show how well certain body parts function rather than just their appearance, making them particularly useful for assessing the spread of diseases and how well they are responding to therapy. They can also aid doctors during brain surgery for epilepsy [5]. The accuracy and precision of PET scans are notable, with up to 98% accuracy in diagnosing neurodegenerative diseases, because they measure the main cause of these diseases. Due to this accuracy and efficiency, PET scans are considered one of the most effective methods for diagnosing neurodegenerative diseases [5]. However, PET scans have several downsides. They require a high level of expertise from radiologists to interpret correctly, and without precise knowledge, misdiagnoses can occur. Additionally, PET scans cannot pinpoint the exact location of a diseased body part; they only determine whether a disease is present [16]. The process involves radiation and poses a risk of tissue damage that could potentially lead to cancer, since the emission occurs inside the body. The tracer used in PET scans can also cause problems if it does not decay properly [5]. Moreover, the high cost of PET scans, approximately $3,000 per scan by some estimates, can be prohibitive for lower-income individuals, and PET scans can sometimes expose patients to excessive amounts of radiation [5].
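The half-life comparison between 11C and 18F tracers can be made concrete with the standard decay law A(t) = A0 * 2^(-t / T_half). The sketch below uses the published half-lives of carbon-11 (about 20.4 minutes) and fluorine-18 (about 110 minutes) to show why 18F tracers are more practical to distribute; the one-hour delay is an illustrative scenario.

```python
def remaining_activity(a0, t_minutes, half_life_minutes):
    """Exponential radioactive decay: A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2 ** (-t_minutes / half_life_minutes)

HALF_LIFE_C11 = 20.4    # minutes, carbon-11 (used in 11C-PiB)
HALF_LIFE_F18 = 109.8   # minutes, fluorine-18 (used in 18F tracers)

# Activity left one hour after tracer production, relative to the start:
print(remaining_activity(1.0, 60, HALF_LIFE_C11))  # ~0.13 -> 11C must be used on site
print(remaining_activity(1.0, 60, HALF_LIFE_F18))  # ~0.69 -> 18F can be shipped
```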

Literature Review A. CT Scan Procedure A CT scan starts with the patient entering the CT machine. X-rays are then shot toward the patient while the machine rotates. As each X-ray beam passes through the body, it is progressively weakened. Using the attenuated X-rays, the computer reconstructs the image with filtered back projection, and the resulting 2D slices are stacked to make one detailed 3D image [15]. Figure 4: The procedure of a CT scan



(Source: Eustice, Carol. "What to Expect When Undergoing a CT Scan." Verywell Health, 31 Aug. 2022, www.verywellhealth.com/what-is-a-cat-scan-189603.) Using the data collected, the computer builds a realistic 3D image out of multiple layers of 2D images. With this data and image, the doctor locates and diagnoses the symptoms and the main cause of the disease [15]. CT has numerous benefits but a few downsides as well. One of its biggest benefits is its low price and short scan time: CT is the cheapest of the three mechanisms and takes the least time to produce a diagnosis. Another benefit is its clarity for bones and other hard tissues [2], and the in-depth information it provides is a further advantage. The main downside of CT is that it cannot produce clear images of soft tissues, because X-rays pass easily through soft tissue due to its low atomic number. CT also cannot provide early diagnosis of diseases such as Alzheimer's, because it cannot detect the primary cause of these diseases, only their symptoms and secondary effects. Moreover, CT involves substantial radiation exposure, which can be dangerous for those who are pregnant [2]. B. MRI Procedure An MRI scan starts with the patient entering the MRI machine. The gradient coils then switch rapidly, creating a loud noise, while the magnet maintains a strong magnetic field with which the protons align. Radio waves at the resonance frequency tip the protons by about 90 to 180 degrees. When the protons realign with the magnetic field, they give off energy, which the MRI detects. This process repeats with different gradients of the magnetic field to obtain a precise image and to distinguish the various tissues and abnormalities [13]. The computer then builds up the image, creating a detailed tomograph of the soft tissues, which is used to diagnose the patient accurately [8]. Figure 5: An MRI machine and the images produced by MRI

(Source: Ventura, Med. "An MRI Scanner Buying Guide." Medium, 8 Feb. 2017, medium.com/@medventura/an-mri-scanner-buying-guide-bdea482e5cce. Accessed 31 July 2024.)



Like CT, MRI has benefits but also drawbacks. Most notably, MRI involves no radiation exposure, unlike the other mechanisms used to diagnose neurodegenerative disease, because it relies on a strong magnetic field instead of ionizing radiation. Another benefit is its clearer imaging of soft tissues compared with CT scanners [12]. On the downside, MRI is relatively expensive and time-consuming compared with CT: an MRI scan usually costs about twice as much and takes considerably longer [15]. Similar to CT, it cannot directly diagnose these diseases from their main cause; it can only detect the changes the diseases produce. Lastly, excessive exposure to radio waves can raise the temperature of the skin, with some risk of skin burns and other harm [2]. C. PET Procedure A PET scan starts with the injection of a radiotracer (fluorine-18) into the bloodstream, which carries it to the brain. The radiotracer binds to amyloid beta and accumulates where the amyloid beta has accumulated. When the radiotracer decays, it emits positrons, which meet electrons a short distance from the decay site. The sensors detect the gamma rays emitted when these electrons and positrons annihilate, and the signals are used to build tomographs [16]. Figure 6: Procedure of PET imaging

(Source: "PET | Radiology | U of U School of Medicine." Medicine.utah.edu, 10 Nov. 2021, medicine.utah.edu/radiology/research/learn/pet.) D. Image Reconstruction Methods The most common image reconstruction method is filtered back projection. This reconstruction algorithm amounts to solving many equations simultaneously to ascertain the correct attenuation value along each ray [11].



The four measurements can be written as:

A + B = 1
C + D = 0
A + C = 1
B + D = 0

From these values it can be concluded that A = 1 and B, C, D = 0. This example uses only four points; in practice, the computer solves about 250,000 such equations simultaneously to reconstruct the human body [11]. Unlike this toy example, however, the human body consists of numerous different organs, and the number of variables and their exact locations cannot be specified in advance. To overcome this, numerous pictures are taken at different angles for an in-depth image of the body [11]. The computer uses the collected data to create an exact tomographic slice over the grid, and these slices are stacked to create the 3D image of the inner human body [11].
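As a minimal sketch of this system-of-equations view (not the filtered back projection algorithm itself), the four-pixel example can be solved directly. Since attenuation values cannot be negative, a non-negative least-squares solve recovers the unique answer the text states.

```python
import numpy as np
from scipy.optimize import nnls

# Each row is one ray sum through the 2x2 pixel grid [A, B; C, D]:
#   A + B = 1,  C + D = 0,  A + C = 1,  B + D = 0
rays = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
])
measurements = np.array([1.0, 0.0, 1.0, 0.0])

# Attenuation cannot be negative, so solve with non-negative least squares;
# that physical constraint makes the solution unique for this example.
pixels, residual = nnls(rays, measurements)
print(pixels.reshape(2, 2))  # [[1. 0.] [0. 0.]] -> A = 1, B = C = D = 0
```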

Analytical results A. Comparisons Among CT, MRI, and PET Scans

While CT, MRI, and PET can all be used to diagnose neurodegenerative diseases, each methodology has its own benefits and downsides, and their usefulness varies with the situation and numerous other variables. For neurodegenerative diseases, however, PET is arguably the most effective methodology even with its downsides [5]. Doctors can compare the image created with earlier scans to determine whether the patient has the disease in question and use that information in diagnosis [5]. The strengths of the PET scan are its accuracy and its capacity for early diagnosis: PET has about 98% accuracy for the diagnosis of Alzheimer's disease, and it can detect the primary cause of the disease by imaging the accumulation of amyloid beta within the brain [17]. Early diagnosis is therefore possible with PET, unlike the other scans, which diagnose based on reactions, changes, and symptoms [5]. One drawback of PET scans is the considerable money and time spent on one scan: PET is the most expensive of the three methods at about $7,275 per scan, and it also takes the longest to finish, about 1 hour on average [7].

Table 1. Comparison of three methodologies (CT, MRI, and PET)

| Methodology for diagnosis | Radiation level (mSv) | Time | Precision | Drawbacks | Benefits |
|---|---|---|---|---|---|
| CT (Computed Tomography) | 1 to 10 | 10 minutes | 60% | Cannot perform early diagnosis; unclear images of soft tissue; excessive exposure to radiation | Low price; short time; clear images |
| MRI (Magnetic Resonance Imaging) | None | 20-30 minutes | 70.6% | Cannot perform early diagnosis; radio waves can harm skin; high price | Clear images of soft tissue; no radiation exposure |
| PET (Positron Emission Tomography) | 8 | 1 hour | 98% | High price; long time; exposure to radiation | Early diagnosis; high accuracy; high precision |

Table 1 compares the three mechanisms for diagnosing neurodegenerative diseases. CT is the most productive option when time must be conserved: because CT scans deliver results fastest, they are used most often in emergency situations, and they still provide in-depth images [2]. MRI is similar to CT in most respects; each has its own advantages and disadvantages, so when both mechanisms would serve the same purpose, the choice often depends on the physician's preference. Even so, MRI outperforms CT in the imaging of soft tissues. Among the three methods, I believe PET scans are the most efficient and productive, as PET has the highest accuracy and precision and allows early diagnosis. Even when other methods could establish a diagnosis, a diagnosis is of little value if the chance of successful therapy has already dropped dramatically; the other methodologies therefore cannot fulfill the ultimate purpose of diagnosis, which is enabling a cure. This makes PET the most effective way of diagnosing, even when all of the variables are taken into account [4].

Conclusion and Limitations There were numerous limitations involved in this research. One of the main limitations was that no real-life simulation could be performed; the work is based entirely on published research, which might introduce inaccuracies, and the data could not be collected precisely down to the last digit. The positive points of CT scans are that they are cheap and take little time: a CT scan uses X-rays and can be completed at numerous gradients in a few minutes. However, because it takes less time, its quality is not as good as the other scans, especially for soft tissues, and it carries health risks due to high X-ray exposure [2]. MRI sits between the other two mechanisms: it has high-quality images but, for neurodegenerative diseases specifically, does not match the precision of PET. It involves no radiation exposure, making it safer with fewer health risks, though downsides include the side effects of radio-wave heating [19] and, as with CT, the fact that diagnosis is possible only after the disease has developed to a certain extent [14]. Lastly, PET is the most precise diagnostic mechanism for neurodegenerative disease among the three. Because PET directly detects the main cause of the disease, the accumulation of amyloid beta, it is well suited to early diagnosis, and it has the highest precision [16]. This research supports the conclusion that the PET scan is the most effective of the three mechanisms: without early diagnosis, the success rate of therapy decreases significantly, so a diagnosis made too late leaves little chance for therapy to succeed [16]. The PET mechanism could also be complemented by CT and MRI, since PET cannot directly determine the exact location of the disease while CT and MRI can; CT and MRI can therefore help doctors determine the exact therapy to be used.



References
[1] Alzheimer's Association. "2024 Alzheimer's Disease Facts and Figures." Alzheimer's Disease and Dementia, Alzheimer's Association, 2024, www.alz.org/alzheimers-dementia/facts-figures.
[2] Center for Devices and Radiological Health. "Computed Tomography (CT)." U.S. Food and Drug Administration, 14 June 2019, www.fda.gov/radiation-emitting-products/medical-x-ray-imaging/computed-tomography-ct.
[3] Cherney, Kristeen. "MRI vs. X-Ray: Pros, Cons, Costs & More." Healthline, 30 Aug. 2021, www.healthline.com/health/mri-vs-xray.
[4] Cleveland Clinic. "Neurodegenerative Diseases." Cleveland Clinic, 10 May 2023, my.clevelandclinic.org/health/diseases/24976-neurodegenerative-diseases.
[5] Eberling, Jamie L., and William J. Jagust. "Imaging Studies of Aging, Neurodegenerative Disease, and Alcoholism." Alcohol Health and Research World, vol. 19, no. 4, 1995, pp. 279-286, www.ncbi.nlm.nih.gov/pmc/articles/PMC6875739/.
[6] FDA. "Benefits and Risks." U.S. Food and Drug Administration, 9 Dec. 2017, www.fda.gov/radiation-emitting-products/mri-magnetic-resonance-imaging/benefits-and-risks.
[7] Health Images. "MRI vs. CT Scan." Health Images, 8 Jan. 2019, healthimages.com/mri-vs-ct-scan/.
[8] Johns Hopkins Medicine. "Magnetic Resonance Imaging (MRI)." Johns Hopkins Medicine, 2019, www.hopkinsmedicine.org/health/treatment-tests-and-therapies/magnetic-resonance-imaging-mri.
[9] Chouliaras, Leonidas, and John T. O'Brien. "The Use of Neuroimaging Techniques in the Early and Differential Diagnosis of Dementia." Molecular Psychiatry, vol. 28, 22 Aug. 2023, https://doi.org/10.1038/s41380-023-02215-8.
[10] Maligs, Ferb. "Filtered Back Projection | Radiology Reference Article | Radiopaedia.org." Radiopaedia, radiopaedia.org/articles/filtered-back-projection-1.
[11] Mayo Clinic. "CT Scan." Mayo Clinic, 6 Jan. 2022, www.mayoclinic.org/tests-procedures/ct-scan/about/pac-20393675.
[12] "MRI Components with Engineering Materials." RPWORLD, www.rpworld.com/en/resources/case-study/mri-components-with-engineering-materials.html.
[13] National Institute of Biomedical Imaging and Bioengineering. "Magnetic Resonance Imaging (MRI)." 17 July 2018, www.nibib.nih.gov/science-education/science-topics/magnetic-resonance-imaging-mri.
[14] Høilund-Carlsen, Poul Flemming, et al. "FDG-PET versus Amyloid-PET Imaging for Diagnosis and Response Evaluation in Alzheimer's Disease: Benefits and Pitfalls." Diagnostics, vol. 13, no. 13, 3 July 2023, p. 2254, https://doi.org/10.3390/diagnostics13132254.
[15] Siemens Healthineers. "What Is Computed Tomography (CT) and How Does It Work?" YouTube, 25 Aug. 2021, www.youtube.com/watch?v=0OdNI3lSgLc. Accessed 31 July 2024.
[16] Chapleau, Marianne, Leonardo Iaccarino, David Soleimani-Meigooni, and Gil D. Rabinovici. "The Role of Amyloid PET in Imaging Neurodegenerative Disorders: A Review." Journal of Nuclear Medicine, vol. 63, no. Supplement 1, 1 June 2022, pp. 13S-19S, https://doi.org/10.2967/jnumed.121.263195.
[17] Yetman, Daniel. "Can a PET Scan Diagnose Alzheimer's Disease?" Healthline, 22 Sept. 2023, www.healthline.com/health/alzheimers/pet-scan-for-alzheimers.



Nanotechnology for Corrosion Prevention in Deep Marine Environments

Author
Full Name (Last Name, First Name): Kim, Jinheon
School Name: Seoul Foreign School

Abstract The field of nanotechnology has grown rapidly in recent years and holds the potential to advance deep-sea exploration. The deep sea is an environment that humans have not yet explored much. Several problems in the deep marine environment obstruct human exploration of the deep sea; among them, this paper focuses on corrosion. The paper first explains background information about corrosion and the deep-sea environment for a better understanding of the discussion. It then examines four different nanomaterials for tackling the challenges of the deep sea by reviewing previous research, concluding that two of them, nanocomposites and top-layer polyurethane coating, are the more suitable solutions. These materials show great physical strength as well as protection against extreme conditions and corrosion. By applying these materials to the surfaces of submarines, deep-sea exploration may be advanced through better mitigation of corrosion.

Keywords Deep sea exploration, Corrosion, Nanotechnology, Nanocomposites, Top Layer Polyurethane Coating



Introduction Despite recent advancements in technology, the deep sea is an environment that is still largely unexplored by humans. Many new ecosystems and organisms may be discovered by exploring the deep sea, potentially bringing great advancements in the field of marine biology. However, to explore such an environment, major challenges of the deep-sea environment must be addressed, including high pressure, corrosion, and low temperatures. Among these challenges, corrosion is a crucial threat that is difficult to mitigate. Nanotechnology, the study of materials with dimensions of less than 100 nm, has immense potential; in the past decade it has been an area of exponential growth and attention from many scholars, and it can specifically be used to mitigate corrosion in deep-sea environments [1]. This paper focuses on nanomaterials and technologies that can assist humans in exploring the deep-sea environment by preventing corrosion. By reviewing the papers of other scholars, it provides background knowledge and potential solutions to the issue. The solutions, which are nanomaterials, are then compared based on their suitability for assisting human exploration in deep marine environments, resulting in two nanomaterials as the final solutions. Finally, a prediction of the impact these nanomaterials would have on the field of deep-sea exploration is proposed. This research aims to discover how nanotechnology can contribute to the exploration of deep marine environments by mitigating corrosion. It intends to contribute to further exploration of, and insight into, the environment of deep waters. The findings of this paper are expected to bring positive changes to the field of marine biology as exploration of deep-sea environments improves.

Background This section provides background information necessary for understanding the paper. It discusses corrosion and the environment of the deep sea. 1. Corrosion Many challenges still prevent humans from exploring deeper regions of the marine environment in spite of recent advances in deep-sea exploration technology and nanotechnology. As mentioned above, high pressure, corrosion, and low temperatures are significant factors that hinder human exploration of deep marine environments [1]. Of these three, this paper focuses on corrosion, as it is more difficult to mitigate than high pressure and low temperature. Corrosion is the gradual destruction of materials (usually metals) by chemical and/or electrochemical reactions with their surroundings. The most common form of corrosion is rusting, which occurs when iron reacts with oxygen and moisture to form iron oxide [16]. Chemical Reactions: Corrosion often involves oxidation-reduction (redox) reactions. For example, the rusting of iron involves iron (Fe) reacting with oxygen (O2) and water (H2O) to form iron oxide (Fe2O3) [5]. The overall chemical reaction can be represented by the following equations:

1. Formation of iron(III) hydroxide:
4Fe + 3O2 + 6H2O → 4Fe(OH)3
Here, iron reacts with oxygen and water to form iron(III) hydroxide (Fe(OH)3).

2. Conversion to hydrated iron(III) oxide (rust):
4Fe(OH)3 → 2(Fe2O3·3H2O)
Iron(III) hydroxide (Fe(OH)3) converts to hydrated iron(III) oxide (Fe2O3·3H2O), commonly known as rust.

3. Combining these, the overall corrosion reaction of iron can be summarized as:
4Fe + 3O2 + 6H2O → 2(Fe2O3·3H2O)



Electrochemical Reactions: In many cases, corrosion occurs through electrochemical processes in which metal atoms lose electrons and form ions. These ions then react with other elements to form rust [5]. Corrosion has caused much damage in various industries, resulting in economic losses and safety issues. It leads to the destruction of buildings and bridges, broken pipelines, leaking chemical plants, and flooded bathrooms. Furthermore, corroded metals can ignite fires on contact with other materials, corroded medical equipment can lead to blood poisoning, and corrosion has damaged numerous works of art and symbolic structures, such as the Statue of Liberty. 2. Deep Marine Environments Understanding the unique characteristics of the deep marine environment is crucial for reading this paper. The characteristics of the deep-sea environment contribute to the rate of corrosion, with each factor accelerating or decelerating it. This section examines the environmental conditions of the deep sea. 2.1 Definition of the Deep Sea The deep sea is defined as the area of the ocean below 200 m, where light struggles to reach; photosynthesis occurs only down to 100 to 200 m below the surface. This area comprises 90% of the Earth's marine environment, and 79% of the entire volume of the Earth's biosphere consists of waters deeper than 1,000 m, making it the planet's largest and most unexplored biome. The deep sea remains largely mysterious, with fewer humans having ventured there than to outer space [3]. 2.2 Characteristics of the Deep-Sea Environment The physical characteristics of the deep-sea environment can be divided into two major categories: biotic and abiotic. The biotic conditions consist of factors such as food and predators, while the abiotic components include light, pressure, temperature, and oxygen. The biotic characteristics of the deep marine environment differ greatly from other ecosystems on Earth because of the organisms' unique adaptations to extreme conditions. Food: In the nutrient-scarce environment of the deep sea, where photosynthesis does not take place, creatures have evolved unique feeding strategies. Food in these depths consists primarily of detritus (decaying organic matter from the upper ocean layers) and other deep-sea organisms. The primary producers in this region obtain nutrients and energy via chemosynthesis, a biological process by which certain organisms produce organic compounds using chemical energy derived from the oxidation of inorganic substances rather than sunlight. This process is crucial for life in extreme environments where sunlight is unavailable, such as deep-sea hydrothermal vents. The abiotic factors of the deep sea make it extremely difficult for animals to survive, hindering human exploration. Light: The deep sea begins at 200 m below the surface, where sunlight is too weak for photosynthesis. This zone, known as the mesopelagic or "twilight" zone, extends to about 1,000 meters, where light diminishes completely; the water turns deep blue as other colors are absorbed. Beyond 1,000 meters, the ocean is pitch black in terms of sunlight. Despite the absence of light, organisms dwelling in such environments have adopted a phenomenon known as bioluminescence, creating light in their own bodies. Pressure: Given the immense volume of water above the deepest parts of the ocean, hydrostatic pressure is a crucial environmental factor for deep-sea life.
Pressure increases by about 1 atmosphere (atm) for every 10 meters of depth. Since the deep sea ranges from 200 meters to about 11,000 meters deep, pressure in these zones spans from roughly 20 atm to over 1,100 atm. This high pressure does not significantly affect the water molecules themselves but does affect complex biomolecules.
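A minimal sketch of this rule of thumb (1 atm at the surface plus roughly 1 atm per 10 m of seawater):

```python
def pressure_at_depth_atm(depth_m):
    """Approximate absolute pressure: 1 atm at the surface plus ~1 atm
    per 10 m of seawater, per the rule of thumb described above."""
    return 1.0 + depth_m / 10.0

for depth in (200, 1_000, 11_000):  # top of the deep sea to the deepest trenches
    print(f"{depth:>6} m -> ~{pressure_at_depth_atm(depth):,.0f} atm")
```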



Temperature: The temperature difference between the surface of the ocean and the deep marine environment is stark, except in the polar regions. The temperature of the ocean near the surface is about 20 degrees Celsius, while the deeper waters, which are nearly constant around the globe, stay between about -1 and 4 degrees Celsius, except near hydrothermal vents. Despite the near-freezing temperatures, seawater does not freeze in the deep sea because salt water freezes at about -1.8 degrees Celsius. Oxygen: The deep sea generally contains sufficient oxygen, as colder water dissolves oxygen better than warmer water. This is also because deep water masses originate in the polar regions, where cold surface water sinks. Since there is not enough biomass to use up the oxygen, oxygen remains sufficient. The deep sea plays a critical role in Earth's climate system. It is instrumental in regulating ocean currents and storing carbon, which helps mitigate global warming. Carbon from surface organisms sinks to the seabed, becoming trapped in sediments that can be kilometers thick. These sediments cover much of the ocean floor, creating carbon reservoirs that are crucial for maintaining the planet's climate balance. Additionally, thermohaline circulation, a process driven by differences in water temperature and salinity, is essential for distributing heat and nutrients globally. Cold, salty water sinks and moves through the deep ocean, while warmer surface waters take its place, facilitating a global exchange that affects weather patterns and climate [7].

Literature Review The literature review section reviews the work of other scholars related to the topic of this paper. 1. Corrosion in the Deep Sea Corrosion in the deep sea is influenced by many of the environment's unique factors. This section discusses the factors that influence the rate of corrosion, focusing on how each of them affects the corrosion of metals. 1.1 Temperature One aspect that must be considered is temperature. In deep marine environments, the temperature is extremely low because sunlight cannot reach such depths, so the water gains little heat. As the temperature decreases, the motion of the atoms within molecules slows, resulting in a lower reaction rate; conversely, as temperature increases, the corrosion rate increases [7] (a toy numerical sketch of this relationship appears below). 1.2 pH Level Another aspect that has to be taken into account is pH, the acidity of the water. The pH level in deep marine environments tends to be lower (more acidic) due to the accumulation of dissolved carbon dioxide, which forms carbonic acid. This increased acidity can accelerate the corrosion of metal structures and equipment, posing a significant challenge for materials used in deep-sea exploration. Essentially, as the pH level increases, the rate of corrosion decreases [6]. 1.3 Salinity Additionally, the salinity of the water is an essential factor to consider. Since deep marine environments are part of the ocean, they have high levels of salinity. The presence of salt accelerates the electrochemical reactions that lead to corrosion, making it a major problem for any metal submerged in seawater. High salinity levels can also affect the buoyancy and stability of submarines and other equipment, making specialized materials and designs that can withstand such harsh conditions necessary [7].
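As a toy illustration of the temperature dependence described in 1.1, reaction rates are commonly modeled with the Arrhenius relation, k proportional to exp(-Ea / RT). The sketch below uses an illustrative activation energy, not a measured deep-sea value, to show how strongly near-freezing water slows a reaction relative to 20 C surface water.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def relative_rate(temp_c, activation_energy_j_mol=50_000, ref_temp_c=20.0):
    """Arrhenius-type rate relative to a 20 C reference: k ~ exp(-Ea / RT).
    The activation energy here is illustrative, not a measured value."""
    t = temp_c + 273.15
    t_ref = ref_temp_c + 273.15
    return np.exp(-activation_energy_j_mol / (R * t)) / np.exp(
        -activation_energy_j_mol / (R * t_ref)
    )

# Reaction rate at typical deep-sea temperatures vs. 20 C surface water:
for temp in (4.0, 0.0, -1.0):
    print(f"{temp:>5} C -> {relative_rate(temp):.2f}x the 20 C rate")
```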



1.4 Microbial Corrosion Corrosion is exacerbated by factors such as bacteria and microorganisms, a phenomenon known as "microbial corrosion." Microbial corrosion, also known as microbiologically influenced corrosion (MIC), is a type of corrosion caused or accelerated by the presence and activities of microorganisms. These microorganisms can include bacteria, archaea, and fungi, which influence the electrochemical processes that cause the deterioration of metals and other materials. Marine sediments provide ideal conditions for these microorganisms to thrive, affecting structures like drilling rigs and ship hulls, as well as materials like polymers and wood. Microbial corrosion is influenced by the presence of substances such as oxygen, carbon dioxide, and hydrogen sulfide in seawater, which vary by location and time [11]. 1.5 Hydrostatic Pressure Finally, hydrostatic pressure has to be considered in measuring the corrosion rate. Hydrostatic pressure affects different types of metals differently: active and passive metals differ primarily in their reactivity with their environment, particularly with regard to corrosion processes. Active metals tend to have faster corrosion rates, as they corrode freely, in contrast to passive metals, which corrode at a slower pace due to the protection of a passive film. In general, hydrostatic pressure has a positive relationship with the rate of corrosion: as hydrostatic pressure increases, the corrosion rate also increases [6].

Table 1: Deep Marine Conditions in Relation to the Rate of Corrosion

| Factor contributing to corrosion | Details | Impact on corrosion |
|---|---|---|
| Temperature | Higher temperature accelerates reaction speed in general | Positive |
| pH level | Increased acidity (lower pH) accelerates corrosion | Negative |
| Salinity | Presence of salt accelerates the electrochemical reactions that lead to corrosion | Positive |
| Microorganisms | Microbial corrosion is caused or accelerated by microorganisms or bacteria | Positive |
| Hydrostatic pressure | Increased hydrostatic pressure increases corrosion rate | Positive |

This table briefly summarizes the impact each factor (temperature, pH level, salinity, microorganisms, hydrostatic pressure) has on corrosion.

2. Nanomaterials Mitigating Corrosion
Corrosion is a significant issue in the marine industry, causing extensive economic and environmental damage. Common methods to combat microbial corrosion include anti-corrosion coatings and stainless steel. Environmental factors such as oxygen concentration, temperature, and pH significantly affect corrosion rates, with higher pH levels reducing microbial corrosion. Nanotechnology offers advanced solutions through nanocomposites and nanocoatings, which provide superior protection for marine structures. Nanocomposites, composed of a nano-sized matrix and reinforcing particles, come in polymeric, metallic, and ceramic forms. These materials are known for their high strength, low weight, corrosion resistance, and radar absorption properties, making them ideal for aircraft and submarines.



2.1 Nanocomposites and Nanofiber Glass Coatings (1) Nanocomposites Nanocomposites and nanofiber glass coatings offer significant potential for enhancing the corrosion resistance of various metals, particularly in extreme environments like the deep sea. Composites, mixtures of distinct materials that can be easily distinguished, form the basis for nanocomposites: materials composed of a matrix embedded with nanoscale particles. These nanoparticles serve as reinforcements, enhancing properties such as strength, resistance, and electrical conductivity. Depending on the matrix material, nanocomposites can be categorized into polymeric, metallic, and ceramic groups [11]. Polymeric nanocomposites have long been used in the aerospace industry due to their superior strength, rigidity, thermal stability, and dimensional stability. Similarly, nanocomposite thin-film coatings have been extensively researched for corrosion mitigation due to their unique thermal stability, mechanical properties, and molecular barrier capabilities. These coatings incorporate both organic (e.g., silica gel, para-aminobenzoic acid, benzophenones) and inorganic (e.g., clay, silica, zirconium, carbon) nanoparticles into polymer matrices (e.g., epoxy resin, polyimide, polystyrene, nylon, poly(methyl methacrylate)) at low volume fractions of 0.5% to 5% [11]. The production of nanocomposites typically involves techniques such as solution synthesis, in-situ polymerization, melt interaction, and in-situ formation. The resulting nanostructured films, applied using methods like nozzle spray, brush, or electrostatic self-assembly (ESA) processing, form highly ordered and densely packed layers that act as effective barriers, protecting underlying substrates from corrosion. For instance, poly(o-ethoxy aniline)-clay nanocomposites have demonstrated significant improvements in corrosion resistance, with increased clay content leading to exponential decreases in corrosion potential (Ecorr), corrosion current (Icorr), and corrosion rate (Rcorr), alongside significant increases in polarization resistance (Rp) [10]. Figure 1: Photo of poly(o-ethoxy aniline)-clay nanocomposite

Source: Sung, J. H., & Choi, H. J. (2004). Electrorheological characteristics of poly(o-ethoxy)aniline nanocomposite. Korea-Australia Rheology Journal, 16, 193-199. Nanocomposite coatings offer numerous advantages over conventional coatings, including excellent adhesion, resistance to mechanical and chemical corrosion, thermal resistance, luster, and self-cleaning abilities. These properties not only enhance performance but also reduce raw material consumption, energy usage, and cleaning requirements. A practical application of nanocomposite coatings can be seen in drilling tools, where innovations like Vega Tooling Company's TH coating have doubled the efficiency and lifespan of the tools while reducing heat generation during use [11].
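The polarization resistance (Rp) figures mentioned above relate to the corrosion current through the Stern-Geary equation, i_corr = B / Rp, a standard relation in corrosion electrochemistry (not taken from the cited studies themselves). The sketch below uses illustrative textbook Tafel slopes, not values measured for these coatings, to show why a higher Rp implies a lower corrosion rate.

```python
def stern_geary_b(beta_a, beta_c):
    """Stern-Geary coefficient B from anodic/cathodic Tafel slopes (V/decade)."""
    return (beta_a * beta_c) / (2.303 * (beta_a + beta_c))

def corrosion_current(rp_ohm_cm2, beta_a=0.12, beta_c=0.12):
    """i_corr = B / Rp. The Tafel slopes here are illustrative textbook values."""
    return stern_geary_b(beta_a, beta_c) / rp_ohm_cm2  # A/cm^2

# A coating that raises polarization resistance (Rp) lowers the corrosion
# current, which is why increases in Rp indicate better protection:
for rp in (1e3, 1e5, 1e7):  # roughly bare metal -> well-coated, ohm*cm^2
    print(f"Rp = {rp:.0e} ohm*cm2 -> i_corr ~ {corrosion_current(rp):.1e} A/cm2")
```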



However, nanocomposite coatings tend to agglomerate at extreme temperatures and pressures. High temperatures allow particles to move more freely, increasing the chances of particles coming into contact and sticking together. Under high pressure, the forces between nanoparticles, such as van der Waals forces, become stronger, promoting the clumping or agglomeration of particles. Furthermore, the stabilizing agents or surface coatings that prevent nanoparticles from sticking together can become less effective at high temperatures, leading to agglomeration [9]. (2) Nanofiber Glass Coatings Fiberglass, known for its high strength and non-glowing texture, benefits from nanotechnology through very strong, lightweight nanofibers with a fine-grained texture superior to traditional fiberglass. These advancements in nanotechnology and nanocomposites have significant implications for various industries, including marine, aerospace, and manufacturing [10]. 2.2 Thermal Barrier Coatings Thermal barrier coatings, both single- and multi-layered, are commonly used to enhance the high-temperature corrosion and erosion resistance of materials in gas turbines, jet engines, power stations, and transportation vehicles. These coatings often include materials such as diamond-like carbon (DLC), TiO2, ZrO2, Al2O3, V2O5, TiN, TiB2, SiC, Y2O3, and hafnium oxide. They are typically applied using techniques like plasma spraying, laser glazing, chemical vapor deposition, and physical vapor deposition [2]. Using a thermal barrier coating as a top or interfacial layer significantly improves the corrosion and erosion resistance of the material surface, providing high surface hardness and wear resistance compared to the original materials. However, the nanoporosity that can form in these coatings may increase corrosion rates. This issue can be mitigated by using DLC or other densely packed coating materials to block the porosities. Micrographs of coatings like zirconium, yttrium, and DLC on substrates demonstrate these properties and improvements [2]. 2.3 Top Layer Coatings Polyurethanes are favored as top-layer coating materials due to their excellent osmotic barrier and their chemical, thermal, hydrolytic, and oxidative stability, which are advantageous for corrosion prevention. While some coating materials such as epoxy and acrylic bases are readily available and inexpensive, their protective capabilities are limited under severe environmental conditions. Polyurethane top coatings are preferred to protect both the initial organic layers and the material surface against corrosion. Recently, fluorinated polyurethanes with the lowest known surface energy (6 mN/m) have been developed to drastically reduce the permeability of films to corrosive ions, molecules, moisture, temperature, and UV radiation. Techniques like interfacial coating and surface treatment (e.g., plasma and chemical etching) significantly improve the adhesion between protective layers and material surfaces, thereby increasing corrosion resistance [2]. Polyurethane coatings, made from polyisocyanates and polyols, have become increasingly popular due to their highly versatile chemistry and superior properties such as toughness, resistance to abrasion and chemicals, and their ability to provide hardness with flexibility. These coatings also exhibit excellent adhesion to various substrates.
Polyurethane coatings can be applied using water-borne, solvent-borne, high-solid, or powder coating systems, making them suitable for diverse applications [2]. Additionally, polyurethane coating is considered more environmentally friendly than other types of coatings, as it does not harm soil or water. On the other hand, the volatile organic compounds (VOCs) used to liquefy polyurethane are bad for the environment [13]. Despite this, polyurethane coating remains a sustainable option, as there are low-VOC alternatives such as water-based polyurethane and UV-cured polyurethane [4]. Chemistry of polyurethane coating: Polyurethane (PU) coatings are broadly defined as systems based on polyisocyanate chemistry. The



isocyanate group (-N=C=O) provides the fundamental basis for polyurethane coatings due to its high chemical reactivity and ability to react with various chemical partners. This versatility makes the isocyanate group particularly suited for the coatings market. The isocyanate group can react with compounds having reactive hydrogen, leading to different types of linkages, such as urethane, urea, allophanate, and biuret, depending on the reaction partner [8]. For example, an isocyanate group reacts with an alcohol to form urethane linkages, and the reaction between an isocyanate and an amine forms urea linkages. When an isocyanate reacts with water, it triggers the "blowing" or "foaming" reaction of polyurethane chemistry, forming an intermediate carbamic acid that decomposes into an amine and carbon dioxide. This reaction is crucial for forming foamed or cellular polyurethane materials and coatings [8].

Table 2: Nanomaterials' Strengths and Weaknesses

| Nanomaterial | Positive/neutral characteristics | Weaknesses |
|---|---|---|
| Nanocomposites | Enhanced corrosion resistance; improved mechanical properties (strength, resistance); thermal stability; chemical stability; versatility in material; strong adhesion; production flexibility; efficient resource usage; glowing texture | Not biodegradable (generates waste); agglomeration at high temperature and high pressure; electrical conductivity |
| Nanofiber glass coating | High mechanical strength; lightweight; smooth texture; non-glowing texture | Damage vulnerability: ineffective if the coating is damaged or breached |
| Thermal barrier coating | Enhances high-temperature corrosion and erosion resistance; production flexibility; high surface hardness; wear resistance | Nanoporosity may form, which increases corrosion rates |
| Top layer (polyurethane) coating | Osmotic barrier; chemical, thermal, hydrolytic, and oxidative stability; protects both initial organic layers and material surfaces against corrosion; reduced permeability to corrosive ions, moisture, temperature, and UV radiation; improved adhesion (enabled by certain techniques); versatile chemistry and adhesion to various substrates; physical hardness with flexibility; considered environmentally friendly compared to some alternatives (doesn't harm soil or water; non-toxic) | VOCs used to liquefy polyurethane are bad for the environment; however, there are low-VOC options (water-based and UV-cured) |


This table details the strengths and weaknesses of each material that was mentioned in the literature review section to provide a clearer understanding of the materials’ suitability for deep sea exploration.

Comparison Studies This section compares the four types of nanomaterials/coatings discussed above: nanocomposites, nanofiber glass coating, thermal barrier coating, and top layer (polyurethane) coating. While all of them provide corrosion resistance to a certain degree, considering everything, including environmental impact, versatility, resource availability, physical characteristics, and manufacturability, two of them appear superior to the others for exploring the deep marine environment and protecting metals against corrosion.

Nanocomposites and nanofiber glass coatings offer significant advancements in enhancing corrosion resistance for various metals, especially in challenging environments like the deep sea. Nanocomposites exhibit several strengths, including enhanced corrosion resistance, improved mechanical properties, thermal and chemical stability, strong adhesion, and efficient resource usage. Their versatility in material composition allows for customized solutions in different applications, providing flexibility in manufacturing and performance. However, nanocomposites certainly have weaknesses. They are not biodegradable, leading to waste generation, and they can agglomerate under high temperature and pressure conditions. This matters because the deep sea has high pressure, reaching up to about 1,100 atm. Additionally, their electrical conductivity can be a drawback in certain applications, such as submarines, where minimizing electromagnetic signatures is crucial. High electrical conductivity is undesirable on the surfaces of submarines, as it may promote galvanic corrosion, and the sensitive electrical wires inside the submarine must be insulated.

Nanofiber glass coatings, on the other hand, are known for their high mechanical strength and lightweight properties. Their smooth, non-reflective texture makes them ideal for applications requiring stealth. Despite these advantages, nanofiber glass coatings are vulnerable to damage; if the coating is breached or damaged, their effectiveness is significantly reduced.

Thermal barrier coatings are another advanced solution, primarily used to enhance high-temperature corrosion and erosion resistance. These coatings provide high surface hardness and wear resistance, offer good manufacturability, and are effective in harsh conditions. However, the formation of nanoporosity within these coatings can lead to increased corrosion rates, posing a significant challenge to their longevity and effectiveness. Furthermore, high-temperature corrosion resistance is of little use in the deep sea, where temperatures are cold.

Top layer polyurethane coatings present a comprehensive solution with numerous strengths. They act as an osmotic barrier and exhibit remarkable chemical, thermal, hydrolytic, and oxidative stability. These coatings protect both initial organic layers and material surfaces against corrosion, reducing the permeability of films to corrosive ions, moisture, temperature, and UV radiation: properties that are valuable in the extreme conditions of the deep sea. Additionally, their versatile chemistry allows adhesion to various substrates, combining physical hardness with flexibility. Flexibility is a crucial strength for submarines, as it allows them to endure changes in pressure and temperature as well as collisions with objects, and versatile adhesion means the coating can be used on different types of submarines made of different materials. Polyurethane coatings are also considered environmentally friendly compared to some alternatives, as they do not harm soil or water and are non-toxic. While the volatile organic compounds (VOCs) used to liquefy polyurethane can be harmful to the environment, there are low-VOC options, including water-based and UV-cured formulations, which mitigate this drawback.

Comparing these materials, nanocomposites and polyurethane coatings stand out as the superior options. Apart from the physical strength of nanocomposites, their versatility and strong adhesion are desirable for the outer layer of submarines. Polyurethane coatings provide excellent barrier properties, flexibility, environmental benefits, and enhanced adhesion, making them a comprehensive solution for corrosion protection. Both materials' strengths significantly outweigh their weaknesses, positioning them as the best choices for advanced protective coatings in various industries.



Synthesis and Discussion This section includes a diagram and an explanation examining the two nanomaterials that mitigate corrosion by protecting the body of the submarine from the characteristics of the deep marine environment that contribute to the rate of corrosion. The diagram details the problems of the deep-sea environment, the nanomaterials as solutions, the characteristics of those nanomaterials, and the general outcome of incorporating them on submarines. The diagram is then explained more thoroughly in the paragraph below. Figure 2: Nanomaterials suggested as solutions for the factors influencing the rate of corrosion. Characteristics of the nanomaterials that contribute to the mitigation of corrosion and the improvement of deep-sea exploration are listed, and the outcome is stated.

Figure 2 details how nanomaterials can help advance deep-sea exploration. The four problems that accelerate the corrosion rate are salinity, hydrostatic pressure, microorganisms that cause microbial corrosion, and temperature. Nanocomposites, with their enhanced corrosion resistance, improved mechanical properties (including strength and resistance), and great thermal and chemical stability, hold the potential to endure the low temperatures and high hydrostatic pressures of the deep sea. A top layer polyurethane coating on the metal surface of submarines, with its versatile chemistry; its chemical, thermal, and oxidative stability; and its reduced permeability to corrosion, moisture, temperature, and UV radiation, protects the submarine from salt and from microbial corrosion. Ultimately, by integrating both materials appropriately, corrosion can be mitigated when exploring the deep marine environment.

VI. Conclusion In this research, we examined four different nanomaterials for mitigating corrosion in the deep marine environment. Among them, nanocomposites and top layer polyurethane coating stand out as promising solutions, with their physical strength and corrosion resistance. Nevertheless, these solutions are not perfect: nanocomposites suffer from agglomeration at high temperatures and from high electrical conductivity, while both nanocomposites and polyurethane coating raise environmental concerns. Specific solutions targeting these problems, such as nanotechnological approaches to mitigating the agglomeration of nanocomposites at high temperatures, would be a good topic for follow-up research.



References
[1] Al Bahir, A., Imen, B., & Alqarni, N. (2024). Innovations in nanomaterials: A paradigm shift in surface engineering for corrosion mitigation. Results in Chemistry, 7, 101392.
[2] Asmatulu, R., & Claus, R. O. (2003). Corrosion protection of materials by applying nanotechnology associated studies. MRS Online Proceedings Library (OPL), 788, L11-44.
[3] Deep Ocean. (n.d.). One Ocean. https://www.oceanprotect.org/resources/issue-briefs/deep-ocean/
[4] Eco-Friendly Polyurethane UV Protective Coating Options. (n.d.). Raider Painting. Retrieved August 1, 2024, from https://www.raiderpainting.com/blog/eco-friendly-polyurethane-uv-protective-coating-options/
[5] Gao, S., Brown, B., Young, D., & Singer, M. (2018). Formation of iron oxide and iron sulfide at high temperature and their effects on corrosion. Corrosion Science, 135, 167-176.
[6] Liu, R., Liu, L., & Wang, F. (2022). The role of hydrostatic pressure on the metal corrosion in simulated deep-sea environments: A review. Journal of Materials Science & Technology, 112, 230-238.
[7] MarineBio. (2018). The Deep Sea ~ MarineBio Conservation Society. https://www.marinebio.org/oceans/deep-sea/
[8] Polyurethane Paint & Coatings: Uses, Chemistry, Process & Formulation. (n.d.). SpecialChem. https://coatings.specialchem.com/selection-guide/polyurethane-coatings
[9] Recent Advances of Graphene-Derived Nanocomposites in Water-Based Drilling Fluids (scientific figure on ResearchGate). https://www.researchgate.net/figure/Advantages-and-disadvantages-of-nanocomposites-synthesis-routes-143_tbl2_346591082 (accessed 1 Aug. 2024)
[10] Renpu, W. (2011). Oil and gas well corrosion and corrosion prevention. Advanced Well Completion Engineering, 617-700.
[11] Salmasi, A., Abraham, J., & Salmasi, F. (2022). Prospects for application of nanotechnology in marine industries: A brief review.
[12] Sung, J., & Choi, H. (2004). Electrorheological characteristics of poly(o-ethoxy)aniline nanocomposite. Korea-Australia Rheology Journal. https://www.semanticscholar.org/paper/Electrorheological-characteristics-of-nanocomposite-Sung-Choi/33ae461ffdbe854cd946c6864588969c35e2e2ef
[13] The Science Behind VOCs in Polyurethane and Their Impact. (2024, June 7). All Hardwoods & Carpet. https://www.hardwoodfloorsllc.com/blog/the-science-behind-vocs-in-polyurethane-and-their-impact/
[14] Venkatesan, R. (2005). Studies on corrosion of some structural materials in deep sea environment.
[15] Venkatesan, R., Venkatasamy, M. A., Bhaskaran, T. A., Dwarakadasa, E. S., & Ravindran, M. (2002). Corrosion of ferrous alloys in deep sea environments. British Corrosion Journal, 37(4), 257-266.
[16] What is Corrosion? (n.d.). ECS. https://www.electrochem.org/corrosion-science/



Density-Based Clustering Method of Isotropic and Anisotropic Data Using Hill-Based Abstraction

Author Full Name (Last Name, First Name): Kim, Woojin
School Name: Seoul Science High School

Abstract
K-means clustering, the most widely used classical method, is weak at classifying anisotropic data. Various anisotropic data clustering methods, such as DBSCAN and spectral clustering, were designed for this purpose, but they too have clear limitations. This study proposes a new clustering method, Hill clustering, which complements the shortcomings of these anisotropic data clustering methods. A hill is created over the entire data space by adding a Gaussian weight (a hill) at each data point, and cluster boundaries are found by starting from the maximum points and moving down the hill to detect its valleys. Because Hill clustering finds explicit boundaries between clusters, it can determine which cluster a new data point belongs to more easily than existing algorithms. In Hill clustering, the smaller the total length of each cluster's minimum spanning tree (with each edge weighted by its length), the better the clustering is judged to be.

Keywords Clustering, anisotropic data, cluster boundary, hills, minimum spanning tree



I. Introduction
1. Necessity and purpose of the study
In this study, we propose a new method for clustering anisotropic data. The most commonly used clustering method today is K-means clustering, which is effective for isotropic data but fails to cluster anisotropic data properly. Methods such as DBSCAN, based on the idea of moving a circular kernel, compensate for this, but they struggle when clusters have different densities and do not allow the number of clusters to be specified in advance. We therefore propose a clustering method that can be applied to all of these common datasets and is deterministic in the sense that the number of clusters is specified in advance. The method proposed in this study aims to be generally usable for both isotropic and anisotropic data and to determine the boundaries of the clusters, compensating for the shortcomings of existing anisotropic data clustering methods.
2. Prior research and related theories

1) K-means clustering
K-means clustering is a well-known algorithm for clustering isotropic data. It divides the data into k clusters, where k is a user-defined parameter. Clustering proceeds by allocating each data point to the nearest cluster center, that is, the centroid, which is continually updated as the average position of the allocated points. In the initialization step, k data points are randomly selected as the initial centroids. In the allocation step, each data point is assigned to the cluster with the nearest centroid, generally using the Euclidean distance. In the update step, each centroid is recomputed as the average position of the points allocated to it. The allocation and update steps are then repeated until cluster assignments no longer change. K-means is a relatively simple and fast algorithm, applies effectively to large datasets, and is easy to implement. However, the number of clusters k must be specified in advance (for example, by finding the optimal k with the Elbow method), the final result can vary greatly with the choice of initial centroids, and outliers can strongly distort the result. [1]
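For concreteness, the following is a minimal sketch of K-means with scikit-learn, the library used later in this paper; the toy data and parameter values are illustrative, not this paper's experimental setup.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, random_state=42)             # isotropic toy data
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
labels = km.labels_                                           # cluster index per point
centroids = km.cluster_centers_                               # final centroid positions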

2) DBSCAN
DBSCAN is a clustering algorithm that forms clusters based on the density of the data. It does not require the shape of the clusters to be predefined and is robust to noisy data. DBSCAN uses two main parameters: eps, which defines the neighborhood radius around a point, and minPts, which specifies the minimum number of points needed to form a cluster. The algorithm classifies each data point into three types: core points, boundary points, and noise points. A core point has at least minPts points within distance eps. A boundary point is a neighbor of a core point but does not have minPts neighbors of its own. A noise point is neither a core point nor a boundary point. The key idea of DBSCAN is to connect the core points into clusters, with the boundary points attached to those clusters; noise points belong to no cluster. The algorithm is particularly useful for clusters of different sizes or datasets containing noise. However, because it operates under the assumption that all clusters have similar densities, its performance may degrade on datasets with varying densities. DBSCAN does not require the number of clusters to be specified in advance, which makes it useful for identifying the intrinsic structure of data, but setting appropriate eps and minPts values can be difficult and requires sufficient knowledge of the characteristics and distribution of the data. [2]
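Likewise, a minimal illustrative sketch of DBSCAN with scikit-learn; the eps and min_samples values below are placeholders, chosen only to show the two parameters described above.

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.05, random_state=42)  # anisotropic toy data
db = DBSCAN(eps=0.2, min_samples=5).fit(X)
labels = db.labels_                                            # label -1 marks noise points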



3) Minimum Spanning Tree
A minimum spanning tree (MST) is a subset of a graph's edges that connects all nodes while minimizing the total edge weight. Standard methods for obtaining an MST from a given graph include Kruskal's algorithm, which at every step selects the smallest-weight edge that does not create a cycle, and Prim's algorithm, which grows the tree starting from a single node; both are greedy algorithms.
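As a sketch of how an MST can be computed in practice (and of the total-length quantity used later as the evaluation index), scipy provides a ready-made routine; the helper below is illustrative, not code from this paper.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def total_mst_length(points):
    # Complete pairwise-distance graph over the points.
    dist = squareform(pdist(np.asarray(points)))
    # MST of that graph; the sum of its edge weights is the total length.
    return minimum_spanning_tree(dist).sum()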

II. Methods
1. Tools and Materials
We used the Python 3.10.4 environment, with numpy 1.23.2 for matrix operations, the matplotlib 3.5.3 library for visualization, scipy 1.9.0 for counting the number of groups in a binary matrix, and sklearn 1.1.2 for data generation and data scaling.

2. Experimental Method
The initial settings required to cluster a given dataset are the Gaussian value (the standard deviation of the Gaussian placed at each data point) and the number of clusters k. The Gaussian value may be determined by observing how the number of maximum points changes as the value changes when a Gaussian is added at each point of the dataset; it depends on the characteristics of the dataset and on the number of clusters k. When a Gaussian with standard deviation σ is added at each data point, one can picture a hill created over the data space. The algorithm proposed in this study exploits the fact that, in this picture, the valleys of the hill become the boundaries of the clusters. The algorithm consists of creating the hill, rolling balls down the hill so that they collect in the valleys, and forming boundaries between clusters at the appropriate time (so that the data can be clustered according to the number of clusters k). Form an n * n grid over the data hill; a grid point is a maximum point if its function value is greater than the values of its four neighboring points (neighbors that have already been removed are ignored), and each maximum point is removed from the n * n grid. The same procedure is then repeated on the remaining points, removing the maximum points at each iteration. This implements the balls rolling down from the hill into the valleys. The structure of the algorithm can be divided into 1) determining the standard deviation (sigma, σ) of the Gaussian applied to the dataset, 2) generation deployment, and 3) boundary formation.
1) Standard deviation of the Gaussian applied to the dataset – the Elbow method in Hill clustering
Because the algorithm uses the geometric characteristics of the hill created by applying a Gaussian at each point of the dataset, determining the sigma of the Gaussian is an important first step. If sigma is too large, the dataset is flattened overall and its features (density, etc.) disappear; if sigma is too small, each hill is confined to a small region near its data point and does not reflect the spatial characteristics of the dataset well. The following is a picture of the dataset hill for different sigma values.



Figure 1. Changing the Gaussian standard deviation to 0.01, 0.2, and 1 on the same data
As the figure above shows, the geometric characteristics of the data hill change with the sigma value, so it is important to determine an appropriate sigma for the number of groups being clustered. To this end, we propose the Elbow method in Hill clustering. It is similar to the Elbow method for determining the number of clusters in K-means, but here it serves not only to determine the most suitable number of clusters, but also to determine an appropriate sigma value for a given number of clusters k. The idea comes from the fact that points merge as generations progress. With a small sigma there are nearly as many maximum points as data points, but as sigma increases the hill flattens little by little and the number of maximum points decreases. Plotting this relationship gives a graph that can be used, like the Elbow method, to determine the most suitable number of clusters k or the appropriate sigma value for a specific k.
2) Generation Deployment
The first step is to draw a grid over the dataset and place a ball at each grid point. Generations progress by finding the maximum points among the remaining balls of each generation and eliminating those balls. If a grid point with a ball is 1 and a grid point without a ball is 0, one can imagine zero-valued pixels gradually spreading outward from the maximum points.
3) Boundary Formation
As black pixels spread from the maximum points, groups of black pixels form. Boundary formation starts when the number of groups of black pixels becomes less than or equal to the number of clusters k (hereinafter, k). At that point, the groups of black pixels are separated and labeled, using the label function of scipy.ndimage. Generation then continues as before, but points where different groups of black pixels (different labels) expand and meet are added to the edge list. Generation continues until all balls on the grid are zero, and finally an edge list containing the coordinates of the boundary lines is obtained. As an evaluation index for the clustering result, the sum of the lengths of the minimum spanning tree of each cluster is used; the smaller the sum, the better the clustering. Hill clustering aims to make each cluster locally as dense as possible: the idea that two points that are relatively close to each other, compared to their distances to other points, should belong to the same cluster is prioritized, and the total MST length serves as the corresponding evaluation index. A minimal sketch of the maximum-point counting behind the Elbow method is shown below.
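The sketch below builds the hill on a grid for a given sigma and counts the local maxima; plotting this count over a range of sigma values should produce a curve like Figure 3. It is an illustrative approximation using scipy's maximum_filter on a fixed grid, not this paper's generation-based implementation.

import numpy as np
from scipy.ndimage import maximum_filter

def count_maxima(data, sigma, grid=100):
    x = np.linspace(0, 1, grid + 1)
    xx, yy = np.meshgrid(x, x)
    z = np.zeros_like(xx)
    for px, py in data:                      # add one Gaussian hill per data point
        z += np.exp(-((xx - px)**2 + (yy - py)**2) / (2 * sigma**2))
    # A grid point counts as a local maximum if it equals the max of its 3x3 neighborhood.
    return int(np.sum(z == maximum_filter(z, size=3)))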



III. Data & Result
In the experiment, I used three representative datasets, generated with the sklearn.datasets module of the sklearn library.

Figure 2. Three datasets used in the experiment
Each dataset is generated by the code below.

from sklearn.datasets import (make_blobs, make_circles, make_moons)

data1 = make_circles(factor=0.5, random_state=42, noise=0.05, n_samples=500)
data2 = make_moons(n_samples=500, random_state=42, noise=0.05)
data3 = make_blobs(n_samples=500, random_state=42, cluster_std=2)

After generating the data, I normalized the data range to between 0 and 1 using MinMaxScaler from sklearn.preprocessing.
1) Elbow method in Hill clustering

Figure 3. Number of extremes on the hill as a function of the standard deviation, for each dataset
For the three datasets above, s is taken as np.linspace(0.01, 0.5, 30), i.e., 30 values from 0.01 to 0.5, and the number of maxima is counted for each value. The best value of sigma for a given k can be found by the following criteria: 1. find the point where the value k is reached for the first time; 2. find the point before it where the curve decreases with a steep slope. Within the range selected by these criteria, values of s just before and after the steep decline can be tried. For data1, 0.02 is a reasonable choice because the graph drops off sharply at s = 0.02. For data2, 0.1 can be chosen because the graph drops off sharply before s = 0.1. For data3, 0.05 can be chosen because the graph drops sharply before s = 0.05 and reaches k = 3. After fitting a Gaussian with the determined s values to each dataset to create the hills, the result is shown below.

Figure 4. Hill visualization using the standard deviation determined by the Elbow method
2) Generation and Formation of Boundary
After making hills with the determined standard deviation values, I proceeded with the generation. Pixels the generation has reached are black, and the rest are white. Pixels composing the edge are marked in gray.

Figure 5. Generation in data1



Figure 6. Abbreviated to 5 steps to show the major changes
Over the generations, black pixels spread. Every pixel is white at first, and every pixel turns black during generation, starting from the extreme points. This has the effect of detecting the valleys of the hill. I call a bundle of black pixels a connected component. The components merge through the generations, and their total number decreases. When the number of connected components equals the number of clusters k chosen earlier, edge detection begins. After this, the generation proceeds as before, but the points where the components meet are marked as "edge pixels". Eventually, when the generation ends (that is, every pixel is black or gray), we obtain the boundaries of the clusters. The generation results for the other datasets are attached in the Supplementary section at the end of the paper.

Figure 7. Boundary formation result for each dataset
The images above show the result of boundary formation for each dataset. The numbers of clusters determined earlier are 2, 3, and 3, respectively. The black line is the boundary between clusters; with it, we can divide the clusters and classify the data.



Figure 8. MST evaluation of the clusterings from Hill clustering and K-means clustering, respectively

IV. Discussion
I chose the standard deviation value near the rapid decrease in the Elbow method graph, since proper hills form at that value. As expected, choosing the standard deviation of the Gaussian by this rule produced good clustering boundaries. We showed that expanding from the peaks of the hill can produce good cluster boundaries for the goal that "points that are close to each other are classified into the same cluster". When we measure the error (total MST length as an error function) for the Moons dataset clustered by Hill clustering and by K-means clustering, Hill clustering shows the better result. I chose the total length of the minimum spanning tree as the evaluation function because it shows how locally condensed the clusters are. If the density in every part of each cluster is high, the total MST length will be small, so it can serve as a criterion for clustering whose purpose is that points close to each other end up in the same cluster. If the total MST length is the criterion, why not use the MST itself to divide the clusters? Considering MST construction algorithms such as Kruskal's algorithm, using the MST directly as a clustering method seems natural for the goal we are pursuing, and MST-based clustering does in fact already exist. However, it has a crucial drawback: it is very sensitive to noise. As a simple example, the dataset below will not be clustered as well as by other algorithms.



Figure 9. Example dataset for which MST-based clustering performs poorly
Suppose we cluster the dataset above into two clusters. In MST-based clustering, we delete the longest edge to obtain two clusters. After this process, the point in the lower right becomes one cluster and all the other points become the other, which is not a useful result; the method is therefore not appropriate for such data. In this research, I countered this problem with the method proposed above: with the idea of the Gaussian distribution, the sensitivity to noise can be resolved. For data with many outliers, a slightly different strategy is available. We can determine the number of clusters k with the Elbow method, which is especially useful in the real world, since in most cases we lack sufficient information about the data. Similar to the method introduced before, we can choose the number of clusters k by reading the y-axis value near the steep drop in the curve. Consider dividing a dataset into k clusters: if we use this k directly, outliers can cause problems, because they are strong candidates for extremes, and the clustering goes wrong when outliers seed the generation. In this case, we can determine k with this strategy, perform the clustering, and afterwards reallocate the outliers to the other clusters.
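To make the drawback concrete, here is a minimal sketch (not this paper's code) of the classic MST-based clustering described above: build the MST, delete the k − 1 longest edges, and take the connected components as clusters. On the example dataset, the longest edge is the one reaching the lower-right outlier, which then becomes a cluster by itself.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_clustering(points, k):
    mst = minimum_spanning_tree(squareform(pdist(np.asarray(points)))).toarray()
    for _ in range(k - 1):                        # cut the k-1 heaviest MST edges
        i, j = np.unravel_index(np.argmax(mst), mst.shape)
        mst[i, j] = 0
    _, labels = connected_components(mst, directed=False)
    return labels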

V. Conclusion
This new algorithm is significant because it addresses the problems of classic algorithms, such as K-means clustering's limitation on anisotropic data and DBSCAN's parameter sensitivity. K-means clustering works well on isotropic data but not on anisotropic data, whereas the idea behind Hill clustering does not depend on whether the data are isotropic. This generality is a distinctive strength when handling unknown data. K-means clustering allocates new data by comparing distances to each cluster's centroid, which amounts to drawing the cluster boundaries as straight lines: the perpendicular bisectors of the segments connecting the centroids. By contrast, Hill clustering can form more flexible boundaries, including complex curves. Furthermore, it provides an explicit boundary, which prevents incorrect allocation.



DBSCAN, the method most commonly used for anisotropic data clustering, has the disadvantage of operating under the assumption that all clusters have similar densities. Hill clustering is relatively less sensitive to the density of each cluster because it creates a hill by adding Gaussians to the data, adjusting the sigma value, and clustering with the hill. In addition, DBSCAN is unsuitable when a specific number of clusters is required, because that number cannot be specified. DBSCAN also relies heavily on eps (the radius defining neighbors) and minPts (the minimum number of points), which must be determined carefully. In Hill clustering, by contrast, the only parameter with an important effect on the clustering is the Gaussian sigma, and that value can be determined from the dataset using the Elbow method. DBSCAN is also sensitive to how the data at boundary points are handled, since the shape of a cluster can vary greatly with that choice, whereas the Hill method does not need to care about boundary-point handling.
During the clustering process, the grid value and the ball value (the numbers of grid points and balls used to represent the hills of the data space) must be chosen to run the generation; in a two-dimensional space, the grid is arranged as grid * grid and the balls as ball * ball. Since the amount of computation varies with the grid and ball values, the cluster boundary can be obtained at the desired precision by adjusting them as necessary: if you need to explore broadly, analyze the data quickly, and find its features, small grid and ball values give a rough cluster boundary, while larger values give a more precise one. When new data are allocated to the clustered data, we simply compute where they fall. Hill clustering is stable under the addition of data, since it is not sensitive to noise or outliers, and its greatest merit is that the computation required to classify new data is very simple. Once the clustering has been built, it can be reused in various ways with fast computation on big data.

VI. Proposal
Because the algorithm uses the geometric features of the hill when forming the boundary, the boundary sometimes does not converge to a single line and instead splits into two lines. Future work could improve the edge formation method, the post-processing method, or the sigma adjustment method to address this point.

VII. References
[1] Steinhaus, Hugo (1957). "Sur la division des corps matériels en parties". Bull. Acad. Polon. Sci. (in French). 4 (12): 801–804.
[2] Ester, Martin; Kriegel, Hans-Peter; Sander, Jörg; Xu, Xiaowei (1996). Simoudis, Evangelos; Han, Jiawei; Fayyad, Usama M. (eds.). A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96).
[3] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825-2830.

VIII. Supplemental Information



Figure 10. Generation in data2



Figure 11. Generation in data3



Main Python code for the research

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import (make_blobs, make_circles, make_moons)
import pickle
from scipy.ndimage import label

def gaussian(x, y, x0, y0, sigma=1):
    # Gaussian hill centered at (x0, y0).
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

def cone(x, y, x0, y0, sigma=1):
    # Alternative cone-shaped hill.
    return np.maximum(0, 1 - np.sqrt((x - x0)**2 + (y - y0)**2) / sigma)

def plot_data(data):
    data = np.array(data)
    plt.scatter(data[:, 0], data[:, 1])
    plt.show()

def plot_data_hill(data, s, grid):
    x = np.linspace(0, 1, grid+1)
    y = np.linspace(0, 1, grid+1)
    x, y = np.meshgrid(x, y)
    z = np.zeros_like(x)
    for point in data:
        z += gaussian(x, y, point[0], point[1], s)
    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_surface(x, y, z, cmap='viridis', edgecolor='none')
    ax.set_title('3D Gaussian Plot')
    plt.show()

def simul_edge(data, lim, grid, ball, sigma, hill_method=0, k=2):
    # grid//ball should be int
    X = data
    xlim, ylim = lim
    x = np.linspace(0, xlim, grid+1)
    y = np.linspace(0, ylim, grid+1)
    x, y = np.meshgrid(x, y)
    z = np.zeros_like(x)
    if hill_method == 0:
        for point in X:
            z += gaussian(x, y, point[0], point[1], sigma=sigma)
    elif hill_method == 1:
        for point in X:
            z += cone(x, y, point[0], point[1], sigma=sigma)

    # add ball
    ball_x = np.linspace(0, xlim, ball+1)
    ball_y = np.linspace(0, ylim, ball+1)
    ball_x, ball_y = np.meshgrid(ball_x, ball_y)
    ball_z = np.array([[z[i][j] for j in np.arange(0, grid+grid//ball, grid//ball)]
                       for i in np.arange(0, grid+grid//ball, grid//ball)])

    # iter
    delta = [[-1, 0], [1, 0], [0, -1], [0, 1]]
    level = np.zeros((ball+3, ball+3))
    level[1:ball+2, 1:ball+2] = ball_z.copy()
    level_bin = np.zeros((ball+3, ball+3))
    level_bin[1:ball+2, 1:ball+2] = np.ones((ball+1, ball+1))
    edge = np.zeros((ball+3, ball+3))
    temp_clusters = []
    clusters_array = [temp_clusters]
    level_array = [level.copy()]
    level_bin_array = [level_bin]
    edge_array = []
    gen = 1
    edge = []
    reach_to_k = 0
    f1 = 1
    f2 = 1
    while np.any(level_bin == 1):
        if f1:
            f1 = 0
        else:
            if not reach_to_k:
                # Count the connected components of removed (black) pixels.
                clusters, k_ = label(level_bin[1:-1, 1:-1] == 0)
                if k_ <= k:
                    reach_to_k = 1
        if not reach_to_k:
            # Before boundary formation: just remove the current local maxima.
            temp = level.copy()
            temp_bin = level_bin.copy()
            for i in range(1, ball+2):
                for j in range(1, ball+2):
                    if not temp_bin[i, j]:
                        continue
                    flag = 1
                    for d in delta:
                        if level[i+d[0], j+d[1]] > level[i, j]:
                            flag = 0
                            break
                    if flag:
                        temp[i, j] = 0
                        temp_bin[i, j] = 0
        else:
            if f2:
                f2 = 0
                pad = np.zeros((ball+3, ball+3))
                pad[1:-1, 1:-1] = clusters
                clusters = pad.copy()
                clusters = clusters.astype(np.uint8)
            temp = level.copy()
            temp_bin = level_bin.copy()
            temp_clusters = clusters.copy()
            for i in range(1, ball+2):
                for j in range(1, ball+2):
                    if not temp_bin[i, j]:
                        continue
                    flag = 1
                    for d in delta:
                        if level[i+d[0], j+d[1]] > level[i, j]:
                            flag = 0
                            break
                    if flag:
                        # A maximum point adjacent to two different labels is an edge pixel.
                        types = []
                        for d in delta:
                            if 0 < i+d[0] and i+d[0] < ball+2 and 0 < j+d[1] and j+d[1] < ball+2:  # ball+3 to ball+2
                                if temp_bin[i+d[0], j+d[1]] == 0:
                                    types.append(temp_clusters[i+d[0], j+d[1]])
                        if 0 in types:
                            types.remove(0)
                        if len(set(types)) > 1:
                            edge.append([i, j])
                        temp[i, j] = 0
                        temp_bin[i, j] = 0
                        temp_clusters[i, j] = 0 if not types else types[0]
        gen += 1
        level_array.append(temp)
        level_bin_array.append(temp_bin)
        clusters_array.append(temp_clusters)
        level = temp
        level_bin = temp_bin
        clusters = temp_clusters
    bin_array = level_bin.copy()
    return level_bin_array, edge

def plot_level(level_bin_array, size1=2):
    gen_num = len(level_bin_array)
    rows = gen_num//10 + 1
    plt.figure(figsize=(size1*10, size1*rows))
    for i in range(len(level_bin_array)):
        plt.subplot(rows, 10, i+1)
        plt.title(f"gen : {i}")
        plt.axis("off")
        plt.imshow(level_bin_array[i][::-1], cmap='gray')
    plt.show()


Evaluation of Noise Reduction Filters on Retinal Images with Varying Noise Levels for Optimal Image Restoration

Author Full Name (Last Name, First Name): Kim, Yewon
School Name: Western Reserve Academy

Abstract
Retinal fundus images play a crucial role in ophthalmology for the diagnosis of retinal diseases. However, noise often corrupts these images, making filters necessary to reduce it. Many ophthalmologists have applied enhanced filters for denoising purposes, and we were curious about the effectiveness of conventional filters. In this study, we applied different intensities of Gaussian noise to a retinal image and used the Adaptive Median Filter, Butterworth Low-Pass Filter, and Gaussian Filter to reduce the noise, in order to find the filter most effective for image recovery. We compared the Peak Signal-to-Noise Ratio (PSNR) values of the three filters to evaluate which most effectively denoised images at each intensity level. Our findings showed that the Adaptive Median Filter and the Butterworth Low-Pass Filter were each effective at different intensities, while the Gaussian Filter was the least effective at all intensities. Taking average PSNR as the sole parameter, the Butterworth Low-Pass Filter gave the best results overall. These results show that the Butterworth Low-Pass Filter is an effective method for denoising retinal fundus images affected by Gaussian noise.

Keywords Retinal fundus images; Ophthalmology; Gaussian Noise; Adaptive Median Filter; Butterworth LowPass Filter; Gaussian Filter; Peak Signal-to-Noise Ratio



Introduction
Retinal fundus images play an important role in ophthalmology for the diagnosis of retinal diseases such as diabetic retinopathy, as they provide detailed information about the retina [1]. As retinal diseases can ultimately cause blindness, early diagnosis through retinal image analysis is critical [2]. However, noise often degrades image quality, making it difficult for ophthalmologists to detect and interpret eye disorders [3]. Therefore, denoising retinal fundus images is essential to accurately diagnose and treat retinal diseases. Filters denoise images by adjusting their characteristics. While filters reduce noise, many lead to sudden changes in color, loss of image details, and creation of artificial boundaries [1]. Thus, selecting a filter that preserves the quality of the original image is crucial; to do so, scientists have experimented with enhanced filters in recent years. Our research question stemmed from our curiosity about the effectiveness of conventional filters in reducing noise in retinal fundus images. The conventional filters we chose to test in this study were the Adaptive Median Filter, Butterworth Low-Pass Filter, and Gaussian Filter. We examined which of these three filters was most effective in restoring noisy retinal images, and hypothesized that the Adaptive Median Filter would restore images more clearly than the other two filters.

Materials and Methods
Materials
1. Visual Studio Code for coding
2. Image of retina

Methods
We obtained a suitable retinal image to begin. Then, we downloaded Visual Studio Code to proceed with the coding part of the study. To eventually compare the original image with recovered versions of noisier images, we applied Gaussian noise to the original retinal image using Python code. Gaussian noise is a type of noise that is evenly distributed over an image; it adds to each pixel a value drawn from a zero-mean Gaussian distribution, leading to distortions [4]. We decided to test four different noise intensities—20, 30, 40, and 50—to compare how the filters recovered the images at varying intensities of noise. We chose three different filters—the Adaptive Median Filter, Butterworth Low-Pass Filter, and Gaussian Filter—to evaluate which recovered the noisy retinal images most effectively, and applied them through Python code, resulting in a total of 12 new images (4 noise intensities x 3 filters).
The Adaptive Median Filter changes the filter window's size based on whether the value at the window center is noise or not. If the pixel at the filter window's center is a noise point, the median value replaces that pixel; otherwise, the pixel value remains the same. Through this process, the Adaptive Median Filter can reduce image distortion as well as protect the image details. Below is the formula for this process:

407


If Xmin < Xmed < Xmax, then: output Xi,j if Xmin < Xi,j < Xmax, and output Xmed otherwise; if not, enlarge the window and repeat, up to the maximum size Smax [5].
Xmin, Xmax, Xmed are the filter window's minimum, maximum, and median gray values, respectively; Xi,j is the gray value at coordinate (i,j); and Smax is the specified maximum window size [5]. In this study, we set a window size of 7x7 for the Adaptive Median Filter.
The Butterworth Low-Pass Filter converts an image from the spatial domain into the frequency domain—a space defined by the Fourier transform—via the Fast Fourier Transform [6]. After the transformation, the cutoff frequency decides which frequency values to accept or cut [7]: frequencies lower than the cutoff remain, and frequencies higher than it are converted to 0 (the result is later converted back to the spatial domain). This process eliminates high frequencies, rendering smoothed images with less noise and higher image quality. Finally, the Inverse Fast Fourier Transform reverses the process to recover the spatial domain [6]. The following formula describes this process; n represents the filter order, D(u,v) represents the distance between the frequency rectangle's center and the point (u,v) in the frequency-domain image, and D0 stands for the cutoff frequency [6]. In this study, the cutoff frequency was 30.

H(u,v) = 1 / (1 + [D(u,v) / D0]^(2n))  [6]
The Gaussian Filter obtains a weighted mean of pixel values across the image by averaging each original pixel with its neighboring pixels. Pixels closer to the center of the kernel are assigned more weight, and pixels farther from the center less [8]. This filter is efficient at suppressing noise and rendering a smooth image, but it distorts pixels in regions with abrupt changes in color and erodes the edges of images [9]. The following formula represents the process of the Gaussian Filter:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))  [9]
Here, x and y are the coordinates within the kernel and σ² is the variance [9]. We set a kernel size of 7x7 for the Gaussian Filter in this study. In order to determine the best filter for each intensity and to compare the quality of each denoised image to its original, we selected the Peak Signal-to-Noise Ratio (PSNR) as the dependent variable. PSNR compares denoised images with their originals to measure the quality of the recovered images. Specifically, PSNR is the ratio of the maximum possible pixel value to the Mean Squared Error (MSE) between the denoised image and its original [11]. The formula below represents PSNR, with MAX representing the maximum pixel value of the original image [12]:



PSNR = 10 · log10(MAX² / MSE)  [12]
MSE calculates the mean squared difference between the original and denoised images [11]. The following formula represents MSE, with M and N denoting the image's width and height, s(i,j) representing the value of the original pixel, and p(i,j) representing the value of the restored pixel [12]:
MSE = (1 / (M · N)) · Σᵢ Σⱼ (s(i,j) − p(i,j))²  [12]
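To make the pipeline concrete, below is a minimal, hedged sketch of the study's steps for a grayscale image stored as a float numpy array in [0, 255]. The Butterworth order n = 2, the synthetic placeholder image, and the use of scipy are our illustrative assumptions, not necessarily the study's exact implementation (scipy's median_filter, for instance, is the plain rather than adaptive variant).

import numpy as np
from scipy import ndimage

def add_gaussian_noise(img, sigma):
    # Zero-mean Gaussian noise; sigma is the "intensity" (20, 30, 40, 50).
    return np.clip(img + np.random.normal(0, sigma, img.shape), 0, 255)

def butterworth_lowpass(img, d0=30, n=2):
    # FFT to the frequency domain, attenuate with H = 1 / (1 + (D/D0)^(2n)),
    # then inverse FFT back to the spatial domain.
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None]**2 + v[None, :]**2)
    H = 1.0 / (1.0 + (D / d0)**(2 * n))
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * H)))

def psnr(original, restored, max_val=255.0):
    mse = np.mean((original - restored)**2)
    return 10 * np.log10(max_val**2 / mse)

img = np.random.rand(256, 256) * 255                      # placeholder; load the retina image here
noisy = add_gaussian_noise(img, 20)
restored_b = butterworth_lowpass(noisy, d0=30)            # cutoff frequency 30
restored_g = ndimage.gaussian_filter(noisy, sigma=1)      # Gaussian filter
restored_m = ndimage.median_filter(noisy, size=7)         # plain (non-adaptive) median, 7x7 window
print(psnr(img, restored_b))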

Results For the Adaptive Median Filter, we gathered values of 31.9456 dB, 29.3346 dB, 28.4361 dB, and 28.0820 dB for intensities 20, 30, 40, and 50, respectively. For the Butterworth Low-Pass Filter, we achieved values of 31.7941 dB, 29.9534 dB, 28.7486 dB, and 27.8427 dB for intensities 20, 30, 40, and 50, respectively. For the Gaussian Filter, we obtained values of 31.5808 dB, 29.0020 dB, 28.5673 dB, and 27.9322 dB for intensities 20, 30, 40, and 50, respectively. The average PSNR values within the intensity range of 20-50 were 29.4496 dB for the Adaptive Median Filter, 29.5847 dB for the Butterworth Low-Pass Filter, and 29.2706 dB for the Gaussian Filter. Our results show the general trend that the higher the intensity, the lower the quality of the recovered image (lower PSNR value).

Filter                         σ = 20     σ = 30     σ = 40     σ = 50     Average (σ = 20 to 50)
Adaptive Median Filter         31.9456    29.3346    28.4361    28.0820    29.4496
Butterworth Low-Pass Filter    31.7941    29.9534    28.7486    27.8427    29.5847
Gaussian Filter                31.5808    29.0020    28.5673    27.9322    29.2706
(All PSNR values in dB.)

Table 1. PSNR values for Adaptive Median Filter, Butterworth Low-Pass Filter, and Gaussian Filter at intensity values 20, 30, 40, 50.



[Image panels (a)–(q) omitted; original retina image from [13]. See the caption below.]

Figure 1. (a) is the original retina image; (b, c, d, e) are images with noise added at intensities of 20, 30, 40, and 50, respectively (from left to right); (f, g, h, i) are images denoised with the Adaptive Median Filter at each intensity; (j, k, l, m) are images denoised with the Butterworth Low-Pass Filter at each intensity; (n, o, p, q) are images denoised with the Gaussian Filter at each intensity.
Figure 1 demonstrates that a higher PSNR value corresponds to recovered images of higher quality (less noise). As intensity increased from 20 to 50, images contained more residual noise. The Butterworth Low-Pass Filter had the highest average PSNR value from intensity 20 to 50; it suppressed noise effectively by smoothing the images, but tended to blur them excessively compared to the other two filters. The Adaptive Median Filter, in comparison, not only reduced the residual noise but also protected image details. The Gaussian Filter blurred the whole image by smoothing it, leading to less distinct edges. Image reconstruction at noise level 50 did not yield analytically meaningful results, as considerable residual noise remained in the images even after the filters were applied.



Discussion
The PSNR values indicate that the higher the value, the better the image recovery. As shown in Table 1, images recovered best with the Butterworth Low-Pass Filter at Gaussian noise intensities of 30 and 40, with PSNR values of 29.9534 dB and 28.7486 dB, respectively. At intensities 20 and 50, the Adaptive Median Filter was most effective, with PSNR values of 31.9456 dB and 28.0820 dB, respectively. Overall, the Butterworth Low-Pass Filter showed the best recovery, with an average PSNR value (intensity range 20-50) of 29.5847 dB. As there are many components to consider in an image, such as color, image details, and boundaries, it is difficult to conclude that the Butterworth Low-Pass Filter is the most effective filter among the three for all imaging cases. However, we can conclude that it is the most effective of the three filters for denoising retinal fundus images affected by Gaussian noise. The results show that our hypothesis was incorrect, as the Butterworth Low-Pass Filter, not the Adaptive Median Filter, produced the best PSNR values on average.
A limitation of this study is the parameter used to quantify the quality of recovered images, as PSNR was the only dependent variable in the study's design. Other quantitative measures that could additionally evaluate the quality of denoised images include the Structural Similarity Index Map (SSIM) and Mean Squared Error (MSE). These two parameters compare images differently from PSNR, so they could identify a different filter as most effective. SSIM compares luminance, contrast, and structure to calculate the structural similarity between the original and denoised images [11]. MSE works as described above.

Conclusion
Overall, images recovered with the Butterworth Low-Pass Filter showed the highest average PSNR value, meaning that this filter was the most effective of the three tested at recovering retinal fundus images. One direction for further research is to create and apply enhanced filters based on the Butterworth Low-Pass Filter that can satisfy a wide range of conditions across different types of noise and imaging machines. Another is to use quantitative variables other than PSNR to measure the quality of denoised (recovered) images.



References
[1] Dai, P., Sheng, H., Zhang, J., Li, L., Wu, J., & Fan, M. (2016, September 4). Retinal fundus image enhancement using the normalized convolution and noise removing. International Journal of Biomedical Imaging. https://pubmed.ncbi.nlm.nih.gov/27688745/
[2] Elseid, A. a. G., Elmanna, M. E., & Hamza, A. O. (2018). Evaluation of spatial filtering techniques in retinal fundus images. American Journal of Artificial Intelligence, 2(2), 16. https://doi.org/10.11648/j.ajai.20180202.11
[3] Sonali, N., Sahu, S., Singh, A. K., Ghrera, S., & Elhoseny, M. (2019). An approach for de-noising and contrast enhancement of retinal fundus image using CLAHE. Optics & Laser Technology, 110, 87–98. https://doi.org/10.1016/j.optlastec.2018.06.061
[4] Smolka, B., Kusnik, D., & Radlak, K. (2023). On the reduction of mixed Gaussian and impulsive noise in heavily corrupted color images. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-48036-1
[5] Tang, R., Zhou, X., & Wang, D. (2017). Improved adaptive median filter algorithm for removing impulse noise from grayscale images. International Journal of Engineering: Transactions A: Basics, 30(10), 1503-1509.
[6] Shahnawaz, M., Shaikh, M. S., & Wadhwani, R. (2016, January 1). Analysis of digital image filters in frequency domain. ResearchGate. https://www.academia.edu/80844637/Analysis_of_Digital_Image_Filters_in_Frequency_Domain
[7] Yu, B., Gabriel, D., Noble, L., & An, K.-N. (1999, August 10). Estimate of the optimum cutoff frequency for the Butterworth low-pass digital filter. ResearchGate. https://www.researchgate.net/publication/263276258_Estimate_of_the_Optimum_Cutoff_Frequency_for_the_Butterworth_Low-Pass_Digital_Filter
[8] Yu, J. (2023). Based on Gaussian filter to improve the effect of the images in Gaussian noise and pepper noise. Journal of Physics Conference Series, 2580(1), 012062. https://doi.org/10.1088/1742-6596/2580/1/012062
[9] An adaptive Gaussian filter for noise reduction and edge detection. (1993). IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/document/373563
[10] Kumar, B. K. S. (2012). Image denoising based on gaussian/bilateral filter and its method noise thresholding. Signal Image and Video Processing, 7(6), 1159–1172. https://doi.org/10.1007/s11760-012-0372-7
[11] Nazir, N., Sarwar, A., & Saini, B. S. (2024). Recent developments in denoising medical images using deep learning: An overview of models, techniques, and challenges. Micron, 180, 103615. https://doi.org/10.1016/j.micron.2024.103615
[12] Ma, H., & Nie, Y. (2018). A two-stage filter for removing salt-and-pepper noise using noise detector based on characteristic difference parameter and adaptive directional mean filter. PLoS ONE, 13(10), e0205736. https://doi.org/10.1371/journal.pone.0205736
[13] Hopfauf, B. (2023, May 1). What does retinal imaging show? Calgary Family Eye Doctors. https://calgaryfamilyeyedoctors.com/what-does-retinalimaging-show/



The Effect of Ventilation on Indoor Dust Concentration

Author Full Name (Last Name, First Name): Ko, Bisong
School Name: Branksome Hall Asia

Abstract This study investigates the correlation between room ventilation duration and indoor dust concentration. Using an Arduino Uno and dust sensor, dust levels were measured at 5-minute intervals over 25 minutes. The results indicate a significant decrease in dust concentration with increased ventilation time, confirming the hypothesis that prolonged ventilation improves indoor air quality. Factors such as wind speed and temperature, which were monitored during daytime experiments, are also considered. This research provides valuable insights into optimal ventilation practices for reducing indoor dust levels and enhancing overall air quality.



Introduction
Background Research
Air quality has garnered significant attention due to its profound impact on public health. Fine particulate matter (PM2.5) is a notable air pollutant linked to respiratory and cardiovascular diseases. Indoors, where people spend a significant portion of their time, air quality can be compromised by inadequate ventilation, accumulation of dust, and other pollutants. Addressing indoor air quality is crucial, as it directly influences human health and well-being. Poor indoor air quality can exacerbate conditions such as asthma, allergies, and other respiratory issues, underscoring the need for effective strategies to mitigate indoor pollution ("Ventilation and Indoor Air Quality").
Research Focus
This project investigates the relationship between room ventilation and dust concentration, a key factor in indoor air quality. Effective ventilation is often recommended to reduce indoor pollutants, but the specific impact of varying ventilation durations on dust concentration is not well-documented. While extensive studies have focused on gaseous pollutants, less is known about how ventilation affects particulate matter like dust. Understanding this relationship is essential for formulating guidelines that ensure healthier indoor environments. Despite the importance of indoor air quality, detailed studies on how different durations of ventilation influence dust concentration are lacking. Most research has not systematically explored the quantitative effects of incremental ventilation times on dust levels. Additionally, existing studies often do not maintain consistent experimental conditions, such as using the same room and similar weather conditions across trials, which can affect the results. This study aims to fill these gaps by systematically examining how various durations of ventilation affect dust concentration in a specific room. By conducting multiple trials under controlled conditions, this research provides reliable data that can inform guidelines for optimal ventilation practices in residential settings. Improving our understanding of this relationship is essential for enhancing indoor air quality and protecting public health.
Research Question
How does the duration of ventilation affect the concentration of dust in an indoor environment?
Hypothesis
It is hypothesized that increasing the duration of ventilation will significantly reduce the dust concentration within the room. This hypothesis is based on the understanding that prolonged ventilation facilitates the exchange of indoor and outdoor air, thereby more effectively removing dust particles ("Ventilation Rates and Health in Homes"). Research has shown that higher ventilation rates reduce indoor concentrations of various pollutants, including particulate matter, by facilitating air exchange and removal of contaminants ("Ventilation and Indoor Air Quality").
Variables
The independent variable in this study is the time spent ventilating a specific room. Ventilation begins at an initial time (x = 0), and measurements are taken at 5-minute intervals up to 25 minutes. These intervals are used to determine how different durations of ventilation affect dust concentration. The dependent variable is the dust concentration, measured in micrograms per cubic meter (μg/m³), using an Arduino Uno and a dust sensor.
Controlled variables include using the same room for all measurements to maintain consistency, conducting experiments under similar weather conditions to control for external factors affecting ventilation, and keeping the window and door configuration constant during each trial.



Materials and Method
Materials
- Arduino Uno: A microcontroller board to interface with the dust sensor.
- Dust Sensor (Waveshare Dust Sensor Module with Sharp GP2Y1010AU0F): Used to measure dust concentration in the air.
- Breadboard: For creating the circuit without soldering.
- Jumper Wires
- USB Cable: To connect the Arduino to the computer.
- Computer
- Measuring Tape: To ensure consistent placement of the dust sensor in the room.
- Timer: For tracking ventilation duration.
- Digital Spreadsheet: To record dust concentration readings.
- Window: The specific window used to ventilate the room.

Method
1) Setup and Calibration: To assemble the circuit, connect the dust sensor to the Arduino Uno using the breadboard and jumper wires according to the following connections: VCC (dust sensor) to 5V (Arduino), GND (dust sensor) to GND (Arduino), AOUT (dust sensor) to A0 (Arduino), and ILED (dust sensor) to D7 (Arduino). Download the Arduino IDE from the official Arduino website and install it on the computer. Write or download the dust sensor code and upload it to the Arduino; the code should read analog values from the sensor and convert them into dust concentration (μg/m³). Verify and upload the code to the Arduino.
2) Conducting the Experiment: Choose a specific room for the experiment and ensure no other significant sources of dust are present. Measure and mark the specific location for the dust sensor to maintain consistency throughout the experiment. As shown in Figure 1, place the dust sensor at the marked location in the room. Open the window to start ventilation and set the initial time (x = 0). At each time interval (5, 10, 15, 20, 25 minutes), record the dust concentration readings from the Arduino's serial monitor. For each interval, take three readings to ensure accuracy and record the data in a logbook or digital spreadsheet, noting the time and corresponding dust concentration. Ensure the window and door configuration remains constant throughout all trials and conduct experiments under similar weather conditions to control for external factors affecting ventilation.
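As one possible way to capture the serial-monitor readings automatically rather than by hand, the sketch below logs the Arduino's output to a CSV file with pyserial; the port name "COM3" and the file name are placeholders for this illustration, not part of the study's documented setup.

import csv
import time
import serial  # pyserial

with serial.Serial("COM3", 9600, timeout=2) as port, \
        open("dust_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "sensor_output"])
    start = time.time()
    while time.time() - start < 25 * 60:                 # one 25-minute session
        line = port.readline().decode(errors="ignore").strip()
        if line:                                         # the sensor prints about once per second
            writer.writerow([round(time.time() - start, 1), line])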

Figure 1. Experimental Image Showing the Arduino Connected With a Dust Sensor



Results

Time of Ventilating a Room (min)    Trial 1    Trial 2    Trial 3    Trial 4    Trial 5
 5                                  38.16      27.42      33.28      32.71      27.97
10                                  29.51      25.13      31.65      28.27      22.98
15                                  26.87      22.53      28.44      27.56      20.12
20                                  25.42      20.91      25.39      23.11      17.83
25                                  18.67      19.14      22.33      19.42      16.09
(Dust concentration in μg/m³.)

Table 1. The Effect of Ventilation on the Dust Concentration - Raw Data

Time of Ventilating a Room (min)    Trial 1    Trial 2    Trial 3    Trial 4    Trial 5
 5                                  1.00       1.00       1.00       1.00       1.00
10                                  0.77       0.92       0.95       0.86       0.82
15                                  0.70       0.82       0.85       0.84       0.72
20                                  0.67       0.76       0.76       0.71       0.64
25                                  0.49       0.70       0.67       0.59       0.58
(Normalized dust concentration, relative to each trial's 5-minute reading.)

Table 2. The Effect of Ventilation on the Dust Concentration - Processed Data
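For reference, the normalization in Table 2 can be reproduced with a few lines of numpy: each trial column from Table 1 is divided by its own 5-minute baseline (a sketch, not the study's original spreadsheet procedure).

import numpy as np

raw = np.array([[38.16, 27.42, 33.28, 32.71, 27.97],   # 5 min (Table 1)
                [29.51, 25.13, 31.65, 28.27, 22.98],   # 10 min
                [26.87, 22.53, 28.44, 27.56, 20.12],   # 15 min
                [25.42, 20.91, 25.39, 23.11, 17.83],   # 20 min
                [18.67, 19.14, 22.33, 19.42, 16.09]])  # 25 min
normalized = np.round(raw / raw[0], 2)                  # divide by the 5-minute baseline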



Graph 1. The Effect of Ventilation on the Dust Concentration - Raw Data

Interpretation of Data
The raw data presented in Table 1 and the normalized data in Table 2 provide a comprehensive overview of how dust concentration varies with different durations of ventilation. The trends observed in these tables are further illustrated in Graph 1, which plots the dust concentration against the time of ventilation for each trial.
As shown in Table 1 and Graph 1, the dust concentration consistently decreases as the duration of ventilation increases. For instance, in Trial 1, the dust concentration starts at 38.16 μg/m³ at 5 minutes of ventilation and decreases to 18.67 μg/m³ after 25 minutes. This trend is similar across all trials. In Trial 2, the dust concentration decreases from 27.42 μg/m³ at 5 minutes to 19.14 μg/m³ at 25 minutes. Similar reductions are seen in Trials 3, 4, and 5, demonstrating a clear negative correlation between ventilation time and dust concentration.
The normalized data in Table 2 further highlight the proportional decrease in dust concentration. Normalizing the data helps in understanding the rate of decrease relative to the initial dust concentration. For example, in Trial 1, the dust concentration drops to 77% of its original value after 10 minutes of ventilation and further to 49% after 25 minutes. Similarly, in Trial 5, the concentration decreases to 82% at 10 minutes and 58% at 25 minutes. These normalized values reveal that the reduction rate is relatively consistent across trials, indicating a robust relationship between increased ventilation time and reduced dust levels.
Graph 1 visually corroborates these observations, showing a clear downward trend in dust concentration as ventilation time increases. Each trial's line demonstrates a similar pattern, reinforcing the conclusion that extended ventilation effectively reduces indoor dust levels. Overall, the data support the hypothesis that increasing the duration of ventilation significantly reduces the dust concentration within the room. This consistent trend across multiple trials under controlled conditions provides strong evidence that prolonged ventilation is an effective strategy for improving indoor air quality by reducing particulate matter concentration.

Discussion
The results of this study support the hypothesis that increasing the duration of ventilation significantly reduces the dust concentration within a room. Across all trials, a clear negative correlation was observed between the time spent ventilating and the dust concentration, with longer ventilation times consistently resulting in lower dust levels. This outcome aligns with the general understanding that effective ventilation facilitates the exchange of indoor and outdoor air, thereby removing airborne dust particles and improving indoor air quality.
The experimental method employed in this study was designed to minimize the impact of external variables and ensure the accuracy of the results. By using the same room for all measurements, maintaining a consistent window and door configuration, and conducting the experiments under similar weather conditions, the study aimed to control for factors that could influence the dust concentration. The use of an Arduino Uno and a dust sensor provided precise and reliable measurements of dust levels at regular intervals, which were crucial for observing the effects of ventilation duration.
However, it is essential to consider how external variables such as wind speed and temperature might have affected the results. Wind speed, in particular, plays a significant role in ventilation effectiveness. Higher wind speeds can enhance the rate of air exchange between the indoor and outdoor environments, potentially leading to a more rapid reduction in dust concentration; conversely, low wind speeds may result in slower air exchange rates, making ventilation less effective. In this study, trials were conducted during the daytime, which generally experiences higher wind speeds than nighttime. This could have contributed to the consistent decrease in dust concentration observed across the trials. To further improve the study, it would be beneficial to measure and record wind speeds during each trial to quantify their impact on dust concentration. Temperature is another critical factor that can influence dust measurements. Higher temperatures can cause convection currents that enhance air movement within the room, potentially aiding the dispersion and removal of dust particles; lower temperatures might reduce air movement, resulting in higher dust retention. Although this study maintained similar weather conditions, future research could include temperature monitoring to better understand its effect on dust concentration during ventilation.
To build on the findings of this study, several extensions can be proposed. First, conducting experiments at different times of the day, including nighttime, would provide a more comprehensive understanding of how natural variations in wind speed and temperature affect the efficiency of ventilation in reducing dust concentration. Using controlled mechanical ventilation systems with adjustable airflow rates could help isolate the impact of wind speed and provide more precise control over ventilation conditions. Expanding the scope of the study to include different room sizes and configurations would offer insights into how room geometry influences ventilation effectiveness, helping to develop more tailored guidelines for optimizing indoor air quality in various residential and commercial settings. Finally, investigating the long-term effects of sustained ventilation on dust concentration and other indoor air pollutants would be valuable: continuous monitoring over extended periods would reveal how consistent ventilation practices impact indoor air quality and contribute to healthier living environments.

Conclusion The study demonstrated that increased ventilation duration significantly reduces indoor dust concentration. By systematically examining various ventilation times under controlled conditions, the research provides robust data supporting the effectiveness of prolonged ventilation in improving indoor air quality. Consideration of external variables such as wind speed and temperature highlighted their potential influence on ventilation efficiency, suggesting areas for further investigation. These findings contribute to a better understanding of how to optimize ventilation practices to enhance indoor air quality and protect public health.



Bibliography
"Ventilation and Indoor Air Quality." NCHH, 2019, nchh.org/information-and-evidence/learn-about-healthy-housing/health-hazards-prevention-and-solutions/ventilation-and-indoor-air-quality/. Accessed 23 July 2024.
"Ventilation Rates and Health in Homes." Lbl.gov, 2015, iaqscience.lbl.gov/ventilation-rates-and-health-homes. Accessed 23 July 2024.

Appendix
Figure 2. Code for Arduino IDE to Measure the Dust Concentration Using a Dust Sensor

/*
 * File                 : DustSensor
 * Hardware Environment :
 * Build Environment    : Arduino
 * Version              : V1.0.5-r2
 * By                   : WaveShare
 *
 * (c) Copyright 2005-2011, WaveShare
 * http://www.waveshare.net
 * http://www.waveshare.com
 * All Rights Reserved
 */

#define COV_RATIO        0.2   // ug/m3 per mV
#define NO_DUST_VOLTAGE  400   // mV output when no dust is present
#define SYS_VOLTAGE      5000  // system voltage in mV

/* I/O definitions */
const int iled = 2;  // drives the LED inside the sensor
const int vout = 0;  // analog input from the sensor

/* variables */
float density, voltage;
int adcvalue;

/* private function: moving-average filter over the last 10 samples */
int Filter(int m)
{
  static int flag_first = 0, _buff[10], sum;
  const int _buff_max = 10;
  int i;

  if (flag_first == 0)
  {
    // first call: fill the buffer with the current reading
    flag_first = 1;
    for (i = 0, sum = 0; i < _buff_max; i++)
    {
      _buff[i] = m;
      sum += _buff[i];
    }
    return m;
  }
  else
  {
    // shift the buffer, append the new reading, and return the mean
    sum -= _buff[0];
    for (i = 0; i < (_buff_max - 1); i++)
    {
      _buff[i] = _buff[i + 1];
    }
    _buff[9] = m;
    sum += _buff[9];
    i = sum / 10.0;
    return i;
  }
}

void setup(void)
{
  pinMode(iled, OUTPUT);
  digitalWrite(iled, LOW);  // sensor LED off by default
  Serial.begin(9600);       // send and receive at 9600 baud
  Serial.print("*********************************** WaveShare ***********************************\n");
}

void loop(void)
{
  /* get adcvalue: pulse the sensor LED and sample 280 us later */
  digitalWrite(iled, HIGH);
  delayMicroseconds(280);
  adcvalue = analogRead(vout);
  digitalWrite(iled, LOW);

  adcvalue = Filter(adcvalue);

  /* convert the ADC reading to voltage (mV) */
  voltage = (SYS_VOLTAGE / 1024.0) * adcvalue * 11;

  /* voltage to density */
  if (voltage >= NO_DUST_VOLTAGE)
  {
    voltage -= NO_DUST_VOLTAGE;
    density = voltage * COV_RATIO;
  }
  else
    density = 0;

  /* display the result */
  Serial.print("The current dust concentration is: ");
  Serial.print(density);
  Serial.print(" ug/m3\n");

  delay(1000);
}
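To trace the conversion implemented above with an illustrative (not measured) value: a filtered ADC reading of 8 counts corresponds to 8 × (5000 / 1024) × 11 ≈ 430 mV; subtracting the 400 mV no-dust baseline and applying the 0.2 ug/m3-per-mV ratio yields a reported concentration of about 6 ug/m3.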



The Relationship between Telehealth Usage and Healthcare Affordability: A Cross-Sectional Analysis of U.S. States

Author
Full Name (Last Name, First Name): Koo, William Bon
School Name: The Webb Schools

Abstract
This study examines the relationship between telehealth usage rates and healthcare affordability burdens in the U.S. Utilizing data from the Centers for Medicare & Medicaid Services and the Healthcare Value Hub, telehealth adoption rates and healthcare affordability challenges were analyzed in ten states. Based on descriptive statistics, correlation analysis, and regression analysis, the findings indicate a negative relationship between telehealth usage and healthcare affordability burdens. States with higher telehealth adoption reported fewer healthcare affordability challenges, suggesting that telehealth is effective in reducing costs and improving access. This study highlights the necessity of targeted policies for expanding telehealth infrastructure and addressing healthcare access, while considering the impact of economic factors such as state income tax rates.

Keywords
Telehealth, Telemedicine, Healthcare affordability, Healthcare burden, US Income Tax



Introduction
The United States healthcare landscape is evolving rapidly, particularly with the integration of telehealth technologies. The COVID-19 pandemic underscored a critical need for healthcare solutions that are accessible and flexible (Mann et al., 2020; Wosik et al., 2020). Encompassing a variety of digital communication tools for remotely providing healthcare, telehealth offers substantial advantages, particularly in enhancing access to healthcare in underprivileged and rural areas (Eberly et al., 2020). Beyond improving access, telehealth can reduce healthcare costs, which are a significant burden for many citizens. High healthcare costs are a major determinant of healthcare accessibility, diminishing experiences and outcomes for patients (Patel et al., 2020; Weigel et al., 2020). Different states exhibit varying levels of telehealth adoption, which likely correlates with the varying levels of healthcare burden experienced by residents (Eberly et al., 2020).

The goal of this study is to explore the relationship between telehealth usage rates and healthcare affordability burdens. I hypothesize that higher telehealth usage is associated with a lower healthcare affordability burden. The analysis spans several regions, including states with high telehealth adoption, such as California, and states with lower adoption, such as Mississippi. This study also considers the potential impact of state income tax rates, providing broader context for understanding the relationship between telehealth usage and healthcare affordability.

While exploring the relationship between telehealth usage and healthcare affordability, this study also investigates the possible influence of state income taxes. State income tax rates are included as a control variable based on the hypothesis that higher tax rates might influence healthcare affordability. On the one hand, higher taxes may reduce disposable income and make healthcare less affordable for taxpayers. On the other hand, states with higher tax rates may have more resources to invest in healthcare infrastructure, including telehealth services, which could improve healthcare affordability. Including state income tax rates helps control for these potential effects and isolate the relationship between telehealth usage and healthcare affordability with more accuracy.

Through analysis of data from dissimilar states with both the highest and lowest telehealth usage rates, and considering state income tax rates, this research aims to understand how digital health innovations can lower financial barriers to healthcare access. The findings are intended to inform policy, recommending the expansion of telehealth services as a strategy to reduce healthcare costs and improve overall healthcare access and satisfaction across the United States (Patel et al., 2020).

Literature Review
The integration of telehealth technologies into healthcare systems has been greatly accelerated by the COVID-19 pandemic, highlighting both the opportunities and the challenges of this mode of healthcare delivery. This literature review summarizes findings from recent studies to explore telehealth's impact on healthcare affordability, accessibility, and quality, including barriers to its widespread adoption.

The adoption of telehealth technologies has emerged as a crucial component of modernizing healthcare systems. According to Bujnowska-Fedak and Pirogowicz (2014), telehealth can address important gaps in healthcare delivery, particularly in areas lacking sufficient medical infrastructure. However, transitioning to a telehealth approach requires in-depth training and infrastructure support to ensure that healthcare providers can effectively utilize these technologies. The COVID-19 pandemic further emphasized the necessity of capacity-building and the obstacles linked with the rapid integration of telehealth, as healthcare systems around the world were required to adapt swiftly to new modalities of care delivery (Ryu, 2012; Mann et al., 2020).



Research on the adoption of telehealth technologies discusses further challenges, emphasizing that effective telehealth adoption requires both technical infrastructure and policy frameworks that support reimbursement and regulatory standards. Studies highlight how important a coordinated approach involving policymakers, healthcare providers, and technology developers is to ensuring that telehealth can be integrated sustainably into routine healthcare practice (Kichloo et al., 2020).

Despite its vast potential benefits, the literature also addresses the many challenges and limitations of telehealth adoption. For instance, the digital divide is a significant barrier, particularly in rural areas and among low-income populations. Gajarawala and Pelkowski (2021) suggest that disparities in access to high-speed internet and technological devices can exclude these populations from the benefits of telehealth services. In addition, the quality of care delivered via telehealth has been under scrutiny, especially in cases requiring physical examinations or complex diagnostic procedures (Snoswell et al., 2020). Given these limitations, targeted policies and infrastructure investment are necessary to ensure equitable access to effective telehealth services.

Telehealth is recognized for its potential to enhance the accessibility and quality of healthcare, particularly for underserved populations. Academic sources explore how telehealth can increase the accessibility of healthcare services where traditional healthcare infrastructure may be lacking, such as in remote and rural areas (Wosik et al., 2020). Telehealth cuts expenses for patients and medical facilities by allowing remote care and eliminating travel needs. This is especially crucial for managing chronic diseases, where ongoing monitoring and prompt interventions are essential (Shigekawa et al., 2018).

Another advantage of telehealth is its economic impact, particularly in lowering healthcare costs and enhancing accessibility. Studies offer practical evidence that telehealth reduces healthcare spending by minimizing the need for emergency room visits and hospitalizations. They also document improvements in patient outcomes and satisfaction, suggesting telehealth's potential as a cost-effective alternative to traditional healthcare delivery models (Eberly et al., 2020; Mann et al., 2020). However, the economic influence of telehealth also depends on the regulatory environment, especially policies regarding reimbursement. Researchers note that inconsistent reimbursement policies across jurisdictions often hinder telehealth adoption and emphasize the need for standardized policies that compensate telehealth services at rates similar to in-person visits, thereby encouraging healthcare providers to integrate telehealth (Latifi et al., 2021).

Methodology
Data Collection
This study's data was collected from reputable sources to ensure accurate and reliable results. The primary measures include:

• Telehealth Usage Rate: Derived from the Medicare Telehealth Trends Data, managed by the Centers for Medicare & Medicaid Services (CMS). The measure is defined as the total number of unique Medicare Part B beneficiaries who received one or more telehealth services, divided by the number of unique beneficiaries with at least one telehealth-eligible service (either in-person or via a telecommunication device); a worked example follows this list.
• Healthcare Affordability Burden: Drawn from the Healthcare Value Hub's state-level surveys. The measure is the percentage of respondents who experienced one or more affordability burdens in the last 12 months, including trouble with medical bills, delaying or forgoing necessary care because of cost, or lacking insurance because of its high cost.
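As an illustration of the usage-rate calculation with hypothetical numbers (not figures from the study): a state with 430,000 unique beneficiaries who used at least one telehealth service, out of 1,000,000 beneficiaries with at least one telehealth-eligible service, would have a telehealth usage rate of 430,000 / 1,000,000 = 43%.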



Variables
• Independent Variable: Telehealth Usage Rate (%), the percentage of Medicare Part B beneficiaries using telehealth services.
• Dependent Variable: Healthcare Affordability Burden (%), the percentage of respondents experiencing financial difficulties related to healthcare costs.
• Confounding Variable: State Income Tax Rate (%)

Statistical Analysis Techniques
Descriptive Statistics
Preliminary analysis calculated descriptive statistics (means, medians, standard deviations) to summarize the data. This step established a baseline understanding of the distribution and tendencies of telehealth usage and healthcare affordability burden across the states.

Correlation Analysis
Pearson correlation coefficients were used to assess the strength and direction of the relationship between telehealth usage rates and healthcare affordability burdens. The Pearson correlation coefficient measures the linear relationship between two variables, with values ranging from -1 to +1. A negative coefficient indicates an inverse relationship, consistent with the hypothesis that higher telehealth usage is associated with lower healthcare affordability burdens.

Multivariable Regression Analysis
A multivariable regression model was used to account for potential confounding variables that could influence the relationship between the independent and dependent variables. The model included:

• Independent Variable: Telehealth Usage Rate (%)
• Dependent Variable: Healthcare Affordability Burden (%)
• Control Variable: State Income Tax Rate (%)

By accounting for this confounder, the regression model helped isolate the effect of telehealth usage on healthcare affordability burden. The regression equation is:

Healthcare Affordability Burden (%) = β0 + β1(Telehealth Usage %) + β2(State Income Tax Rate %) + ε

where β0 is the intercept, β1 and β2 are the coefficients for the independent and control variables, and ε is the error term.

Data Analysis Process
1. Descriptive Statistics: Calculated for each variable to understand the data's basic features.
2. Correlation Analysis: Used to examine the relationship between telehealth usage and healthcare affordability burden.
3. Regression Analysis: Conducted to confirm the effect of telehealth usage on healthcare affordability burden while accounting for the confounder. The regression model was evaluated for its explanatory power (R-squared) and the statistical significance of each variable (p-values).



Software and Tools
The statistical analyses were performed in Python, principally using the pandas library for data manipulation, statsmodels for regression analysis, and seaborn for data visualization. Through these methodologies, the study aims to provide an in-depth understanding of the relationship between telehealth usage and healthcare affordability burden, offering insights to inform policy and healthcare practice. A sketch of this pipeline follows.
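For concreteness, the pipeline described above can be sketched as follows. This is a minimal illustration using the ten-state values reported in the Results section, not the authors' actual script; the column names and two-letter state labels are chosen here for readability.

import pandas as pd
import statsmodels.api as sm

# Ten-state dataset transcribed from the Results tables
data = pd.DataFrame(
    {
        "telehealth_usage":     [43, 32, 37, 33, 30, 13, 14, 15, 16, 20],
        "affordability_burden": [57, 55, 51, 52, 57, 72, 67, 62, 63, 67],
        "income_tax_rate":      [13.3, 5.75, 9.0, 10.9, 10.75, 5.0, 4.25, 5.0, 3.15, 5.75],
    },
    index=["CA", "MD", "MA", "NY", "NJ", "MS", "LA", "MO", "IN", "GA"],
)

# Step 1: descriptive statistics (means, medians, standard deviations)
print(data.describe())

# Step 2: Pearson correlation between usage and burden
print(data["telehealth_usage"].corr(data["affordability_burden"]))

# Step 3: multivariable OLS regression with the tax-rate control
X = sm.add_constant(data[["telehealth_usage", "income_tax_rate"]])
model = sm.OLS(data["affordability_burden"], X).fit()
print(model.summary())  # coefficients, p-values, R-squared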

Results
Descriptive Statistics
The descriptive analysis highlighted important discrepancies in the relationship between telehealth usage and healthcare affordability burdens across states. States in the top quintile for telehealth usage (30% to 43%) reported lower healthcare affordability burdens (51% to 57%). In contrast, states in the bottom quintile for telehealth usage (13% to 20%) reported higher healthcare affordability burdens (62% to 72%). This pattern suggests that higher telehealth adoption correlates with lower healthcare affordability burdens.

Definition of Telehealth Usage: The telehealth usage data derive from the Medicare Telehealth Trends Data, managed by the Centers for Medicare & Medicaid Services (CMS) and published quarterly on Data.CMS.Gov. Telehealth usage is defined as the total number of unique Medicare Part B beneficiaries who received at least one telehealth service divided by the number of unique beneficiaries who received at least one telehealth-eligible service (in-person or via a telecommunication device). Simply put, this measurement is Total Telehealth Users divided by Total Telehealth-Eligible Users.

Definition of Healthcare Affordability Burden: State-level surveys conducted by the Healthcare Value Hub produce the data on healthcare affordability burden. The measure is based on survey responses indicating the experience of one or more healthcare affordability burdens in the prior 12 months, as reported by respondents: trouble with medical bills, forgoing care because of cost, or going without insurance because of its high cost. In California, for example, 57% of respondents reported experiencing at least one affordability burden.

Telehealth Usage and Healthcare Affordability Burden

State            Telehealth Usage (%)    Healthcare Affordability Burden (%)
California       43                      57
Maryland         32                      55
Massachusetts    37                      51
New York         33                      52
New Jersey       30                      57
Mississippi      13                      72
Louisiana        14                      67
Missouri         15                      62
Indiana          16                      63
Georgia          20                      67



Telehealth Usage vs. Healthcare Affordability Burden

Definition of State Income Tax Rate: The state income tax rates were sourced from Tax-Rates.org, using 2024 state tax information. For states with progressive tax systems, the highest marginal tax rate was selected to reflect the maximum tax burden on higher-income residents. For states with flat tax rates, the single rate was used. The rates for the states in our study are as follows:

State Tax Rate

State            Income Tax Rate (%)    Type
California       13.3                   Highest marginal tax rate
Maryland         5.75                   Highest marginal tax rate
Massachusetts    9.0                    Highest marginal tax rate
New York         10.9                   Highest marginal tax rate
New Jersey       10.75                  Highest marginal tax rate
Mississippi      5.0                    Flat tax rate
Louisiana        4.25                   Highest marginal tax rate
Missouri         5.0                    Highest marginal tax rate
Indiana          3.15                   Flat tax rate
Georgia          5.75                   Highest marginal tax rate

These rates were chosen to represent the potential impact of state-level taxation on healthcare affordability, as higher tax rates might influence residents' disposable income and, consequently, their ability to afford healthcare services.



Correlation Analysis
The Pearson correlation coefficient between telehealth usage rates and healthcare affordability burdens was -0.75 (p < 0.01). This strong inverse relationship suggests that as telehealth usage increases, the burden of healthcare affordability decreases, highlighting the potential of telehealth to alleviate financial pressures associated with healthcare access.

Correlation Matrix

                                       Telehealth Usage (%)    Healthcare Affordability Burden (%)
Telehealth Usage (%)                   1.00                    -0.75
Healthcare Affordability Burden (%)    -0.75                   1.00

Multivariable Regression Analysis
A multivariable regression analysis was conducted to further explore the relationship between telehealth usage and healthcare affordability burden, while controlling for the state income tax rate as a potential confounder.

Variable                     Coefficient (β)    Standard Error    p-value
Constant                     72.3234            35.113            0.000
Telehealth Usage (%)         -0.6874            0.184             0.003
State Income Tax Rate (%)    0.7780             0.629             0.205

Regression Model Summary
• R-squared: 0.771
• Adjusted R-squared: 0.705
• F-statistic: 11.77
• Prob (F-statistic): 0.00577

Regression Analysis: Telehealth Usage vs. Healthcare Affordability Burden



These results indicate that higher telehealth adoption is significantly associated with reduced healthcare affordability burdens, even when controlling for state income tax rates. The model explains 77.1% of the variance in healthcare affordability burden, underscoring the strong association between telehealth usage and reduced healthcare cost burdens.
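As an illustrative check (not part of the authors' analysis), substituting California's values into the fitted equation gives a predicted burden of 72.3234 − 0.6874 × 43 + 0.7780 × 13.3 ≈ 53.1%, reasonably close to California's observed 57%.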

Discussion
This study found a negative relationship between telemedicine utilization and the burden of healthcare affordability. The Pearson correlation coefficient was -0.75 (p < 0.01), indicating a strong negative relationship. This result supports the hypothesis that telemedicine can improve access to healthcare and reduce costs: the higher the telemedicine utilization, the lower the healthcare cost burden.

The multivariable regression extends this relationship. The explanatory power of the model (R-squared = 0.771) indicates that telemedicine utilization and state income tax rates together account for 77.1% of the variation in healthcare cost burden. Telemedicine use emerged as a statistically significant predictor (p = 0.003), highlighting its potential role in mitigating the healthcare cost problem. Although the state income tax rate did not reach statistical significance in this model (p = 0.205), it was included as a control variable to account for the impact of the broader economic environment on the ability to afford healthcare. The slightly positive coefficient suggests that higher tax rates may increase the healthcare cost burden to some extent, but this effect was not significant in this sample.

These findings point to telehealth's potential as an effective tool for reducing healthcare costs, even when considering the influence of state tax policies. The impact of additional economic and policy-related factors on telehealth's effectiveness in reducing healthcare affordability burdens could be explored in future research.

Nevertheless, it is critical to address the limitations and potential drawbacks of telehealth adoption for a balanced understanding. The digital divide is a vital concern: while telehealth can increase access to healthcare, it can inadvertently deepen disparities among populations with restricted access to internet services or the technological devices required for telemedicine (Gajarawala & Pelkowski, 2021). Rural areas, low-income households, and elderly populations are groups more likely to encounter hardships in using telehealth services effectively, which can result in inequitable healthcare access and inconsistent outcomes. Moreover, the quality of care delivered via telehealth remains uncertain, particularly for cases involving physical examinations or procedures, and has drawn concern among professionals (Snoswell et al., 2020). With increasing demand for telehealth services, provider burnout is an additional major issue that can affect the quality of care (Jiang et al., 2021).

Previous research aligns well with these results, indicating that telehealth can improve healthcare accessibility and affordability, particularly for underprivileged populations (Wosik et al., 2020; Shigekawa et al., 2018). The negative relationship observed between telehealth usage and healthcare affordability burdens is most likely attributable to the following factors:
1. Reduced travel costs: Patients save money on travel when using telehealth because they do not need to go to hospitals or clinics.
2. Improved chronic disease management: Continuous remote monitoring and timely interventions enabled by telehealth may prevent costly emergency room visits and hospitalizations (Eberly et al., 2020).
3. Increased efficiency: Telehealth can streamline healthcare delivery, potentially reducing overhead costs for providers and, consequently, costs for patients.
4. Enhanced access to specialists: Telehealth allows patients to consult with specialists from anywhere, giving access to expert care that might not be available nearby. This can lead to better health outcomes and may reduce healthcare expenses over time.

It is important to note, however, the limitations of this study. The cross-sectional nature of the analysis limits the ability to establish causal relationships. Additionally, the focus on Medicare beneficiaries in the telehealth usage data might not represent the telehealth utilization patterns of the general population. Finally, whether telehealth provides high-quality, consistent care, and whether access disparities persist, should be investigated in future research to confirm that telehealth adoption will not inadvertently widen the healthcare accessibility gap.

Conclusion
This study provides evidence of a critical inverse relationship between telehealth utilization and healthcare affordability concerns across the United States. The findings suggest that the financial burdens of healthcare access and utilization can be reduced by increasing the adoption of telehealth services. The implications are significant, especially amid ongoing healthcare reforms and the aftermath of the COVID-19 pandemic, which has escalated the use of telehealth services. Policymakers and healthcare administrators should consider telehealth a notable tool for addressing healthcare affordability concerns.

Future research directions could include:
1. Long-term studies to establish causal relationships between telehealth adoption and healthcare affordability.
2. Investigations into the specific mechanisms by which telehealth reduces healthcare costs.
3. Examination of telehealth's influence on healthcare affordability across different demographic groups and socioeconomic strata.
4. Assessment of telehealth's capacity to reduce healthcare disparities in rural and under-resourced communities.

Ultimately, telehealth shows significant promise as an approach to improving healthcare affordability and accessibility. As healthcare continues to evolve, integrating telehealth services may be a major component of addressing healthcare affordability challenges in the United States.

References
1. Bujnowska-Fedak, M. M., & Pirogowicz, I. (2014). Telemedicine in emergency medicine. Journal of Telemedicine and Telecare, 20(3), 135-146. doi:10.1177/1357633X14526156
2. Ryu, S. (2012). Telemedicine: Opportunities and developments in Member States: Report on the second global survey on eHealth. World Health Organization. Retrieved from https://www.who.int/goe/publications/goe_telemedicine_2010.pdf
3. Mann, D. M., Chen, J., Chunara, R., Testa, P. A., & Nov, O. (2020). COVID-19 transforms health care through telemedicine: Evidence from the field. Journal of the American Medical Informatics Association, 27(7), 1132-1135. doi:10.1093/jamia/ocaa072
4. Kichloo, A., Albosta, M., Dettloff, K., Wani, F., El-Amir, Z., Singh, J., & Aljadah, M. (2020). Telemedicine, the current COVID-19 pandemic and the future: A narrative review and perspectives moving forward in the USA. Family Medicine and Community Health, 8(3), e000530. doi:10.1136/fmch-2020-000530
5. Wosik, J., Fudim, M., Cameron, B., et al. (2020). Telehealth transformation: COVID-19 and the rise of virtual care. Journal of the American Medical Informatics Association, 27(6), 957-962. doi:10.1093/jamia/ocaa067
6. Shigekawa, E., Fix, M., Corbett, G., Roby, D. H., & Coffman, J. (2018). The current state of telehealth evidence: A rapid review. Health Affairs, 37(12), 1975-1982. doi:10.1377/hlthaff.2018.05132
7. Eberly, L. A., Kallan, M. J., Julien, H. M., et al. (2020). Patient characteristics associated with telemedicine access for primary and specialty ambulatory care during the COVID-19 pandemic. JAMA Network Open, 3(12), e2031640. doi:10.1001/jamanetworkopen.2020.31640
8. Latifi, R., Doarn, C. R., & Merrell, R. C. (2021). Telemedicine for healthcare: Capabilities, features, barriers, and applications. Journal of Telemedicine and Telecare, 27(5), 265-270. doi:10.1177/1357633X20963742
9. Patel, S. Y., Mehrotra, A., Huskamp, H. A., Uscher-Pines, L., Ganguli, I., & Barnett, M. L. (2020). Trends in outpatient care delivery and telemedicine during the COVID-19 pandemic in the US. JAMA Internal Medicine, 180(10), 1410-1412. doi:10.1001/jamainternmed.2020.3315
10. Weigel, G., Ramaswamy, A., Sobel, L., Salganicoff, A., Cubanski, J., & Freed, M. (2020). Opportunities and barriers for telemedicine in the U.S. during the COVID-19 emergency and beyond. Kaiser Family Foundation. Retrieved from https://www.kff.org/womens-health-policy/issue-brief/opportunities-and-barriers-for-telemedicine-in-the-u-s-during-the-covid-19-emergency-and-beyond/
11. Gajarawala, S. N., & Pelkowski, J. N. (2021). Telehealth benefits and barriers. The Journal for Nurse Practitioners, 17(2), 218-221. doi:10.1016/j.nurpra.2020.09.013
12. Snoswell, C. L., Chelberg, G., De Guzman, K. R., Thomas, E. E., Caffery, L. J., & Smith, A. C. (2020). The clinical effectiveness of telehealth: A systematic review of meta-analyses from 2010 to 2019. Journal of Telemedicine and Telecare, 26(7-8), 401-411. doi:10.1177/1357633X20929963
13. Jiang, G., Ruwaard, J., & Jongen, P. (2021). Burnout in healthcare providers: Effectiveness of interventions and prevention strategies. Journal of Occupational Health Psychology, 26(2), 151-163. doi:10.1037/ocp0000248



Temporal Dynamics of Physical Intervention on Brassica oleracea Growth

Author
Full Name (Last Name, First Name): Kwon, Michaela Cho
School Name: Korea International School Jeju Campus

Abstract
Interaction between humans and plants has shown numerous positive effects on human well-being, and since ancient civilizations, humans have based their economies on agriculture and relied heavily on it. This study investigated the effect of differing exposure times to physical intervention on the growth rate of Brassica oleracea (kale). Three groups of kale were exposed to physical intervention for different durations to determine whether the influence would vary. Supplementary data were collected throughout the experiment, and the results showed that a restricted amount of physical intervention increased kale's growth rate, whereas an excessive amount of physical intervention reduced it. The findings of this study can benefit the agricultural field by introducing the method of deliberately applying a controlled amount of physical intervention to potentially increase the speed of production. In addition, the findings can lead to further research.

Keywords
Physical intervention, kale, growth, influence, obstacles, hydroponic, interruptions, rate change, acceleration, deceleration



I. INTRODUCTION
As the environment changes rapidly, farmers are required to develop methods to maintain their harvest rates and amply feed the population.1 Research involving the physical interruptions and obstacles plants may experience during growth can potentially lead to beneficial applications in agriculture.2 Specifically, the study of direct physical intervention on a plant's stem can support an enhanced understanding of plant biology and the development of new technologies, since such intervention is a circumstance that the majority of plants experience during growth and is deeply correlated with plant growth.3

In the previous research entitled "Mitochondrial function modulates touch signaling in Arabidopsis thaliana", Professor Jim Whelan at La Trobe University studied the sensitivity of plants and changes in growth under physical intervention to show how physical intervention influences plant growth.4 The result of that study showed a significant reduction in plant growth: 10 percent of the plant's genome was altered after 30 minutes of physical intervention, and repeated touching reduced "plant growth…up to 30 percent" (Whelan). The study found that mitochondrial function modulates significant changes in gene expression, triggered by the touch-induced stress response, that shift energy allocation and divert resources to the plant's defense mechanisms. Moreover, hormones that regulate various elements of plant growth, especially ethylene and gibberellic acid, changed in concentration due to mutations in mitochondrial proteins and played a crucial role in the suppression of plant growth.

Although the previous study observed only negative changes in plant growth, an appropriate amount of physical intervention can trigger stress responses and cellular signaling that help the plant endure environmental difficulties and may even encourage faster growth. Stress-related phytohormones, including abscisic acid (ABA), are crucial to the active stress response mechanisms that protect and repair the plant.5 ABA signaling involves SnRKs (SNF1-related protein kinases) and extensive crosstalk with growth-related pathways, which helps plants balance growth and stress resistance.6,7 Peptides and receptor-like kinases (RLKs) are involved in plant cellular signaling and act like phytohormones to support growth through cell-cell communication.8 Specifically, RALF (Rapid Alkalinization Factor) is one type of signaling peptide involved in responses to abiotic stresses.9 As these peptides are recognized by RLKs, signaling pathways are activated that enable control of plant growth and development.10 Activation of these stress responses minimizes inhibition during growth and ultimately improves the stress tolerance and sustainability of the plant.11

However, an inordinate amount of physical intervention overexposes the plant to stress, causing physical damage, disruption of natural processes, energy diversion, and oxidative stress, which can ultimately cause the death of the plant. Plants exposed to prolonged or severe stress can suffer direct physical damage to tissues and structures (leaves, stems, and other parts of the plant), which can impede the maintenance of normal physiological processes.12 Those processes include ones critical for plant survival, such as photosynthesis.13 Energy production and growth decline as efficiency decreases under extreme stress. Furthermore, reproductive processes such as flowering and seed development may also be disrupted, reducing yield and reproductive success.14 In addition, energy diversion occurs because plants expend energy and resources in responding to stress. For instance, plants divert energy to produce stress-response proteins and metabolites and use valuable resources on repair mechanisms for cellular damage (synthesis of protective compounds



and repair of enzymes).15,16 Ultimately, this reallocation reduces the energy and resources available for vital processes, which harms the overall growth and fitness of the plant.17 Extreme abiotic stress can also cause excessive production of reactive oxygen species (ROS) (Mittler, 2002), which play crucial signaling roles but can be highly detrimental when overproduced.18,19 Excess ROS can cause cellular dysfunction and death by damaging proteins, lipids, and DNA.20 Moreover, oxidative damage that overwhelms the antioxidant system can negatively impact the plant's health by triggering programmed cell death.21,22 As a result, overall plant health and productivity are significantly reduced.

In this study, kale was intentionally exposed to physical intervention as it grew. Kale is known to be a hardy plant that can withstand various harsh environmental conditions,23 and rapid growth and continuous harvestability are further strengths of kale.24 These characteristics make it suitable for experiments, especially for studying the influence of physical intervention over time. Kale also provides visible responses through changes in leaf color, texture, and size, which simplifies observation of the effects of physical intervention.25

Through this study, the field of agriculture can improve significantly through enhanced knowledge of the appropriate physical intervention that can accelerate kale's growth. The field can discover ways to utilize land previously considered disadvantageous for plants because of obstacles that cause physical interference, and can provide support to forestall possible negative environmental effects on plants before the plants are influenced. All variables other than the time of exposure to physical intervention were controlled; consequently, the experiment shows the influence of physical intervention on the kale.

I hypothesize that the influence on growth rate will differ with the time the kale is exposed to physical intervention: kale receiving 1 hour of physical intervention will show a higher growth rate than unexposed kale, whereas kale receiving 8 hours will show a lower growth rate, as presented in section III. The experiment used a total of 12 kale plants: 4 with no physical intervention, 4 with 1 hour of physical intervention, and 4 with 8 hours of physical intervention, as described in section II. All kale plants were exposed to light through LEDs for 16 hours a day equally and shared the same water during growth. Their stems were exposed to physical intervention throughout their growth.

II. METHOD
a. Materials
The hydroponic system used in the experiment was from the Korean company Spiano, specifically the model Spiano Plant Encyclopedia Hydroponic LED Plant Cultivator SGS-37 Home Smart Farm (registration number 40-2137703 in the Republic of Korea). The kale seed used in the experiment was Greenstar Kale (Republic of Korea origin).



Fig. 1. Experimental setup; (i) hydroponic system, (ii) motor, (iii) Arduino, (iv) LED

Fig. 2. Experimental setup (day 3)

b. Experimental conditions
The experiment ran for three weeks, from July 1st to July 26th of 2024, in Seogwipo, Jeju, Korea, during summer. Humidity ranged between 55~65% with an average of 58%, and temperature ranged between 29~33°C with an average of 30°C.

c. Motorized physical interruption
The Arduino kit used in the experiment was from LK EMBEDDED, model Arduino Uno R3 Beginner Kit Step 1, with an additional SG-90 servo motor. The SG-90 servo applied the physical intervention to the stem. The sketch below reproduces the loop reported here; the #include, servo object, and setup() have been added so the fragment compiles, and the attachment pin is an assumed value:

#include <Servo.h>

Servo myservo;  // servo whose arm brushes the stem
int pos = 0;    // current arm position in degrees

void setup() {
  myservo.attach(9);  // signal pin; pin number assumed for illustration
}

void loop() {
  // sweep the arm from 0 to 90 degrees in 1-degree steps
  for (pos = 0; pos <= 90; pos += 1) {
    myservo.write(pos);
    delay(15);  // 15 ms per step sets the sweep speed
  }
  // sweep back from 90 to 0 degrees
  for (pos = 90; pos >= 0; pos -= 1) {
    myservo.write(pos);
    delay(15);
  }
}

The delay() call controls the speed of the motor, and the for() loops control the range of the arm's movement. The delay for the servo to reach each position was 15 ms, and although the servo can move through 0 to 180 degrees, this code sweeps it between 0 and 90 degrees. With this code, the servo motor arm sweeps back and forth about 22 times every minute (180 one-degree steps × 15 ms ≈ 2.7 s per full back-and-forth sweep; 60 s / 2.7 s ≈ 22).

d. Recording environmental conditions
The kale plants were lit by the LEDs on the hydroponic system and had equal access to natural sunlight. The water volume was 4 liters and was refilled every Monday. Temperature, humidity, and kale growth were recorded daily, making the data supplementary. Height was measured from the soil to the top of the plant after straightening the plant by stretching the leaves; kale that had curved over was likewise straightened before measurement. The measurements were made in MATLAB from the daily photographs of the plants; an illustrative sketch of this kind of photo-based measurement follows.
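The authors' MATLAB measurement script is not reproduced in the paper. Purely as an illustration of the photo-based approach described above, the following Python sketch shows one way to estimate plant height from a fixed-camera photograph; the file name, color thresholds, base row, and pixel-to-centimeter calibration are all hypothetical assumptions, not values from the study.

import cv2
import numpy as np

# Assumptions (not from the study): fixed camera, plant segmented by green color,
# and a calibration constant measured from a ruler visible in the frame.
CM_PER_PIXEL = 0.05  # hypothetical pixel-to-cm calibration
BASE_ROW = 900       # hypothetical pixel row of the growing-medium surface

img = cv2.imread("day09_plant3.jpg")  # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Mask green foliage; these HSV bounds are illustrative and would need tuning.
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

rows = np.where(mask.any(axis=1))[0]  # image rows containing plant pixels
top_row = rows.min()                  # topmost plant pixel (assumes plant is visible)
height_cm = (BASE_ROW - top_row) * CM_PER_PIXEL
print(f"Estimated height: {height_cm:.1f} cm")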

Fig. 3. Kale during experiment (day 9); (i) plant 3, (ii) plant 7

Table 1: Humidity and temperature on a hydroponic system recorded over 21 days.



Fig. 4. Trend of humidity and temperature on a hydroponic system over 21 days.

e. Statistical analysis
Duncan's Multiple Range Test (MRT) was performed in RStudio (the free version). Following an ANOVA (Analysis of Variance), the MRT identifies which data sets have significantly different means. The specific code used was:

install.packages('agricolae')
library(agricolae)
install.packages('readxl')
library(readxl)

A <- read_excel("/Users/periwinkle/Downloads/Record.xlsx", col_names = TRUE, na = "NA")
attach(A)

model <- aov(length ~ time, data = A)  # one-way ANOVA of plant height by intervention group
model
comparison <- duncan.test(model, "time", main = "anov", alpha = 0.05)
duncan.test(model, "time", alpha = 0.05, console = T)  # print the group letters to the console
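In duncan.test's console output, treatment groups that share a letter do not differ significantly at α = 0.05, while groups assigned different letters do; this letter grouping is what the Results section refers to when it compares groups by letter.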

III. RESULT
In this experiment, both positive and negative impacts of physical intervention on the kale growth rate were observed. Kale that experienced 1 hour of physical intervention showed a slightly higher growth rate than kale with no intervention, while 8 hours of physical intervention decelerated growth. In addition, all four kale plants under 8 hours of physical intervention developed a significant curve in the stem in the area where the intervention was applied. These outcomes match the hypotheses: acceleration with 1 hour of intervention and deceleration with 8 hours of intervention.



Table 2. Kale height (cm) of three groups: control (Plant 1-4), 1-hr physical intervention (Plant 5-8), and 8-hr physical intervention (Plant 9-12).

Fig. 5. Growth trend of plants 1-12 over 21 days

The average growth of the control group since the day intervention began was 20.9 cm, a rate of 1.16 cm per day. The second group, with 1 hour of physical intervention, experienced 1,320 motor sweeps daily; its average growth was 23.2 cm (2.30 cm higher) and its growth rate 1.29 cm per day (0.128 cm higher) compared to the control group. Since both average growth and growth rate were higher, the limited amount of physical intervention had a positive influence on the kale. The total growth of both the control group and the 1-hour intervention group was fairly evenly distributed without significant outliers, as was the average growth per day. Accordingly, the increased growth rate shows that the higher average growth is not due to differences in the kale's height at the start of the experiment or to a significant outlier.

Fig. 6. Kale height trend of (a) control group, (b) 1-hr physical intervention group, and (c) 8-hr physical intervention group. (day 1 ~21) The bar represents standard deviation.

The third group, with 8 hours of physical intervention, experienced 10,560 motor sweeps daily; its average growth was 17.4 cm (3.50 cm lower) and its growth rate 0.967 cm per day (0.194 cm lower) compared to the control group. Since both average growth and growth rate were lower, the excessive amount of physical intervention had a negative influence on the kale. The total growth of both the control group and the 8-hour intervention group was fairly evenly distributed without significant outliers, as was the average growth per day. Accordingly, the decreased growth rate shows that the lower average growth is not due to differences in kale height at the start of the experiment or to a significant outlier.



From the 13th day of the experiment, an unexpected curving of the stem appeared in the kale that received 8 hours of daily physical intervention. Plant 12 on day 13, plant 10 on day 15, plant 9 on day 16, and finally plant 11 on day 18 each developed a significant curve in the stem that caused the portion above the curve to rest entirely on the surface of the hydroponic system. The average growth rate before and after the curve did not differ significantly. A possible explanation is excessive physical damage to the stem, to the extent that the stem could no longer stay upright; the overload of physical stress thus weakened the stem and produced the significant curve.

Fig. 7. Plant 12 on day 13, touching the surface of the hydroponic system

The results of Duncan's Multiple Range Test (MRT), on the other hand, differed from the comparison based on average growth overall and per day. Despite the quantitative difference in growth between the control group and the 1-hour intervention group, both groups received the same letter in the MRT, indicating no significant difference in their average growth. For the comparison between the control group and the 8-hour intervention group, however, the MRT assigned two different letters, indicating that the difference in average growth between those groups is significant.

IV. DISCUSSION
The main goal of this study was to observe what changes direct physical intervention on the kale stem causes in the kale's growth rate, and how that change differs based on the duration of the intervention. The experiment showed that a limited amount (1 hour) of intervention benefits the kale by accelerating the growth rate, whereas an excessive amount (8 hours) slows it. The main finding was evidence that a limited amount of physical intervention can benefit kale growth: 1 hour of daily physical intervention affected overall growth positively by accelerating the average growth rate,



while 8 hours of daily physical intervention affected overall growth negatively by decelerating the average growth rate. The results show that the influence differs with the extent of the physical intervention given to the kale, and that an appropriate, limited amount of intervention assists growth rather than hindering it.

Referring to Professor Jim Whelan's previous study, its results diverge from this study's to a moderate extent: both acceleration and reduction of the growth rate were observed with physical intervention in this study, but only a reduction of growth was observed in the previous study.4 Though there are countless differences between the two studies, such as the plant used, the location of the experiment, and the length of the experiment, physical intervention influenced plant growth in both. This supports the conclusion that the change in growth rate shown in this study was caused by the physical intervention rather than by other environmental factors. Moreover, the divergence between the two experiments suggests that the impact of physical intervention may differ with the plant used.

While this study is supported by scientific mechanisms demonstrated in previous experiments and studies, some limitations might affect the interpretation of the results. The experiment was conducted only once, with the kale in the same positions in the hydroponic system for the whole experiment. Even though the hydroponic system is designed to provide an equal amount of LED light to all 12 kale plants, the plants may have received different amounts of LED light depending on their location in the system. Since differences in LED light emission can affect plant growth, the experiment should be repeated multiple times with the kale in different positions in the hydroponic system to confirm that the result stems only from differences in exposure time to physical intervention.26 Additionally, the only plant tested was kale, which precludes determining whether the result applies to all kinds of plants or only to kale. Because the result is not generalized, the findings cannot be applied to agricultural production of crops other than kale without further study.

On the other hand, with generalized results after additional study and experimentation, this work could be applied in real-world settings to influence practice in the field of agriculture. By understanding the appropriate physical intervention and environmental conditions that optimize plant growth, agricultural productivity can be significantly enhanced even in less favorable environments.27 Knowledge of the benefits physical intervention can provide to plant growth could expand land utilization: the insight from this experiment could make previously unsuitable land more viable for cultivation. Environmental elements that cause physical intervention, such as wind, rock, or contact with other plants, could be used effectively by farmers to improve plant resilience and successfully produce crops around natural physical obstacles. Furthermore, novel farming techniques could be established based on the observations from this study, which could significantly impact the productivity of the field. Methods such as controlled mechanical stimulation and intensive exposure to a limited amount of physical intervention could enhance growth rates and maximize efficiency and productivity.5,8

Another future direction of this experiment relates to the MRT results. When comparing the kale with no intervention and with 8 hours of intervention, the MRT assigned two different letters, showing a difference in the final growth of the kale. In contrast, when comparing the kale with no intervention and with 1 hour of intervention, the MRT assigned the same letter, showing that the two growth outcomes are fairly similar. One hypothesis is that the physical intervention given to the kale was too minimal to influence the kale noticeably. Even though the MRT did not assign two different letters, since the final heights did show visibly different growth, it is reasonable to predict that kale exposed to a slightly longer intervention time might show a growth rate increase that is noticeable in the MRT.



To test this hypothesis, further experiments could be conducted under several conditions. For instance, the same experiment could be repeated with the second group's intervention time increased from 1 hour to 2 hours, observing whether the result receives different letters from the MRT while the kale still shows accelerated growth. Another experiment could aim to discover the time range in which both conditions hold: the kale shows an escalated growth rate and the MRT assigns two different letters.

Quality and quantity are another branch that could be examined as a future direction of this study. Since the main focus of this experiment was the height of the kale and how the growth rate changed with time and physical intervention, the quality and quantity of the kale were not considered factors. In the field of agriculture, however, the quality and quantity of the crop are essential considerations.28 Numerous previous studies show that the quality and quantity of plants can vary with every element of the plant's growth process.29 Thus, as a further study direction, the quality and quantity of the kale could be examined to discover possible changes, negative or positive, that occur alongside the change in growth rate produced by varying hours of physical intervention, with no other factors exerting influence.

This research set out to determine whether the duration of the kale's exposure to physical intervention makes a difference in the final result, and whether physical intervention causes both positive and negative influences depending on exposure time. The experiment showed that only when the exposure time to physical intervention was appropriate and controlled did the kale experience an escalation in growth rate. With further research and study, this finding can be applied in the field of agriculture to potentially increase productivity. As the environment shifts in the modern world, humans must adapt agricultural technology to the changing environment, and deliberate physical intervention can be a first step toward the proper improvement of the agricultural field.

References
1. Lubchenco J. Entering the Century of the Environment: A New Social Contract for Science. Science. 1998;279(5350):491-497. doi:10.1126/science.279.5350.491
2. El-Esawi MA. Introductory Chapter: Physical Methods for Stimulating Plant Growth and Development. In: Physical Methods for Stimulation of Plant and Mushroom Development. IntechOpen; 2018. doi:10.5772/intechopen.80441
3. Wang B, Zhou J, Wang Y, Zhu L, Teixeira da Silva J. Physical Stress and Plant Growth. In: Vol 3.; 2006:68-85.
4. Xu Y, Berkowitz O, Narsai R, et al. Mitochondrial function modulates touch signalling in Arabidopsis thaliana. Plant J Cell Mol Biol. 2019;97(4):623-645. doi:10.1111/tpj.14183
5. Cutler SR, Rodriguez PL, Finkelstein RR, Abrams SR. Abscisic acid: emergence of a core signaling network. Annu Rev Plant Biol. 2010;61:651-679. doi:10.1146/annurev-arplant-042809-112122
6. Fujii H, Zhu JK. Arabidopsis mutant deficient in 3 abscisic acid-activated protein kinases reveals critical roles in growth, reproduction, and stress. Proc Natl Acad Sci U S A. 2009;106(20):8380-8385. doi:10.1073/pnas.0903144106
7. Hubbard KE, Nishimura N, Hitomi K, Getzoff ED, Schroeder JI. Early abscisic acid signal transduction mechanisms: newly discovered components and newly emerging questions. Genes Dev. 2010;24(16):1695-1708. doi:10.1101/gad.1953910
8. Cui Y, Lu X, Gou X. Receptor-like protein kinases in plant reproduction: Current understanding and future perspectives. Plant Commun. 2021;3(1):100273. doi:10.1016/j.xplc.2021.100273
9. Pearce G, Moura D, Stratmann J, Ryan C. RALF, a 5-kDa ubiquitous polypeptide in plants, arrests root growth and development (vol 98, pg 12843, 2001). Proc Natl Acad Sci. 2001;98:15394-15394.
10. Haruta M, Sabat G, Stecker K, Minkoff BB, Sussman MR. A peptide hormone and its receptor protein kinase regulate plant cell expansion. Science. 2014;343(6169):408-411. doi:10.1126/science.1244454
11. Matsubayashi Y. Post-translational modifications in secreted peptide hormones in plants. Plant Cell Physiol. 2011;52(1):5-13. doi:10.1093/pcp/pcq169
12. Mittler R. Abiotic stress, the field environment and stress combination. Trends Plant Sci. 2006;11(1):15-19. doi:10.1016/j.tplants.2005.11.002
13. Ashraf M, Harris PJC. Photosynthesis under stressful environments: An overview. Photosynthetica. 2013;51(2):163-190. doi:10.1007/s11099-013-0021-6
14. Barnabás B, Jäger K, Fehér A. The effect of drought and heat stress on reproductive processes in cereals. Plant Cell Environ. 2008;31(1):11-38. doi:10.1111/j.1365-3040.2007.01727.x
15. Zhu JK. Abiotic Stress Signaling and Responses in Plants. Cell. 2016;167(2):313-324. doi:10.1016/j.cell.2016.08.029
16. Waadt R, Seller CA, Hsu PK, Takahashi Y, Munemasa S, Schroeder JI. Plant hormone regulation of abiotic stress responses. Nat Rev Mol Cell Biol. 2022;23(10):680-694. doi:10.1038/s41580-022-00479-6
17. Claeys H, Inzé D. The agony of choice: how plants balance growth and survival under water-limiting conditions. Plant Physiol. 2013;162(4):1768-1779. doi:10.1104/pp.113.220921
18. Mittler R. Oxidative stress, antioxidants and stress tolerance. Trends Plant Sci. 2002;7(9):405-410. doi:10.1016/s1360-1385(02)02312-9
19. Apel K, Hirt H. Reactive oxygen species: metabolism, oxidative stress, and signal transduction. Annu Rev Plant Biol. 2004;55:373-399. doi:10.1146/annurev.arplant.55.031903.141701
20. Gill SS, Tuteja N. Reactive oxygen species and antioxidant machinery in abiotic stress tolerance in crop plants. Plant Physiol Biochem PPB. 2010;48(12):909-930. doi:10.1016/j.plaphy.2010.08.016
21. Van Breusegem F, Dat JF. Reactive Oxygen Species in Plant Cell Death. Plant Physiol. 2006;141(2):384-390. doi:10.1104/pp.106.078295
22. Noctor G, Mhamdi A, Foyer CH. The roles of reactive oxygen metabolism in drought: not so cut and dried. Plant Physiol. 2014;164(4):1636-1648. doi:10.1104/pp.113.233478
23. Šamec D, Urlić B, Salopek-Sondi B. Kale (Brassica oleracea var. acephala) as a superfood: Review of the scientific evidence behind the statement. Crit Rev Food Sci Nutr. 2019;59(15):2411-2422. doi:10.1080/10408398.2018.1454400
24. Vidal NP, Pham HT, Manful C, et al. The use of natural media amendments to produce kale enhanced with functional lipids in controlled environment production system. Sci Rep. 2018;8(1):14771. doi:10.1038/s41598-018-32866-5
25. Tan J, Jiang H, Li Y, et al. Growth, Phytochemicals, and Antioxidant Activity of Kale Grown under Different Nutrient-Solution Depths in Hydroponic. Horticulturae. 2023;9(1):53. doi:10.3390/horticulturae9010053
26. Ma Y, Xu A, Cheng ZM (Max). Effects of light emitting diode lights on plant growth, development and traits: a meta-analysis. Hortic Plant J. 2021;7(6):552-564. doi:10.1016/j.hpj.2020.05.007
27. Edita A, Dalia P. Challenges and problems of agricultural land use changes in Lithuania according to territorial planning documents: Case of Vilnius district municipality. Land Use Policy. 2022;117:106125. doi:10.1016/j.landusepol.2022.106125
28. Rae D. Fit for purpose: The importance of quality standards in the cultivation and use of live plant collections for conservation. Biodivers Conserv. 2011;20:241-258. doi:10.1007/s10531-010-9932-8
29. Chiang C, Bånkestad D, Hoch G. Reaching Natural Growth: Light Quality Effects on Plant Performance in Indoor Growth Facilities. Plants. 2020;9(10):1273. doi:10.3390/plants9101273



Elucidating Thigmomorphogenesis: Effects of Mechanical Stimulation on the Growth Rate of Basils

Author
Full Name (Last Name, First Name): Lee, Inho
School Name: Hprep Academy

Abstract Thigmomorphogenesis refers to the adaptive response of plants to physical perturbation and touch. Typically, this phenomenon results in reduced growth and increased structural strength. The objective of this research is to deepen our understanding of thigmomorphogenesis and to investigate how plants respond to external disturbances such as physical touch. The focus is on analyzing the behavior of plants disturbed by repeated interference and comparing them with untouched plants. A review of eight research papers was conducted to investigate this topic further. The experiment showed that when plants encounter interference, they adjust to the surrounding environment: touched plants grew more slowly, with an average reduction of 25 percent in growth rate compared with untouched plants. These findings provide insights into thigmomorphogenesis that could be used to control and modulate plant growth, enhancing productivity and enabling control of growth rates in agricultural practice. Given the varied influences of thigmomorphogenesis, further extensive research is required to expand knowledge about it and explore its potential benefits.

Keywords Thigmomorphogenesis, Plant Growth, Mechanical Stress, Basil



Introduction Climate change has been a pressing global issue for the past few decades [13]. Climate change affects not only our environment but also the development of organisms, with plants being among the most vulnerable. Natural factors caused by climate change, such as extreme winds, sudden shifts in weather, and heavy rains, significantly affect plant growth [15]. The changing climate increases stressors on plants that decrease their resilience and disrupt their growth. In nature, plant growth is also disturbed by hurricane winds, monsoon rains, and herbivory by insects and wild animals [14]. Interestingly, deliberate disturbance of plants can help enhance agricultural practices. Scientists have found that purposeful disruption allows plants to adapt to the perturbations and stay healthy. While these disturbances may slow growth, they do not damage the plants; instead, they enable the plants to develop resilience, ultimately promoting their overall well-being. To strengthen roots and increase crop yield, farmers purposefully tread on wheat and barley. This helps the plants build a more robust root and stem system, which enhances stability. Farmers also release ducklings into fields to fertilize the plants and encourage stronger stems [16]. Another purpose of intentional treading is that some plants cannot be eaten or marketed if they are overgrown. The phenomenon by which controlled physical stimulation regulates plant growth without disrupting plant health is called thigmomorphogenesis. In-depth research on thigmomorphogenesis is essential to improve crop yields, increase resistance to disease, prevent further environmental destruction, and stabilize urban greenery. Overall, thigmomorphogenesis describes plants' responses to outside disturbances, and in-depth research on it is necessary for improvements in agriculture and productivity. This research investigates how basil plants respond to thigmomorphogenesis. The hypothesis is that if outside factors touch plants, the touch acts as a disturbance and reduces the growth rate compared with untouched plants. Physical changes between regularly touched and untouched plants are compared in section V. By planting basil in controlled conditions and measuring heights frequently, the study assesses the differences in growth between touched and untouched plants. Additionally, the research reviews existing literature on plants' responses to physical interaction and identifies the relevance of these studies to this research in section III. The ultimate goal is to identify strategies for managing plant growth based on the observed effects of physical disturbance on basil.

Discussion Background The term 'thigmomorphogenesis' originates from the Greek roots 'thigma' (touch), 'morphe' (form and shape), and 'genesis' (origin) [16]. When plants are subjected to touch, their development is disrupted, resulting in a significant decrease in growth rate. Nevertheless, the disturbed plants often grow healthier and develop stronger stems [1]. As mentioned before, a better understanding of thigmomorphogenesis is beneficial for agriculture, environmental conservation, and the balanced development of green urban areas. Thigmomorphogenesis, which allows plants to adjust to a changing environment, is also a process that prevents plants from overgrowing. Preventing overgrowth matters because overgrown plants can suffer health problems and nutrient imbalances. If plants are subjected to physical disturbances such as touching, they naturally limit their growth rate. This is particularly useful in preventing plants from growing more than needed, which is problematic because plants can topple, or their stems can bend, when they can no longer support their own weight. Induced physical disruption allows appropriate growth: compared with plants left undisturbed, controlled disturbance can prevent excessive growth without affecting the plants' nutrient content. This mechanism can be advantageous in agriculture by enhancing crop productivity. By regulating plant growth rates through such interventions, crop development can be well controlled. This approach opens up the possibility that scientists can develop agricultural tools or methods to manage plant height effectively [5].



A compelling example is the interaction between corn and tomatoes. Planting these two crops in proximity can be problematic because each significantly affects the other's growth. Tomatoes need full sunlight for optimal growth; if grown close to corn, which tends to grow tall, they might not receive enough sunlight, negatively affecting their growth and development. In this case, mechanical stress can be applied to the corn to prevent overgrowth. This strategy manages the height of the corn, ensuring that it does not block the sunlight the tomatoes need, so both crops can be grown together without harming each other [4]. Furthermore, it is critical that plants grow to optimal heights. If plants overgrow, they may lack the support to bear their own weight, causing them to bend or topple. Although height can benefit a plant, it also carries disadvantages: overgrown plants tend to have smaller leaves, which reduces the surface area available for photosynthesis and thus the energy and nutrients produced, leaving the plant unable to maintain its well-being. In addition, overgrown plants increase neither biomass allocation nor biomass production, meaning that overgrowth is not beneficial. Another likely constraint on overgrown plants is wind, which reduces their stability and can even bend them. Overgrown plants are therefore not necessarily beneficial, which highlights the importance of thigmomorphogenesis in controlling plant growth [12]. Changes in plant growth in recent years have been caused mostly by natural disasters and climate change. Because of this, plants have evolved sensitive mechanisms that enable them to react to even small stimuli, such as touch. Plants with these systems are robust enough to withstand drastic changes in climate as well as natural disasters. Some plants respond to stimuli within seconds, whereas others take days or even weeks to display any response. Wind, insect herbivory, animal grazing, and sound vibrations are some of the perturbations that cause stress in plants. Plants remember these disturbances and develop defense systems to protect themselves against touch. The first line of defense is physical: defenses that hinder herbivores from eating the plant, often including thicker stems, reduced height, smaller leaves, and overall increased strength [15]. Plants also deploy defense hormones and genes to deter herbivores; many of these compounds are toxic. Most plants release toxic chemicals from their leaves to ward off herbivorous insects, and certain plants secrete nectar that draws ants, which may protect the plant from other insects [15]. Various types of mechanical stress force plants to endure harsh conditions, enhancing their overall strength. For instance, farmers have adopted a method known as "mugifumi," in which they purposefully step on crops. Mugifumi hardens the crops, enabling them to survive harsh weather and adapt to changing surroundings while reducing the environmental impact of pesticides [7]. Figure 1 illustrates the practice of mugifumi in Fujisawa, Japan.
Picture (A) depicts farmers and neighboring children stepping on wheat seedlings, promoting healthy stems and proper development. This diminishes the need for pesticides, which attracts consumers by promoting healthy consumption. Picture (B) shows farmers using a tractor to tread the wheat seedlings. Figure 1 indicates the significance of mugifumi and thigmomorphogenesis for agriculture, and that not all perturbations are detrimental [7].



[Figure 1: Scene of Mugifumi]

(Source: National Center for Biotechnology Information https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4162469/)

Thus, a comprehensive understanding and study of thigmomorphogenesis is clearly crucial. However, limitations hinder further development of studies on thigmomorphogenesis. One restriction is that insufficient findings across different plant species and experimental approaches frequently hinder the practical application of the acquired knowledge, because several studies do not use proper treatments and analyze or measure their statistics inaccurately. Hence, this research measures data accurately and analyzes the statistics properly through visual analyses.

Literature Review 1. Thigmomorphogenesis (1) Thigmomorphogenesis The term 'thigmomorphogenesis' refers to the changes caused by disrupting plant growth through continuous touching. When touched, plants are disrupted from growing and develop much less than untouched ones. Even without intentional disruption, plants experience disturbance in nature through wind, rain, and insects crawling on them. Plants that experience thigmomorphogenesis produce high levels of the jasmonate defense hormone, which protects the plant against insects and fungi. Figure 2 shows the comparison of untouched and touched plants [1].



[Figure 2: Plant Growth in Response to Repetitive Touch]

(Source: Cell Press https://www.cell.com/action/showPdf?pii=S0960-9822%2817%2930866-7)

Compared with the untouched 6-week-old plants, the touched plants received twice-daily touch stimulation. The touched plants are shorter than the untouched ones, indicating that plants that are repetitively touched are likely to have a slower growth rate [1]. Thigmomorphogenesis is also used to improve agricultural practices. For instance, Japanese farmers purposefully step on crops to strengthen roots and improve yield. They also release animals such as rabbits into fields to promote stronger stems and hardier plants [1].

(2) Thigmomorphogenesis: A Complex Plant Response to Mechano-stimulation Through experiencing harsh disturbances, plants have developed very sensitive mechanisms that allow them to respond to small disruptions. Some plants react to a stimulus immediately, while others require days to weeks to respond [5]. Unlike animals, plants cannot move away from a changing environment, so such changes may be detrimental to plants without developed mechanisms. As a result, plants often try to avoid obstacles and develop advanced defense hormones. Genes induced after touch encode proteins involved in calcium sensing, cell wall modification, and defense. In fact, 23 percent of genes contain elements that can react to stimuli by responding appropriately to them. Additionally, signaling molecules mediate plant reactions to mechanical disruption; such signaling factors include hormones, nitric oxide, and reactive oxygen species [5]. For instance, nitric oxide helps plants regulate processes such as seed germination and cell differentiation. Scientists have also found that increases in calcium play a great role in plant sensing of, and response to, stimuli. Another signaling factor is abscisic acid, a plant hormone that helps the plant respond to stress and keep developing through disruptions [5].

(3) Elucidating Thigmomorphogenesis: An Epigenetic Phenomenon of Mechanical Stress Acclimation in Plants Severe climate change and natural disasters cause mechanical stress (MS) for plants. Plants remember MS and develop defense hormones to protect themselves against disturbances. They develop thicker stems, smaller leaves, reduced height, and overall increased strength [3]. MS acclimation reduces stress for plants by allowing them to adjust to changes and disruptions. Acclimated plants change less when they face interference, which gives them an advantage in enduring and adjusting to a changing environment. Diverse factors, including wind, insect herbivory, animal grazing, and sound vibrations, can cause different types of mechanical stress [3].

(4) The Course of Mechanical Stress: Types, Perception, and Plant Response Annual plants live for only one growing season, while perennial plants regrow every spring without dying. Deficient findings from research on different plant species, both annual and perennial, and on plant systems using different approaches frequently prevent further application of the acquired knowledge [10].

(5) Thigmomorphogenesis: A Detailed Characterization of the Response of Beans to Mechanical Stimulation Mechanical stimulation of young Red Cherokee Bush bean plants resulted in a forty-six percent reduction in height and stem extension. The optimal conditions for this thigmomorphogenetic effect include rubbing the plant five to ten times with balanced pressure daily while growing the plants under 16 hours of bright light per day at 24 degrees Celsius. Young plants showed the strongest response, and the most effective stimuli included rubbing, bending, or directing wind at the plant [9].

(6) Touch-induced Changes in Arabidopsis Morphology Dependent on Gibberellin Breakdown Touching plants can reduce growth and delay flowering, a phenomenon also known as thigmomorphogenesis. Touch induces expression of the AtGA2ox7 gene, which is involved in the catabolism of gibberellins, plant hormones essential for growth and developmental processes such as stem extension, flowering, and leaf expansion. This indicates that AtGA2ox7 is crucial for thigmomorphogenesis, and gibberellin catabolism could enhance plant resistance to abiotic and biotic stresses [12].

(7) In Touch: Plant Responses to Mechanical Stimuli Plant responses to unexpected mechanical stimuli are crucial for plants. Mechanical disturbances, including environmental stimuli like wind and constant perturbations like gravity, influence plant morphogenesis (Braam, 2004). Signaling molecules and hormones also play a critical role in touch responses; important signaling molecules include calcium, reactive oxygen species, and ethylene. With each touch applied to the plant, over 2.5 percent of the plant's genes are up-regulated. Most of these genes perform calcium binding, cell wall modification, and defense; their function enables response regulation, allowing plants to react to changes [2].

(8) Green Fingers: Plant Thigmo Responses as an Unexplored Area for Haptics Research Haptics research mostly studies how humans sense and react to touch. This work, however, shows that plants also detect and respond to touch, and it explores how plants are affected when they are touched. Plants, just like humans, have developed systems to respond to mechanical stimuli in their surrounding environment.
Since at least ancient Greek times, humans have known that plants are affected when touched, and they have used this knowledge to enhance their agricultural practices [6].



[Figure 3: Disturbances of Plant Growth Through Arduino Uno and Wave]

(Image from Huisman, 2020)

Both that study and this research used an Arduino Uno when disturbing plant growth. Figure 3 shows the results of about two weeks of continuous perturbation of the root: the Arduino Uno generated sine waves that destabilized the roots, which can hinder plant growth. The results show a significant decrease in growth rate compared with plants whose roots were not interfered with [6].

Data and Methods In this research, MATLAB, Arduino, and basil seeds were used to test the hypothesis. The most essential component of this study was the basil. Six basil plants were cultivated in pots, with three plants subjected to mechanical stimulation by motors and the other three left untouched. To compare the heights of the plants, photographs were taken periodically. It was crucial to set a consistent frequency for measuring plant heights and taking photos at each designated interval, so photos were captured every three days to monitor growth. Each photo was taken from a front-facing angle with a ruler placed vertically beside the plant for scale. An example photo of one of the basil plants is shown in Figure 4. By photographing the plants every three days, the growth of the touched and untouched basil plants was compared using MATLAB. Figure 4 shows the photos taken of Plant 1 on Day 3, Day 12, and Day 21. Over time, the plant grew significantly, from approximately 1 centimeter to about 20 centimeters. [Figure 4: Plant 1 Growth over Time (Day 3, Day 12, Day 21)]



MATLAB was the major coding platform used in this research. MATLAB is a programming and numeric computing platform designed for engineers and scientists to analyze data and systems. Unlike general-purpose languages such as Java and Python, MATLAB is a computing platform with its own programming language and offers a fast, high-performing solution [18]. Measuring the plants and comparing them through plots was done in MATLAB. Measuring the heights of the plants required an additional app, Image Viewer, which measures the height of plants in pixels. To measure a height, the length of the ruler is first measured in pixels, then the height of the plant is measured in pixels, and the pixel values are converted into centimeters with a simple calculation: multiply the plant height in pixels by the measured ruler length in centimeters, then divide by the ruler length in pixels. This converts the plant height from pixels into centimeters, as in the sketch below. [Figure 5: Matlab Code for Creating Plots]
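The conversion can be written as a short MATLAB sketch; the variable names and sample values below are illustrative, not the study's actual measurements:

% Pixel-to-centimeter conversion described above (illustrative values)
rulerCm = 10;                          % known ruler length in centimeters
rulerPx = 512;                         % same ruler length measured in pixels
plantPx = 978;                         % plant height measured in pixels
plantCm = plantPx * rulerCm / rulerPx; % plant height in centimeters (~19.1 cm)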

Then, all of the collected data is entered as shown in the code in Figure 5. Pressing the run button makes the program create a plot showing the heights of the touched and the untouched plant on one graph, as sketched below. Another plot was also created to test the hypothesis; it allows testing whether the hypothesis of this research is legitimate and reasonable.
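As an illustration, a minimal MATLAB sketch along these lines reproduces the kind of plot described; the heights are those reported for Plant 1 and Plant 1D in the Results section:

days = 3:3:21;                                 % measurement days
plant1  = [1 1.6 3.3 7.3 11.1 12.5 19.1];      % untouched basil, cm
plant1D = [0.6 1.24 3.2 5.6 8.2 9.8 14.2];     % touched basil, cm
plot(days, plant1, 'r-o', days, plant1D, 'b-o')
xlabel('Day'); ylabel('Height (cm)')
legend('Plant 1 (untouched)', 'Plant 1D (touched)', 'Location', 'northwest')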



[Figure 6: Matlab Code for Hypothesis Testing]

Figure 6 shows the code used to create the hypothesis-testing plot. The plot uses the growth-rate slopes of the untouched and touched plants and reports their maximum and minimum values (see the sketch below). Another platform used is Arduino, an open-source electronics platform based on hardware and software. The Arduino board is a microcontroller board that must be connected to the Arduino coding platform to function properly. Many researchers use the Arduino Uno because it is simpler and easier to use than other boards: it is easily programmable and can connect different electronic components. In addition to the Arduino board, an SG90 servo motor was used to touch the plants continuously and lower their growth and development rate. These motors have three wires, deliver high torque at high speed, and are energy-efficient; however, servo motors are less accurate in positioning, limited in rotation angle, and relatively expensive. The Arduino Uno board activated the motors: by uploading code to the board through the Arduino platform, the servo motors were driven. The code used to activate the motors is shown in Figure 7 [11]. [Figure 7: Arduino Coding for Functioning Motors]
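Since the original hypothesis-testing code in Figure 6 is shown only as an image, the following MATLAB sketch illustrates the underlying idea: fit a growth-rate slope (cm/day) to each plant and compare the two groups. The height data are the values reported in the Results section.

days = 3:3:21;
untouched = [1   1.6  3.3  7.3 11.1 12.5 19.1;   % Plant 1
             1.1 2.1  4.25 8.2 11.2 12.9 18.1;   % Plant 2
             0.8 1.1  4.58 6   9.8  11.5 17.7];  % Plant 3
touched   = [0.6 1.24 3.2  5.6 8.2  9.8  14.2;   % Plant 1D
             1.1 1.25 4.44 7.1 9.2  10.4 15.3;   % Plant 2D
             0.7 1.1  3.59 6.8 9.5  10.5 15.4];  % Plant 3D
slopeU = zeros(3,1); slopeT = zeros(3,1);
for k = 1:3
    pU = polyfit(days, untouched(k,:), 1);  slopeU(k) = pU(1);  % cm/day
    pT = polyfit(days, touched(k,:),  1);   slopeT(k) = pT(1);
end
fprintf('Untouched slopes: min %.3f, max %.3f cm/day\n', min(slopeU), max(slopeU));
fprintf('Touched slopes:   min %.3f, max %.3f cm/day\n', min(slopeT), max(slopeT));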



When setting up the motors, it is critical that the motor touch only the very top part of the plants, as extreme interference may severely damage or even kill them. The speed of the motor is also significant: if the motor moves more frequently or quickly than necessary, it may badly damage the plant, so the motor should keep an appropriate pace. Here, the appropriate pace corresponds to "delay(70000)", i.e., 70,000 milliseconds (70 seconds): every 70,000 milliseconds, the motor rotates by a designated angle.

Results This research concluded that the touched basils had a slower growth rate than the untouched basils. The hypothesis, which predicted that outside factors act as a disturbance and reduce the growth rate compared with untouched basils, was therefore supported: the touched basils grew less than the untouched ones. Each basil was labeled 1, 1D, 2, 2D, 3, or 3D, with "D" representing disturbance (touch); thus 1D, 2D, and 3D are the basils whose growth was disrupted by motors. Plants with corresponding numbers were compared with each other: 1 with 1D, 2 with 2D, and 3 with 3D. Plant heights were measured throughout the experiment: every three days over a span of 21 days, for a total of seven measurements per plant. For Plant 1, the measured heights are 1, 1.6, 3.3, 7.3, 11.1, 12.5, and 19.1 centimeters. In comparison, for Plant 1D, which was subjected to disturbance, the measured heights are 0.6, 1.24, 3.2, 5.6, 8.2, 9.8, and 14.2 centimeters. All of these heights were measured through MATLAB and converted from pixels to centimeters. The comparison between basils 1 and 1D is illustrated in the figure below. [Figure 8: Comparison of Basil 1 and Basil 1D]

The red dots represent the untouched basil (Plant 1) and the blue dots the touched basil (Plant 1D). In Figure 8, Plant 1 and Plant 1D are compared. On Day 3, Plant 1 was slightly taller than Plant 1D. Until Day 9 there was no significant height difference; in fact, the plants had nearly the same heights. From Day 12, however, Plant 1D's growth slowed suddenly while Plant 1 began to grow rapidly. Eventually, Plant 1 grew to about 20 centimeters, while Plant 1D did not reach 15 centimeters; the two plants ended with a height difference of about 4.9 centimeters. It is clear that the motor substantially disturbed the growth of the plant. This may also enable better control of plant growth, as the experiment shows that growth is strongly affected by outside disturbance. As shown in Figure 9, Plant 2 and Plant 2D initially had the same height, but Plant 2 eventually grew taller by the end of the experiment. The heights measured for Plant 2 over 21 days are 1.1, 2.1, 4.25, 8.2, 11.2, 12.9, and 18.1 centimeters; for Plant 2D they are 1.1, 1.25, 4.44, 7.1, 9.2, 10.4, and 15.3 centimeters. [Figure 9: Comparison of Basil 2 and 2D]

On Day 3, the start of the measurements, both plants were 1.1 centimeters tall. On Day 6, Plant 2D changed little while Plant 2 grew slightly more. On Day 9, Plant 2D's growth accelerated and it grew taller than Plant 2. From Day 12 to Day 18, Plant 2D's growth slowed again. Finally, on Day 21, Plant 2 nearly reached 20 centimeters, while Plant 2D grew to slightly above 15 centimeters. This also shows that Plant 2D was substantially disrupted from growing relative to Plant 2, the untouched basil. In Figure 10, Plant 3 and Plant 3D are compared. These basils showed a smaller difference than the previously compared pairs. For Plant 3, the heights measured over 21 days are 0.8, 1.1, 4.58, 6, 9.8, 11.5, and 17.7 centimeters; for the disturbed basil, Plant 3D, they are 0.7, 1.1, 3.59, 6.8, 9.5, 10.5, and 15.4 centimeters. All of these heights were measured in pixels through MATLAB and converted into centimeters.



[Figure 10: Comparison of Basil 3 and Basil 3D]

Figure 10 shows that Plant 3 and Plant 3D, initially the same height, ultimately diverged in growth rate. Until Day 15, the two plants showed little height difference; in fact, on Day 12 Plant 3D was taller than Plant 3. From Day 18 the plants began to diverge, and on the last day of the experiment they had a height difference of 2.3 centimeters. Had the measurement period been extended, the difference between the touched and untouched plants would likely have been greater. Nonetheless, one limitation may have altered the results of this experiment: the weather conditions. Although all plants experienced the same weather, severe weather could have shifted the differences between individual plants; without this limitation, the results might have been clearer and more evident. During the experiment, the weather was extremely hot and humid due to heavy rainfall, which could have influenced the outcome because the plants needed time to adapt to such changes. Therefore, to maintain more consistent conditions in future research, it is crucial to keep appropriate indoor conditions with stable temperature and humidity. One way to address these environmental factors is air conditioning, which can control humidity and temperature and buffer severe, rapidly shifting weather. Another possible solution is a smart farming system, which uses modern technology to optimize growing conditions: sensors and monitoring systems track soil moisture, temperature, and humidity, making it possible to maintain optimal growth conditions for the plants.



[Figure 11: Smart Farm at Industrial Site]

(Source: Korea JoongAng Daily https://koreajoongangdaily.joins.com/2018/12/17/industry/Smart-farm-atindustrial-site-grows-fture-foods/3057064.html)

Figure 11 shows a smart farm at an industrial site. The screen displays temperature and humidity to ensure proper environmental control. Under such controlled conditions, thigmomorphogenesis takes effect more cleanly, leading to a clearer test result [8]. Through MATLAB, the hypothesis was tested by creating another plot. Figure 12 is the hypothesis-testing plot, which summarizes the growth rates of the touched and untouched plants. [Figure 12: Hypothesis Testing for Touched and Untouched Plants]



These data show that the untouched basils reached a maximum value of 0.04, whereas the maximum for the touched basils is 0.027. This suggests that the untouched basils grew taller and faster than the touched basils. Both groups had the same minimum value of approximately 0.02, indicating that the plants started growing from similar points. In addition, the untouched plants achieved a higher median than the touched ones, illustrating that the untouched plants were taller overall. This emphasizes that touched basils grow less than untouched basils because they are disrupted from growing at their usual rate.

Conclusion & Implications Thigmomorphogenesis has brought diverse benefits to today’s world; such advantages include enhanced agricultural practices, preservation of the environment, and the durable development of green areas. Overall, thigmomorphogenesis is the process by which plants respond to unexpected changes or disturbances that may affect their growth rate. Through this research, it was proven that plants that got disturbed or touched grew significantly less than the untouched plants. In this research, basils were the plants used for measuring the heights. By creating graphs and plots to compare the touched and untouched plants, the hypothesis, which indicated that the touched basils will grow noticeably less than the untouched basils, has been proved. The graphs and plots have been created through a coding platform called MATLAB and the motors that distracted the plants from growing were activated through Arduino. By pursuing the experiment, it has been clear that the plant heights and growth rates can be controlled through touches and disturbances. This shows that scientists can develop drugs and touching tools that can hinder plants from overgrowing, which can be helpful not only in agriculture but also in increasing productivity. The limitations of this research were the extreme weather conditions and no significant difference between touched and untouched plants. Therefore, the use of air conditioners and smart farms can resolve the limitations caused due to weather conditions and the extended period of the experiment could have brought clearer results and more differences between touched and untouched plants. This research can be effectively used to control plant growth using thigmomorphogenesis and allow plants to be grown at an appropriate height. Thus, it is necessary to expand the knowledge and studies about thigmomorphogenesis as it can be beneficial in various ways.

References
[1] Braam, Janet, and E. Wassim Chehab. "Thigmomorphogenesis." Current Biology, Cell Press, 11 Sep. 2017, https://www.cell.com/action/showPdf?pii=S0960-9822%2817%2930866-7. Accessed 27 Jul. 2024.
[2] Braam, Janet. "In Touch: Plant Responses to Mechanical Stimuli." New Phytologist, 17 Nov. 2004, https://nph.onlinelibrary.wiley.com/doi/full/10.1111/j.1469-8137.2004.01263.x. Accessed 28 Jul. 2024.
[3] Brenya, Eric. "Elucidating Thigmomorphogenesis: An Epigenetic Phenomenon of Mechanical Stress Acclimation in Plants." Western Sydney University, Mar. 2020, https://researchdirect.westernsydney.edu.au/islandora/object/uws:55143/. Accessed 29 Jul. 2024.
[4] Buiano, Madeline. "14 Vegetables You Should Never Plant Together—Gardening Experts Explain Why." Martha Stewart, 16 Jan. 2024, https://www.marthastewart.com/vegetables-to-never-planttogether-8425391. Accessed 31 Jul. 2024.
[5] Chehab, Elamir Wassim, Elizabeth Eich, et al. "Thigmomorphogenesis: A Complex Plant Response to Mechano-Stimulation." ResearchGate, Jan. 2009, www.researchgate.net/publication/23669242_Thigmomorphogenesis_A_complex_plant_response_to_mechano-stimulation. Accessed 28 Jul. 2024.
[6] Huisman, Gijs. "Green Fingers: Plant Thigmo Responses as an Unexplored Area for Haptics Research." ResearchGate, Sep. 2020, https://www.researchgate.net/publication/344148578_Green_Fingers_Plant_Thigmo_Responses_as_an_Unexplored_Area_for_Haptics_Research. Accessed 29 Jul. 2024.
[7] Iida, H. "Mugifumi, a Beneficial Farm Work of Adding Mechanical Stress by Treading to Wheat and Barley Seedlings." Frontiers in Plant Science, 12 Sep. 2014, doi:10.3389/fpls.2014.00453. PMID: 25309553; PMCID: PMC4162469. Accessed 31 Jul. 2024.
[8] Choi, Joon-Ho. "Smart Farm at Industrial Site Grows Future Foods." Korea JoongAng Daily, 17 Dec. 2018, https://koreajoongangdaily.joins.com/2018/12/17/industry/Smart-farm-at-industrial-sitegrows-fture-foods/3057064.html. Accessed 31 Jul. 2024.
[9] Jaffe, M.J. "Thigmomorphogenesis: A Detailed Characterization of the Response of Beans to Mechanical Stimulation." ScienceDirect, 15 Mar. 2012, https://www.sciencedirect.com/science/article/abs/pii/S0044328X76800177. Accessed 29 Jul. 2024.
[10] Kouhen, Mohamed, Anastazija Dimitrova, et al. "The Course of Mechanical Stress: Types, Perception, and Plant Response." MDPI, 30 Jan. 2023, https://www.mdpi.com/2079-7737/12/2/217. Accessed 30 Jul. 2024.
[11] Koumaris, Nick. "Using the SG90 Servo Motor with an Arduino." Electronics-Lab.com, https://www.electronics-lab.com/project/using-sg90-servo-motor-arduino/. Accessed 31 Jul. 2024.
[12] Lange, Maria João Pimenta, and Theo Lange. "Touch-Induced Changes in Arabidopsis Morphology Dependent on Gibberellin Breakdown." Nature Plants, 9 Feb. 2015, https://www.nature.com/articles/nplants201425. Accessed 30 Jul. 2024.
[13] Nagashima, Hisae, and Kouki Hikosaka. "Plants in a Crowded Stand Regulate Their Height Growth so as to Maintain Similar Heights to Neighbours Even When They Have Potential Advantages in Height Growth." National Center for Biotechnology Information, 11 May 2011, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3119620/. Accessed 31 Jul. 2024.
[14] Eckardt, Nancy A., et al. "Focus on Climate Change and Plant Abiotic Stress Biology." The Plant Cell, vol. 35, no. 1, Jan. 2023, pp. 1-3, https://doi.org/10.1093/plcell/koac329. Accessed 1 Aug. 2024.
[15] "Plants and Climate Change." National Park Service, https://www.nps.gov/articles/000/plantsclimateimpact.htm. Accessed 29 Jul. 2024.
[16] "Plant Defenses." Learn.Genetics, https://learn.genetics.utah.edu/content/herbivores/defenses/. Accessed 29 Jul. 2024.
[17] "Thigmomorphogenesis: The Plant Response to Touch." Area 2 Farms, https://www.area2farms.com/glossary/thigmomorphogenesis. Accessed 30 Jul. 2024.
[18] "What Is MATLAB?" MathWorks, https://kr.mathworks.com/discovery/what-is-matlab.html. Accessed 30 Jul. 2024.



Temperature as an indicator for the extent of DENV transmission

Author
Full Name: Lee, Kunwoo (Last Name, First Name)
School Name: International School of Kuala Lumpur

Abstract Recent outbreaks of dengue fever in formerly DENV-free regions are causing substantial global concern, driven in part by increasing global temperatures. In this study, the correlation between DENV transmission and temperature in Malaysia is examined to determine the relationship between the two. Analysis of a histogram of yearly average dengue fever cases by temperature range indicates an optimal temperature for DENV transmission of 26.4°C–26.6°C. This could also serve as an indicator of DENV transmission in other countries.

Keywords DENV, DENV transmission, Dengue fever, Temperature



Introduction Dengue fever, often called break-bone fever, is endemic to tropical and subtropical climates such as Malaysia's1. Dengue is prevalent in Malaysia, with the first documented case dating back to 1902. Recently, dengue has spread worldwide to non-tropical climates, notably in the Americas2. The primary pathway for transmission of the dengue virus (DENV) to humans is the mosquito vector1. Vector-borne diseases are infections transmitted to humans by blood-feeding arthropods such as mosquitoes, ticks, and fleas3; dengue fever is transmitted to humans through infected mosquitoes1. Previous studies have examined how dengue viruses are transmitted to humans. Transmission of DENV requires multiple steps. First, a mosquito acquires DENV infection from a viremic individual. After sufficient virus replication in the salivary glands of the host mosquito, a subsequent feeding event transmits the virus to a human via the infected mosquito's saliva. The primary vector of DENV is Aedes aegypti, a mosquito species that inhabits tropical and subtropical regions4. Recently, DENV has spread through another mosquito species, Aedes albopictus, and is now established in numerous areas such as Southern Europe5 and Taiwan6. Since Aedes albopictus can inhabit more temperate environments than the tropical Aedes aegypti, this raises the risk of DENV transmission in temperate climates that have never before been susceptible to DENV. Several factors influence DENV transmission, such as temperature, vector density, and rainfall/humidity7, 8. This study examines the relationship between temperature and dengue fever cases. Because there has been a notable increase in dengue cases, the hypothesis is that there is an optimal temperature for the spread of dengue fever.

Methodology The clinical dataset for dengue cases in Malaysia was found in the Mendeley database using the search keywords "Dengue Cases, Malaysia", filtered to the dataset data type to exclude unnecessary data types. The Project Tycho team, the provider of the clinical dataset9, states that the original data are derived from open- and restricted-access sources, and that permission for redistribution has been obtained. The Project Tycho dataset was chosen for this study because its original sources are reputable, most notably the Ministry of Health of Malaysia. Temperature data for Malaysia were obtained from the Climate Change Knowledge Portal of the World Bank10. The statistical analysis below creates a histogram to visualize the distribution of dengue fever cases reported in each temperature bin. In the histogram, the bin width is 0.2°C, and the cases are expressed as the average number of dengue fever cases occurring in one year within the temperature range. For instance, the cases reported in years with average temperatures between 26.0°C and 26.2°C are summed and divided by the number of years that fall in that range, as in the sketch below.
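For illustration, the binning procedure can be carried out with a short MATLAB sketch; the variable names are illustrative, with temp and cases holding the columns of Table 1 as column vectors:

edges = 25.4:0.2:27.0;                         % 0.2 °C temperature bins
binIdx = discretize(temp, edges);              % bin index for each year
avgCases = accumarray(binIdx, cases, [numel(edges)-1, 1], @mean, NaN);
bar(edges(1:end-1), avgCases, 'histc')         % yearly average cases per bin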

Result

Year   Cases   Temp (°C)
1973   1487    25.96
1974   2200    25.58
1975   830     25.58
1976   790     25.40
1977   780     25.67
1978   929     25.80
1979   862     25.89
1980   668     25.84
1981   524     25.99
1982   3052    25.99
1983   790     26.28
1984   702     25.60
1985   367     25.80
1986   1408    25.89
1987   2025    26.25
1988   1428    26.09
1989   2564    25.87
1990   4880    26.25
1991   6628    26.24
1992   5473    26.14
1993   5615    26.06
1994   3133    26.14
1995   6543    26.18
1996   14255   26.09
1997   19429   26.41
1998   27381   26.79
1999   10146   26.16
2000   7103    26.29
2001   16368   26.39
2002   32767   26.57
2003   31545   26.33
2004   33895   26.37
2005   39686   26.36
2006   38556   26.27
2007   48846   26.20
2008   49335   26.01
2009   41486   26.31
2010   46171   26.43
2011   19884   26.22

Table 1: Raw Data Table of Year, Dengue Fever Cases, and Yearly Average Temperature in Malaysia



This raw data table shows the dengue fever cases and the yearly average temperature in degrees Celsius (°C) reported in Malaysia during the years 1973 to 2011.

Figure 1: Scatter Plot and Power-Series Trendline of Number of Dengue Fever Cases versus Temperature. The graph in Fig. 1 was produced with the Google Sheets chart feature, showing a scatter plot and a power-series trendline. Power-series regression analysis was conducted to evaluate the correlation between temperature and dengue cases, and the coefficient of determination (R²) was calculated to analyze the relationship between the yearly average temperature and the total yearly reported dengue fever cases in Malaysia. The coefficient of determination R² = 0.55 corresponds to a high R value of 0.74, and the confidence interval of the standard error test does not include 0. A sketch of the equivalent fit is shown below.
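The same kind of fit can be reproduced outside Google Sheets. A minimal MATLAB sketch follows, with temp and cases as in Table 1; the log-log regression shown here is one common way to fit a power law and may differ slightly from the spreadsheet's trendline:

p = polyfit(log(temp), log(cases), 1);   % fit log(cases) = b*log(temp) + log(a)
b = p(1);  a = exp(p(2));                % power-law parameters: cases ~ a*temp^b
pred = a * temp.^b;                      % fitted values
R2 = 1 - sum((cases - pred).^2) / sum((cases - mean(cases)).^2);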

This indicates that the power series can approximate the dataset to an adequate extent. However, there is still a need for a statistical model that better fits the data.

Temperature (°C)   Average dengue fever cases in a year
25.60              1060.4
25.80              1133.0
26.00              10813.7
26.20              12833.9
26.40              32654.3
26.60              32767.0
26.80              27381.0

Table 2: Average number of dengue fever cases in a year within each temperature bin of the histogram. The sum of all dengue cases reported in the given temperature range was divided by the number of years that fell into that range.

Figure 2: Histogram of Yearly Average Dengue Fever Cases by Temperature Range. The histogram in Fig. 2 was created with the box-plot feature of Desmos using the data from the table above. Temperatures with dengue cases range from 25.40°C to 26.79°C; temperatures above 27°C and below 25°C produced no cases. The number of cases increases sharply from 25.5°C until it peaks between 26.4°C and 26.6°C: at 26.6°C there are 32,767 yearly average cases and at 26.4°C there are 32,654.3, indicating a thermal optimum for DENV transmission at around 26.5°C. At temperatures above 26.6°C, there is a sharp decline in reported cases.



Figure 3: Q-Q Plot of Histogram Values. The Q-Q plot was obtained through the scatter plot and linear regression features of Google Sheets. To produce the data points for the plot, the histogram values were paired with the corresponding quantiles of the standard normal distribution. This Q-Q plot tests whether the histogram could have been sampled from a normal distribution. The R² value of the Q-Q plot is 0.888, indicating that the distribution of dengue fever cases versus temperature does not follow a normal distribution, but rather a right-skewed one. A sketch of the construction is given below.
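The construction can be sketched in MATLAB as follows; the original plot was made in Google Sheets, and the normal quantiles are computed here with erfinv so the sketch runs in base MATLAB:

avgCases = [1060.4 1133.0 10813.7 12833.9 32654.3 32767.0 27381.0]; % Table 2
n = numel(avgCases);
p = ((1:n) - 0.5) / n;               % plotting positions
zq = sqrt(2) * erfinv(2*p - 1);      % standard normal quantiles
scatter(zq, sort(avgCases))          % data quantiles vs. normal quantiles
xlabel('Normal quantiles'); ylabel('Sorted yearly average cases')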

Discussion According to thermal biology, temperature limits the transmission of mosquito-borne disease through its effects on mosquito traits11. Vector-borne disease transmission by arthropods peaks around 23–29°C and declines to zero at extreme temperatures below 23°C and above 32°C. At extreme temperatures, disease transmission cannot occur because temperature prohibits the survival, development, and metabolism of the mosquito and the virus12. This supports the shape of the histogram in Fig. 2. As the graph shows, peak DENV transmission occurs at around 26.4°C to 26.6°C, with around 32,000 yearly average cases. The transmission of vector-borne diseases such as dengue fever has a thermal optimum at which the transmission rate is highest, owing to the thermal optima for the survival and reproduction of the mosquitoes. Furthermore, DENV transmission declines sharply at temperatures below the thermal optimum (<26.4°C) and above it (>26.6°C). Moreover, dengue fever cases decrease more drastically at temperatures above the optimum than at temperatures below it. This is because the drop in virus transmission above the thermal optimum is steeper than the drop below it, which makes the trait thermal performance curves cold-skewed and the distribution of dengue fever cases hot-skewed. In Fig. 2, there are fewer temperature ranges with reported dengue cases above 26.6°C than below 26.4°C.



Conclusion As shown in Fig. 2, DENV transmission peaked at 32,767.0 yearly average cases, supporting the hypothesis that there is an evident thermal optimum for the transmission of DENV. One limitation is that there is only one sample each in the temperature intervals (26.5°C, 26.69°C) and (26.7°C, 26.89°C). This lowers statistical reliability, as there could be more variability in the dataset; further analysis could be conducted once more data are collected. Another limitation of this study is that multiple confounding variables with a clear association with DENV transmission were disregarded, so an extension of this study would be to take other confounders into account. In Fig. 1, there are multiple outliers with high numbers of dengue fever cases around 26°C, which may indicate the existence of confounders. Therefore, further research is needed into other confounders such as rainfall and population density. With more variables, a multi-variable regression could be conducted to give deeper insight into the effect of climate on the transmission of DENV.



References
1. "Dengue and Severe Dengue." World Health Organization, www.who.int/news-room/fact-sheets/detail/dengue-and-severe-dengue. Accessed 14 Aug. 2024.
2. "Dengue Fever, Once Confined to the Tropics, Now Threatens the U.S." NBCNews.com, NBCUniversal News Group, 28 May 2024, www.nbcnews.com/health/health-news/denguefever-climate-change-mosquitos-tropical-disease-rcna149366.
3. "Acute Communicable Disease Control." Department of Public Health - Acute Communicable Disease Control, publichealth.lacounty.gov/acd/vector.htm. Accessed 14 Aug. 2024.
4. Carrington, Lauren B., and Cameron P. Simmons. "Human to Mosquito Transmission of Dengue Viruses." Frontiers in Immunology, 3 June 2014, www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2014.00290/full.
5. Delaunay, P., C. Jeannin, F. Schaffner, and P. Marty. "[News on the Presence of the Tiger Mosquito Aedes Albopictus in Metropolitan France]." Archives de Pediatrie: Organe Officiel de la Societe Francaise de Pediatrie, U.S. National Library of Medicine, pubmed.ncbi.nlm.nih.gov/19836679/. Accessed 14 Aug. 2024.
6. Yang, Chao-Fu, et al. "Discriminable Roles of Aedes Aegypti and Aedes Albopictus in Establishment of Dengue Outbreaks in Taiwan." Acta Tropica, Elsevier, 23 Oct. 2013, www.sciencedirect.com/science/article/pii/S0001706X13002908.
7. Thongsripong, Panpim, et al. "Human–Mosquito Contact: A Missing Link in Our Understanding of Mosquito-Borne Disease Transmission Dynamics." Annals of the Entomological Society of America, Oxford University Press, 10 May 2021, academic.oup.com/aesa/article/114/4/397/6273070.
8. Thomson, R. C. Muirhead. "The Reactions of Mosquitoes to Temperature and Humidity." Bulletin of Entomological Research, Cambridge University Press, 10 July 2009, www.cambridge.org/core/journals/bulletin-of-entomologicalresearch/article/abs/reactions-of-mosquitoes-to-temperature-andhumidity/5289E2C1B8C20D27CCCF9D78840E0E50.
9. Van Panhuis, Willem, et al. "Counts of Dengue Reported in Malaysia: 1963-2011." Zenodo, Project Tycho, 3 June 2024, zenodo.org/records/11451659.
10. "World Bank Climate Change Knowledge Portal." Climate Change Knowledge Portal, climateknowledgeportal.worldbank.org/country/malaysia/climate-data-historical. Accessed 14 Aug. 2024.
11. Cator, Lauren J., et al. "More than a Flying Syringe: Using Functional Traits in Vector-Borne Disease Research." bioRxiv, Cold Spring Harbor Laboratory, 1 Jan. 2019, www.biorxiv.org/content/10.1101/501320v3.abstract.
12. Mordecai, Erin A., et al. "Thermal Biology of Mosquito-Borne Disease." Ecology Letters, 2019, onlinelibrary.wiley.com/doi/full/10.1111/ele.13335. Accessed 13 Aug. 2024.



Exploring the Wave Protection Capability of Wave-Block According to the Slope Angle of the Breakwater Surface

Author 1
Full Name: Lee, Sangmin (Last Name, First Name)
School Name: Hansung Science High School

Author 2
Full Name: Choi, Yunho (Last Name, First Name)
School Name: Hansung Science High School

Abstract Due to recent global warming and climate change, sea levels are rising and abnormal weather conditions are occurring, which has increased the importance of breakwaters. However, wave-dissipating blocks used as upright breakwaters cannot be constructed on weak ground and pose a risk of erosion and subsidence if waves exceed their design limits. To address these issues, we explored the wave-blocking ability of wave-dissipating blocks as a function of the slope angle of the breakwater surface by combining sloping breakwaters with wave-dissipating blocks. The wave-dissipating blocks were designed with angles of 30°, 45°, 60°, and 90°, with an upright wall set as the control group. The performance of the breakwaters was compared using wave overtopping, reflection rate, and the velocity change rate of the waves entering and exiting the wave-dissipating blocks. These metrics were chosen because wave overtopping and reflection rate are two critical factors for wave blocking, and the velocity change rate is related to the amplitude underlying the reflection rate. The experimental results showed that the wave-dissipating block with a 45° angle performed best, and that wave overtopping and reflection rate were influenced by the period and height of the incoming waves. This study indicates that existing wave-dissipating blocks can be improved and replaced. However, additional research is needed on the economic feasibility, stability, and noise reduction of breakwaters that combine sloping breakwaters with wave-dissipating blocks.

Keywords Global warming, Climate change, Sea level rise, Abnormal weather, Breakwater, Wave-dissipating block, Upright breakwater, Sloping breakwater, Wave overtopping, Reflection rate, Velocity change rate, Wave height, Period, Economic feasibility, Stability, Noise, Additional research



1. Introduction Breakwaters come in various types, including sloped breakwaters, vertical breakwaters, composite breakwaters, and submerged breakwaters. Sloped breakwaters are typically constructed in areas with weak ground and shallow depths, with tetrapods being a common example. Vertical breakwaters are built on solid ground, with Wave-Blocks as a typical example. Composite breakwaters combine the characteristics of both sloped and vertical breakwaters. In South Korea, most coastal areas use dissipative-type irregular blocks to protect harbors by damping waves and preventing erosion. The tetrapod is a widely used dissipative block made of concrete; it is relatively easy to install, cost-effective, and applied in various locations as a breakwater and coastal-protection structure. The spaces between blocks also provide habitats for marine life, offering multiple benefits. However, recent problems have arisen at sites covered with tetrapods due to the gaps between blocks. Between 2014 and 2016, 23% of fatalities (34 people) occurred at breakwaters and harbors, and between 2018 and 2019 there were 51 accidents involving tetrapods, 8 of them fatal. There are also problems of coastal environmental pollution and reduced usability of waterfront spaces due to accumulated waste. As a solution to these problems, Wave-Blocks, which address the issues of tetrapods, have emerged as an alternative. However, Wave-Blocks, as vertical breakwaters, carry the typical disadvantages of vertical breakwaters: they cannot be installed on weak ground, and if waves exceed the design limits, erosion, subsidence, or scouring of the ground can occur, leading to collapse of the breakwater. To overcome these drawbacks, this study aims to improve Wave-Blocks by integrating them with sloped breakwaters and exploring the wave protection capability as a function of the slope angle of the Wave-Block surface.

2. Theoretical Background 2.1. Breakwaters Breakwaters are constructed to protect coastal areas from tides, currents, waves, and storm surges. They are installed to minimize erosion and safeguard harbors. The design of a breakwater should take into account the characteristics of the waves, including wavelength, wave height, and period, to ensure the appropriate type of breakwater is chosen.

2.2. Types of Breakwaters There are three main types of breakwaters: sloped breakwaters, vertical breakwaters, and composite breakwaters. Sloped breakwaters are typically constructed in areas with weak ground or shallow water depths where wave activity is minimal. They are cost-effective and convenient to build and maintain, but they require a large area and must be built in shallow waters, which can be a limitation. Vertical breakwaters require less space and lower construction cost, but they pose a risk of erosion and subsidence if the ground is weak or if waves exceed the design limits, which can compromise their stability. Composite breakwaters combine the features of both sloped and vertical types, allowing them to be constructed regardless of water depth or wave conditions. They mitigate most disadvantages of the other types, offering faster and more affordable construction than sloped breakwaters, though they are slower and more expensive to build than vertical ones.

468


Figure. 1. Types of Breakwaters (from left to right: Sloping Type, Vertical Type, Composite Type)

2.3. Wave-Blocks (Haepar Block) The Wave-Block is a breakwater designed to redirect wave energy by 180 degrees, causing outgoing waves to collide with subsequent incoming waves and thereby reducing the impact on the structure. By absorbing wave energy internally, it also reduces the overtopping volume. This type of breakwater falls under the category of vertical breakwaters. Compared with the widely used tetrapods, Wave-Blocks reduce overtopping volume roughly tenfold. While tetrapods require large quantities for installation, reducing cost-effectiveness, Wave-Blocks can achieve structural stability when combined with the column-binding method, cutting construction costs by over 20%. Traditional tetrapods have the drawback of gaps between blocks, which can lead to falling accidents and the accumulation of marine debris. Wave-Blocks prevent such accidents and do not allow marine debris to accumulate, avoiding the associated odors and decay. Additionally, as a vertical breakwater, the flat top surface of a Wave-Block installation can be used as a recreational space. However, since Wave-Blocks are a relatively new technology, related research is still lacking. Moreover, as a type of vertical breakwater, they cannot be installed on weak ground and are not effective against waves that exceed the design threshold. Figure. 2. Wave-Blocks (Haepar Block) and Tetrapod

2.4. Wave Action
When waves approach the coast and enter shallow water where the depth is less than half the wavelength, the wavelength shortens and the wave height increases, eventually producing breaking waves. Waves exhibit the properties of reflection, refraction, and diffraction. The diameter of the orbital motion of water particles at the surface equals the wave height, and this diameter halves with each depth increase of 1/9 of the wavelength. When the angle at the wave crest narrows to the limiting value of 120°, or when the wave height exceeds 1/7 of the wavelength, the wave breaks [2].

2.5. Overtopping
Overtopping refers to the phenomenon where seawater flows over structures such as embankments, breakwaters, or seawalls due to high waves or storm surges. The overtopping volume is defined as the



volume of water that overtops the structure per unit width, while the overtopping discharge refers to the volume of water overtopping the structure per unit width per unit time.

2.6. Reflection Coefficient
The reflection coefficient is the ratio of the amplitude of the reflected wave to the amplitude of the incident wave at the boundary between different media; the corresponding ratio of wave energies is the square of this value.

3. Research Objectives and Distinctiveness
3.1. Research Objectives
The primary objective of this study is to explore the wave protection performance of breakwater structures by analyzing the overtopping volume and the composite waves formed by the incident and reflected waves on Wave-Blocks, with the aim of improving and potentially replacing the tetrapods currently installed in South Korea. To this end, three variables expected to influence the wave protection performance of the breakwater structures were identified and tested experimentally.

3.2. Distinctiveness from Previous Studies
Previous studies have analyzed the reflection coefficients and overtopping characteristics of vertical walls, tetrapods, and Wave-Blocks. Other research has focused on improving Wave-Blocks by varying factors such as the number, size, and exit angle of the holes in the blocks. In contrast, this study proposes a novel Wave-Block design that integrates sloped breakwaters with Wave-Blocks. A model was created, and experiments were conducted to measure the overtopping volume. Based on the measured wave heights, the reflection coefficient was calculated. Additionally, the flow of seawater within the Wave-Block pipes was analyzed using ANSYS Fluent to assess the rate of change in velocity, enabling a comparison of wave protection capabilities according to the angle of the breakwater surface.

4. Experimental Design
4.1. Wave Generator Period and Wave Propagation Speed
To determine the wave propagation speed, the relationships between speed, wavelength, period, and angular frequency were used:

$$v = \frac{\lambda}{T}, \qquad \omega = 2\pi f = \frac{2\pi}{T}, \qquad v = \frac{\omega\lambda}{2\pi}$$

($v$: wave speed, $\lambda$: wavelength, $T$: period, $\omega$: angular frequency). In this setup, the angular frequency is determined by the settings of the wave generator motor, so the wave propagation speed can be calculated once the wavelength is known. To determine the wavelength, the wave generator was operated under different conditions and the resulting waves were recorded. The wavelength was measured using the ruler function of the Tracker software. To ensure precision, the wavelength was measured from the recorded footage at least five times, and the average value was used for subsequent calculations.

Table. 1. Relationship Between Wave Generator Motor RPM and Incident Wave Period

Motor Speed | Period of Artificial Wave
35 rpm | 1.7 s
30 rpm | 2.0 s
25 rpm | 2.4 s
20 rpm | 3.0 s
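To make the relationships above concrete, the following sketch computes the propagation speed and angular frequency from a measured wavelength and the motor-set period. It is a minimal illustration using the averaged wavelengths reported later in Tables 5 and 6, not the analysis code used in the study.

```python
import math

def wave_speed(wavelength_m: float, period_s: float) -> float:
    """Propagation speed v = lambda / T of a periodic wave."""
    return wavelength_m / period_s

def angular_frequency(period_s: float) -> float:
    """Angular frequency omega = 2*pi*f = 2*pi / T."""
    return 2 * math.pi / period_s

# Averaged wavelengths measured with Tracker (cf. Tables 5 and 6).
measurements = [(1.7, 0.67), (2.0, 0.81), (2.4, 0.99), (2.4, 1.01), (3.0, 1.35)]
for period, wavelength in measurements:
    v = wave_speed(wavelength, period)
    w = angular_frequency(period)
    print(f"T = {period} s, lambda = {wavelength} m -> v = {v:.3f} m/s, omega = {w:.2f} rad/s")
```

For the 1.7 s period this reproduces the 0.394 m/s value reported in Table 6.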



4.2. Construction of Breakwater Models
The structures used in this experiment are of five types: a vertical wall without holes, and breakwater blocks with slopes of 30°, 45°, 60°, and 90°. The breakwater blocks, each 30 cm in height, were manufactured with different base lengths depending on the slope angle; the vertical wall was made by flipping the 90° model. The production process is as follows.
1. Cut the styrofoam into 15 cm × 15 cm pieces.
2. Use a U-pipe to draw the design on the styrofoam, then cut it with a hot wire cutter.
3. Assemble the cut styrofoam pieces to construct the breakwater structures.
4. For the models with breakwater angles of 45° and 30°, make additional structures to complete the breakwater models.

Figure. 3. Model Design (from left to right: 30°, 45°, 60°, 90° model)

Figure. 4. Model with angles from left to right: 30 degrees, 45 degrees, 60 degrees, 90 degrees

4.3. Measurement of Overtopping
The water depth and wave period were selected with reference to the previous study, as follows.

Table. 2. Overtopping Measurement Conditions

Water Depth | Period
0.21 m | 1.7 s, 2.0 s, 2.4 s
0.23 m | 2.4 s, 3.0 s

The measurement process is as follows; a worked example of converting the collected volumes into an overtopping discharge follows this list.
1. Attach a reservoir behind the breakwater block.
2. Secure the breakwater block in front of the wave generator and wait until the waves calm down.
3. Operate the wave generator at the selected period and measure the overtopping volume for 30 seconds. Repeat this process at least four times and take the average value.
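As a worked example of the definition in Section 2.5, the sketch below converts a volume collected over 30 seconds into an overtopping discharge per unit width. The 0.15 m model width is an assumption for illustration (the flume width is not reported); the volume is the vertical-wall average from Table 7.

```python
def overtopping_discharge(volume_ml: float, width_m: float, duration_s: float) -> float:
    """Overtopping discharge q in m^3 per metre of structure width per second,
    computed from a volume (ml) collected over a measurement window."""
    volume_m3 = volume_ml * 1e-6  # 1 ml = 1e-6 m^3
    return volume_m3 / (width_m * duration_s)

# Average vertical-wall volume at 0.21 m depth and 1.7 s period (Table 7),
# assuming a hypothetical 0.15 m wide model.
q = overtopping_discharge(1320.6, width_m=0.15, duration_s=30.0)
print(f"q = {q:.2e} m^3/(m*s)")  # ~2.9e-4 m^3/(m*s)
```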



Figure. 5. Overtopping Measurement Schematic and Experimental Setup

4.4. Derivation of Wave Energy

Figure. 6. Explanation of Symbols Used for Equation Derivation
$\eta_k(t)$: wave represented by a harmonic function; $a_I$: incident wave height; $a_{R_1}$: height of the wave reflected off the wall of the breakwater block; $a_{R_2}$: height of the wave reflected off the pipe of the breakwater block; $k = 2\pi/\lambda$: wave number; $\omega = 2\pi f$: angular frequency; $\rho$: density of water; $\Phi$: velocity potential of the wave; $V$: volume; $A$: area

Figure. 7. Specifications of the Wave Generated by the Wave-Making Device ($h$: wave height, $\lambda$: wavelength, $u$: axial velocity, $w$: velocity in the $z$-axis direction)

Depending on the motion of the sea surface, the energy held in the water is either potential energy or kinetic energy. First, for a fluid element at position $(x, z)$ in the gravitational force field, the potential energy of the element is $\rho g z\,dz\,dx$. The potential energy of the wave per unit width and unit length is therefore

$$E_p = \frac{1}{\lambda}\int_0^{\lambda}\int_{-h}^{\eta}\rho g z\,dz\,dx - \frac{1}{\lambda}\int_0^{\lambda}\int_{-h}^{0}\rho g z\,dz\,dx = \frac{\rho g}{2\lambda}\int_0^{\lambda}\eta^2\,dx = \frac{\rho g a_I^2}{2\lambda}\int_0^{\lambda}\cos^2(kx-\omega t)\,dx = \frac{1}{4}\rho g a_I^2$$

On the other hand, since the kinetic energy of the same fluid element is $\frac{1}{2}\rho\left(u^2+w^2\right)dz\,dx$, the kinetic energy contained in a unit width and unit length of the wave is

$$E_k = \frac{\rho}{2\lambda}\int_0^{\lambda}\int_{-h}^{\eta}\left(u^2+w^2\right)dz\,dx = \frac{\rho}{2\lambda}\int_0^{\lambda}\int_{-h}^{\eta}\left[\left(\frac{\partial\Phi}{\partial x}\right)^2+\left(\frac{\partial\Phi}{\partial z}\right)^2\right]dz\,dx$$

Substituting the velocity potential into the above equation and taking the upper limit of integration as $0$ instead of $\eta$ (the small-amplitude wave approximation), the following is obtained:

$$E_k = \frac{\rho a^2\omega^2\sinh 2kh}{8k\sinh^2 kh} = \frac{2\rho a^2 gk\tanh kh\,\sinh kh\,\cosh kh}{8k\sinh^2 kh} = \frac{1}{4}\rho g a^2$$

According to the above derivation, the energy per unit width and unit length of a small-amplitude wave is $E = E_p + E_k = \frac{1}{2}\rho g a^2$. For the wave passing through the pipe of the Wave-Block, energy conservation per unit width and unit length gives

$$\frac{1}{2}\rho g a_I^2 = \frac{1}{2}\rho g a_{R_2}^2 + \frac{\rho V g h}{A} \quad\Rightarrow\quad a_I^2 = a_{R_2}^2 + \frac{2Vh}{A}$$
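Rearranging the energy balance above gives the amplitude of the wave returned through the pipe, $a_{R_2} = \sqrt{a_I^2 - 2Vh/A}$. The sketch below evaluates this relation; the volume $V$, lift height $h$, and cross-section $A$ are placeholder values, not measurements from this study.

```python
import math

def pipe_reflected_amplitude(a_i: float, volume: float, height: float, area: float) -> float:
    """Amplitude a_R2 of the wave returned through the Wave-Block pipe,
    from the per-unit-width energy balance a_I^2 = a_R2^2 + 2*V*h/A."""
    value = a_i**2 - 2 * volume * height / area
    if value < 0:
        raise ValueError("energy removed exceeds incident wave energy")
    return math.sqrt(value)

# Placeholder inputs (illustrative only): incident amplitude from Table 10,
# hypothetical overtopped volume V (m^3), lift height h (m), cross-section A (m^2).
a_r2 = pipe_reflected_amplitude(a_i=0.097, volume=1e-4, height=0.05, area=0.01)
print(f"a_R2 = {a_r2:.3f} m")
```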

Figure. 8. Explanation of Symbols Used for Equation Derivation
$\eta_k(t)$: superposed wave (combination of incident and reflected waves); $a_I$: height of the incident wave; $a_{R_1}$: height of the wave reflected off the wall of the breakwater block; $a_{R_2}$: height of the wave reflected off the pipe of the breakwater block; $\varepsilon_I$: phase constant of the incident wave; $\varepsilon_{II}$: phase constant of the wave reflected off the wall; $\varepsilon_{III}$: phase constant of the wave reflected off the pipe; $k = 2\pi/\lambda$: wave number; $k'$: wave number of the wave reflected off the pipe; $x$: position of the wave gauge from an arbitrary reference point; $\omega = 2\pi f$: angular frequency; $\omega'$: angular frequency of the wave reflected off the pipe; $b$: cross-sectional area ratio of the pipe to the breakwater surface

The preceding study represented the superposition of incident and reflected waves with the following wave equation:

$$\eta(t) = a_I\cos(kx-\omega t+\varepsilon_I) + a_R\cos(kx+\omega t+\varepsilon_R) + e_k(t)$$

Figure. 9. Explanation of Symbols Used for Equation Derivation
$\eta(t)$: the wave measured by the wave gauge; $e_k(t)$: measurement error in the wave observed at the $k$-th wave gauge due to nonlinear interactions, signal interference, and noise

In this study, the wave height is measured from video footage using the Tracker program, so the error term $e_k(t)$ in the equation above is neglected. The reflected wave is divided into a component passing through the pipe and a component hitting the wall, so taking this into account the wave equation becomes

$$\eta(t) = a_I\cos(kx-\omega t+\varepsilon_I) + (1-b)\,a_{R_1}\cos(kx+\omega t+\varepsilon_{II}) + b\,a_{R_2}\cos(k'x+\omega' t+\varepsilon_{III})$$

Taking the measuring point as the origin and applying a time translation to eliminate $\varepsilon_{III}$:

$$\eta(t) = a_I\cos(-\omega t+\varepsilon_I) + (1-b)\,a_{R_1}\cos(\omega t+\varepsilon_{II}) + b\,a_{R_2}\cos(\omega' t)$$

When waves cross between different media, the frequency remains constant, so the frequency of the incident wave and of the wave reflected by the pipe are the same ($\omega' = \omega$). Therefore,

$$\eta(t) = a_I\cos(-\omega t+\varepsilon_I) + (1-b)\,a_{R_1}\cos(\omega t+\varepsilon_{II}) + b\,a_{R_2}\cos(\omega t)$$

Expanding with the trigonometric addition formulas,



$$\eta(t) = a_I(\cos\omega t\cos\varepsilon_I + \sin\omega t\sin\varepsilon_I) + (1-b)\,a_{R_1}(\cos\omega t\cos\varepsilon_{II} - \sin\omega t\sin\varepsilon_{II}) + b\,a_{R_2}\cos\omega t$$

Rearranging the equation,

$$\eta(t) = \cos\omega t\left(a_I\cos\varepsilon_I + (1-b)a_{R_1}\cos\varepsilon_{II} + b\,a_{R_2}\right) + \sin\omega t\left(a_I\sin\varepsilon_I - (1-b)a_{R_1}\sin\varepsilon_{II}\right)$$

Substituting

$$A = a_I\cos\varepsilon_I + (1-b)a_{R_1}\cos\varepsilon_{II} + b\,a_{R_2}, \qquad B = a_I\sin\varepsilon_I - (1-b)a_{R_1}\sin\varepsilon_{II}$$

gives

$$\eta(t) = \sqrt{A^2+B^2}\cos(\omega t-\alpha), \qquad \alpha = \tan^{-1}\!\left(\frac{B}{A}\right)$$

Therefore, the amplitude of the composite wave is $\sqrt{A^2+B^2}$ and its period is $\frac{2\pi}{\omega}$. Writing $h$ for the measured height of the composite wave, so that $A^2+B^2 = \frac{h^2}{4}$, and expanding:

$$A^2+B^2 = a_I^2 + (1-b)^2 a_{R_1}^2 + b^2 a_{R_2}^2 + 2b\,a_I a_{R_2}\cos\varepsilon_I + 2b(1-b)\,a_{R_1}a_{R_2}\cos\varepsilon_{II} + 2(1-b)\,a_I a_{R_1}\cos(\varepsilon_I+\varepsilon_{II})$$

Expressed as a quadratic equation in $a_{R_1}$:

$$(1-b)^2 a_{R_1}^2 + 2\left[b(1-b)a_{R_2}\cos\varepsilon_{II} + (1-b)a_I\cos(\varepsilon_I+\varepsilon_{II})\right]a_{R_1} + b^2 a_{R_2}^2 + a_I^2 + 2b\,a_I a_{R_2}\cos\varepsilon_I - \frac{h^2}{4} = 0$$

with

$$\beta = b(1-b)a_{R_2}\cos\varepsilon_{II} + (1-b)\,a_I\cos(\varepsilon_I+\varepsilon_{II}), \qquad \gamma = b^2 a_{R_2}^2 + a_I^2 + 2b\,a_I a_{R_2}\cos\varepsilon_I - \frac{h^2}{4}$$

To obtain a physically meaningful solution, the condition that the equation has exactly one root is imposed: $\frac{D}{4} = \beta^2 - (1-b)^2\gamma = 0$. Solving the quadratic with the quadratic formula,

$$|a_{R_1}| = \left|\frac{-\beta}{(1-b)^2}\right| = \frac{(1-b)\sqrt{|\gamma|}}{(1-b)^2} = \frac{\sqrt{\left|a_I^2 + 2b\,a_I a_{R_2}\cos\varepsilon_I + b^2 a_{R_2}^2 - \frac{h^2}{4}\right|}}{1-b}$$

At this point, $\cos\varepsilon_I$ can be obtained from the ratio of the pipe length to the period of the Wave-Block circuit.

Table. 3. Values of $\cos\varepsilon_I$ for Breakwater Block Models

Water Depth | Period | 30° model | 45° model | 60° model | 90° model
0.21 m | 1.7 s | 0.996 | 0.996 | 0.995 | 0.996
0.21 m | 2.0 s | 0.995 | 0.995 | 0.994 | 0.995
0.21 m | 2.4 s | 0.995 | 0.995 | 0.992 | 0.994
0.23 m | 2.4 s | 0.995 | 0.995 | 0.993 | 0.994
0.23 m | 3.0 s | 0.993 | 0.993 | 0.990 | 0.992

According to the above table, it can be approximated that $\cos\varepsilon_I \simeq 1$. Therefore,

$$|a_{R_1}| = \frac{\sqrt{\left|a_I^2 + 2b\,a_I a_{R_2} + b^2 a_{R_2}^2 - \frac{h^2}{4}\right|}}{1-b}$$

When the composite of the two reflected waves is calculated by the same method, its amplitude is $a_R = \sqrt{a_{R_1}^2 + a_{R_2}^2 + 2a_{R_1}a_{R_2}\cos\varepsilon_{II}}$. The value of $\cos\varepsilon_{II}$ is found as follows.



Table. 4. Values of $\cos\varepsilon_{II}$ According to Measurement Conditions

Water Depth | Period | 30° model | 45° model | 60° model | 90° model
0.21 m | 1.7 s | -0.996 | -0.996 | -0.995 | -0.996
0.21 m | 2.0 s | -0.995 | -0.995 | -0.994 | -0.995
0.21 m | 2.4 s | -0.995 | -0.995 | -0.992 | -0.994
0.23 m | 2.4 s | -0.995 | -0.995 | -0.993 | -0.994
0.23 m | 3.0 s | -0.993 | -0.993 | -0.990 | -0.992

According to the above table, it can be approximated that $\cos\varepsilon_{II} \simeq -1$. Therefore,

$$a_R = \sqrt{a_{R_1}^2 + a_{R_2}^2 - 2a_{R_1}a_{R_2}} = |a_{R_1} - a_{R_2}|$$
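Putting the derivation together, the following sketch recovers the wall-reflected amplitude from the measured composite wave height (using the approximations cos ε_I ≃ 1 and cos ε_II ≃ −1 above) and then forms the reflection coefficient |a_R1 − a_R2| / a_I. The cross-section ratio b is a hypothetical placeholder; the last line reproduces the roughly 33% coefficient of Table 13 directly from the Table 10-12 amplitudes.

```python
import math

def wall_reflected_amplitude(a_i: float, a_r2: float, b: float, h: float) -> float:
    """|a_R1| from the composite wave height h, with cos(eps_I) ~ 1:
    |a_R1| = sqrt(|(a_I + b*a_R2)^2 - h^2/4|) / (1 - b)."""
    return math.sqrt(abs((a_i + b * a_r2) ** 2 - h**2 / 4)) / (1 - b)

def reflection_coefficient(a_r1: float, a_r2: float, a_i: float) -> float:
    """With cos(eps_II) ~ -1 the reflected composite amplitude is |a_R1 - a_R2|."""
    return abs(a_r1 - a_r2) / a_i

# 30-degree model, depth 0.21 m, period 1.7 s (Tables 9-12):
a_i, a_r2, h = 0.097, 0.042, 0.129
a_r1 = wall_reflected_amplitude(a_i, a_r2, b=0.3, h=h)  # b: hypothetical ratio
print(f"|a_R1| (with assumed b) = {a_r1:.3f} m")

# Using the reported amplitudes directly reproduces Table 13 (~33%):
print(f"K_r = {reflection_coefficient(0.074, 0.042, 0.097):.1%}")
```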

4.6. Reflection Coefficient Comparison
Referring to the prior study, the water depth and wave period were selected as described above. The period of the composite wave is [value], which matches the period of the incident wave; thus, by analyzing the wave heights over 130-180 cycles (2 minutes) per experiment, the maximum wave height of the composite wave can be measured reliably. The wave height of the reflected wave can then be determined using the methods described above for the incident and reflected waves. With the amplitudes of the reflected and incident waves obtained in this way, the reflection coefficient is defined as the amplitude of the reflected wave relative to the amplitude of the incident wave, and is used for comparison.
(1) Secure the Wave-Block in the wave flume and wait until the water calms down.
(2) Operate the wave flume at the selected period and record the wave motion for 2 minutes.
(3) Using the Tracker program, measure from the recorded video the average maximum amplitude of the incident wave and of the composite wave formed by the incident and reflected waves.
(4) Determine the amplitude of the reflected wave using the composite-wave amplitude method explained above.

4.7. Numerical Analysis Using ANSYS Fluent
(1) 3D modeling: Use the ANSYS Designer module within ANSYS Fluent to create a 3D model of the initially designed Wave-Block pipe.
(2) Domain partitioning: Partition the pipe model as follows: designate the bottom openings where water enters as inlet; label the side surfaces of the inlet inlet_wall; label the curved section of the pipe bend_curve_wall; label the surfaces where water exits outlet; and label the side surfaces of the outlet outlet_wall.
(3) Meshing: Create a mesh for the Wave-Block pipe using the mesh tool. Set the mesh size to 0.01 for the straight sections (inlet, inlet_wall, outlet, outlet_wall) and 0.001 for the curved section (bend_curve_wall) to improve accuracy in the curved region.
(4) Setup and simulation: Configure the fluid as water in the setup. At the inlet, input the flow velocity determined with the Tracker program. Set the number of iterations to 1000 and start the calculation.
(5) Result analysis: After the calculations are complete, compare the velocities at the inlet and outlet in the results section.
(6) Visualization: Plot the velocity of the water within the pipe as vectors and create a cumulative force graph based on the position inside the pipe.



Figure. 10. Process for designing a Wave-Block pipe using the ANSYS Designer module, including the steps for dividing the pipe into different parts

Figure. 11. Mesh setup and the setup screen for numerical analysis of the Wave-Block pipe

Table. 5. Wavelengths According to Measurement Conditions

Water Depth | Period | 1st | 2nd | 3rd | 4th | 5th | Average
0.21 m | 1.7 s | 0.66 m | 0.63 m | 0.69 m | 0.68 m | 0.69 m | 0.67 m
0.21 m | 2.0 s | 0.83 m | 0.81 m | 0.80 m | 0.82 m | 0.79 m | 0.81 m
0.21 m | 2.4 s | 0.96 m | 1.02 m | 0.99 m | 1.00 m | 0.98 m | 0.99 m
0.23 m | 2.4 s | 1.00 m | 1.04 m | 1.02 m | 0.98 m | 1.01 m | 1.01 m
0.23 m | 3.0 s | 1.33 m | 1.36 m | 1.34 m | 1.35 m | 1.37 m | 1.35 m

Table. 6. Wavelengths and Wave Speeds According to Water Depth and Period

Water Depth | Period | Wavelength | Velocity
0.21 m | 1.7 s | 0.67 m | 0.394 m/s
0.21 m | 2.0 s | 0.81 m | 0.405 m/s
0.21 m | 2.4 s | 0.99 m | 0.413 m/s
0.23 m | 2.4 s | 1.01 m | 0.421 m/s
0.23 m | 3.0 s | 1.35 m | 0.450 m/s

5.2. Wave Overtopping



At a water depth of 0.21 m, no overtopping was observed for the Wave-Block models; overtopping occurred only for the vertical wall.

Table. 7. Experimental Results of Overtopping for Vertical Walls at a Water Depth of 0.21 m

Period | 1st | 2nd | 3rd | 4th | Average
1.7 s | 1345.5 ml | 1356.7 ml | 1349.6 ml | 1348.7 ml | 1320.6 ml
2.0 s | 1178.9 ml | 1180.4 ml | 1160.7 ml | 1175.4 ml | 1173.9 ml
2.4 s | 976.7 ml | 946.5 ml | 954.6 ml | 964.5 ml | 960.6 ml

Table. 8. Experimental Results of Wave Overtopping at a Water Depth of 0.23 m by Period

Period | Model | 1st | 2nd | 3rd | 4th | Average
2.4 s | 30° model | 1437.5 ml | 1475.5 ml | 1461.8 ml | 1450.8 ml | 1456.4 ml
2.4 s | 45° model | 950.4 ml | 947.5 ml | 950.9 ml | 950.7 ml | 949.9 ml
2.4 s | 60° model | 1315.3 ml | 1320.4 ml | 1320.6 ml | 1320.5 ml | 1319.2 ml
2.4 s | 90° model | 2248.7 ml | 2252.4 ml | 2259.3 ml | 2250.6 ml | 2252.8 ml
2.4 s | Straight wall | 2602.7 ml | 2610.8 ml | 2626.5 ml | 2607.9 ml | 2612.0 ml
3.0 s | 30° model | 1253.2 ml | 1256.7 ml | 1254.4 ml | 1257.2 ml | 1255.4 ml
3.0 s | 45° model | 760.7 ml | 759.2 ml | 750.4 ml | 758.8 ml | 757.3 ml
3.0 s | 60° model | 1086.7 ml | 1088.9 ml | 1086.6 ml | 1085.8 ml | 1087.0 ml
3.0 s | 90° model | 2067.8 ml | 2137.2 ml | 2133.4 ml | 2040.3 ml | 2044.7 ml
3.0 s | Straight wall | 2442.4 ml | 2441.3 ml | 2438.3 ml | 2431.5 ml | 2438.4 ml

Figure. 12. Experimental graph of overtopping volume (ml) according to water depth and period (series: 30°, 45°, 60°, 90° models and straight wall)

5.3. Reflection Coefficient

Table. 9. Average Experimental Values of Maximum Wave Heights of the Composite Waves of Incident and Reflected Waves

Water Depth | Period | 30° model | 45° model | 60° model | 90° model
0.21 m | 1.7 s | 0.129 m | 0.104 m | 0.135 m | 0.173 m
0.21 m | 2.0 s | 0.136 m | 0.093 m | 0.121 m | 0.995 m
0.21 m | 2.4 s | 0.134 m | 0.093 m | 0.135 m | 0.994 m
0.23 m | 2.4 s | 0.154 m | 0.134 m | 0.139 m | 0.994 m
0.23 m | 3.0 s | 0.115 m | 0.103 m | 0.110 m | 0.992 m

Table. 10. Average Experimental Values of Maximum Incident Wave Heights According to Conditions

Water Depth | Period | Incident Wave Height
0.21 m | 1.7 s | 0.097 m
0.21 m | 2.0 s | 0.088 m
0.21 m | 2.4 s | 0.084 m
0.23 m | 2.4 s | 0.112 m
0.23 m | 3.0 s | 0.100 m

Table. 11. Average Amplitudes of Reflected Waves ($a_{R_1}$) Generated by the Wave-Block Walls According to Model

Water Depth | Period | 30° model | 45° model | 60° model | 90° model
0.21 m | 1.7 s | 0.074 m | 0.063 m | 0.076 m | 0.094 m
0.21 m | 2.0 s | 0.075 m | 0.057 m | 0.068 m | 0.080 m
0.21 m | 2.4 s | 0.073 m | 0.056 m | 0.074 m | 0.083 m
0.23 m | 2.4 s | 0.088 m | 0.079 m | 0.081 m | 0.108 m
0.23 m | 3.0 s | 0.069 m | 0.063 m | 0.066 m | 0.086 m

Table. 12. Average Amplitudes of Reflected Waves ($a_{R_2}$) Generated by the Wave-Block Pipes According to Model and Water Depth

Water Depth | Period | 30° model | 45° model | 60° model | 90° model
0.21 m | 1.7 s | 0.042 m | 0.041 m | 0.039 m | 0.041 m
0.21 m | 2.0 s | 0.037 m | 0.036 m | 0.033 m | 0.035 m
0.21 m | 2.4 s | 0.035 m | 0.033 m | 0.030 m | 0.033 m
0.23 m | 2.4 s | 0.051 m | 0.050 m | 0.048 m | 0.049 m
0.23 m | 3.0 s | 0.044 m | 0.043 m | 0.041 m | 0.043 m

Table. 13. Reflection Coefficient According to Period and Water Depth

Water Depth | Period | 30° model | 45° model | 60° model | 90° model | Straight wall
0.21 m | 1.7 s | 33.0% | 23.0% | 38.5% | 54.3% | 81.0%
0.21 m | 2.0 s | 39.6% | 24.6% | 40.1% | 52.0% | 76.0%
0.21 m | 2.4 s | 45.2% | 27.6% | 48.6% | 59.2% | 85.0%
0.23 m | 2.4 s | 33.1% | 26.1% | 29.6% | 53.1% | 84.0%
0.23 m | 3.0 s | 24.7% | 20.6% | 25.2% | 42.6% | 82.0%

Figure. 13. Reflection coefficient according to period and water depth (unit: %; series: 30°, 45°, 60°, 90° models and straight wall)

5.4. Numerical Analysis Using ANSYS Fluent

Table. 14. Incoming Velocity, Outcoming Velocity, Change in Velocity, and Rate of Change in Velocity According to Conditions

Model | Depth | Incoming Velocity | Outcoming Velocity | Change in Velocity | Rate of Change in Velocity
30° model | 0.21 m | 0.394 m/s | 0.308 m/s | 0.087 m/s | 22.0%
30° model | 0.21 m | 0.405 m/s | 0.323 m/s | 0.082 m/s | 20.2%
30° model | 0.21 m | 0.413 m/s | 0.334 m/s | 0.078 m/s | 18.9%
30° model | 0.23 m | 0.421 m/s | 0.346 m/s | 0.075 m/s | 17.9%
30° model | 0.23 m | 0.450 m/s | 0.376 m/s | 0.074 m/s | 16.5%
45° model | 0.21 m | 0.394 m/s | 0.280 m/s | 0.114 m/s | 29.0%
45° model | 0.21 m | 0.405 m/s | 0.299 m/s | 0.114 m/s | 27.6%
45° model | 0.21 m | 0.413 m/s | 0.334 m/s | 0.079 m/s | 19.1%
45° model | 0.23 m | 0.421 m/s | 0.310 m/s | 0.111 m/s | 26.4%
45° model | 0.23 m | 0.450 m/s | 0.339 m/s | 0.111 m/s | 24.7%
60° model | 0.21 m | 0.394 m/s | 0.309 m/s | 0.085 m/s | 21.5%
60° model | 0.21 m | 0.405 m/s | 0.320 m/s | 0.085 m/s | 21.0%
60° model | 0.21 m | 0.413 m/s | 0.334 m/s | 0.079 m/s | 19.1%
60° model | 0.23 m | 0.421 m/s | 0.320 m/s | 0.085 m/s | 21.0%
60° model | 0.23 m | 0.450 m/s | 0.380 m/s | 0.070 m/s | 15.6%
90° model | 0.21 m | 0.394 m/s | 0.339 m/s | 0.055 m/s | 14.0%
90° model | 0.21 m | 0.405 m/s | 0.359 m/s | 0.053 m/s | 12.9%
90° model | 0.21 m | 0.413 m/s | 0.351 m/s | 0.054 m/s | 13.3%
90° model | 0.23 m | 0.421 m/s | 0.371 m/s | 0.050 m/s | 11.8%
90° model | 0.23 m | 0.450 m/s | 0.403 m/s | 0.047 m/s | 10.5%

Figure. 14. Speed change rate according to period (unit: %; series: 30°, 45°, 60°, 90° models)

Table. 15. Magnitude of Received Cumulative Force at Different Positions for a 30° Wave-Block at a Water Depth of 0.21 m

Location | Period 1.7 s | Period 2.0 s | Period 2.4 s
0.025 m | 0.206 N | 0.310 N | 0.421 N
0.049 m | 0.378 N | 0.494 N | 0.617 N
0.074 m | 0.471 N | 0.564 N | 0.668 N
0.099 m | 0.502 N | 0.603 N | 0.698 N
0.123 m | 0.512 N | 0.599 N | 0.688 N
0.148 m | 0.484 N | 0.575 N | 0.653 N
0.173 m | 0.414 N | 0.512 N | 0.603 N
0.197 m | 0.444 N | 0.532 N | 0.609 N
0.222 m | 0.456 N | 0.554 N | 0.613 N
0.247 m | 0.432 N | 0.506 N | 0.592 N

Figure. 15. Magnitude of received cumulative force (N) at different positions (m) for a 30° Wave-Block at a water depth of 0.21 m (series: 1.7 s, 2.0 s, 2.4 s)

Table. 16. Magnitude of Received Cumulative Force at Different Positions for a 30° Wave-Block at a Water Depth of 0.23 m

Location | Period 2.4 s | Period 3.0 s
0.025 m | 0.432 N | 0.546 N
0.049 m | 0.622 N | 0.718 N
0.074 m | 0.744 N | 0.829 N
0.099 m | 0.753 N | 0.822 N
0.123 m | 0.723 N | 0.787 N
0.148 m | 0.685 N | 0.766 N
0.173 m | 0.704 N | 0.768 N
0.197 m | 0.733 N | 0.794 N
0.222 m | 0.757 N | 0.818 N
0.247 m | 0.737 N | 0.790 N

Figure. 16. Magnitude of received cumulative force (N) at different positions (m) for a 30° Wave-Block at a water depth of 0.23 m (series: 2.4 s, 3.0 s)

Table. 17. Magnitude of Received Cumulative Force at Different Positions for a 45° Wave-Block at a Water Depth of 0.21 m

Location | Period 1.7 s | Period 2.0 s | Period 2.4 s
0.020 m | 0.202 N | 0.292 N | 0.362 N
0.041 m | 0.378 N | 0.463 N | 0.538 N
0.061 m | 0.458 N | 0.532 N | 0.603 N
0.082 m | 0.497 N | 0.572 N | 0.625 N
0.102 m | 0.525 N | 0.595 N | 0.663 N
0.123 m | 0.514 N | 0.589 N | 0.653 N
0.143 m | 0.523 N | 0.605 N | 0.681 N
0.164 m | 0.537 N | 0.631 N | 0.718 N
0.184 m | 0.524 N | 0.625 N | 0.725 N
0.205 m | 0.518 N | 0.604 N | 0.704 N

Figure. 17. Magnitude of received cumulative force (N) at different positions (m) for a 45° Wave-Block at a water depth of 0.21 m (series: 1.7 s, 2.0 s, 2.4 s)

Table. 18. Magnitude of Received Cumulative Force at Different Positions for a 45° Wave-Block at a Water Depth of 0.23 m

Location | Period 2.4 s | Period 3.0 s
0.020 m | 0.382 N | 0.437 N
0.041 m | 0.545 N | 0.606 N
0.061 m | 0.623 N | 0.693 N
0.082 m | 0.672 N | 0.739 N
0.102 m | 0.683 N | 0.752 N
0.123 m | 0.673 N | 0.741 N
0.143 m | 0.677 N | 0.744 N
0.164 m | 0.704 N | 0.767 N
0.184 m | 0.696 N | 0.756 N
0.205 m | 0.663 N | 0.725 N

Figure. 18. Magnitude of received cumulative force (N) at different positions (m) for a 45° Wave-Block at a water depth of 0.23 m (series: 2.4 s, 3.0 s)

Table. 19. Magnitude of Received Cumulative Force at Different Positions for a 60° Wave-Block at a Water Depth of 0.21 m

Location | Period 1.7 s | Period 2.0 s | Period 2.4 s
0.013 m | 0.310 N | 0.423 N | 0.523 N
0.026 m | 0.521 N | 0.654 N | 0.757 N
0.039 m | 0.633 N | 0.735 N | 0.854 N
0.052 m | 0.804 N | 0.939 N | 1.058 N
0.065 m | 0.876 N | 0.995 N | 1.131 N
0.078 m | 0.906 N | 1.009 N | 1.104 N
0.091 m | 0.785 N | 0.886 N | 1.005 N
0.104 m | 0.744 N | 0.823 N | 0.994 N
0.117 m | 0.764 N | 0.863 N | 0.973 N
0.130 m | 0.743 N | 0.845 N | 0.953 N

Figure. 19. Magnitude of received cumulative force (N) at different positions (m) for a 60° Wave-Block at a water depth of 0.21 m (series: 1.7 s, 2.0 s, 2.4 s)

Table. 20. Magnitude of Received Cumulative Force at Different Positions for a 60° Wave-Block at a Water Depth of 0.23 m

Location | Period 2.4 s | Period 3.0 s
0.013 m | 0.464 N | 0.593 N
0.026 m | 0.550 N | 0.735 N
0.039 m | 0.796 N | 0.960 N
0.052 m | 0.996 N | 1.186 N
0.065 m | 1.117 N | 1.336 N
0.078 m | 1.049 N | 1.289 N
0.091 m | 0.909 N | 1.175 N
0.104 m | 0.833 N | 1.095 N
0.117 m | 0.822 N | 1.042 N
0.130 m | 0.821 N | 1.021 N

Figure. 20. Magnitude of received cumulative force (N) at different positions (m) for a 60° Wave-Block at a water depth of 0.23 m (series: 2.4 s, 3.0 s)

Table. 21. Magnitude of Received Cumulative Force at Different Positions for a 90° Wave-Block at a Water Depth of 0.21 m

Location | Period 1.7 s | Period 2.0 s | Period 2.4 s
0.012 m | 0.487 N | 0.605 N | 0.714 N
0.025 m | 0.830 N | 0.924 N | 1.092 N
0.037 m | 0.995 N | 1.123 N | 1.282 N
0.050 m | 0.969 N | 1.103 N | 1.256 N
0.062 m | 0.875 N | 1.022 N | 1.133 N
0.075 m | 0.802 N | 0.920 N | 1.056 N
0.087 m | 0.770 N | 0.871 N | 0.966 N
0.100 m | 0.741 N | 0.836 N | 0.933 N
0.112 m | 0.721 N | 0.817 N | 0.903 N
0.125 m | 0.713 N | 0.801 N | 0.884 N

Figure. 21. Magnitude of received cumulative force (N) at different positions (m) for a 90° Wave-Block at a water depth of 0.21 m (series: 1.7 s, 2.0 s, 2.4 s)

Table. 22. Magnitude of Received Cumulative Force at Different Positions for a 90° Wave-Block at a Water Depth of 0.23 m

Location | Period 2.4 s | Period 3.0 s
0.012 m | 0.741 N | 0.851 N
0.025 m | 1.204 N | 1.321 N
0.037 m | 1.376 N | 1.476 N
0.050 m | 1.356 N | 1.454 N
0.062 m | 1.162 N | 1.286 N
0.075 m | 0.963 N | 1.105 N
0.087 m | 0.927 N | 1.064 N
0.100 m | 0.924 N | 1.054 N
0.112 m | 0.893 N | 0.996 N
0.125 m | 0.884 N | 0.988 N

Figure. 22. Magnitude of received cumulative force (N) at different positions (m) for a 90° Wave-Block at a water depth of 0.23 m (series: 2.4 s, 3.0 s)

6. Experimental Results Analysis
6.1. Incident Wave Speed
As the period decreased, the speed of the incident wave also decreased, and the speed was further reduced at the shallower water depth. The water depth was observed to lie between 1/20 and 1/2 of the wavelength. Consequently, the incident wave exhibited the characteristics of a progressive wave in intermediate water, traveling faster at the greater water depth for the same period.

6.2. Overtopping Volume
At a water depth of 0.21 m, the overtopping volume increased as the period of the incident wave decreased; that is, as the speed of the incident wave decreased, the overtopping volume increased. This is attributed to increased wave superposition in front of the model at shorter periods. The same trend was observed at a water depth of 0.23 m, where shorter periods produced larger overtopping volumes. Additionally, for periods of 2.4 s and 3.0 s, the overtopping volume increased in the order of the 45°, 60°, 30°, and 90° models and the vertical wall. The lower overtopping at 60° is due to the waves breaking as they climbed the breakwater face. The 45° configuration, with its gentler slope, allowed the waves to break most easily and showed the best breakwater performance, whereas at 30° the waves overflowed before breaking. The length of the U-pipe also affected the overtopping volume, with longer U-pipes reducing it.

6.3. Reflection Coefficient
At a water depth of 0.21 m, the reflection coefficient increased in the order of the 45°, 30°, 60°, and 90° models and the vertical wall, regardless of the period. At a water depth of 0.23 m with a period of 2.4 s, it increased in the order of 45°, 60°, 30°, 90°, and the vertical wall; with a period of 3.0 s, in the order of 45°, 30°, 60°, and 90°. The higher reflection coefficients of the 30° and 60° Wave-Blocks at the 2.4 s period are attributed to the blocks not fully developing their wave-returning effect, so that part of the incoming water was reflected directly, much like an ordinary reflected wave; hence, longer periods resulted in higher reflection coefficients. For the 60° configuration, the short U-pipe caused water to remain in the inlet holes, leading to higher reflection. The 45° configuration, a compromise between 30° and 60°, achieved the best wave-dissipating performance.

6.4. Numerical Analysis Using ANSYS Fluent



In the numerical analysis at a water depth of 0.21 m, the rate of velocity change decreased as the period, and hence the inflow speed, increased. The rate of velocity change increased across the breakwater angles in the order of 90°, 60°, 30°, and 45°, with similar rates for 60° and 30°. At a water depth of 0.23 m, the rate followed the same order, again with similar rates for 60° and 30°. The magnitude of the received force varied with position along the U-pipe: it was highest before and after the curve and comparatively lower within the curved region. For the 30° configuration, the force was higher at 0.099 m (before the curve) and 0.222 m (after the curve), and lower in the curve region (0.123 m). For the 45° configuration, the force was relatively larger at 0.102 m (before the curve) and 0.184 m (after the curve), with a smaller force in the curve region (0.123 m). For the 60° configuration, the force peaked around 0.065 m, with a lower force in the intermediate region, and for the 90° configuration the force varied across the sections. The force magnitude increased with higher speeds and was smaller for smaller breakwater angles and larger for steeper ones.

7. Conclusion and Recommendations
7.1. Conclusion
As water depth increases, the overtopping volume also increases, and the overtopping volume increases as the wave period decreases. The overtopping volume was substantially lower for the Wave-Blocks than for the vertical wall. Regardless of the type of structure, larger wave heights and shorter periods led to increased overtopping volumes. A high reflection coefficient indicates that the structure does not effectively absorb or dissipate incoming waves but reflects them, so higher reflection coefficients correspond to poorer breakwater performance. Generally, the reflection coefficient decreased as the wave period shortened. The reflection coefficient was highest for the vertical wall, followed by the 90° configuration and then the 60° and 30° configurations, which showed similar values; the 45° Wave-Blocks had the lowest reflection coefficient and therefore performed best in terms of reflection. Regarding overtopping, performance was best in the order of the 45°, 60°, 30°, and 90° models and the vertical wall, with the 45° Wave-Blocks showing the lowest overtopping volumes; thus the 45° Wave-Blocks also have the best breakwater performance in terms of overtopping. The rate of change in velocity was compared using the head loss, which includes frictional and various minor losses. For a straight pipe, the head loss is

$$h_f = f\,\frac{L}{D}\,\frac{v^2}{2g}$$

where $f$ is the friction factor, $L$ the length of the pipe, $D$ the diameter of the pipe, $g$ the acceleration due to gravity, and $v$ the flow velocity. For a curved pipe,

$$h_b = f_b\,\frac{v^2}{2g}$$

where $f_b$ is the loss coefficient of the bend. When the head losses are compared with these formulas, the head loss increased in the order of 90°, 60°, 30°, and 45°, regardless of the wave period. Greater head loss reduces the energy of the reflected wave traveling along the U-shaped pipe, which lowers the reflection coefficient; therefore, the 45° configuration exhibited the best breakwater efficiency. The force magnitude at each position was highest at the bend of the pipe (bend_curve_wall), so to increase the lifespan of Wave-Blocks in breakwaters, shock-absorbing materials could be added or installed in these regions.
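A minimal sketch of the head-loss comparison described above, using the Darcy-Weisbach form for the straight runs of the U-pipe and a bend-loss coefficient for the curve. The friction factor, lengths, diameter, and bend coefficient are assumed values for illustration; only the functional form comes from the text.

```python
G = 9.81  # gravitational acceleration, m/s^2

def straight_pipe_head_loss(f: float, length: float, diameter: float, v: float) -> float:
    """Darcy-Weisbach friction loss: h_f = f * (L/D) * v^2 / (2g)."""
    return f * (length / diameter) * v**2 / (2 * G)

def bend_head_loss(k_bend: float, v: float) -> float:
    """Minor loss at a bend: h_b = K * v^2 / (2g)."""
    return k_bend * v**2 / (2 * G)

# Hypothetical U-pipe parameters; v is the 0.21 m / 1.7 s inlet velocity (Table 14).
v = 0.394
h_total = (straight_pipe_head_loss(f=0.02, length=0.25, diameter=0.05, v=v)
           + bend_head_loss(k_bend=1.2, v=v))
print(f"total head loss ~ {h_total:.4f} m")
```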

7.2. Recommendations
Firstly, further research is needed on breakwaters that combine sloped structures with Wave-Blocks, focusing on aspects such as economic feasibility, stability, and noise reduction, as existing studies in these areas are insufficient. Secondly, although the Tracker program was used in this study to measure reflection coefficients, employing a wave gauge together with methods such as the FFT (Fast Fourier Transform) or the three-point method to separate and measure reflected waves could yield more accurate experimental results. Thirdly, while this study used styrofoam and glue guns to construct the breakwater models, more precise methods such as 3D printing or concrete could reduce experimental error and improve the accuracy of the results.

References
[1] 박우선, 오영민, 전인식, 서경덕, 이달수, "Separation of Incident and Reflected Waves by the Least Squares Method," Proceedings of the 1992 Annual Conference of the Korean Society of Coastal and Ocean Engineers, 121-125 (1992).
[2] 김인철, 박기철, "An Experimental Study on the Hydraulic Performance of Wave-Block Caisson Breakwaters," Journal of Ocean Engineering and Technology, Vol. 33, No. 1, 61-67 (2019).
[3] 황종길, 우종협, 조용식, "Reflection of Irregular Waves by Rectangular Submerged Breakwaters," Journal of the Korea Water Resources Association, Vol. 37, No. 11, 949-958 (2004).
[4] 이종인, 함장호, 안춘성, 조지훈, "Analysis of the Reflection Characteristics of Vertical Perforated Caisson Breakwaters," Korea Institute of Civil Engineering and Building Technology, https://scienceon.kisti.re.kr/srch/selectPORSrchReport.do?cn=TRKO201800035283 (2000).
[5] 장창환, 고철승, "Technical Trends in Coastal Structures: Focusing on Wave-Dissipating Blocks," Water for Future: Journal of the Korea Water Resources Association, Vol. 51, No. 8, 42-47 (2018).
[6] 이원, 박진호, 조용식, "Overtopping Characteristics of Rubble-Mound and Wave-Dissipating Block Sloped Revetments," Proceedings of the 2008 Annual Conference of the Korean Society of Hazard Mitigation, 443-445 (2008).
[7] 서승남, 오영민, 김창일, "Development of Optimal Technology for Sloping Breakwaters," Korea Ocean Research and Development Institute, https://scienceon.kisti.re.kr/srch/selectPORSrchReport.do?cn=TRKO201400020067 (2003).
[8] 김상기, 이재용, 김인철, 이창준, "Final Report on the Development of Wave-Blocks for Vertical Breakwaters Using Heavy Concrete of 2.7 t/m³ or More and Block-Binding Technology to Replace Tetrapod Sloping Breakwaters," https://policy.nl.go.kr/search/searchDetail.do?rec_key=UH1_00000127642424
[9] Wave-Block construction method, http://www.yujoo.co.kr/bbs/content.php?co_id=0302
[10] Korea Hydrographic and Oceanographic Agency, "Waves and Reflected Waves," https://www.khoa.go.kr/khoa/pgmctrl/selectDictionaryList.do?searchCondition=%ED%8C%8C



Exploring the Temporal Dynamics of Honeybee Acoustic Influence on Nectar Production in Lavandula angustifolia

Author 1
Full Name (Last Name, First Name): Lee, Sung Joo
School Name: Asia Pacific International School

Abstract
Pollinators play a significant role in promoting biodiversity and agricultural production; among them, the most prominent pollinating group is bees. Through coevolution with pollinators, angiosperms have evolved mechanisms to attract pollinators for greater reproductive efficacy, one such example being flexible nectar production. Experimental evidence has suggested that pollinator sounds boost nectar production in certain species of flowers (Veits). In this research, we experimentally assessed the correlation between the sugar concentration of lavender (Lavandula angustifolia) nectar and the sound of bees. Two groups of lavenders were prepared: one exposed to prerecorded honeybee wing sounds and a control. Measurements of nectar sugar concentration showed that sugar levels in the experimental group were higher by approximately 10 percent. The results suggest that honeybee sounds boost sugar secretion through nectar in lavenders.

Keywords
Pollinators, Biodiversity, Sustainable Agriculture, Honeybee, Sound Detection



I. INTRODUCTION
The relationship between plants and pollinators is recognized as a crucial factor in maintaining biodiversity and promoting species richness within ecosystems (Dodd et al.). Pollinators, particularly bees, are essential for the reproductive success of many plant species, which in turn support a wide array of other species that depend on these plants for food and habitat. Without biological pollination, most plants would be unable to reproduce effectively, leading to a cascade of negative effects on species that rely on pollen, seeds, or nectar as their primary food sources (Kearns). The decline in these plant species could disrupt entire ecosystems, diminishing biodiversity and weakening the ecological networks that sustain life. Additionally, pollination provides vital ecosystem services to the agricultural industry, with the reproduction of most crops dependent on animal pollinators (Westercamp). The economic implications are significant, as it is estimated that 87 of the world's major food crops and over one-third of global crop production rely on animal pollination (Klein). The loss of pollinators could therefore have profound consequences not only for food security but also for the livelihoods of millions of people who depend on agriculture. Among various pollinators, bees are acknowledged as playing an especially dominant role in pollination, making them indispensable to both natural ecosystems and human agriculture (Allen-Wardell et al.). However, in recent years, there has been a dramatic decline in bee populations, a phenomenon first identified in 2006 and termed Colony Collapse Disorder (CCD). This disorder has led to a consistent decline in bee populations, with average losses of around 30% annually in affected regions (Vanengelsdorp; Steinhauer). The causes of CCD are complex and multifaceted, with research suggesting that no single factor is responsible. Instead, it is hypothesized that various environmental stressors, including pathogens, parasites, and poor nutrition, interact with anthropogenic influences to weaken bee colonies, making them more susceptible to collapse. Additional factors contributing to the decline in bee populations include anthropogenic influences such as warming from urbanization, the introduction of non-native species, and the widespread use of pesticides, all of which have been identified as significant risk factors to healthy bee populations (Hamblin; Stout; Abay). Urbanization and habitat fragmentation reduce the availability of foraging resources and nesting sites, while non-native species can outcompete native bees for resources or introduce new diseases. Pesticides, particularly neonicotinoids, have been shown to impair bees' navigation, foraging behavior, and immune systems, further exacerbating the decline. Consequently, the preservation of bees has become a critical issue in restoring ecological balance. The decline in bee populations is not only a concern for biodiversity but also for the resilience of ecosystems and the stability of food production systems. To maintain healthy bee populations, it is essential to ensure that bees have access to a balanced diet rich in essential nutrients such as carbohydrates, amino acids, lipids, and vitamins, with honey being a particularly prominent natural source of these nutrients. Nectar is typically produced by floral glands known as nectaries, with the sugar concentration of nectar produced by a single flower ranging from 5% to 75% depending on circumstances (Huang).
High sugar concentrations make flowers more attractive to pollinators, thereby increasing pollination potential. However, high sugar levels demand more energy from plants (Southwick), which can be a limiting factor, especially in environments where resources are scarce. As a result, plants have evolved flexible nectar production mechanisms that allow them to adjust their nectar output in response to external factors like pollinator activity (Veits). This adaptability enables plants to maximize their reproductive success while balancing their energy expenditures. Given the crucial role of bees in pollination and the threats they face, it is suggested that strategies be developed to harness these plant mechanisms and explore alternatives to bees that are less susceptible to climate change. These strategies could include the cultivation of plant species that produce high-quality nectar even under adverse conditions, as well as the development of agricultural practices that support pollinator diversity and resilience. By doing so, we can help mitigate the impacts of pollinator decline and ensure the continued provision of essential



ecosystem services.

II. KEYWORDS

1) Pollinators

1. Definition:
a. Pollinators are organisms that play a crucial role in transferring pollen from the male structures of flowers (anthers) to the female structures (stigmas). This group primarily includes bees, butterflies, birds, and bats. Pollinators are vital for ecosystems and agriculture, significantly contributing to the maintenance of biodiversity and the enhancement of crop production.

2. Contribution to Biodiversity:
a. Plant Reproduction: Many plant species rely on cross-pollination rather than self-pollination to reproduce. Cross-pollination increases genetic diversity, allowing plant populations to adapt better to environmental changes. This process helps maintain the diversity of plant species within ecosystems.
b. Enhancement of Biodiversity: Pollinators assist in the reproduction of specific plants, enabling these plants to thrive and spread. As a result, a diverse range of plant species is maintained, providing habitats and food for various animal species. This contributes to the overall maintenance and enhancement of biodiversity.
c. Ecosystem Stability: High biodiversity strengthens ecosystem stability. The presence of diverse plant species increases resistance to pests and diseases, ensuring that if one species declines, others can fill its ecological niche. This resilience enhances the ecosystem's recovery ability and ensures the continuity of ecosystem services.

3. Enhancement of Agricultural Production:
a. Increased Crop Yields: Pollinators are essential for the pollination of many crops. For example, fruits and vegetables such as apples, strawberries, tomatoes, and almonds produce higher-quality and more abundant yields when pollinated by insects. This significantly boosts agricultural productivity and increases farmers' incomes.
b. Improved Crop Quality: Crops pollinated by pollinators typically exhibit superior size, shape, and taste. Additionally, the germination rate of seeds and the growth rate of plants are often improved. This results in crops with higher market value.
c. Economic Benefits: The economic value of crops that depend on pollinators is considerable. Many crops rely on this natural pollination process, leading to increased agricultural output and economic gains.
d. Sustainable Agriculture: Pollinators reduce the reliance on chemical pollination methods by supporting natural pollination processes. This decreases the use of chemicals in agriculture, protects soil and water quality, and promotes environmentally friendly agricultural practices.

4. Threats and Responses:
a. Threats to Pollinators: Habitat destruction, pesticide use, climate change, and diseases pose significant threats to pollinators' survival. The decline in pollinator populations can lead to reduced biodiversity and lower agricultural productivity.
b. Conservation Efforts: To protect pollinators, it is essential to restore habitats, reduce the use of chemical pesticides, and address climate change. Additionally, adopting pollinator-friendly farming practices and protecting their habitats are crucial steps in ensuring their survival.

2) Honeybees as Pollinators

1. Basic Body Structure:



a. The body of a honeybee is divided into three main parts: the head, thorax, and abdomen. Each of these sections plays a critical role in the honeybee's function as a pollinator.

2. Head:
a. Antennae: The honeybee's head has two antennae, which are responsible for its sense of smell. These antennae help the bee detect the scent of flowers, guiding it to the right flowers for pollination. They allow the bee to precisely locate flowers by detecting their odor and direction.
b. Eyes: Honeybees possess two large compound eyes and three simple eyes (ocelli). The compound eyes are made up of hundreds of lenses, enabling the bee to perceive light and colors. Honeybees can also detect ultraviolet light, which helps them locate the center of flowers through UV reflections. The simple eyes detect light intensity, aiding in the bee's orientation during flight.
c. Mouthparts: The mouthparts of a honeybee are complex, featuring a long tongue (glossa) adapted for collecting nectar. The tongue is ideal for sucking nectar from flowers and also plays a crucial role in gathering pollen. Additionally, the mandibles (jaws) of the bee are used for constructing the hive and handling pollen.

3. Thorax:
a. Wings: Honeybees have two pairs of wings that enable fast and agile flight. These wings allow the bee to efficiently travel from flower to flower. The wings are powered by strong muscles located in the thorax, facilitating quick movement between different flowers.
b. Legs: Honeybees have three pairs of legs, each equipped with specialized structures for collecting and transporting pollen.
c. Forelegs: The forelegs are used to clean the bee's eyes or remove pollen from around the head area.
d. Midlegs: The midlegs are used to transfer pollen to other legs or to clean the bee's body.
e. Hind Legs: The hind legs have a special structure called a pollen basket (corbicula). This basket is made of stiff hairs and allows the bee to gather and transport pollen securely back to the hive.

4. Abdomen:
a. Honey Stomach: The abdomen of a honeybee contains a honey stomach, where nectar is stored. The bee collects nectar from flowers and stores it in this stomach, then returns to the hive to share the nectar with other bees or to store it in the hive. This nectar serves as a source of energy for the bees and is also stored as food for the winter.
b. Wax Glands: The abdomen also houses glands that secrete wax, which is used to build the hive. Additionally, pheromones produced in the abdomen play a crucial role in communication and cooperation among bees.
c. Stinger: The honeybee has a stinger at the end of its abdomen, which is used for defense. However, the stinger is not directly related to the bee's role as a pollinator.

5. Body Hair:
a. The entire body of a honeybee is covered with dense hair, which is highly effective for collecting pollen. When a honeybee visits a flower, pollen adheres to these hairs, and as the bee moves to another flower, the pollen is transferred to the stigma, facilitating pollination. The hairs on the bee's legs and body also have electrostatic properties, helping pollen to stick more effectively.

3) Biochemical Responses in Plants to Attract Honeybees as Pollinators

1. Mechanism of Sound Detection in Plants:
a. Vibration Detection: While plants do not possess sensory organs for hearing, they can detect vibrations within certain frequency ranges. The sound of honeybee wingbeats, typically in the range of 200-300 Hz, induces vibrations in the cells of flowers, which plants can detect.
b. Mechanoreceptors in Plant Cells: Plants have mechanoreceptors in their cell membranes that sense



mechanical stimuli, such as vibrations. These receptors convert the vibrations into electrical signals, which activate various biochemical pathways within the plant, triggering a series of responses aimed at attracting honeybees.

2. Physiological Changes and Biochemical Reactions:
a. Detection of Honeybee Sounds: When a plant detects the sound of honeybees, it undergoes several physiological changes to attract these pollinators.
b. Increase in Nectar Sugar Content: Upon detecting honeybee sounds, plants increase the sugar concentration in the nectar produced by their nectaries. This process is regulated primarily by the activation of enzymes involved in carbohydrate metabolism. For instance, the activity of enzymes involved in sucrose metabolism, such as sucrase, is enhanced, resulting in higher sugar content in the nectar.
c. Attraction Effect: Higher sugar content serves as a more attractive signal to honeybees, which prefer sweeter nectar. This preference encourages honeybees to visit the flowers more frequently, thereby increasing the chances of cross-pollination for the plant.
d. Increase in Floral Scent: Honeybee sounds also stimulate the production of volatile organic compounds (VOCs) within the plant. These compounds are aromatic substances released by the plant to attract honeybees. The VOCs mainly include terpenes and phenylpropanoids, which emit specific fragrances that appeal to honeybees.
e. Activation of Biochemical Pathways: The production of these volatile compounds is driven by the activation of specific biochemical pathways. For example, the activation of enzymes in the terpene synthesis pathway leads to the emission of scents that honeybees find particularly attractive. As honeybees produce sound while approaching flowers, these scents further enhance their attraction to the flowers.
f. Changes in Flower Color: Some studies suggest that plants might alter the color of their flowers in response to honeybee sounds. This change could result from the altered expression of genes involved in pigment synthesis. For instance, the activation of the flavonoid synthesis pathway may intensify the flower's color. Brighter and more vivid flower colors provide a stronger visual signal to honeybees, making it easier for them to locate and visit the flowers.

3. Enhanced Interaction with Honeybees:
a. Efficient Pollination: These physiological changes increase the frequency of honeybee visits to specific flowers, maximizing the chances of pollination. When honeybees repeatedly visit the same flower, cross-pollination is facilitated, enhancing genetic diversity and improving the reproductive success of the plant population.
b. Long-Term Interaction: The interaction between honeybees and plants has evolved in a way that maximizes mutual benefits. Honeybees gain access to more nectar, while plants receive more efficient pollination services. This mutualistic relationship has developed over time to enhance the evolutionary success of both species.

III. METHODS AND MATERIALS

2.1 Plant Model
In this study, Lavandula angustifolia (lavender) was selected due to its well-known purple flowers, which are particularly effective in attracting bees (Illinois Extension). Lavender's vibrant color, combined with its aromatic qualities, makes it an ideal candidate for studying plant-pollinator interactions, especially since bees are highly attracted to flowers that offer both visual and olfactory cues.
The choice of lavender is further justified by its global prevalence, making the findings of this study broadly applicable across different regions and climates where lavender is cultivated. The physical characteristics of the plants used in the study were carefully considered to ensure consistency and reliability in the experimental results. The plants, ranging in height from 35 to 50 cm, were selected to represent a mature growth stage, ensuring that they were fully capable of producing



nectar and engaging in typical pollination processes. The uniformity in plant height also helped to minimize variability in nectar production that might result from differences in plant maturity or size. The lavender plants were housed in cylindrical pots with a diameter of 13 cm and a height of 13 cm, dimensions that were likely chosen to provide adequate space for root growth while also being manageable in a controlled experimental setup. The use of standardized pot sizes ensured that all plants received similar conditions in terms of soil volume and root development, further reducing potential sources of variability in the experiment. Each pot contained approximately 5 to 7 flowers, with each flower measuring about 4 cm in length. This consistent flower count per plant was crucial for maintaining a uniform basis for measuring nectar production and sugar concentration. The size of the flowers, at about 4 cm each, is typical for Lavandula angustifolia and is important because flower size can influence both the amount of nectar produced and the ease with which pollinators, like bees, can access it. The choice of this specific floral dimension allows for a realistic assessment of how lavender interacts with its pollinators in natural conditions. Moreover, the relatively small variation in the number of flowers per plant (5 to 7) suggests that the researchers were mindful of controlling for potential differences in nectar output that could arise from having a larger or smaller number of flowers. By standardizing the floral abundance, the study could more accurately isolate the effects of external stimuli, such as bee sounds, on nectar production and sugar concentration. This careful attention to detail in the selection and preparation of the lavender plants underscores the importance of experimental design in ensuring that the results are both valid and generalizable. Furthermore, the choice of Lavandula angustifolia as the model plant also reflects an understanding of its ecological and economic significance. Lavender is not only a popular ornamental plant but also a valuable crop in the production of essential oils and honey. Therefore, insights gained from this study could have practical applications in agriculture and horticulture, particularly in optimizing conditions for pollinator attraction and improving crop yields. By choosing a plant with such broad relevance, the study enhances the potential impact of its findings, extending beyond basic research to practical implementations in sustainable agricultural practices.

<Figure 1> Plant Model (Set up for the stimulated group (Bee sound))



<Figure 2> Plant Model (Set up for the control group (No bee sound))

2.2 Bee Sound Sampling
The honeybee sounds used in the experiment were pre-recorded, with a frequency range of 190 Hz to 250 Hz (McNeil). This frequency range was specifically chosen because it closely mimics the natural wingbeat sounds produced by honeybees, which are known to fall within this spectrum. The precision in selecting this range is crucial, as the vibrations generated by these specific frequencies are thought to trigger certain physiological responses in plants, particularly in the context of nectar production and pollinator attraction. The ambient background noise during the experiment measured 161 Hz, which is relatively low and unlikely to interfere with the targeted bee sound frequencies. This background noise likely represents general environmental sounds, such as wind or distant traffic, which were not expected to influence the plants in the same way as the bee sounds. The differentiation between the bee sounds and ambient noise is important, as it ensures that any observed effects on the plants can be more confidently attributed to the specific frequencies associated with the honeybee sounds. In the absence of the honeybee sounds, the background noise level was measured at 62 Hz. This significant drop in frequency further highlights the distinct nature of the bee sound stimulus used in the experiment. The lower frequency of the ambient noise without bee sounds underscores the controlled conditions under which the experiment was conducted, minimizing external variables that could confound the results. By maintaining a clear distinction between the bee sounds and ambient noise, the researchers ensured that the plants' responses could be accurately assessed, thereby strengthening the validity of the findings.



<Figure 3> Bee Sound Sampling (Background noise with bee sound was 161 Hz).

<Figure 4> Bee Sound Sampling (Background noise without bee sound was 62 Hz)

2.3 Experimental Setup
The flowers were divided into two groups, each consisting of four plants, to create a controlled experimental setup. The stimulus group was exposed to pre-recorded bee sounds, while the control group was not exposed to any bee sounds, allowing for a clear comparison of the effects of auditory stimulation on nectar production. The bee sounds were played using a GO 2 JBL speaker, set at a volume of 68 dB, for 12 consecutive days. The exposure was carefully timed to occur daily for 3 hours, from 5 PM to 8 PM, a period likely chosen to simulate the natural foraging times of bees when they are most active. To ensure consistency in other aspects of the plants' care, each plant received 140 mL of water at 4-day intervals. The exact amount of water was delivered simultaneously to both groups using an automatic watering system, which was programmed to supply water for 7 seconds every 4 days during the 12-day period. This automation reduced the risk of human error and ensured that water availability did not vary between the groups, which could have influenced the plants' nectar production independently of the sound stimulus. The two groups were strategically placed at opposite ends of a balcony, ensuring that only the stimulus group was affected by the bee sounds. This spatial separation was critical in preventing any potential crossover of the sound stimulus to the control group, thus maintaining the integrity of the experimental design. By controlling the location of the plants, the researchers could confidently attribute any



observed differences in nectar production to the auditory stimulus rather than other environmental factors. To further control the experiment, other variables such as sunlight exposure, humidity, and temperature were kept constant by placing both groups on the same balcony. This ensured that all plants were subjected to the same environmental conditions, apart from the bee sound exposure. The lavender seedlings used in both groups were grown under identical conditions before the experiment began, ensuring that the plants were of similar size and health, which minimized variability due to differences in plant development. The speaker used for the stimulus group was placed 3 cm away from the flowers, a distance carefully chosen to ensure that the sound was evenly distributed across the flowers. This close proximity maximized the likelihood that the vibrations from the sound waves would be effectively transmitted to the flowers, potentially influencing their physiological processes. The controlled distance also ensured that the sound intensity remained consistent across the flowers in the stimulus group, which was crucial for accurately assessing the impact of bee sounds on nectar production. This meticulous attention to detail in the experimental setup demonstrates the rigor with which the study was conducted, enhancing the reliability of the results.
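For reproducibility, a stimulus in the reported 190-250 Hz band can also be synthesized rather than replayed from a recording. The sketch below writes a 220 Hz test tone to a WAV file; the frequency, duration, and filename are arbitrary choices and this is not the recording used in the experiment.

```python
import wave
import numpy as np

def write_tone(path: str, freq_hz: float, seconds: float, rate: int = 44100) -> None:
    """Write a mono 16-bit sine tone at the given frequency to a WAV file."""
    t = np.arange(int(rate * seconds)) / rate
    samples = (0.8 * np.sin(2 * np.pi * freq_hz * t) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(samples.tobytes())

# 220 Hz sits inside the 190-250 Hz wingbeat band reported above.
write_tone("bee_band_tone.wav", freq_hz=220.0, seconds=10.0)
```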

<Figure 5> Background Noise Detection (Background noise with bee sound was 68 dB)

<Figure 6> Background Noise Detection (Background noise without bee sound was 28 dB)



2.4 Measurements
A total of 40 flowers were sampled from each plant at 4-day intervals over the 12-day experimental period, a frequency designed to capture changes in nectar production and sugar concentration over time. Nectar was extracted from the flowers with a syringe, a method chosen for its precision in collecting small volumes without damaging the delicate flower structures and for keeping the samples as uncontaminated as possible. The sugar concentration of the nectar, expressed in Brix units, was determined with a salinity refractometer, allowing precise quantification of sugar content, which is a key indicator of nectar's attractiveness to pollinators, and a direct comparison of nectar sweetness between the stimulus and control groups. Each sample was placed in the refractometer immediately after collection to prevent changes in concentration from evaporation or degradation, and each Brix measurement was conducted carefully to ensure consistency across samples. To prevent cross-contamination, the refractometer was cleaned with a disinfectant after each use, since even small residues from previous samples could skew the readings. The average sugar concentration for each group was then calculated and plotted, giving a visual comparison of nectar sweetness between the stimulus and control groups across the 12-day period.
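As a sketch of the analysis just described (not the authors' actual code), the following C++ program computes the group means and the control-normalized values for the day-4 Brix readings reported in Figure 13; it reproduces the corresponding row of Figure 12.

// brix_stats.cpp -- sketch of the normalization used in Figures 12-13:
// divide each reading by the control group's mean for that day.
#include <cstdio>
#include <vector>
#include <numeric>

static double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

int main() {
    // Day-4 Brix readings as reported in Figure 13.
    std::vector<double> control    = {14, 14, 15, 13};
    std::vector<double> stimulated = {15, 15, 16, 17};

    double ctrlMean = mean(control);
    std::printf("control mean = %.2f, stimulated mean = %.2f\n",
                ctrlMean, mean(stimulated));   // 14.00 and 15.75

    std::printf("normalized stimulated:");
    for (double b : stimulated)
        std::printf(" %.2f", b / ctrlMean);    // 1.07 1.07 1.14 1.21 (Figure 12)
    std::printf("\n");
    return 0;
}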

<Figure 7> Photo of the Lavender



<Figure 8> Photo of the extraction

<Figure 9> Photo of the refractometer

IV. RESULTS
In the control group, the Brix values of the nectar were 14.00, 13.25, and 12.50 on days 4, 8, and 12 of the experiment, respectively, for an average sugar concentration of 13.25 Brix. This steady decline suggests that, in the absence of bee sounds, the plants' ability to maintain high nectar sugar levels may diminish, possibly due to natural physiological limits or the plants' allocation of resources: without the external stimulus, they may prioritize other metabolic processes or conserve energy, leading to a gradual decrease in nectar sweetness. In contrast, the plants exposed to bee sounds showed consistently higher sugar concentrations, with Brix measurements of 15.75, 14.50, and 13.50 on days 4, 8, and 12, for an average of 14.58 Brix, markedly higher than the control group. This consistent elevation suggests that exposure to bee sounds stimulated the plants, enhancing their metabolic activity and producing nectar with higher sugar content. The data indicate that the auditory stimulus may mimic the presence of actual pollinators, prompting the plants to produce nectar that is more attractive to bees and thereby increasing the likelihood of successful pollination.



The stimulus group consistently produced nectar with higher sugar concentrations than the control group at every measurement point. The Brix differences between the groups were 1.75, 1.25, and 1.0 on days 4, 8, and 12, respectively, an average difference of 1.33 Brix, indicating that the bee sounds had a measurable and sustained impact on nectar production. The initial response was strongest, as shown by the larger difference on day 4, while the gradual narrowing of the gap (from 1.75 to 1.0) suggests that the plants' response to the continued auditory stimulus may diminish slightly as the experiment progresses. This tapering could mean that bee sounds trigger an initial boost in nectar production but that the plants eventually reach a threshold beyond which further increases in sugar concentration are not energetically favorable or necessary, balancing the energetic cost of producing high-sugar nectar against the ecological benefit of attracting pollinators. Alternatively, the pattern might reflect acclimatization: the plants adjust to the constant stimulus and reduce the relative enhancement over time, optimizing resource use once a sufficient response to the initial stimulus has been achieved. Overall, the findings underline the role of external stimuli such as bee sounds in influencing plant behavior and nectar production, and highlight the dynamic nature of plant responses to environmental cues: initial reactions can be strong, while long-term effects stabilize as the plants adapt. This understanding could have broader implications for agriculture and ecology, particularly in optimizing conditions for pollination and enhancing crop yields where natural pollinator activity is compromised.

<Figure 10> Sugar concentration versus number of days

<Figure 11> Sugar concentration normalized by the control-group average



                    #1     #2     #3     #4     Average  STD
Day 4   Control     1.00   1.00   1.07   0.93   1.00     0.06
        Stimulated  1.07   1.07   1.14   1.21   1.13     0.07
Day 8   Control     0.98   0.98   1.05   0.98   1.00     0.04
        Stimulated  0.98   1.13   1.21   1.06   1.09     0.01
Day 12  Control     1.04   0.88   0.96   1.12   1.00     0.10
        Stimulated  1.12   1.04   0.96   1.20   1.08     0.10

<Figure 12> Brix values normalized by the control-group average for each day

                    #1   #2   #3   #4   Average  STD
Day 4   Control     14   14   15   13   14.00    0.82
        Stimulated  15   15   16   17   15.75    0.96
Day 8   Control     13   13   14   13   13.25    0.43
        Stimulated  13   15   16   14   14.50    1.12
Day 12  Control     13   11   12   14   12.50    1.12
        Stimulated  14   13   12   15   13.50    1.12

<Figure 13> Measured sugar concentration (Brix) for each plant

V. DISCUSSION
This study demonstrated a correlation between the nectar production of lavenders (Lavandula angustifolia) and honeybee sounds by varying the duration of exposure to these pollinator sounds. The higher mean sugar levels in the nectar of the group exposed to bee sounds, compared to the control group, indicate that lavenders respond to bee sounds by producing more sugar in their nectar, an adaptive behavior likely aimed at attracting more pollinators. The absence of a clear trend across the days of exposure suggests that the lavender's reaction to the auditory stimulus is immediate and occurs over a short time span, rather than building up gradually over prolonged exposure. However, the study faced several methodological limitations that should be considered when interpreting the results. First, the lavenders were kept in an environment that was not perfectly controlled, leaving them susceptible to fluctuating climate conditions; during the experiment, a severe heat wave raised temperatures above ideal levels, likely stressing the plants and potentially affecting their nectar production. This environmental variability could have introduced inconsistencies in the data, making it harder to isolate the effect of bee sounds from other external factors. Second, the experiment was conducted exclusively on a single species, Lavandula angustifolia, which makes it difficult to generalize the findings; it remains unclear whether bee sounds would have the same effect on the nectar production of other plants or flowers. Future research should include a variety of plant species to determine whether this auditory stimulation effect is universal or species-specific. Third, the relatively small data set of only 24 data points collected over 12 days limits the statistical power and robustness of the study's conclusions.



While the observed correlation between honeybee sounds and nectar production is promising, a larger data set collected over a longer period would provide more reliable results; expanding the sample size and duration would increase the credibility of the findings and allow a more detailed analysis of trends and potential long-term effects of bee sound exposure. Supporting these findings, 2019 research on Oenothera drummondii also found a positive correlation between bee sounds and nectar sugar concentration (Veits). Further research has identified complex mechanisms in pollinator-plant interactions, such as the influence of pollinator wingbeat sounds on plant responses (Veits). Such studies deepen the understanding of how specific pollinators interact with plants and enable the mapping of flower preferences among pollinator species. This knowledge is crucial for pollinator conservation, as it provides guidelines for planting flora that are particularly attractive to specific pollinators, thereby stimulating and maintaining healthy pollinator populations. The potential economic benefits of this line of research are significant. By providing data that can be used to create pollinator-friendly areas or to make crops more attractive to pollinators, work of this kind has far-reaching implications for agriculture. Insect pollinators, particularly those of the order Hymenoptera, are essential to global agriculture, contributing an estimated 235-577 billion USD annually to agricultural output (IPBES, Khalifa), roughly one-third of global food production. Given their ecological and economic importance, there is a growing worldwide movement to rehabilitate bees and other pollinators within urban ecosystems (USEPA), including efforts to make urban greenery more pollinator-friendly, which could help mitigate the ongoing decline in pollinator populations. In conclusion, this research illustrates how bee sounds can stimulate nectar production in lavenders and underscores the intricate relationship between pollinators and flowers. Understanding these relationships is critical as many major pollinators face population decline, threatening biodiversity and agricultural productivity. The data generated by this study and others like it can inform the creation of pollinator-friendly environments and the development of crops that are more attractive to pollinators, ultimately supporting both ecological health and economic sustainability.

VI. REFERENCES
1. M. Veits et al., "Flowers respond to pollinator sound within minutes by increasing nectar sugar concentration," Ecol. Lett., vol. 22, no. 9, pp. 1483-1492, Sep. 2019, doi: 10.1111/ele.13331.
2. M. E. Dodd, J. Silvertown, and M. W. Chase, "Phylogenetic Analysis of Trait Evolution and Species Diversity Variation among Angiosperm Families," Evolution, vol. 53, no. 3, p. 732, Jun. 1999, doi: 10.2307/2640713.
3. C. A. Kearns, D. W. Inouye, and N. M. Waser, "Endangered Mutualisms: The Conservation of Plant-Pollinator Interactions," Annu. Rev. Ecol. Syst., vol. 29, no. 1, pp. 83-112, Nov. 1998, doi: 10.1146/annurev.ecolsys.29.1.83.
4. C. Westerkamp and G. Gottsberger, "Diversity Pays in Crop Pollination," Crop Sci., vol. 40, no. 5, pp. 1209-1222, Sep. 2000, doi: 10.2135/cropsci2000.4051209x.
5. A.-M. Klein et al., "Importance of pollinators in changing landscapes for world crops," Proc. R. Soc. B Biol. Sci., vol. 274, no. 1608, pp. 303-313, Feb. 2007, doi: 10.1098/rspb.2006.3721.
6. "The Potential Consequences of Pollinator Declines on the Conservation of Biodiversity and Stability of Food Crop Yields," Conserv. Biol., vol. 12, no. 1, pp. 8-17, Jul. 2008, doi: 10.1111/j.1523-1739.1998.97154.x.
7. D. vanEngelsdorp et al., "Colony Collapse Disorder: A Descriptive Study," PLoS ONE, vol. 4, no. 8, p. e6481, Aug. 2009, doi: 10.1371/journal.pone.0006481.
8. N. A. Steinhauer et al., "A national survey of managed honey bee 2012-2013 annual colony losses in the USA: results from the Bee Informed Partnership," J. Apic. Res., vol. 53, no. 1, pp. 1-18, Jan. 2014, doi: 10.3896/IBRA.1.53.1.01.
9. A. L. Hamblin, E. Youngsteadt, and S. D. Frank, "Wild bee abundance declines with urban warming, regardless of floral density," Urban Ecosyst., vol. 21, no. 3, pp. 419-428, Jun. 2018, doi: 10.1007/s11252-018-0731-4.
10. J. C. Stout and C. L. Morales, "Ecological impacts of invasive alien species on bees," Apidologie, vol. 40, no. 3, pp. 388-409, May 2009, doi: 10.1051/apido/2009023.
11. Z. Abay, A. Bezabeh, A. Gela, and A. Tassew, "Evaluating the Impact of Commonly Used Pesticides on Honeybees (Apis mellifera) in North Gonder of Amhara Region, Ethiopia," J. Toxicol., vol. 2023, pp. 1-13, Mar. 2023, doi: 10.1155/2023/2634158.
12. Z. Huang, "Honey Bee Nutrition," Aug. 2010. Retrieved from http://nashbee.org/wp-content/uploads/Honey-Bee-Nutrition-by-Zachary-Huang.pdf
13. E. E. Southwick, G. M. Loper, and S. E. Sadwick, "Nectar production, composition, energetics and pollinator attractiveness in spring flowers of western New York," Am. J. Bot., vol. 68, no. 7, pp. 994-1002, Aug. 1981, doi: 10.1002/j.1537-2197.1981.tb07816.x.
14. H. M. Appel and R. B. Cocroft, "Plants respond to leaf vibrations caused by insect herbivore chewing," Oecologia, vol. 175, no. 4, pp. 1257-1266, Aug. 2014, doi: 10.1007/s00442-014-2995-6.
15. O. Hamant and E. S. Haswell, "Life behind the wall: sensing mechanical cues in plants," BMC Biol., vol. 15, no. 1, p. 59, Dec. 2017, doi: 10.1186/s12915-017-0403-5.
16. P. A. De Luca and M. Vallejo-Marín, "What's the 'buzz' about? The ecology and evolutionary significance of buzz-pollination," Curr. Opin. Plant Biol., vol. 16, no. 4, pp. 429-435, Aug. 2013, doi: 10.1016/j.pbi.2013.05.002.
17. H. Esch and D. Wilson, "The sounds produced by flies and bees," Z. für Vgl. Physiol., vol. 54, no. 2, pp. 256-267, 1967, doi: 10.1007/BF00298031.
18. A. Terenzi, S. Cecchi, and S. Spinsante, "On the Importance of the Sound Emitted by Honey Bee Hives," Vet. Sci., vol. 7, no. 4, p. 168, Oct. 2020, doi: 10.3390/vetsci7040168.
19. T. J. Brundage, "Acoustic Sensor for Beehive Monitoring," US Patent No. 8,152,590 B2, April 10, 2012.
20. J. Withgott, "Taking a Bird's-Eye View…in the UV," BioScience, vol. 50, no. 10, p. 854, 2000, doi: 10.1641/0006-3568(2000)050[0854:TABSEV]2.0.CO;2.
21. M. E. A. McNeil, "Sounds of the Hive – Part".
22. IPBES, "The assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services on pollinators, pollination and food production," Zenodo, Dec. 2016, doi: 10.5281/ZENODO.3402857.
23. S. A. M. Khalifa et al., "Overview of Bee Pollination and Its Economic Value for Crop Production," Insects, vol. 12, no. 8, p. 688, Jul. 2021, doi: 10.3390/insects12080688.



Wind Strength and Basil Growth: A Study on Thigmomorphogenesis and Agricultural Implications

Author Full Name: Park, Gaon (Last Name, First Name)
School Name: Dalat International School

Abstract With climate change intensifying extreme weather patterns, including stronger winds, this study aims to explore strategies to enhance plant resilience, ultimately safeguarding our food systems and ecosystems. This study examines the effects of varying wind strengths on the growth of basil plants (Ocimum spp.), specifically focusing on height differences as a measure of plant growth under different wind conditions. By subjecting basil plants to different wind intensities in a controlled environment, we aim to understand how wind stress influences their vertical growth (height). The findings have potential applications in agriculture, contributing to the development of more resilient crops and strategies for managing wind-induced stress in plants, especially in the context of climate change.



Introduction
The study of plant responses to mechanical stimuli, particularly wind stress, is increasingly significant in the context of climate variability (1). One key phenomenon in this area is thigmomorphogenesis, the change in plant morphology in response to mechanical stimuli such as touch or wind. While thigmomorphogenetic responses can include alterations in various aspects of plant structure, this study focuses specifically on the impact of wind stress on the vertical growth (height) of basil plants (Ocimum spp.) (7). Wind stress is a common mechanical force exerted on plants, inducing responses that can change growth patterns (2). Excessive wind can cause plants to oscillate and sway, potentially leading to structural failure if the roots or stems cannot withstand the force (3). Wind damage has significant economic impacts on crops, forests, and urban trees, and plays a crucial role in plant regeneration and successional change in many ecosystems (4). Basil (Ocimum spp.), a popular culinary and medicinal herb, provides an excellent model for studying the impact of wind stress on plant growth (5). Basil is widely used as a spice and for medicinal purposes, which contributes to its high market value. Previous studies have shown that environmental factors such as soil moisture and air temperature significantly affect basil's growth (6), but the specific impact of wind stress on the height of basil plants remains less explored, making it the focus of this study. The choice of basil is motivated by its economic importance and its sensitivity to environmental conditions. Understanding how basil responds to wind stress, particularly in terms of height growth, can help optimize growing conditions in the farming industry, potentially leading to more resilient crop varieties that can withstand mechanical stress while ensuring consistent yields and quality. This research investigates how varying wind strengths affect the height of basil plants. By examining the mechanisms that influence vertical growth under wind-induced mechanical stress, this study seeks to uncover how basil adapts to such environmental challenges. The findings could inform strategies for mitigating the impacts of wind stress in agriculture, contributing to broader ecological understanding and practical applications in crop resilience and environmental management.

Materials and Methods
(a) Basil as a model system
Basil (Ocimum spp.) was selected as the model system for this investigation due to its widespread use and economic importance in the agricultural sector. Basil is not only a popular culinary herb, commonly used in a variety of cuisines around the world, but also has significant medicinal properties, including anti-inflammatory and antioxidant effects. Its sensitivity to environmental conditions such as soil moisture, light, and temperature makes basil an ideal candidate for studying the impact of mechanical stress like wind on plant growth. As this research aims to provide insights that can improve agricultural practices, basil's relevance in both food and medicinal contexts adds practical value to the findings, potentially aiding the development of more resilient crop varieties that can withstand environmental stresses while maintaining quality and yield.



Figure 2: 3D Model of Experimental Setup for Investigating the Effects of Wind Strength on Basil Plant Growth and Morphology

(b) Experimental setup
Basil seeds were first germinated in a rectangular tray filled with nutrient-rich soil, a mix of organic potting soil, coarse sand, and compost (Cham Grow; 60769; 홍성20-가-11014호). Once the seedlings reached the 2-3 leaf stage, they were transplanted into individual pots of uniform size (4 cm in diameter), each filled with the same soil mixture. The pots were then placed in a controlled environment chamber to ensure consistent light, temperature (77 °F), and water conditions across all experimental groups. To simulate varying wind conditions, electric fans (Taizhou Zhonglian Electrical Co., Ltd.; 1018995) with adjustable speed settings were set up at different distances from the plants, creating three wind exposure groups: a control group with no wind (0 m/s), a medium wind group with a moderate breeze (~1.5 m/s), and a high wind group with a strong breeze (~3 m/s). Wind speeds were measured and recorded with an anemometer for accuracy. Each group of basil plants was exposed to its respective wind condition for 4 hours daily over a period of one week. Throughout the experiment, plant height was measured and recorded as the key growth parameter, and observations were made for any signs of structural failure or other stress-related symptoms. The collected data were analyzed to determine the impact of wind stress on basil plant growth.



Results

Figure 1: The Effect of Varying Wind Speeds on the Growth of Basil Plants Over a 7-Day Period

The experiment assessed the impact of different wind speeds on the growth of basil plants. Three groups were subjected to distinct wind conditions: no wind (control), moderate wind (1.5 m/s), and strong wind (3 m/s). The initial and final heights of the plants were measured over a 7-day period, and normalized growth lengths were calculated to compare relative growth rates. The results show a clear differentiation in growth patterns among the three groups. The control group, which experienced no wind exposure, grew consistently across all plants, reaching an average normalized length of approximately 1.21 (a 21% height increase) by the end of the experiment; this group exhibited the most substantial increase in height, reflecting the absence of mechanical stress. The moderate wind group showed a slightly reduced growth rate compared to the control group, with an average normalized length of around 1.18 by the end of the experiment (8); exposure to moderate wind appeared to hinder vertical growth to some extent, but these plants remained structurally stable, showing no signs of damage. In stark contrast, the high wind group exhibited the most significant reduction in growth, with an average normalized length of 0.96, indicating that plants subjected to strong wind experienced a net decrease in height. Additionally, all plants exposed to direct wind, particularly in the high wind group, showed signs of dehydration, as evidenced by a change in leaf color (9). The data recorded in the spreadsheet confirm a clear reduction in plant height under high wind conditions (17), attributable to the mechanical stress imposed by the strong wind, which not only limited vertical growth but also caused visible physical damage such as bending and stem breakage. Overall, the data suggest that while moderate wind exposure may only slightly hinder growth, strong wind significantly impedes plant development, highlighting the importance of managing wind stress and ensuring adequate water availability in agricultural practice.
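For clarity, normalized length here is final height divided by initial height, so values above 1 indicate growth and values below 1 a net decrease. The short C++ sketch below illustrates the calculation with hypothetical heights chosen only to mirror the reported group averages; they are not the study's raw data.

// normalized_growth.cpp -- sketch of the growth normalization described
// above; the heights below are hypothetical, not the study's raw data.
#include <cstdio>

int main() {
    struct Group { const char* name; double initialCm; double finalCm; };
    Group groups[] = {
        {"control (0 m/s)",    10.0, 12.1},   // hypothetical measurements
        {"moderate (1.5 m/s)", 10.0, 11.8},
        {"strong (3 m/s)",     10.0,  9.6},
    };
    for (const Group& g : groups) {
        double normalized = g.finalCm / g.initialCm;   // >1 means net growth
        std::printf("%-20s normalized length = %.2f\n", g.name, normalized);
    }
    return 0;   // prints 1.21, 1.18, 0.96, matching the reported averages
}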



Conclusion
This research demonstrated that basil plants exhibit significant growth adaptations when exposed to varying wind strengths. These findings have practical implications for agriculture, particularly in regions where wind stress poses a challenge to crop productivity. By understanding the specific responses of basil to mechanical stress, such as the reduced height in response to high wind exposure, agricultural practices can be better tailored to enhance crop resilience in windy environments. Notably, the study also highlighted the impact of wind-induced dehydration on plant growth. The basil plants exposed to direct wind experienced color changes indicative of dryness, likely caused by increased transpiration rates and faster soil moisture depletion. This dehydration, particularly in the group subjected to the strongest wind, contributed to the reduced height observed in those plants, suggesting that managing water availability is crucial when cultivating crops in windy environments. In light of these findings, future research should explore the long-term effects of wind stress on basil and other crops, with an emphasis on optimizing environmental conditions to support plant health and productivity. Building on the knowledge gained from this study, we can work towards strategies that protect crops and ensure sustainable agricultural practices in an increasingly unpredictable climate.

Discussion
In this study, we investigated how varying wind strengths affect the growth of basil plants, focusing on height as the key growth parameter. Basil plants were exposed to different wind intensities in a controlled environment and the resulting changes in plant height were measured. The findings confirm that strong wind can significantly reduce plant height. A clear trend emerged from the data: plants exposed to stronger wind exhibited a notable reduction in height, likely as an adaptive response to minimize the risk of mechanical failure under intense wind stress (10). This reduction in height may help the plants withstand the mechanical forces exerted by the wind, but it also suggests a potential trade-off in overall plant growth and productivity (11)(12). In addition to these trends, some unexpected responses were noted, such as the bending or breaking of stems in the high wind group (13). This suggests that while the plants attempted to fortify their structures, the mechanical stress was severe enough to cause physical damage, leading to altered growth angles and, in some cases, stem failure. These observations highlight the importance of the balance between flexibility and rigidity in plant adaptation to mechanical stimuli, as discussed in the introduction. Another significant observation was the change in leaf color across all plants exposed to direct wind. This discoloration indicates dryness, likely due to increased transpiration rates caused by the wind (14). The wind not only accelerated water loss from the leaves but also contributed to faster soil moisture depletion, leading to overall dehydration. This dehydration, especially in the group exposed to the strongest wind, likely contributed to their reduced height, as the combined stresses of wind and water deficiency hindered growth (15). Despite the controlled environment, potential sources of human error could have influenced the results, including slight inconsistencies in how the wind was directed towards each plant, variations in watering, and differences in monitoring the environmental conditions. Such factors could have contributed to the variation in plant responses, particularly the heightened stress observed in the plants exposed to the strongest wind.



Reflecting on the broader context presented in the introduction, these findings emphasize the complexity of plant responses to mechanical stress and the critical role of water management in mitigating the adverse effects of wind exposure. The study underscores the need for a comprehensive approach to assessing crop resilience, considering multiple environmental factors, including wind and water availability, to ensure sustainable agricultural practices (16).

References
(1) Braam, J. (2005). In touch: Plant responses to mechanical stimuli. New Phytologist, 165(2), 373-389.
(2) Niklas, K. J. (1992). Plant biomechanics: An engineering approach to plant form and function. University of Chicago Press.
(3) Smith, D. L., & Ennos, A. R. (2003). The effects of wind on the mechanical and hydraulic properties of the stems of sunflowers (Helianthus annuus L.). Journal of Experimental Botany, 54(387), 845-849.
(4) Ridge, I. (1992). Wind as an ecological factor. Studies in Biology (Vol. 6). Cambridge University Press.
(5) Ali, M. B., Khandaker, M. M., & Oba, S. (2008). Comparative study on functional components, antioxidant activity, and color parameters of selected sweet basil (Ocimum basilicum L.) under different light conditions. Journal of Medicinal Food, 11(1), 79-84.
(6) Jones, H. G. (2013). Plants and Microclimate: A Quantitative Approach to Environmental Plant Physiology (3rd ed.). Cambridge University Press.
(7) McMahon, T. A. (1973). Size and shape in biology: Elastic criteria impose limits on biological proportions, and consequently on metabolic rates. Science, 179(4079), 1201-1204.
(8) Grace, J., & Russell, G. (1982). The effect of wind on grasses: A study in a controlled environment. New Phytologist, 91(4), 487-492.
(9) Telewski, F. W. (1995). Wind-induced physiological and developmental responses in trees. In Wind and Trees (pp. 237-263). Cambridge University Press.
(10) Coutand, C. (2010). Mechanosensing and thigmomorphogenesis, a physiological and biomechanical point of view. Plant Science, 179(3), 168-182.
(11) Telewski, F. W. (2006). A unified hypothesis of mechanoperception in plants. American Journal of Botany, 93(10), 1466-1476.
(12) Peltola, H., Kellomäki, S., Hassinen, A., & Granander, M. (2000). Mechanical stability of Scots pine, Norway spruce, and birch: An analysis of tree-pulling experiments in Finland. Forest Ecology and Management, 135(1-3), 143-153.
(13) Biddington, N. L., & Dearman, A. S. (1985). The effects of mechanically-induced stress on the growth of young plants. Annals of Botany, 55(4), 759-767.
(14) Rieger, M., & Adams, S. R. (2003). Wind speed influences transpirational cooling and water use efficiency in fruit trees. Journal of Horticultural Science and Biotechnology, 78(3), 341-345.
(15) Blum, A. (2011). Plant water relations, plant stress and plant production. In Plant Breeding for Water-Limited Environments. Springer.
(16) Mittler, R. (2006). Abiotic stress, the field environment, and stress combination. Trends in Plant Science, 11(1), 15-19.
(17) Gaon Park. (2024). Data for Wind effect upon Basil Experiment. Google Sheets, https://docs.google.com/spreadsheets/d/1B8OjZzBnZLGaIECEcWBVKQ1XkQ0V4I88a5gFMS9Rgxw/edit?usp=sharing.



Effectiveness of Air Purifier vs. Natural Resources in Reducing Dust Levels

Author Full Name: Park, Sieun (Last Name, First Name)
School Name: Big Heart Christian School

Abstract
This study investigates the effectiveness of air purifiers and natural resources in reducing dust levels, with the aim of determining whether natural resources can be used to filter air in offices and households. A total of ten trials were conducted across three experiments to compare and analyze the change in dust levels among charcoal, plants, and air purifiers. Introducing dust artificially did not show natural resources to be an efficient way to purify air, and dust levels were not influenced by the air purifier's fan setting. The study stresses the importance of human health, as well as the potential benefits of eco-friendly resources for the environment and health.

Keywords Dust, air purifier, charcoal, plant, dust level, dust sensor, health, environment



Introduction
Not all dust is perceptible to the human eye, yet it exists nevertheless. Dust particles range in size from micrometers to millimeters (Maertens et al.), and the definition of dust varies with circumstances. Some dust contains microscopic solids or liquid droplets that can enter the lungs and cause serious health problems (Simon). Scientists estimate that fine particulate matter harmful to human health is linked to 3 to 9 million premature deaths each year. Average dust levels are worsening every day due to naturally occurring and human-caused pollution, including dust-carrying winds, the burning of fossil fuels, transportation exhaust, and agricultural sources (US EPA). As a result, 40% of people in the US have been living with unhealthy levels of airborne particles (American Lung Association). While some families or individuals purchase air purifiers, others buy natural products such as charcoal or houseplants to filter dust in their homes. Technology has now developed to the point where dust levels can be measured anywhere with ease: the inexpensive Arduino Uno dust sensor is an adequate, easily accessible alternative, enabling people to monitor dust levels at home and to measure changes in dust levels under different circumstances. Air purifiers are often costly and require electricity to function, so filtering air may be impossible where electricity is unavailable. Charcoal and houseplants are cheap, eco-friendly resources believed to purify air; if these potential purifiers are effective, there would be various advantages, such as a positive impact on the environment and reduced consumer costs. Eco-friendly resources are commonly believed not to improve air quality as effectively as air purifiers, though published findings differ widely, in part because the definition of dust changes from experiment to experiment. Dust in this experiment is defined as "microscopic particles of material" and includes pollen; bacteria; smoke or ash; small bits of dirt, rock, or sand; skin cells; and hair (Dust).

Problem
Considering that people spend more than one-third of their lives in their homes, it is alarming that a house can collect up to 40 pounds of dust per year (Helen of Troy Limited). This highlights the importance of maintaining clean indoor air by filtering dust, and of raising people's awareness so that household dust levels can be reduced.

Purpose of the experiment
This study investigates and compares three methods for reducing indoor dust levels: a commercial air purifier, the Coway AP-1510BH; an air-purifying plant, Dracaena fragrans; and activated charcoal. These are the independent variables, and the dependent variable is the concentration of dust. In the control condition, no independent variable was present. The hypothesis for the first two experiments is that natural resources such as plants and charcoal can serve as eco-friendly substitutes for air purifiers. The hypothesis for the third experiment is that a higher air purifier setting produces a greater reduction in dust levels.

Materials
The following materials were used in this experiment.
● Arduino Uno kit
● UNO R3 Controller Board
● Female-to-male dupont wires
● USB cable
● Dust sensor
● Dracaena fragrans (Happy Plant)
● Activated charcoal - 1 kg
● Air purifier (Coway AP-1510BH model)
● Laptop
● Charger
● Measuring environment (closed)
● Duster
● Blanket

Methodology
[24 hours experiment]
1. Setup: During the experiment, all windows and doors were closed, and no one was present in or permitted to enter the room. No other extraneous variables were introduced to the environment. The dust sensor was connected to the UNO R3 Controller Board using female-to-male dupont wires, and the board was connected to the laptop by USB cable. Code was written to read the dust level every second and log the average every hour, and was uploaded to the Arduino (a sketch of such a logger is given at the end of the methodology).
2. Baseline setting: The dust sensor was placed in the controlled environment. By shaking out a duster, blanket, or pillow, the dust level was calibrated to a baseline of 200, with a margin of error of 15 (185-215). To confirm that the sensor could register the newly raised dust level, the dust sensor was shaken 10 times back and forth.
3. Air purifier trial: The order of testing the three independent variables did not affect the results. Following the baseline setting, the air purifier was turned on at the minimum level. After 24 hours, the printed results were recorded and saved.
4. Plant trial: Step two was repeated before this trial. The plant was placed in the controlled environment; after watering it until the soil became wet, the experimenter left the room. The sensor saved results for 24 hours.
5. Charcoal trial: Step two was executed to reset the baseline. The activated charcoal was placed in the room, and the sensor read data for 24 hours.
6. Data analysis: All data collected over the 72 hours of the experiment were gathered.

[1 hour experiment]
1. Setup: The environment was again kept under control: the window and door were closed and no one entered the room. The dust sensor was connected to the UNO R3 Controller Board using the wires provided in the Arduino Uno kit and used to measure the dust level. Code was written to monitor the dust level every five minutes and was uploaded to the Arduino.



2. Baseline setting: The connected dust sensor was placed in the controlled environment, and the dust level was raised to a baseline between 180 and 250 by shaking out a duster, blanket, or pillow. To confirm that the sensor could register the newly raised dust level, the dust sensor was shaken 10 times back and forth.
3. Air purifier trial: Again, the order of testing the three independent variables did not affect the results during this one-hour experiment. The air purifier was turned on at the minimum level; after the experimenter left the room, the dust sensor began recording.
4. Plant trial: The baseline was newly set before this trial. The plant was watered and placed inside the controlled environment, and data were collected from the sensor.
5. Charcoal trial: A bag of charcoal was introduced to the room; the environment was reset and data were collected.
6. Control experiment: All independent variables were removed for the negative control experiment, with the same standardization applied.
7. Data analysis: The data collected during the one-hour experiment were compiled.

[Air Purifier trial]
1. Setup: The experiment was held in a controlled environment with confounding variables eliminated before starting. The dust sensor and the UNO R3 board were connected, and the code was uploaded to the Arduino.
2. Baseline setting: The experiment started after the dust level had fallen; the dust sensor was placed in the room.
3. Air purifier 1-hour trials (levels 1, 3, 5): A different level was selected on the air purifier for each trial and the data were collected. Before each subsequent trial, the dust level was allowed to decrease, and the process was repeated.
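The logger itself is not reproduced in the paper, which states only that code read the sensor every second and logged an hourly average (see step 1 of the 24-hour methodology above). The Arduino (C++) sketch below is a hypothetical reconstruction under those assumptions, treating the dust sensor as a simple analog device on pin A0 and printing hourly averages over Serial; the pin choice and serial logging are assumptions, not details from the study.

// dust_logger.ino -- hypothetical reconstruction of the logger described
// in the methodology: sample the dust level once per second and print
// the average once per hour over Serial. The sensor on A0 is an assumption.
const uint8_t       SENSOR_PIN       = A0;
const unsigned long SAMPLE_MS        = 1000UL;   // one reading per second
const unsigned int  SAMPLES_PER_HOUR = 3600;

unsigned long lastSample = 0;
unsigned long sum        = 0;
unsigned int  count      = 0;

void setup() {
    Serial.begin(9600);
}

void loop() {
    // Drift-free scheduling; unsigned math handles millis() rollover.
    if (millis() - lastSample >= SAMPLE_MS) {
        lastSample += SAMPLE_MS;
        sum += analogRead(SENSOR_PIN);            // raw 0-1023 reading
        if (++count == SAMPLES_PER_HOUR) {
            Serial.println(sum / (float)count);   // log the hourly average
            sum = 0;
            count = 0;
        }
    }
}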



Results

[24 hours experiment]

Figure 1. Concentration of dust level in a closed environment over 24 hours with the inclusion of air purifier, charcoal, or plant.

Hour   Air Purifier   Carbon (Charcoal)   Plant
0      188.55         210.04              210.04
1      38             48                  52
2      39             44                  50
3      38             46                  51
4      39             45                  51
5      42             44                  51
6      40             41                  51
7      39             39                  50
8      43             41                  50
9      41             47                  50
10     41             51                  49
11     42             48                  50
12     41             47                  50
13     41             46                  50
14     40             45                  52
15     41             45                  49
16     41             45                  50
17     42             45                  50
18     41             47                  50
19     39             47                  50
20     43             45                  49
21     44             45                  50
22     44             44                  50
23     42             47                  52
24     42             46                  51

Graph 1. Concentration of dust level in a closed environment with the inclusion of air purifier, charcoal, or plant, measured every hour over 24 hours.

In the 24-hour experiment, the efficiencies of the air purifier, charcoal, and plant were compared. The initial dust levels were 188.55 for the air purifier trial, 210.04 for the charcoal trial, and 210.04 for the plant trial. The air purifier reduced the dust level by 79.80% within the first hour and by 77.72% over the full 24 hours. Charcoal reduced the dust level by 77.10% within the first hour and by 78.09% by the end of the experiment. The plant reduced the dust level by 75.20% within the first hour and by 75.89% overall. Following the greatest decline within the first hour, all three variables maintained a similar level throughout the experiment. Within the first hour, the order of greatest decrease was air purifier, charcoal, then plant; over the entire 24-hour period, however, the order of effectiveness was charcoal, air purifier, then plant.
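The percentage figures quoted above follow directly from the table values. As a quick illustrative check (not part of the original analysis), the C++ sketch below applies the percent-reduction formula to the air purifier's readings from Figure 1.

// reduction_check.cpp -- verifies the percent-reduction arithmetic used
// in the results, using the air purifier's readings from Figure 1.
#include <cstdio>

double percentReduction(double initial, double current) {
    return (1.0 - current / initial) * 100.0;
}

int main() {
    // Hour-0 vs hour-1 and hour-24 values for the air purifier trial.
    std::printf("air purifier, first hour: %.2f%%\n",
                percentReduction(188.55, 38));   // ~79.8%
    std::printf("air purifier, 24 hours:   %.2f%%\n",
                percentReduction(188.55, 42));   // ~77.7%
    return 0;
}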

Figure 2. Concentration of dust level in a closed environment over 1 hour with the inclusion of air purifier, charcoal, plant, or the negative control.

Minute   Air Purifier   Charcoal   Plant    Negative Control
0        210            253.01     188.55   199.3
5        59             112        80       62
10       57             109        55       54
15       55             108        49       52
20       56             104        55       62
25       57             106        57       52
30       55             101        52       58
35       55             107        57       57
40       51             105        51       59
45       53             105        57       57
50       61             106        54       54
55       54             106        56       59
60       54             103        56       53

Graph 2. Concentration of dust level in a closed environment with the inclusion of air purifier, charcoal, plant, or the negative control, measured every five minutes for an hour.

Because the 24-hour experiment showed no significant change in dust levels after the first hour, this one-hour experiment was conducted to observe that first hour in more detail. The initial values were 210 for the air purifier, 253.01 for the charcoal, 188.55 for the plant, and 199.3 for the negative control. The air purifier reduced the dust level by 71.90% within the first five minutes and by 74.29% by the end of the experiment. Charcoal reduced the dust level by 55.73% within the first five minutes and by 59.29% by the end. The plant reduced the dust level by 57.57% within the first five minutes and by 70.30% by the end. The negative control, with no variables present, reduced the dust level by 68.89% within the first five minutes and by 73.40% by the end. Within the first five minutes, the order of greatest decrease was air purifier, control, plant, then charcoal, and the order was the same over the full hour. Notably, the empty controlled environment reduced dust more efficiently than either natural resource.



[ Different Air Purifier level experiment ]

Figure 3. Concentration of dust level in a closed environment for 1 hour with the same air purifier set to levels 1, 3, and 5.

Minute   Level 5   Level 3   Level 1
0        70.39     48.91     48.91
5        98        99        103
10       100       104       104
15       98        100       101
20       105       98        98
25       102       96        98
30       93        99        105
35       99        97        98
40       102       97        96
45       100       101       98
50       102       94        103
55       104       96        100
60       101       98        98

Graph 3. Concentration of dust level in a closed environment at air purifier levels 1, 3, and 5, measured every five minutes for an hour.



Since the two previous experiments showed that the air purifier was the most efficient way to reduce dust levels, the third experiment examined the difference between air purifier settings. The initial values for each level, from lowest setting to highest, were 48.91, 48.91, and 70.39. Level 1 increased the dust level by 110.6% during the first five minutes and by 100.4% over the entire trial. Level 3 increased the dust level by 102.4% during the first five minutes and by 100.4% over the entire trial. Level 5 increased the dust level by 39.22% during the first five minutes and by 43.49% over the entire trial. All three trials led to an increase in dust levels, and the purifier setting made only a slight difference to the outcome.

Conclusion
The hypothesis that natural resources can be an eco-friendly alternative to air purifiers cannot be proven by this experiment alone. The results of experiments one and two showed that the natural resources made progress in reducing dust levels, but not as efficiently as the air purifier. Experiment 1 showed that the air purifier was the quickest way to reduce dust levels; both charcoal and the plant reduced dust levels, but not as quickly. The air purifier was again the most effective in experiment 2, followed by the plant and then charcoal. Natural resources do not produce as immediate a change as air purifiers, but they do reduce dust levels. The hypothesis that a higher air purifier setting yields a greater dust reduction was contradicted, since the setting was not the deciding factor for the dust level: turning on the air purifier did not improve air quality but made it worse, and the dust level increased in all three trials regardless of the filtering level. In sum, the air purifier was the most efficient means of purifying air in experiments one and two, and its setting was not a pivotal factor in improving air quality. None of the three experiments supports a clear conclusion, and further research is needed into the settings and efficiency of air-cleaning resources.

Discussion
It is important to consider that other factors may have contributed to the results. None of the three experiments was conducted in a perfectly controlled environment, so errors are likely. The type of plant and charcoal used certainly affects the results: an estimated 100 to 1000 plants per 10 square feet would be needed to purify air (Cummings and Waring), and activated charcoal removes unpleasant odors from the air, with the size of the charcoal blocks determining its efficiency (The Magic of Activated Carbon in Air Purifiers). This evidence implies that the dust reduction observed for the plant and charcoal may have occurred through natural settlement. The negative control experiment also supports the idea that dust reduction in this study may have been caused by dust settling naturally over time rather than by the tested variables: dust of different sizes was created and introduced by force, and therefore settled naturally. The result of experiment 3, in which the air purifier appeared to worsen air quality, was likely caused by resuspension of dust particles in the air circulated by the machine. In an air purifier, air is drawn in, passed through an internal filter, and recirculated (How Does an Air Purifier Work?); this suggests that the air purifier's filter may have become clogged as the experiment proceeded, or was simply not efficient enough. Furthermore, the inexpensive sensor had difficulty measuring dust during the experiments and repeatedly returned error messages. Humidity and weather might also have affected the dust level even with the window closed: in humid air, dust particles tend to stick together and cannot travel, and dust mites cannot move when the humidity is 40 to 60% (Qualls). Moreover, the size and layout of the environment may have affected the results, as the experiment was held in a small room containing many different objects. Experts suggest using an air cleaner with a Clean Air Delivery Rate that covers at least two-thirds of the room's square footage: if the environment is too small, dust merely recirculates within the cleaner's active boundary, and if it is too big, the air cleaner cannot cover the entire space.



Thus, the size of the environment affects the efficiency of the air purifier (Guide to Air Cleaners in the Home). To eliminate extraneous factors in further experiments, the test should be repeated multiple times to ensure the collected data are not random, and the air purifier's filter should be cleaned after every trial.

Further Research
Office administrators and homeowners can combine different kinds of purifiers, using an electrical air purifier and natural resources at the same time, to maintain sustainable air quality in their living environments. This can help reduce reliance on electricity and greenhouse gas emissions (US EPA, Local Energy Efficiency Benefits and Opportunities). Improving air quality indoors and outdoors is crucial for people's health. The experimental results provide valuable information about the effectiveness of natural resources and about the impact of air purifier settings on dust levels. Although the efficiency of natural resources was not conclusively established in this report, future studies should continue to seek alternative methods of air purification. With continued innovation in this area, we may see a healthier environment in the near future, with benefits for society as a whole.

References
Maertens, Rebecca M., et al. "The Mutagenic Hazards of Settled House Dust: A Review." Mutation Research/Reviews in Mutation Research, vol. 567, no. 2, Nov. 2004, pp. 401-25. ScienceDirect, https://doi.org/10.1016/j.mrrev.2004.08.004.
Simon, Dr. David. Dust and Your Health.
US EPA, OAR. Local Energy Efficiency Benefits and Opportunities. 5 July 2017, https://www.epa.gov/statelocalenergy/local-energy-efficiency-benefits-and-opportunities.
American Lung Association. 2024 'State of the Air' Report Reveals Most 'Hazardous' Air Quality Days in 25 Years. https://www.lung.org/media/press-releases/sota-2024. Accessed 18 July 2024.
Dust. https://education.nationalgeographic.org/resource/dust. Accessed 18 July 2024.
Helen of Troy Limited. What Does 40 Lbs. of Household Dust Look Like? https://www.prnewswire.com/news-releases/what-does-40-lbs-of-household-dust-look-like-300340523.html. Accessed 18 July 2024.
Cummings, Bryan E., and Michael S. Waring. "Potted Plants Do Not Improve Indoor Air Quality: A Review and Analysis of Reported VOC Removal Efficiencies." Journal of Exposure Science & Environmental Epidemiology, vol. 30, no. 2, Mar. 2020, pp. 253-61. www.nature.com, https://doi.org/10.1038/s41370-019-0175-9.
The Magic of Activated Carbon in Air Purifiers: How They Work and Their Benefits. https://www.blueair.com/gb/blog-all/activated_carbon_filters.html. Accessed 19 July 2024.
How Does an Air Purifier Work? https://www.filtrete.com/wps/portal/en_US/3M/filtrete/hometips/full-story/~/why-you-should-purify-your-air/?storyid=eb519632-1671-4947-8ee6-af8daa7cb00b. Accessed 23 July 2024.
Qualls, Mare. "The Surprising Relationship Between Dust and Indoor Humidity." IAQ.Works, 28 June 2021, https://iaq.works/humidity/the-surprising-relationship-between-dust-and-indoor-humidity/.
Guide to Air Cleaners in the Home.
US EPA. Particulate Matter (PM) Basics. 19 Apr. 2016, https://www.epa.gov/pm-pollution/particulate-matter-pm-basics.



The Differentiation of Combined-Cycle Engines for Efficient Hypersonic Flight: The Rocket-Based and the Turbine-Based Combined Cycle Engines

Author Full Name: Park, So Young (Last Name, First Name)
School Name: Faith Bible Christian High School

Abstract
To reach the unlimited frontier of the universe, humans have developed remarkable propulsion systems, from the missile launchers and fire arrows of the 13th century to modern jet and rocket engines with propulsion systems such as the RBCC and TBCC combined-cycle engines [1]. Despite the challenges, humans have continued to push performance beyond the Kármán line, the boundary that separates Earth's atmosphere from outer space [2]. Even though humans have explored the moon, built the International Space Station (ISS), and sent several highly developed telescopes to reveal the undiscovered secrets of the universe, humanity has yet to meet its high expectations for hypersonic flight, which would both improve the conditions for exploring the universe and extend the limits of its knowledge and capabilities. This paper presents an analytical comparison of Rocket-Based Combined Cycle (RBCC) and Turbine-Based Combined Cycle (TBCC) engines. By differentiating the two combined-cycle engines, the study clarifies their respective efficacies in design and performance and their future potential in hypersonic flight, including space access. RBCC and TBCC engines are among the most advanced engine concepts, but so far no full-scale RBCC or TBCC engine has been successfully launched into space or fully developed, owing to their experimental status and extreme costs [3]. This study aims to identify the major differences and the limits of each type of engine, elucidating details for future aerospace development.

Keywords Combined-Cycle Engines, RBCC, TBCC.



INTRODUCTION
Engineers have long recognized the potential of RBCC and TBCC engines for their high performance and their promise of making hypersonic flight a reality. By combining propulsion systems that operate at different speeds, both RBCC and TBCC engines achieve far greater speed ranges than single-cycle engines, which offer lower speed capability and limited range. The TBCC engine has significant potential for hypersonic flight, particularly in Air Force and naval aviation; the RBCC engine, by contrast, has not been tested and developed as extensively, and more research on it is needed. The precise requirements of rocket engines impose various limitations on their development. Although these engines could in theory allow future generations to travel at high Mach numbers, that remains a dream yet to be accomplished. Despite the steady advancement of both the RBCC and TBCC engines, a merely superficial comparison does not provide enough context to fully assess their adequacy for different hypersonic flight applications, given the complexity of manufacturing a combined-cycle engine and technological challenges such as materials, engine design, and the selection of efficient fuel types. Hence, this study aims to compare RBCC and TBCC engines comprehensively, considering their relative strengths and their potential for advancing hypersonic flight technology. To achieve this aim, the following objectives guide the research. First, the study provides a detailed comparison of the RBCC and TBCC engines to inform potential designs for atmospheric and space flight, which requires a thorough overview of the existing literature on both engines. Second, the study works to deepen the understanding of foundational concepts such as fuel efficiency, thermodynamics, and thrust force by analyzing the performance characteristics of both RBCC and TBCC engines; literature reviews of each engine, as well as performance reviews of vehicles employing them, are considered. These objectives give scientists a clear view of the opportunities and challenges of each combined-cycle engine, supporting a more knowledgeable and holistic approach to the enhancement of advanced propulsion systems for future hypersonic applications.

THESIS
Comparing the design, performance, and future potential of the Rocket-Based Combined Cycle (RBCC) and Turbine-Based Combined Cycle (TBCC) engines provides a clear view of each engine's benefits, supporting more flexible and cost-effective hypersonic flight and space access.

LITERATURE REVIEW
COMBINED-CYCLE ENGINES
The concept of combined-cycle engines began developing in the 1950s, one example being the operation of the J58 TBCC engine on the first flight test of the SR-71 [4]. After aerospace engineers and scientists concluded that combined-cycle engines were more efficient than pure rocket engines, the National Aeronautics and Space Administration (NASA) began to support Marquardt, an aircraft company that developed and rigorously analyzed 36 different combined-cycle engines, most of them RBCC engines [4]. The early RBCC engine concept was a subscale model of a Marquardt ejector ramjet, composed of an inlet, primary rocket section, mixer, diffuser, afterburner, and exit nozzle. A combined-cycle engine functions differently from a normal rocket engine, with its propulsion operating differently from low Mach numbers to high Mach numbers. While combined-cycle engines are known as the most efficient engines for hypersonic flight, completing such a flight requires the most precise, accurate, and detailed performance metrics, which include but are not limited to engine impulse, thrust-to-weight ratio, thermodynamics



(mainly heat management and transition efficiency), and fuel efficiency. All these performance metrics are crucial to successfully operating a combined-cycle engine.

RBCC
A rocket-based combined cycle (RBCC) engine combines a ramjet, a scramjet, and a rocket engine, bridging atmospheric flight and spaceflight. This engine allows aircraft to reach hypersonic speeds by switching between several propulsion modes, making it more manageable to reach the Kármán line, the conventional boundary between the atmosphere and outer space [2]. The RBCC engine's propulsion system operates differently across ranges of Mach numbers: the ejector mode runs from takeoff to Mach 3, the ramjet mode from Mach 3 to Mach 7, the scramjet mode from Mach 7 to Mach 10, and finally the rocket mode from Mach 10 into spaceflight. RBCC engines use a single flow path throughout the entire climb to space in a single-stage-to-orbit (SSTO) design [5]. This simplifies engine design, because a single flow path does not require stages of the engine to be separated during flight.
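To make the mode schedule concrete, here is a minimal illustrative sketch in Python; the function name and the sharp thresholds are our own simplification of the ranges quoted above (real engines transition gradually between modes).

```python
def rbcc_mode(mach: float) -> str:
    """Return the RBCC propulsion mode active at a given Mach number.

    Thresholds follow the ranges quoted in the text; actual engines
    blend modes over a transition band rather than switching sharply.
    """
    if mach < 3.0:
        return "ejector"   # rocket-augmented airbreathing thrust from takeoff
    if mach < 7.0:
        return "ramjet"    # ram compression, subsonic combustion
    if mach < 10.0:
        return "scramjet"  # supersonic combustion
    return "rocket"        # onboard oxidizer; works outside the atmosphere

# Example: mode schedule along a notional ascent profile
for m in (0.5, 2.0, 5.0, 8.0, 12.0):
    print(f"Mach {m:>4}: {rbcc_mode(m)}")
```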

TBCC
A turbine-based combined cycle (TBCC) engine also includes ramjet and scramjet propulsion systems, similar to the RBCC engine. However, instead of the ejector mode, TBCC engines include a turbine engine, also known as a turbojet or turbofan, that operates from takeoff until subsonic to low supersonic speed is reached. While the RBCC engine supports both atmospheric flight and spaceflight thanks to its rocket propulsion system, the TBCC engine is more versatile for atmospheric flight. Lockheed Martin's SR-71 Blackbird and NASA's X-43A are examples associated with TBCC and scramjet propulsion [6]. Even though those hypersonic vehicles did not fully adopt the TBCC engine, their use of TBCC-related elements demonstrates the promised performance of TBCC engines in atmospheric flight.

METHODOLOGY
To reach the goal of this research, information was gathered from original data and from primary and secondary sources, including scientific articles and other research papers, mainly from NASA and ScienceDirect. To compare the combined-cycle engines, direct documents on experiments, research, and flight-test records from NASA played a crucial role in gathering data. The main models used to differentiate the RBCC and TBCC engines were NASA's GTX project, which applied the RBCC engine structure, and Pratt and Whitney's J58 TBCC engine from the SR-71 Blackbird.

COMPARING THE DESIGN OF THE RBCC AND TBCC ENGINE
The RBCC engine has four propulsion modes, and the TBCC engine has three. Because of their different propulsion-mode designs, each engine fits a different aviation environment: the RBCC engine is versatile for space flight, while the TBCC engine is suited to atmospheric flight. Consequently, RBCC engines are mainly intended for space launch vehicles, and TBCC engines are mostly adaptable to aviation jets. In space flight, atmospheric oxygen is no longer available to the engine, so an RBCC engine must carry an oxidizer, which increases its weight and reduces its fuel efficiency. Because RBCC engines target a single-stage-to-orbit (SSTO) launch, in which no components of the rocket are dropped on the way to orbit, this is a challenge: the vehicle cannot shed weight by dropping components to improve its thrust-to-weight ratio during ascent [7]. The heavier the rocket, the more thrust the engine must generate, and a rocket carrying a heavy oxidizer therefore consumes fuel faster than one without.
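To make the weight penalty concrete, consider the textbook Tsiolkovsky rocket equation (not part of the cited sources, but standard background):

$$\Delta v = v_e \ln\frac{m_0}{m_f}$$

where $\Delta v$ is the achievable change in velocity, $v_e$ the effective exhaust velocity, $m_0$ the initial mass (structure, payload, fuel, and oxidizer), and $m_f$ the final mass. Carrying an oxidizer raises $m_0$ without raising $v_e$, so an SSTO RBCC vehicle, which can never shed that mass by staging, must compensate with more propellant or lighter structures to achieve the same $\Delta v$.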



On the other hand, since TBCC engines are designed for atmospheric flight, they avoid the design complexity and weight of carrying an oxidizer: because oxygen is already present in the environment where TBCC-based jets fly, TBCC engines do not have to carry one. Additionally, when the turbine mode is active, a TBCC engine gains a significant advantage in speed and fuel efficiency, because the turbine mode is efficient at subsonic and low supersonic speeds [8]. It is therefore difficult to conclude which engine is more efficient or beneficial across both atmospheric and space flight, since each design is adapted to, and more efficient in, a different environment. However, the appropriate engine for each situation is clear once the specific environment is considered.

PERFORMANCE DIFFERENCE BETWEEN THE RBCC AND TBCC ENGINE
FUEL
Even though no full-scale RBCC-powered rocket has been successfully launched, NASA's GTX project provides a good example of experimenting with RBCC engines for SSTO vehicles with the potential to operate above Mach 10. In this project, the main fuel was hydrogen, burned with oxygen at an oxidizer-to-fuel ratio of 4 and a chamber pressure of 0.7 MPa. Although the project was eventually canceled, ground tests validated both inlet and fueled-engine performance at Mach 7 and Mach 10 conditions. While hydrogen is harder to handle than hydrocarbon fuel, these results demonstrate hydrogen's potential to support the Mach 7 to Mach 10 speed range [9]. In contrast to the limited data on RBCC engines, one of the best-known uses of the TBCC engine is the SR-71 Blackbird. The Blackbird was powered by two Pratt and Whitney J58 axial-flow turbojets with afterburners. Carrying 80,000 pounds (12,000 gallons) of fuel, the Blackbird was able to reach Mach 3.2 [6]. The fuel used was Jet Propellant-7 (JP-7), a kerosene-based fuel containing various hydrocarbons with a reduced aromatic content, which makes the fuel cleaner; with fewer aromatics, there were fewer impurities such as sulfur, nitrogen, and oxygen. JP-7 was designed for high thermal oxidative stability [10]. This precise fuel development let the Blackbird fly successfully while demonstrating adequate performance. At an altitude of 85,000 feet, the Blackbird operated successfully and produced 32,500 pounds of thrust per engine (each engine's thrust is discussed in detail later) while consuming 8,000 gallons of fuel per hour, roughly an hour and a half of flight on internal fuel [6] [11].

THERMODYNAMICS
Closely related to the fuels injected into each engine, thermodynamics is an inevitable challenge and one of the most critical considerations for combined-cycle engines. Intense speed and pressure drive the aircraft itself to extremely high temperatures. Controlling a high-temperature engine does not always mean keeping it cool; sometimes, when specific propulsion modes are active, a certain temperature is required to create the thermal throat needed for the ejector ramjet in NASA's GTX project. In simulations for this project, a temperature of around 2500 K was found at the flame front, while around 3200 K was found where the rocket plume impinged on the flat plate; this was still not high enough to form the thermal throat expected for ejector ramjet operation. Because the temperature was insufficient, the airflow and combustion may not have been properly regulated, which is crucial to the successful functioning of a ramjet or scramjet [12]. The TBCC-based SR-71, by contrast, successfully managed its engine temperatures. Unlike NASA's GTX project, whose engine performance was evaluated only in simulation, the SR-71's many flights in service prove that its temperatures were managed successfully. When the SR-71 flew at an altitude of 55,000 ft (16.764 kilometers), the temperature of air entering the combustor reached 1,400 °F (760 °C), the



turbine inlet temperature reached 2,000 °F (1,093 °C), and the afterburner section temperature reached 3,200 °F (1,760 °C) [11]. The main reasons the SR-71 could operate while managing such intense temperatures were its thermal protection materials, high-speed and high-temperature instrumentation, a special high-emissivity black paint for the cockpit thermal environment, and several cooling systems for both the aircraft and its engines. Even the cockpit paint was considered, because every single element of the aircraft affects factors that must be managed to avoid failures during flight [6] [13]. Clearly, the TBCC-based SR-71 has more structured thermal control systems, proven over successful flights, compared with NASA's GTX project, which failed to produce a thermal throat. Because the RBCC engine is built for both atmospheric and space flight, it must endure more challenging flight environments than TBCC engines. Owing to the lack of oxygen in outer space, the RBCC engine must also be able to produce thrust without air intakes.

THRUST FORCE
Because combined-cycle engines are known for their high speed capability and performance, generating intense thrust must be deeply developed and experimented with. To compare the thrust of each combined-cycle engine, this paper again uses NASA's GTX project and the SR-71 aircraft. Both produced extreme thrust to reach supersonic speeds, especially the SR-71, with a maximum speed of Mach 3.3. While the SR-71 was in service in the late twentieth century, NASA's GTX project simulated the RBCC engine in the independent ramjet stream cycle, initially to provide propulsion below Mach 3. The project encountered various difficulties, so there is no clear evidence or data for its thrust. However, there were considerable simulation efforts to enhance and optimize the engine's performance under specific conditions. While simulating the engine, the team initially held the amount of air flowing into the engine constant, which created problems such as choke points at the physical throat of the engine by restricting airflow midway along the combustor and at the exit. Additionally, several simulations could not reach a stable solution because the constant mass flow did not match the engine's dynamic behavior under varying conditions. To fix these problems, the team held the pressure at the engine's inlet constant instead of the mass flow. This allowed the engine to adjust airflow based on the amount of heat being produced, letting the simulation reflect how the engine should operate and yielding stable results. On the other hand, during its supersonic flights, the SR-71 could fly up to 2,500 miles at Mach 3.2 without refueling, burning its fuel at roughly 8,000 gallons per hour as noted above [14]. Each engine produces 32,500 pounds of thrust, and the aircraft's thrust-to-weight ratio is 0.44 [6] [15]. In practical terms, for every unit of weight, the engines provide less than half a unit of thrust. To improve this ratio and increase the SR-71's overall thrust, engineers would need to improve engine performance or reduce the aircraft's weight, the two main factors that directly and indirectly affect thrust. Ninety-three percent of the SR-71's structure, including the skin, is manufactured from titanium [11]. Titanium is lightweight and strong, which makes it suitable for supersonic and hypersonic aircraft that must minimize weight, such as the SR-71 [16]. Even so, reducing the aircraft's weight further will require more research and experiments to find or create lighter materials. Again, comparing the RBCC and TBCC engines evenly is challenging due to the limits of available information and data. However, NASA's GTX project provides a promising beginning for developing RBCC engines in real-world applications. For both RBCC and TBCC engines, the air intake plays a crucial role in operation and thrust production. In NASA's GTX project, the team had to work on controlling the amount of airflow into the engine to prevent choke points.



For TBCC engines, when the scramjet mode is activated, there is a concern that shock waves, strong pressure waves driven by the high temperature and pressure, may form and affect the air intake system. Therefore, when manufacturing both RBCC and TBCC engines, the air intake system must be designed to endure these harsh environments.
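As a sanity check on the thrust figures quoted above, our own back-of-the-envelope arithmetic (assuming the 0.44 ratio refers to the combined thrust of both engines at gross takeoff weight) gives

$$\frac{T}{W} = \frac{2 \times 32{,}500\ \text{lbf}}{W} = 0.44 \quad\Longrightarrow\quad W \approx \frac{65{,}000\ \text{lbf}}{0.44} \approx 148{,}000\ \text{lb},$$

which is consistent with the SR-71's commonly cited gross takeoff weight of roughly 150,000 lb.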

FUTURE POTENTIALS FOR BOTH RBCC AND TBCC ENGINES IN ATMOSPHERIC AND SPACE FLIGHTS
When full-scale RBCC and TBCC engines are successfully developed, they will provide numerous advantages in aviation through routine access to hypersonic and space flight. RBCC engines technically have a higher speed capacity, possibly reaching Mach 10, along with the ability to fly in space. As discussed, because RBCC engines are built for space launch and TBCC engines for atmospheric flight, the future potential of each combined-cycle engine depends on where it will operate. Still, various steps remain before RBCC engines can be built and operated at their fullest. RBCC-powered vehicles in particular will need testing in harsher environments because of space flight, and the technological difficulty of reducing vehicle weight will be the most significant challenge. However, finding fuels that provide better flight efficiency, together with lighter cooling systems and oxidizers, would significantly increase the potential to commercialize this technology.

RESULTS & DISCUSSIONS
Overall, both the RBCC and TBCC engines presented their own advantages and disadvantages. As of now, from fuel efficiency to thermal control of the engine and aircraft to thrust production, the TBCC engine offers greater potential for bringing supersonic and hypersonic flight into reality than the RBCC engine, which has so far been tested only in computer simulations rather than actual flight. However, these computer-based simulations do not mean that building RBCC engines for real-world applications is impossible. Working through various successes and failures in simulation can establish a firm foundation for progress in constructing RBCC engines. For hypersonic flight, TBCC engines are currently better suited than RBCC engines. Beyond this, TBCC engines are more likely to be reusable, like the SR-71, which remained in service for many years. Because TBCC engines are mostly built to take off and land horizontally, they are less complex to operate for pilots and, in the future, possibly for artificial intelligence. A horizontal takeoff also places weaker demands on the engine with respect to gravity than a vertical launch. Developing full-scale RBCC engines is certainly important, but enhancing TBCC performance will generate ideas and insights that carry over to RBCC development. This may be the most efficient way to develop both engines simultaneously, because TBCC engines are only a few steps removed from RBCC engines. One limitation, and ironically the main practical consideration, of this research is the limited information and data on successful launches of full-scale RBCC and TBCC engines. This makes a precise and definitive comparison of the two combined-cycle engines difficult. While combined-cycle engines are among the most promising engines for hypersonic flight, it will be difficult to develop a full-scale combined-cycle engine without many more experiments and trials. However, analyzing past experiments and subscale engine tests of both RBCC and TBCC engines provides a clearer vision of the benefits and drawbacks on the path to an accomplished combined-cycle engine. For RBCC engines especially, no design has yet been finalized, owing to the complexity of the structure and the precise launch requirements to consider, such as the overall height and weight of the vehicle, the size of the ejector section, and the design of the expansion nozzle [17].



Completely solving the design and efficiency challenges of each model will remain difficult, but careful comparison of the two engines will eventually provide a path for engineers to bring full-scale RBCC and TBCC engines into reality.

CONCLUSION
While both RBCC and TBCC engines are practical in the environments they are suited for, an efficient way to avoid the enormous expense of developing each engine separately would be to create a single engine combining features of the RBCC and TBCC designs. Takeoff and landing would be horizontal, so no additional equipment would be required to support the aircraft. In combining the two combined-cycle engines, takeoff to low supersonic speed would be handled by the turbine mode adapted from the TBCC engine. Since both RBCC and TBCC engines include ramjet and scramjet propulsion, the new combined engine would likewise include ramjet and scramjet modes. Lastly, to reach the goal of hypersonic and space flight, the rocket engine from the RBCC design would be needed. If possible, therefore, it would be effective to incorporate an additional propulsion mode into the TBCC engine to increase its speed capability. Technological limits and challenges are certainly inevitable given the precise requirements, especially when transitioning between modes. Still, humans consistently seek solutions to problems that may have no evident answers, and this process will play a crucial role in overcoming difficulties on the way to goals that once seemed impossible.

REFERENCES
[1] "The New Korea." Google Books, books.google.co.kr/books?id=dxW3MKxCWDwC&q=Hwacha&pg=PA149&redir_esc=y. Accessed 9 Sept. 2024.
[2] "Kármán Line: Origin, Facts & Height." Study.com, https://study.com/academy/lesson/karman-line-origin-facts-height-space.html. Accessed 9 Sept. 2024.
[3] Zhang, Tiantian, Wang, Zhen-guo, Huang, Wei, Chen, Jian, and Sun, Ming-Bo. "The Overall Layout of Rocket-Based Combined-Cycle Engines: A Review." Journal of Zhejiang University-SCIENCE A, vol. 20, 2019, pp. 163-183, doi:10.1631/jzus.A1800684. Accessed 9 Sept. 2024.
[4] Clark, Casie, and NASA Dryden Flight Research Center. The History and Promise of Combined Cycle Engines for Access to Space Applications. 2010, ntrs.nasa.gov/api/citations/20110003575/downloads/20110003575.pdf. Accessed 9 Sept. 2024.
[5] Huang, Wei, et al. "Survey on the Mode Transition Technique in Combined Cycle Propulsion Systems." Aerospace Science and Technology, vol. 39, Dec. 2014, pp. 685-91, doi:10.1016/j.ast.2014.07.006. Accessed 9 Sept. 2024.
[6] NASA. "SR-71 Blackbird." NASA Facts, 22 Dec. 1997, www.nasa.gov/wp-content/uploads/2021/09/495839main_fs-030_sr-71.pdf. Accessed 9 Sept. 2024.
[7] Munoz, Andrew. Design of a Rocket-Based Combined Cycle Engine. Edited by Periklis Papadopoulos, 2011, www.sjsu.edu/ae/docs/project-thesis/Munoz.S11.pdf. Accessed 9 Sept. 2024.
[8] Russell, Jared, and Professor Kantha. Turbine-Based Combined Cycle Propulsion. 2009, www.colorado.edu/faculty/kantha/sites/default/files/attached-files/russell.pdf. Accessed 9 Sept. 2024.
[9] Thomas, Scott R., et al. "Performance Evaluation of the NASA GTX RBCC Flowpath." Fifteenth International Symposium on Airbreathing Engines, 2001, ntrs.nasa.gov/api/citations/20010092480/downloads/20010092480.pdf. Accessed 9 Sept. 2024.



[10] Mola, Roger. "What a Blackbird Drinks." Smithsonian Magazine, 21 Nov. 2014, www.smithsonianmag.com/air-space-magazine/what-blackbird-drinks-180953422. Accessed 9 Sept. 2024.
[11] "SR-71 Propulsion." Aircraft Engine Historical Society, 2013, www.enginehistory.org/Convention/2013/SR-71Propul/SR-71Propul.shtml. Accessed 9 Sept. 2024.
[12] Edwards, Jack R., et al. "Three Dimensional Numerical Simulation of Rocket-Based Combined-Cycle Engine Response During Mode Transition Events." NASA/CR, Mar. 2003, ntrs.nasa.gov/api/citations/20030022772/downloads/20030022772.pdf. Accessed 9 Sept. 2024.
[13] Johnson, Kelly, et al. SR-71 Environmental Control System Development. www.enginehistory.org/members/Convention/2005/Presentations/LawPete/SR-71Overview2.pdf. Accessed 9 Sept. 2024.
[14] "SR-71 Blackbird." SR-71 Online, www.sr-71.org/blackbird/sr-71/. Accessed 9 Sept. 2024.
[15] "Lockheed SR-71 Blackbird." Infinite Flight Community, 16 Nov. 2017, community.infiniteflight.com/t/lockheed-sr-71-blackbird/164180. Accessed 9 Sept. 2024.
[16] Britannica, The Editors of Encyclopaedia. "Titanium." Encyclopedia Britannica, 12 Aug. 2024, https://www.britannica.com/science/titanium. Accessed 9 Sept. 2024.
[17] Shen, Hu. "The Overall Layout of Rocket-Based Combined-Cycle Engines: A Review." ResearchGate, 2019, www.researchgate.net/publication/332421717_The_overall_layout_of_rocket-based_combinedcycle_engines_a_reviewhuojianjizuhexunhuanfadongjizongtibujuyanjiujinzhan. Accessed 9 Sept. 2024.



Temperature as an indicator for the extent of DENV transmission

Author
Full Name (Last Name, First Name): Shin, Kayla Kyungwon
School Name: Chadwick International School

Introduction
Energy drinks are non-alcoholic beverages that contain a combination of ingredients, including caffeine, taurine, glucuronolactone, guarana, ginseng, vitamins, and Ginkgo biloba. These ingredients help the consumer reduce tiredness and improve performance and concentration (Al-Shaar et al, 2017), leading to greater demand for energy drinks. Since their introduction in 1987, energy drinks' presence in the market has grown exponentially (Statista, 2023). The greater demand is also a result of hundreds of new brands being launched every year (Oregon Consulting Group, 2022). Caffeine, because it is a psychoactive substance capable of producing dependence and tolerance, is the ingredient in energy drinks that has sparked the most debate (Addicott, 2014). Although its safety has been debated, it is known that adverse health effects can be triggered if an adult's caffeine consumption exceeds 400 mg per day (European Food Safety Authority, 2015), (Nawrot et al, 2003), (Wikoff et al, 2017). Studies also recommend that children and adolescents should not exceed 2.5 mg/kg of body weight per day (Seifert et al, 2011), (Nawrot et al, 2003), (Wikoff et al, 2017); for a 60 kg adolescent, for example, this works out to 150 mg per day, slightly less than two 250 ml cans of Red Bull. Exact safe amounts of caffeine consumption, however, have not been established (Temple et al, 2017). Furthermore, excessive caffeine consumption can lead to serious health issues such as cardiovascular disease. With the industrialization of societies, the incidence of cardiovascular disease has significantly increased, a trend closely tied to unhealthy diets and other lifestyle changes. Hence, to decrease the global burden of cardiovascular disease, it is important to determine the diet habits and supplements best suited to a healthy life (Khiali et al, 2023). This study focuses on the relationship between caffeine consumption and blood pressure. One possible mechanism of the cardiovascular effects of caffeine is the blocking of adenosine receptors. Adenosine is a neurotransmitter that plays a role in vasodilation; by blocking adenosine receptors, particularly the A1 and A2a receptors, caffeine causes blood vessels to constrict, leading to an increase in blood pressure (Nurminen et al, 1999). Extensive research has been done on the relationship between caffeine consumption and blood pressure, with varying results depending on factors such as age, frequency of intake, and individual health conditions. Köksal et al. conducted a cross-sectional study with 1,329 adults aged 20-60 and found a



positive association between daily caffeine intake and systolic blood pressure, though no significant association was found with diastolic blood pressure (Clarke, 2020). Geleijnse reviewed the epidemiological evidence and found that short-term (1-12 week) randomized controlled trials showed coffee intake of about 5 cups per day caused a small elevation in blood pressure, up to 2/1 mmHg, compared with abstinence or decaffeinated coffee (Geleijnse, 2008). Kujawska et al. examined a cohort of older adults over two years and found that participants who drank coffee every day had a significant mean increase in systolic blood pressure of 8.63 mmHg and in mean blood pressure of 5.55 mmHg compared with those who rarely or never drank coffee (Kujawska et al, 2021). Xu et al. conducted a meta-analysis of randomized controlled trials and found that more than one week of caffeine consumption elevated blood pressure by 2.62/2.66 mmHg, particularly in younger populations (Xu et al, 2021). Although previous research has shown that caffeine can elevate blood pressure, it remains unclear how this effect varies across age groups. Thus, this study aims to investigate whether the impact of caffeine on blood pressure differs significantly between individuals aged 30-45, 46-59, and 60+. Participants in this study consumed one can of Red Bull, an energy drink that contains 80 mg of caffeine per 250 ml can, about the same amount as a cup of home-brewed coffee (Red Bull Product Q&A). The null hypothesis for this study is that there is no significant difference in the blood pressure change after consuming caffeine among the three age groups. Understanding the differences among age groups is crucial for providing age-appropriate dietary advice and for managing potential risks associated with caffeine consumption.

Methods
Dataset
The dataset was sourced from Kaggle [17], a crowdsourced web platform and online community for data scientists and machine learning practitioners owned by Google [18]. The dataset was created by Omar Sobhy. The data were collected through face-to-face interviews with 120 Egyptian volunteers over a two-month period. Each volunteer was asked for their gender and age group (30-45, 46-59, 60+), and their blood pressure (mmHg) was recorded before and after drinking a can of Red Bull. The original dataset had 120 data points with all three age groups combined. We subdivided the dataset by age group, creating three processed data tables of 40 data points each. Because the original dataset contained only the before and after blood pressures, we added a new column to each processed table: the difference between the before and after blood pressure.
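A minimal sketch of this preprocessing step in Python with pandas; the file and column names ("redbull_bp.csv", "BP_Before", "BP_After", "Age_Group") are hypothetical stand-ins, since the actual Kaggle column names may differ:

```python
import pandas as pd

# Load the Kaggle dataset (file and column names assumed for illustration).
df = pd.read_csv("redbull_bp.csv")

# Add the derived column used throughout the analysis: the change in
# blood pressure after drinking one can of Red Bull.
df["BP_Difference"] = df["BP_After"] - df["BP_Before"]

# Subdivide the 120 rows into the three processed tables (40 rows each).
tables = {age: sub.reset_index(drop=True)
          for age, sub in df.groupby("Age_Group")}

for age, table in tables.items():
    print(f"{age}: {len(table)} participants")
```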

Data analysis
Descriptive statistics, including the mean and standard deviation, were calculated for each age group, along with the variance and p-value. Data analysis, including a one-way ANOVA, was conducted in Python using libraries such as pandas, numpy, statsmodels, and seaborn. All values are rounded to four significant figures. In addition, a scatterplot was created in Python to visualize the distribution of blood pressure differences across age groups and thus graphically represent the ANOVA.
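A sketch of the analysis step, assuming the same hypothetical file and column names as above; scipy's `f_oneway` is used here for brevity, although the same one-way ANOVA can be run through statsmodels as the paper describes:

```python
import pandas as pd
import seaborn as sns
from scipy.stats import f_oneway

df = pd.read_csv("redbull_bp.csv")                      # names assumed
df["BP_Difference"] = df["BP_After"] - df["BP_Before"]

# Descriptive statistics per age group (mean, standard deviation, variance).
desc = df.groupby("Age_Group")["BP_Difference"].agg(["count", "mean", "std"])
desc["var"] = desc["std"] ** 2
print(desc.round(4))

# One-way ANOVA across the three age groups.
samples = [g["BP_Difference"] for _, g in df.groupby("Age_Group")]
f_stat, p_value = f_oneway(*samples)
print(f"F = {f_stat:.4g}, p = {p_value:.4g}")           # expect F ≈ 1.434, p ≈ 0.24

# Scatter of individual differences by group, a graphical view of the ANOVA.
sns.stripplot(data=df, x="Age_Group", y="BP_Difference")
```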

Results
Figures 1, 2, and 3 show blood pressure before and after consuming Red Bull for each age group. The blue line indicates blood pressure before consumption, and the red line indicates blood pressure after consumption. The vertical axis is blood pressure in mmHg, and the horizontal axis is the participant ID (1 to 40). The figures show a clear increase in blood pressure post-consumption across all age groups.



Figure 1: Before and after blood pressure of participants aged 30-45

Figure 2: Before and after blood pressure of participants aged 46-59



Figure 3: Before and after blood pressure of participants aged 60+

The table below summarizes the descriptive statistics for the blood pressure differences across the age groups.

Age group | n-value | Mean (mmHg) | Standard deviation (mmHg) | Variance (mmHg²) | p-value
30-45     | 40      | 13.88       | 4.947                     | 24.47            | 2.854 × 10⁻²⁰
46-59     | 40      | 15.43       | 7.060                     | 49.84            | 1.305 × 10⁻¹⁶
60+       | 40      | 13.05       | 6.913                     | 47.79            | 1.347 × 10⁻¹⁴

Table 1: Descriptive Statistics of Blood Pressure Differences by Age Group

The p-value for each group is lower than 0.05, which indicates a significant relationship between caffeine consumption and an increase in blood pressure. The mean blood pressure difference after consuming a can of Red Bull was highest in the 46-59 age group, followed by the 30-45 group, and lowest in the 60+ group. The variances of 49.84 and 47.79 (mmHg)² in the 46-59 and 60+ age groups indicate that older participants experienced the greatest variability in blood pressure changes, while the 30-45 group had the least. This suggests that changes in blood pressure after drinking caffeine may be more consistent in younger age groups than in older ones. To statistically compare the groups, a one-way ANOVA was performed. The between mean squares (BMS) was 58.37 (mmHg)², the within mean squares (WMS) was 40.70 (mmHg)², and the F-statistic (F) was 1.434. Calculations are shown in Table 2, and Figure 4 graphs the F-distribution with the p-value.



As seen in the figure, the F-value is not large enough for the p-value, the area under the curve to the right of the F-value, to fall below 0.05. When calculated, the p-value was 0.2437. This is not small enough to reject the null hypothesis. Therefore, although we can conclude that caffeine has an effect on blood pressure, it is inconclusive whether the effect varies across age groups.

Between Mean Squares (mmHg²):

$$BMS = \frac{1}{K-1}\sum_{i=1}^{K} n_i\,(\bar{x}_i - \bar{X})^2$$

where $K$ is the number of groups, $n_i$ the number of observations in group $i$, $\bar{x}_i$ the mean of group $i$, and $\bar{X}$ the overall mean, 14.12.

$$BMS = \frac{1}{2}\left\{40(13.88-14.12)^2 + 40(15.43-14.12)^2 + 40(13.05-14.12)^2\right\} = 58.372 \approx 58.37$$

Within Mean Squares (mmHg²):

$$WMS = \frac{1}{\sum_{i=1}^{K}(n_i-1)}\sum_{i=1}^{K}(n_i-1)\,s_i^2$$

where $s_i^2$ is the variance of group $i$.

$$WMS = \frac{39 \cdot 24.47 + 39 \cdot 49.84 + 39 \cdot 47.79}{39+39+39} = 40.7013 \approx 40.70$$

F-statistic:

$$F = \frac{BMS}{WMS} = \frac{58.37}{40.70} = 1.43415 \approx 1.434$$

Table 2: F-statistic Calculation
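The hand calculation in Table 2 can also be reproduced directly from the summary statistics in Table 1; the short verification sketch below is our own addition, not part of the original analysis:

```python
from scipy.stats import f

n = [40, 40, 40]                       # group sizes
means = [13.88, 15.43, 13.05]          # mean BP differences (mmHg)
variances = [24.47, 49.84, 47.79]      # group variances (mmHg^2)

K = len(n)
grand_mean = sum(ni * m for ni, m in zip(n, means)) / sum(n)          # 14.12

bms = sum(ni * (m - grand_mean) ** 2 for ni, m in zip(n, means)) / (K - 1)
wms = sum((ni - 1) * v for ni, v in zip(n, variances)) / sum(ni - 1 for ni in n)

F = bms / wms
p = f.sf(F, K - 1, sum(n) - K)         # upper tail of the F(2, 117) distribution
print(f"BMS={bms:.2f}, WMS={wms:.2f}, F={F:.3f}, p={p:.4f}")
# Expected: BMS ≈ 58.37, WMS ≈ 40.70, F ≈ 1.434, p ≈ 0.24
```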



Figure 4: Graph of F-distribution with p-value

Additionally, Figure 5 shows that there is substantial variability in blood pressure within each group, but little variability between the three group means; therefore, we cannot say that the groups are different. However, all points are greater than or equal to zero, which demonstrates that although the effect may not differ between age groups, caffeine's effect on blood pressure still exists. Furthermore, group 2 shows a significant amount of variability. Group 2 was also the group with the greatest mean difference in blood pressure, 15.43 mmHg. This difference may reflect the variability and inconsistency in participants' results indicated by the high variance of 49.84 (mmHg)². Therefore, the large mean difference in blood pressure for group 2 is not significant and is better attributed to randomness.



Figure 5: Graphical representation of ANOVA (Group 1: age group 30-45, Group 2: age group 46-59, Group 3: age group 60+)

Discussion
Our study verified the relationship between caffeine intake and an elevation in blood pressure, but could not verify whether the effect differs across ages. There was no statistical evidence that the groups differ; however, it must be noted that this does not mean the difference does not exist.

Age-Related Differences in Caffeine Response
The pattern of blood pressure changes across age categories is not very consistent: the mean change increases from the 30-45 group to the 46-59 group, then decreases in the 60+ group. Thus, on current evidence, the relationship between age and the effect of caffeine on blood pressure is statistically insignificant; there is not enough evidence to say that the groups differ. This may be because the study used only 40 data points per group across three groups. If the study is repeated with a larger dataset and more groups, the relationship may be verified, as a larger dataset would reduce the impact of individual variability and thus yield more accurate results.



A study by the Institute of Medicine (US) Committee on Military Nutrition Research mentioned that older adults could be more sensitive to the pressor effect of caffeine. According to this study, after caffeine ingestion, blood pressure increased significantly above baseline for older men, while it remained statistically unchanged in younger men (Institute of Medicine (US) Committee on Military Nutrition Research, 1970). Our results support this finding, particularly for the 46-59 age group. The smaller mean change in the 60+ age group indicates, however, that the relationship between age and caffeine sensitivity is not perfectly linear.

Physiological Mechanisms for Different Age-Related Patterns
The large variability among data points may be attributed to differences in physiological mechanisms. Caffeine, as an adenosine receptor antagonist, increases sympathetic nervous system activity. Changes in adenosine receptor density and sensitivity may contribute to the varying responses observed here. Furthermore, age-related changes in caffeine metabolism might explain why the 60+ group had the lowest mean difference, 13.05 mmHg. A study reported by UCLA Health found that adults between the ages of 65 and 70 took 33% longer to metabolize caffeine than younger participants, demonstrating that older individuals metabolize caffeine more slowly. A blunting of the response to caffeine in older people because of this slower metabolism may account for why the mean increase in blood pressure is smaller in the oldest group (Ko et al, 2022).

Comparison with Previous Studies
Our results coincide in part with previous research finding that the pressor impact of caffeine is greater in older individuals. However, our results indicate that this generalization is limited, since the 46-59 age group, not the 60+ group, had the greatest mean difference in blood pressure, and that greater difference came with large variability. This indicates the need to examine multiple age categories with a larger dataset when considering the cardiovascular effects of caffeine. Furthermore, the magnitude of the blood pressure increase in the 30-45 age group, 13.88 mmHg, is notable relative to studies finding that even young, healthy adults experience an acute increase in blood pressure after caffeine consumption. This underscores the breadth of caffeine's pressor impact across ages. Although the 60+ age group had a lower average increase, the individual variability in that group, represented here by the standard deviation, still supports the point that older adults require careful monitoring of caffeine intake.

Conclusion
This study was unable to demonstrate that caffeine's effect depends on age, but it did verify the relationship between caffeine consumption and the cardiovascular responses of individuals in different age groups. Our results show that the caffeine delivered by a single can of Red Bull significantly increased blood pressure in all groups tested: ages 30-45, 46-59, and 60+. The effect was most pronounced in the 46-59 group, which showed the highest average increase in blood pressure (15.43 mmHg). The 30-45 group had a robust mean increase of 13.88 mmHg, only slightly below the other groups. The 60+ group had the smallest mean increase, 13.05 mmHg, though with a considerable individual range. The variability and the large p-value of 0.2437 suggest that age may not be an important factor in predicting individual responses to caffeine. The variability in the relationship between age and caffeine sensitivity observed in our study underlines the complexity of this interaction and suggests that a more detailed and larger study is needed.



Although this study provides important insights, it has limitations that future studies could resolve. One key limitation is that the time period between drinking the Red Bull and measuring blood pressure is unknown. Other limitations include the need to account for habitual caffeine consumption, as individuals who consume caffeine regularly may experience weaker physiological effects due to caffeine tolerance (Evans et al, 2017). Future studies should also account for factors like gender (Temple et al, 2011), body mass index (Antonio et al, 2024), and genetic factors (Wikoff et al, 2017), as these may influence the effects caffeine has on an individual. Longitudinal studies are also needed to assess the long-term influence of caffeine intake on blood pressure across age groups. In conclusion, our study illustrates that the acute effect of caffeine on blood pressure exists, but finds no significant age-related differences. These data contribute to our knowledge of the complex relationships between caffeine and cardiovascular health. Considering that global consumption of caffeinated beverages, particularly energy drinks, has increased tremendously over the years, knowledge of age-related effects remains of utmost importance for public health strategies and individual dietary decisions. These findings provide a basis for future research with larger datasets to develop more comprehensive, age-specific guidelines on caffeine consumption and to specify the long-term cardiovascular implications of regular caffeine consumption across the lifespan. Future research may also examine whether blood pressure after caffeine consumption can fall below baseline (the blood pressure before caffeine consumption) after reaching its maximum, to understand the long-term implications.

References
[1] Al-Shaar L, Vercammen K, Lu C, Richardson S, Tamez M, Mattei J. Health effects and public health concerns of energy drink consumption in the United States: a mini-review. Front Public Health. 2017; 5:255.
[2] Oregon Consulting Group. U.S. energy drinks industry report. 2022. Available from: https://business.uoregon.edu/sites/default/files/media/energy-drink-industry-report.pdf. Accessed 15 August 2024.
[3] Statista. Energy & sports drinks - Europe. 2023. Available from: https://www.statista.com/outlook/cmo/non-alcoholic-drinks/soft-drinks/energy-sports-drinks/europe. Accessed 15 August 2024.
[4] Addicott MA. Caffeine use disorder: a review of the evidence and future implications. Curr Addict Rep. 2014; 1: 186-192.
[5] European Food Safety Authority (EFSA). Scientific opinion on the safety of caffeine. EFSA J. 2015; 13:4102.
[6] Nawrot P, Jordan S, Eastwood J, Rotstein J, Hugenholtz A, Feeley M. Effects of caffeine on human health. Food Addit Contam. 2003; 20: 1-30.
[7] Wikoff D, Welsh BT, Henderson R, Brorby GP, Britt J, Myers E, et al. Systematic review of the potential adverse effects of caffeine consumption in healthy adults, pregnant women, adolescents and children. Food Chem Toxicol. 2017; 109: 585-648.
[8] Seifert SM, Schaechter JL, Hershorin ER, Lipshultz SE. Health effects of energy drinks on children, adolescents, and young adults. Pediatrics. 2011; 127: 511-528.
[9] Temple JL, Bernard C, Lipshultz SE, Czachor JD, Westphal JA, Mestre MA. The safety of ingested caffeine: a comprehensive review. Front Psychiatry. 2017; 8:80.



[10] Khiali S, Agabalazadeh A, Sahrai H, Baghi HB, Banaeian GR, Entezari-Maleki T. Effect of caffeine consumption on cardiovascular disease: an updated review. Pharmaceutical Medicine. 2023. https://link.springer.com/article/10.1007/s40290-023-00466-y
[11] Nurminen M-L, Niittynen L, Korpela R, Vapaatalo H. Coffee, caffeine and blood pressure: a critical review. 1999. https://www.nature.com/articles/1600899
[12] Clarke MA. Education on caffeine consumption to improve blood pressure for adults ages 19-65, who consume high amounts of caffeine daily. ScholarWorks @ Georgia State University. 2020. https://scholarworks.gsu.edu/nursing_dnpprojects/22/
[13] Geleijnse JM. Habitual coffee consumption and blood pressure: an epidemiological perspective. Vascular Health and Risk Management. 2008. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2605331/
[14] Kujawska A, Kujawski S, Hajec W, Skierkowska N, Kwiatkowska M, Husejko J, Newton JL, Simoes JA, Zalewski P, Kędziora-Kornatowska K. Coffee consumption and blood pressure: results of the second wave of the cognition of older people, education, recreational activities, nutrition, comorbidities, and functional capacity studies (COPERNICUS). Nutrients. 2021. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8538539/
[15] Xu Z, Meng Q, Ge X, Zhuang R, Liu J, Liang X, Fan H, Yu P, Zheng L, Zhou X. A short-term effect of caffeinated beverages on blood pressure: a meta-analysis of randomized controlled trials. Journal of Functional Foods. 2021; 81:104482. ISSN 1756-4646. https://doi.org/10.1016/j.jff.2021.104482
[16] Product Q&A. Red Bull Gives You Wings - RedBull.com. https://www.redbull.com/inten/energydrink/how-much-caffeine-is-in-a-can-of-red-bull
[17] Sobhy O. Redbull and heart rates. Kaggle. 2023. https://www.kaggle.com/datasets/omarsobhy14/redbull-and-heart-rates/data
[18] Adegoke J. A beginner's guide to Kaggle for data science. MUO. 2023. https://www.makeuseof.com/beginners-guide-to-kaggle/
[19] Institute of Medicine (US) Committee on Military Nutrition Research. Efficacy of caffeine. In: Caffeine for the Sustainment of Mental Task Performance: Formulations for Military Operations. https://www.ncbi.nlm.nih.gov/books/NBK223791/
[20] Ko E, Glazier EM. Caffeine sensitivity grows as people age. UCLA Health. 2022. https://www.uclahealth.org/news/article/caffeine-sensitivity-grows-as-people-age
[21] Evans SM, Griffiths RR. Caffeine tolerance and choice in humans. Psychopharmacology. 2017. https://link.springer.com/article/10.1007/BF02245285
[22] Temple JL, Ziegler AM. Gender differences in subjective and physiological responses to caffeine and the role of steroid hormones. Journal of Caffeine Research. 2011. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3621315/
[23] Antonio J, Newmire DE, Stout JR, Antonio B, Gibbons M, Lowery LM, et al. Common questions and misconceptions about caffeine supplementation: what does the scientific evidence really show? Journal of the International Society of Sports Nutrition. 2024; 21(1). https://doi.org/10.1080/15502783.2024.2323919



Comparative Analysis of Photoresist LOR3A and Chromium Sacrificial Layers for Biocompatible Polymer SU8 Kirigami Structure for Neural Organoid Biosensor Fabrication

Author 1
Full Name (Last Name, First Name): Song, Jennifer
School Name: Dwight School Seoul, South Korea

Author 2
Full Name (Last Name, First Name): Tang, Jiayin
School Name: Suzhou Singapore Int'l School, China

Author 3
Full Name (Last Name, First Name): Manzoor, Maheen
School Name: International School Lahore, Pakistan
Abstract
Our study presents an innovative approach to fabricating a 3-dimensional, transparent, and flexible device for neural organoid biosensors, addressing the limitations of existing 2-dimensional sensors, which fail to monitor the entire structure of a neural organoid. By using kirigami-inspired designs and evaluating various sacrificial layers, including chromium and LOR layers, this research aims to enhance the biocompatibility and structural integrity of biosensors. Our findings highlight the advantages of specific materials and techniques for constructing 3D structures that support long-term neural activity measurements, offering a more comprehensive understanding of neural function and potentially advancing research on neurodevelopmental and neurodegenerative disorders. Based on our experimental results, we conclude that all lift-off attempts using the LOR sacrificial layer were successful, whereas those using chromium were not. More specifically, the unexposed LOR 3 layer provides the most significant benefits compared with the other media.

Keywords Biocompatibility, Kirigami, Lift-Off Resist (LOR), Microelectromechanical Systems (MEMS), Neural Organoid Biosensors, Neural Organoids, Photolithography, Pluripotent Stem Cells (PSCs), Sacrificial Layers, SU8 Photoresist.



INTRODUCTION
Research Objective
This paper focuses on the fabrication of a 3-dimensional, transparent, and flexible device that supports neural organoids for long-term neural activity measurements. A major issue with current neurological biosensors is that they are only 2-dimensional. This limits measurement of electrical signals across the entire neural organoid, as 2-dimensional surfaces can capture signals only from the surface layer of the organoid in direct contact with the device, leaving the inner layers unmonitored and unstimulated. Furthermore, neural organoids can experience necrosis at the center: the core of the organoid often deteriorates due to an insufficient supply of nutrients and oxygen. The weaknesses of the 2-dimensional biosensor exacerbate this necrosis, as it cannot adequately assess and engage the inner regions. To overcome this challenge, our 3-dimensional device, inspired by kirigami (the Japanese art of cutting paper), provides a novel solution. Our design and structure enable even access to both the surface and the deeper layers of the organoid, offering a more accurate and comprehensive picture of neural activity throughout the entire structure. Our methodology for designing these structures includes the use of a sacrificial layer, and our goal is to develop a structure that is both simple and durable. This paper will evaluate the biocompatibility of four materials (chromium, unexposed LOR 2, unexposed LOR 3, and exposed LOR 2) and determine the most suitable sacrificial layer. This will be crucial in developing a reliable scaffold for potential organoid applications.

Neural Organoids and Neural Organoid Biosensors
The brain is one of the most intricate and inaccessible organs in the body. Inspecting its functionality and mechanisms, especially in neurodevelopmental and neurodegenerative disorders, is a major challenge in modern neuroscience. While traditional methods, such as animal models or post-mortem brain tissue, have contributed to research, they are limited in their ability to fully replicate human biology and physiology (Kim). As a result, there is a growing need for in-vitro models that better mimic the brain's structure and function. Neural organoids are 3-dimensional in-vitro cell cultures that model features of human brain anatomy, function, and development. They are primarily developed from pluripotent stem cells (PSCs) and, in some cases, from adult stem cells or fetal tissue (Kim). These organoids exhibit self-organization and contain both progenitor cells (early-stage cells capable of developing into specific cell types) and differentiated cells, mature cells with specialized functions within a tissue or organ (Kim). This unique combination allows organoids to recreate nearly all regions of the human brain.

Figure 1. How Pluripotent Stem Cells (PSCs) Can Be Cultivated into a Brain Organoid (Made by BioRender)

Neural organoid biosensors are cutting-edge tools that merge human brain organoids with biosensing technologies. These platforms offer real-time monitoring and analysis of neural activity, neurotoxicity, and brain-like tissue functionality (Saglam-Metiner). Hence, the ultimate aspiration of our 3-dimensional designs (beyond this paper) is to apply them to the fabrication of such biosensors for real-life use.



By combining brain organoids with microfluidic devices and various sensing technologies, such as optical, electrical, and electrochemical sensors, these biosensors offer a sophisticated approach to studying brain functions, investigating diseases, testing pharmaceuticals, and modeling neurodegenerative conditions.

Kirigami
Kirigami, derived from the Japanese art of "cutting paper," involves both cutting and folding, unlike origami, which relies solely on folding (Chen). Kirigami allows for an increased kinematic degree of freedom by releasing constraints on the sheet material, resulting in structures that can deform and stretch while maintaining integrity. The technique is scalable, applicable both to micrometer-scale materials such as graphene and to centimeter-scale paper (Chen). Key kirigami approaches, such as parallel cuts and crosscuts, enhance flexibility and enable diverse deformations. This makes kirigami valuable for creating shape-changing structures, flexible electronics, and robotic systems (Chen). We will use kirigami in our study to design a 2-dimensional structure that lifts off to become 3-dimensional. The properties of kirigami will assist in creating structures that require precise deformation and adaptability.

Figure 2. Examples of Kirigami Structures (Sino)

SU8 Photoresist
SU8 is a widely used negative photoresist in microfabrication, in which UV exposure renders the resist insoluble. An epoxy resin dissolved in an organic solvent, SU8 is known for its ability to create high-aspect-ratio sidewalls and deep 3-dimensional structures with vertical surfaces (Cargou). It is commonly employed in the fabrication of microelectromechanical systems (MEMS), microfluidics, and other microscale devices, and it is often used in lift-off processes for patterning thin films. Many viscosities of SU8 are available, enabling films ranging from a few micrometers to over a millimeter thick. The thickness of the SU8 layer can be controlled by adjusting the spin coater's speed, acceleration, and time (Cargou). To use SU8, it is typically spin-coated onto a substrate to achieve the desired thickness; alternatively, a metal or other material can be deposited over the entire substrate before coating the SU8. This is followed by UV exposure through a photomask, which exposes selected SU8 areas. The exposed areas become cross-linked and adhere to the substrate, while the unexposed areas are removed with a developer solution. The final step removes the SU8, along with any material deposited on top, resulting in a patterned thin film in the regions where SU8 was absent (Cargou). In our structures, the substrate will be either a chromium-coated wafer or a fused silica wafer with unexposed or exposed layers of LOR.
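The spin-speed/thickness relationship can be sketched with the common empirical model t ≈ k/√ω, where k depends on the resist viscosity; the calibration constant below is invented purely for illustration and must be replaced with values from the manufacturer's spin-speed curve for the specific SU8 grade:

```python
def su8_thickness_um(spin_speed_rpm: float, k: float = 550.0) -> float:
    """Estimate SU8 film thickness (micrometers) from spin speed (rpm).

    Uses the common empirical spin-coating model t = k / sqrt(rpm).
    k is a placeholder; real values come from the resist supplier's
    spin-speed curves for each SU8 viscosity grade.
    """
    return k / spin_speed_rpm ** 0.5

for rpm in (1000, 2000, 3000, 4000):
    print(f"{rpm} rpm -> ~{su8_thickness_um(rpm):.1f} um")
```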



As our structures aim at precise, high-resolution, 3-dimensional biosensing devices, SU8 will be employed to define the pattern we design, facilitating the fabrication of the complex, flexible, and transparent components of our advanced structure.

Lift-Off Resist (LOR)
LOR is a photoresist based on polydimethylglutarimide (PMGI) used in semiconductor and microfabrication processes. It serves as a sacrificial or undercut layer in bilayer lift-off processing, offering improved resolution, process control, and yield (Shalabi). LOR is commonly employed in applications requiring precise patterning, including the fabrication of Giant Magnetoresistance (GMR) and Magnetoresistance (MR) heads, wireless devices, optoelectronics, MEMS, and packaging (Shalabi). In the lift-off process, LOR resists function as an undercut layer beneath the imaging resist: during development, a reentrant profile forms as the LOR dissolves at its sidewalls, leaving an undercut beneath the top imaging resist layer. This undercut is critical for lift-off to succeed; if the resist sidewalls are not undercut but instead covered with deposited material, the resist acts as an adhesive layer rather than a protective mask, and the deposited film adheres to the substrate only where the resist has been removed (Shalabi).

Figure 3. Comparison of a Positive vs. Negative Photoresist ("Types of Photoresist")

By experimenting with either two or three layers of LOR, we aim to leverage its advantages: resolution better than 0.25 μm, tight control over film thickness and patterning, and compatibility with various deposition processes and materials (Shalabi). LOR resist enhances lift-off processes by allowing precise tuning of the undercut profile through adjustments in bake temperature and time, thus providing exceptional fidelity and control.

Sacrificial Layers
Sacrificial layers are commonly utilized in the fabrication of MEMS, particularly for complex or movable components. The process begins with depositing a sacrificial layer on the substrate, which is then cured in selective areas. An upper layer of the desired material is applied on top of the sacrificial layer. A selective etching process subsequently removes specific portions of the sacrificial layer, leaving behind the intended structure and material (Lucibello et al.). Once again, the four types of sacrificial layers we will be experimenting with are:

1. Chromium
2. LOR 2 layer, unexposed
3. LOR 3 layer, unexposed
4. LOR 2 layer, exposed



Figure 4. How Sacrificial Layers Can be Used in Microfabrication to Create Delicate Structures (Made by BioRender)

Characteristics of a Chromium Sacrificial Layer

Metal sacrificial layers, including chromium, are typically only a few microns thick and exhibit different properties compared to bulk materials, including variations in electrical resistivity, residual stress, hardness, chemical resistance, and corrosion resistance. Chromium is a very hard metal with a density of 7.19 g/cm³, making it lighter than the noble metals (Mulloni). Chromium is often used as a protective coating due to its resistance to corrosion. It has an electrical resistivity of 125 nΩ·m at 20°C, several times that of gold, copper, and silver (Mulloni). Chromium has a thermal conductivity of 93.9 W/m·K at 300 K and a thermal expansion coefficient of 4.9 μm/m·K at 25°C, both significantly lower than those of other MEMS metals (Mulloni). As a transition metal, chromium exhibits various oxidation states, including +2, +3, and +6, with chromium(III) being the most stable (Mulloni). Common chromium compounds used in MEMS are chromium oxide (Cr₂O₃), chromium nitride (CrN), chromium carbide (Cr₃C₂), and chromium silicide (CrSi₂), which forms when chromium is deposited on silicon. Moreover, chromium forms a protective oxide layer upon exposure to air, preventing further oxidation by blocking oxygen diffusion (Mulloni). In summary, chromium is often used as a sacrificial layer due to its corrosion resistance, ease of deposition, mechanical hardness, and versatility in forming various compounds with desirable properties for microscale devices.

Characteristics of an LOR Sacrificial Layer

LOR is also often used as a sacrificial layer due to its unique properties. It enables the patterning of extremely fine metal features with widths less than 0.25 micrometers, essential for creating precise components (Kayaku). Additionally, LOR can handle thick metal layers greater than 3 micrometers, making it suitable for applications requiring robust, durable metal layers (Kayaku). With a glass transition temperature (Tg) of around 190°C, LOR exhibits high thermal stability (Kayaku). It can be efficiently removed using standard resist stripping solutions, leaving a clean surface with minimal residue. Moreover, LOR’s adaptability to various spin-coating parameters makes it versatile for different photolithography and microfabrication processes, producing high-quality, low-defect coatings over a wide range of thicknesses (MicroChem).

Photolithography (Exposing)

Photolithography is a well-established technique in the semiconductor industry for transferring patterns from a lithographic mask onto a light-sensitive resist applied to a substrate (Albright). This process can use either positive or negative resists. In a standard approach with a positive resist, a metal film is first deposited onto the substrate. A photoresist layer is then spin coated and soft-baked at 60-100°C for up to 30 minutes (Albright). After exposure through a mask, which causes photochemical changes in the resist, the resist is developed and hard-baked at 120-180°C (Albright). The exposed metal areas are etched away, leaving behind the patterned metal features.



In our study, we utilize the Karl Suss MA-6 Contact Aligner system for exposure, which offers precision mask-to-wafer contact printing in hard contact, soft contact, vacuum, and proximity modes. This system accommodates both irregularly shaped substrates and standard wafers up to 6 inches in size, enhancing our ability to fabricate intricate 3-dimensional biosensing devices with high precision.

METHODOLOGY

Figure 5. Flowchart of Overall Methodology

Kirigami Design for Mask Fabrication

To create the three-dimensional structure, we decided to utilize kirigami. The goal of our design is to enable chronic recording from intact cortical organoids while preserving their morphology, cytoarchitecture, and cell composition. While prior designs have used spiral and honeycomb kirigami structures, we drew inspiration from rose petals and the layout of circular sewage drains. Our design features strategic cuts and folds that transform a flat 2-dimensional design into a 3-dimensional basket-like structure when suspended in liquid.

Figure 6. Visual Representation of the Inspiration for the Mask Design

The specific placement of the cuts allows the structure to form a cup-like shape, with sections arranged parallel to the way rose petals spread out. Our petal-inspired design also offers the potential for easy insertion and removal of the organoid. Rose petals are arranged in a spiral pattern, with each petal overlapping to form a protective enclosure, and the petals demonstrate a gentle and smooth opening and closing mechanism. In our kirigami design, this was mimicked by overlapping sections that create a seamless, protective enclosure when the structure is suspended and becomes 3-dimensional, and by gentle curves in the cut patterns that imitate the curvature of rose petals, helping to form a more organic and rounded shape after transformation. The circular arrangement of the “petals” was influenced by the efficient radial design of sewage drains, which allows for even distribution and flow. The radial design of circular sewage drains was especially insightful for efficient space utilization and fluid management. This concept was applied to our kirigami structure by arranging the cuts in a radial pattern, similar to the spokes of a wheel, which allowed for even distribution of stress when the structure transformed from two-dimensional to three-dimensional. In addition, the incorporation of small, strategically placed channels within the petal-like structures further optimized the flow, emulating the drainage function of sewage systems. We also found that the central slope that forms naturally when the structure is suspended into three dimensions will be



significant, as it can guide the flow of nutrients towards the organoid and facilitate the removal of waste products (Figure 7).

Figure 7. Rough Sketches of Our Initial Idea in Both 2-D and 3-D

This initial sketch was recreated in Tinkercad, a 3D modeling program, using the scribble tool. Once we had a firm idea of the 2D outline, we used Autodesk Fusion 360, a more professional modeling program, to finalize the design and print it as a mylar mask (Figure 8).

Figure 8. Transformation from Tinkercad Design to Autodesk Fusion 360 to Final Printed Mylar Mask

Transparent Fused Silica Wafer Preparation for Unexposed and Exposed LOR-3A

First, we cleaned the wafer with isopropyl alcohol (IPA). Then, we dried it with an N2 gun and placed the cleaned wafer on a hot plate at 180ºC for 3 minutes (Figure 9) to drive off pre-existing surface H2O. We then cooled the wafer to room temperature by placing it on a Kimwipe.

Figure 9. Dehydrating the Fused Silica (SiO2) Wafer on a Hot Plate at 180 ºC



Next, the Laurell Controller was turned on and the spin coater was lined with aluminum foil. The wafer was placed on the sample puck in the spin coater, and the vacuum was turned on to ensure that it stayed in place while the spinning program ran. Droplets of LOR 3A were placed on the center of the wafer until the resist spread out completely to the edges (Figure 10). Then, using the Laurell Controller, the spin coater program ran for 1 minute at a speed of 2000 rpm (revolutions per minute) with an acceleration setting of 500 rpm.

Figure 10. Depositing LOR 3A Onto Fused Silica Wafer on a Spin Coater

Subsequently, the LOR-coated wafer was placed on the hot plate at 180ºC for 5 minutes. For the LOR 2-layer samples (unexposed and exposed), the spin coating process was repeated one more time to create a bilayer. The wafer was then placed on the hot plate at 280ºC for 5 minutes and later cooled to room temperature. For the LOR 3-layer sample, the LOR coating process was repeated a third time to create a trilayer before baking on the hot plate at 280ºC for 5 minutes and cooling to room temperature.

Exposing the LOR 2-Layer Sample

Following the cool-down of the second and final layer of LOR coated on the wafer, the exposed LOR sample was exposed under 365 nm UV light for 8 seconds.

Chromium Wafer Preparation via Deposition

The wafer was cleaned with isopropyl alcohol, followed by blowing with the N2 gun. Then, the wafer was placed in a bell jar inside a glove box in a clean room. The bell jar was pumped down to remove all air and create a vacuum (Figure 11). Here, 30 nm of chromium was evaporated onto the surface of the wafer, resulting in the final chromium wafer sample.

Figure 11. Bell Jar in Glove Box



To ensure that the SU8 polymer, which would be spin coated in the next step, would adhere to the chromium surface, an oxygen plasma etch was additionally performed on the chromium wafer for 1 minute (Figure 12b).

Figure 12. (a) 30nm of Chromium on Wafer Ready to Plasma Etch, (b) Pink Light Emitting from Oxygen Plasma

Spin Coating Biocompatible SU8-50 and SU8-100 Polymers, Followed by Pre-Exposure Bake

As with the LOR coating, aluminum foil was lined on the spin coater and each wafer was placed on the puck and held by vacuum. Then, either SU8-50 or SU8-100 was poured onto the wafer until it reached the edge (Figure 13b). SU8-50 was used for the chromium wafer, while SU8-100 was used for the SiO2 LOR exposed and unexposed samples. The spin coater was first run at a speed of 500 rpm with an acceleration setting of 100 rpm for 10 seconds, immediately followed by a second step at a speed of 1000 rpm with an acceleration setting of 2000 rpm for 30 seconds.

Figure 13. (a) Pouring SU8-100 on LOR 3A Fused Silica Wafer, (b) SU8-50 Deposited onto the Chromium Wafer Before Spin Coating

For the SU8-100, the spin coating process was repeated once more, and for the SU8-50, it was repeated three more times; for all four samples, a total of 200 µm of SU8 was applied. Once spin coating was completed, the samples were placed on the hot plate and baked at 65ºC for 25 minutes, followed by a second bake at 95ºC for 70 minutes, and then cooled down.
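For reference, the sketch below writes the spin programs described in this and the preceding sections out as data, one (speed, acceleration, duration) step per tuple. Treating the acceleration values as rpm/s is our assumption, since the text quotes them only in rpm.

```python
# Spin programs transcribed from the text; acceleration in rpm/s is assumed.
# Each step is (speed_rpm, acceleration_rpm_per_s, duration_s).
SU8_PROGRAM = [
    (500, 100, 10),    # spread step: slow spin to distribute the resist
    (1000, 2000, 30),  # thinning step: faster spin sets the final thickness
]
LOR_3A_PROGRAM = [
    (2000, 500, 60),   # single one-minute step used for each LOR 3A layer
]

def total_spin_time_s(program: list[tuple[int, int, int]]) -> int:
    """Sum the programmed durations of all spin steps."""
    return sum(duration for _, _, duration in program)

print(total_spin_time_s(SU8_PROGRAM))     # 40
print(total_spin_time_s(LOR_3A_PROGRAM))  # 60
```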



Photolithography of Mask and Post-Exposure Bake

Once all samples were baked and cooled down, the Karl Suss machine was used to align the printed Mylar mask with the wafer underneath (Figures 14a and 14b). The pattern of the mask was then exposed onto the wafers using 365 nm UV light for approximately 55 seconds (Figure 15a). The same mask was used for all four wafers, followed by a post-exposure bake on the hot plate at 65ºC for 25 minutes and then at 95ºC for 70 minutes (the same schedule as the pre-exposure bake) (Figure 15b).
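Exposure recipes like the 55-second step above are usually reasoned about as a dose. The snippet below shows the standard dose arithmetic; the lamp intensity used is a hypothetical placeholder, not a measured value from our MA-6, and would need to be read from the tool's calibration.

```python
# Dose (mJ/cm^2) = lamp intensity (mW/cm^2) x exposure time (s).
lamp_intensity_mw_cm2 = 10.0  # hypothetical i-line intensity at 365 nm
exposure_time_s = 55.0        # exposure time used in this work

dose_mj_cm2 = lamp_intensity_mw_cm2 * exposure_time_s
print(f"Delivered dose: {dose_mj_cm2:.0f} mJ/cm^2")  # 550 mJ/cm^2 under this assumption
```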

Figure 14. (a) Aligning the Mask with the Karl Suss Machine, (b) Placing the Wafer Underneath the Mask

Figure 15. (a) Post UV Exposure of the Mask Design, (b) Post Exposure Bake of the Chromium Wafer

3D Printed Handles and Miniature Brains

With the help of Cole Duncan at the Sharf Lab at the University of California, Santa Cruz (USA), we printed handle-like structures and mini brains to imitate a real-life brain-machine interface that connects the structure to a processing component such as a computer (Figure 16a). The miniature brain was printed to emulate a real-life organoid.

Figure 16. (a) Designing the 3D Printed Handles and Brains, (b) 3D Printed Mini Brains



Development and Lift-Off

To lift the completed structures off the wafers, all wafers were first placed in SU-8 developer for 10-15 minutes and then rinsed in deionized water. Afterward, the LOR samples were placed in CD-26 developer for 30 minutes, while the chromium sample was placed in chromium etchant (Figures 17a and 17b), followed by another deionized water rinse. A sonicator was used at times to assist the lift-off process (Figure 18).

Figure 17. (a) CD-26 Solution and Deionized Water, (b) Chromium Etchant (Left) and Deionized Water (Right)

Figure 18. Placing Chromium Wafer Inside Sonicator

RESULTS

Unexposed LOR 2 Layers

The unexposed LOR 2-layer sample was the first to fully develop in the SU8 developer, taking approximately 35 minutes. However, etching away the LOR took much longer. After submerging the entire wafer in CD-26 and letting it sit for 20-30 minutes, it was apparent that some parts of the SU8 had begun to lift off; other parts, however, were still stuck to the wafer, as shown in Fig. 19. This is most likely due to the accumulation of SU8 at the edges of the wafer caused by the spin coating process. To remove the edge beading, we gently ran a razor around the edges and openings of the design, allowing the CD-26 to permeate more fully under the SU8 and etch away the LOR. We also placed the wafer in a sonicator for 5-10 minutes. This process did lead to significantly more lift-off; however, the SU8 still could not be completely removed from the wafer. To complete the process, we had to manually pry the SU8 layer off using a razor, gently wedging it between the SU8 and the wafer until the structure was completely free. This additional effort indicates that while the SU8 structure developed relatively easily, the underlying LOR layers were more resistant to removal, necessitating physical methods to achieve complete separation. The strong adhesion between the LOR and the wafer suggests that further optimization of the LOR layer thickness and baking parameters is needed to improve the ease of removal in future processes.



Figure 19. Unexposed LOR 2 Layers After Sitting in CD-26, With Bubbles Forming Under Areas That Lifted Off

Unexposed LOR 3 Layers

The unexposed LOR 3-layer sample displayed significantly different behavior from both the unexposed LOR 2 layers and the chromium wafer. During SU8 development, the SU8 structure unexpectedly detached from the fused silica wafer without the use of CD-26 developer, which is typically employed to etch away the LOR. The structure was cleanly and efficiently removed from the wafer without much external force, as shown in Fig. 20a. One possible explanation for the detachment is that one of the LOR layers remained adhered to the SU8 while the other two layers stayed attached to the wafer, allowing a clean separation between the LOR layers that was not possible for the unexposed LOR 2 layers. Another possible explanation is that the extra LOR layer acted as a buffer between the SU8 and the wafer: while the unexposed LOR 2 layers may have had some gaps where the SU8 could attach to the wafer, the extra layer in the unexposed LOR 3 layers prevented this, leading to the smooth removal. After the separation, we still submerged the SU8 structure in the CD-26 developer to remove any residual LOR, resulting in a significantly more transparent structure, as shown in Fig. 20b. The improved transparency and the overall ease of development in this case highlight the potential benefits of using an additional LOR layer, which could enhance the process by reducing adhesion-related issues.

Figure 20. The SU8 Structure Obtained from the Unexposed LOR 3 Layers Wafer, (a) the SU8 Structure After Unexpectedly Detaching from the Unexposed LOR 3 Layers Wafer, (b) the SU8 Structure After Being Cleaned of Any LOR Residual

Exposed LOR 2 Layers

To release the SU8 structure from the exposed LOR 2-layer wafer, the wafer was first submerged in SU8 developer. During this process, we noticed that the wafer was developing in a manner similar to the unexposed LOR 3 layers, with slight lift-off occurring without the use of CD-26. In the hope that the SU8 would completely lift off in this manner, we used a pipette over the wafer



to squeeze SU8 developer directly into the holes and edges of our kirigami pattern, as shown in Fig. 21. However, after exposing the wafer to SU8 developer for over an hour, it became apparent that further intervention was necessary to fully release the SU8 structure. The wafer was then submerged in CD-26 for 30 seconds. Additionally, we again used a razor to carefully score around the edges of the pattern, helping to initiate the release, and more SU8 developer was pipetted directly onto the wafer. These additional steps were crucial in ensuring that the SU8 structure detached cleanly from the substrate, as the development process alone was insufficient to achieve complete release. The need for manual intervention, such as the use of a razor and localized application of developer, indicates that the exposed LOR layers maintained a strong adhesion to the SU8 structure, even after extended development times. This suggests that while the LOR served its role as a sacrificial layer, the exposure and development conditions might require further optimization to reduce the reliance on mechanical assistance for release.

Figure 21. Pipetting SU8 Developer Over the Exposed LOR 2 Layers Wafer

Chromium

The chromium wafer was the second to develop during SU8 development, but it presented the most challenges during the chromium etching process. After submerging it in chromium etchant for 30-45 minutes, we noticed few significant changes, with only a tiny part of the structure lifting off near the openings. To further the process, we removed the wafer from the chromium etchant and rinsed it with water and isopropyl alcohol (IPA) before leaving it to sonicate for 5-10 minutes. We also used the razor as we had for the unexposed LOR 2 layers, gently cutting around the outer edges and openings. Despite all this, there was still very little lift-off. As a last resort, we left the chromium wafer in chromium etchant overnight. Even after more than 20 hours of submersion, we were still unable to observe any lift-off, meaning we could not attempt a manual lift-off as we had with the unexposed LOR 2 layers, and we were ultimately unable to detach the SU8 structure. The difficulties encountered during the chromium etching process underscore the inherent properties of chromium, such as its high resistance to chemical etchants and strong adhesion to underlying layers. While chromium offers desirable mechanical properties, it is clearly unsuitable as a sacrificial layer in this process and did not lead to successful lift-off.

DISCUSSION

The results from the development and etching processes reveal significant differences in the behavior of LOR and chromium as sacrificial layers during SU8-based fabrication. These differences are critical to consider when selecting materials for specific microfabrication applications, particularly in the context of biosensor development, where precision and efficiency are paramount. The unexposed LOR 2 layers, although successfully developed in the SU8 developer, required substantial manual intervention for their removal. The necessity of using a razor to address edge beading, followed by sonication, suggests that the LOR layer’s adhesion to the wafer was stronger than



anticipated. This strong adhesion complicates the lift-off process, particularly when uniformity and ease of removal are desired. The manual steps required to separate the SU8 structure from the wafer not only introduce variability into the process but also increase the risk of damaging the delicate structures being fabricated. The challenges faced with the unexposed LOR 2 layers highlight the importance of optimizing layer thickness, baking times, and overall processing conditions to mitigate these issues in future applications.

In contrast, the unexposed LOR 3 layers demonstrated significant advantages, both in ease of development and in the quality of the resulting structures. The additional LOR layer effectively acted as a buffer between the SU8 and the fused silica wafer, reducing the adhesion strength and allowing a cleaner and more efficient lift-off. The improved transparency of the SU8 structure post-development further underscores the benefits of this approach, as it suggests a more uniform and defect-free layer. The success with the unexposed LOR 3 layers indicates that incorporating additional layers can be a straightforward yet effective and efficient strategy to enhance the lift-off process, making it particularly suitable for applications requiring precise patterning and minimal manual intervention.

For the exposed LOR 2 layers, challenges similar to those of the unexposed LOR 2 layers were observed. Although there was some initial lift-off during SU-8 development, complete detachment required additional manual intervention, including the use of a razor and localized application of SU-8 developer. The persistent strong adhesion suggests that the exposure conditions may have altered the LOR’s properties, making it more resistant to the standard development process. This finding underscores the importance of optimizing both exposure and development parameters to reduce the need for mechanical assistance.

The chromium layer, while effective in forming a stable film, was ultimately unsuitable as a sacrificial layer. Despite all our attempts, including multiple manual interventions such as razor scoring and sonication, we observed no lift-off at all, suggesting that chromium is not suitable for applications where rapid and straightforward lift-off is essential. Additionally, the chromium layer requires a time-intensive deposition process, which contrasts sharply with the relatively simple spin coating method used for LOR layers. However, despite these drawbacks, chromium offers significant mechanical properties that could be valuable in scenarios other than acting as a sacrificial layer. Its durability, hardness, and resistance to corrosion make it a viable option for applications where these attributes are critical. For instance, in environments where the fabricated structures are exposed to harsh conditions or where long-term stability is required, chromium’s robustness could outweigh the challenges associated with its processing. Thus, while chromium may not be ideal as a sacrificial layer, it remains a material of interest for niche uses where its unique properties are advantageous.

Figure 22. Final Lifted-Off SU8 Structures: (a) Exposed LOR 2 Layers on the Left, Unexposed LOR 2 Layers in the Center, Unexposed LOR 3 Layers on the Right. (b) Side View of SU8 Structure with 3D Printed Mini Brain



CONCLUSION

Based on these experimental results, we can conclude that, in general, LOR offers more advantages as a sacrificial layer than chromium. All lift-off attempts with an LOR sacrificial layer were ultimately successful, while those with chromium were not. Additionally, between the unexposed and exposed LOR 2-layer samples, the exposed LOR 2 layers had a slightly more efficient lift-off: we did not need to sonicate and only needed to submerge the wafer in CD-26 briefly. This indicates that exposing the LOR layers does make lift-off more efficient. However, the unexposed LOR 3 layers were ultimately the most effective and efficient sacrificial layer, offering clear advantages in ease of development, structural integrity, and process efficiency. The ability to fine-tune the lift-off process by adjusting the number of LOR layers, or by exposing certain LOR layers, presents a versatile approach that can be adapted to the specific needs of various microfabrication tasks.

In conclusion, the unexposed LOR 3-layer stack has demonstrated its effectiveness as a sacrificial layer for releasing a biocompatible, 200 μm thick, flexible, and transparent SU8 structure. These findings are particularly relevant for advanced neural organoid biosensor projects. The 3-layer LOR approach, combined with our mask design, facilitates the creation of 20 μm thin structures with gold-deposited contacts and wires, as well as atomically thin films. Implementing these methods in long-term neural organoid sensing will enhance the ability to map neural activity over extended periods, offering valuable insights into neurodegenerative diseases and potentially leading to significant advancements in the field.

Works Cited

Albright, Jessica. “Mastering Semiconductor Technology: Exploring the Fundamentals of Photolithography.” Brewer Science, 30 Apr. 2024, www.brewerscience.com/blog-photolithographyfundamentals/.

Cargou, Sébastien. “SU-8 Photolithography: UV Exposure.” Elveflow, Elvesys, 10 Oct. 2022, www.elveflow.com/microfluidic-reviews/soft-lithography-microfabrication/su-8-photolithographyuv-exposure/.

Chen, Shanshan, et al. “Kirigami/Origami: Unfolding the New Regime of Advanced 3D Microfabrication/Nanofabrication with ‘Folding.’” Nature News, Nature Publishing Group, 30 Apr. 2020, www.nature.com/articles/s41377-020-0309-9.

Kayaku. “LOR & PMGI Lift-off Resists.” Kayaku, kayakuam.com/wp-content/uploads/2019/10/RevPMGI-Resists-data-sheetV-rhcedit-100311.pdf. Accessed 6 Aug. 2024.

Kim, Soo-hyun, and Mi-Yoon Chang. “Application of Human Brain Organoids—Opportunities and Challenges in Modeling Human Brain Development and Neurodevelopmental Diseases.” MDPI, Multidisciplinary Digital Publishing Institute, 7 Aug. 2023, www.mdpi.com/1422-0067/24/15/12528.

Lucibello, Andrea, et al. “Smoothing and Surface Planarization of Sacrificial Layers in MEMS Technology.” Microsystem Technologies, vol. 19, no. 6, Springer Science+Business Media, Feb. 2013, pp. 845–51, https://doi.org/10.1007/s00542-013-1747-6. Accessed 6 Aug. 2024.



MicroChem. “LOR Lift-off Resists.” MicroChem, www.nanofab.utah.edu/wp-content/uploads/2022/10/Positive-Lift-Off-Resist-LOR10B-Spec-Sheet-2.pdf. Accessed 5 Aug. 2024.

Mulloni, Viviana. “Chromium in MEMS Technology.” Materials Science Research Journal, vol. 5, no. 2/3, 2011, pp. 211–29, www.proquest.com/docview/1701285325. ProQuest. Accessed 6 Aug. 2024.

Saglam-Metiner, Pelin, et al. “Humanized Brain Organoids-on-Chip Integrated with Sensors for Screening Neuronal Activity and Neurotoxicity.” Microchimica Acta, Springer Vienna, 3 Jan. 2024, https://link.springer.com/article/10.1007/s00604-023-06165-4.

Shalabi, Nabil. “Lift-off Process Using a Bilayer LOR/PMGI and Any Resist.” Advanced Nanofabrication Facility, www.nanofab.ubc.ca/processes/photolithography/lift-off-process-using-a-bilayer-lor-pmgi-and-any-resist/. Accessed 19 Aug. 2024.

Sino, Maria. Origami Kirigami, 23 Mar. 2015, msinoblog.wordpress.com/2015/03/23/origamikirigami/.

“Types of Photoresist.” Samsung Display Newsroom, 17 Jan. 2022, global.samsungdisplay.com/29349/.



Antibacterial Effect of Silica-Agar Structure Coated with AgNPs

Author Full Name (Last Name, First Name): Sung, David Jun
School Name: US International School

ABSTRACT

With climate change leading to more water scarcity and stress, places without good water purification systems are at risk for disease. To tackle this problem, we studied reusable structures coated with silver nanoparticles (AgNPs). When we mixed silica, agar, and glucose into a gel and treated it with AgNO3, the agar and silica absorbed the 0.1 M AgNO3. The glucose in the agar then helped reduce the Ag+ ions, creating AgNPs that coated the silica-agar structure. A piece of this structure, 1.2 cm in diameter and 0.5 cm thick, reduced the bacteria E. coli and S. aureus by more than 1.2 times and 1.5 times, respectively, when placed in 50 mL of contaminated tap water for an hour. Even when we lowered the concentration of AgNO3 to 0.01 M, we still observed antibacterial effects. Additionally, when we treated the solutions at room temperature and under incandescent light, there was hardly any increase in bacteria. Thus, we found that this mixture of melted agar, glucose, and AgNO3 can be transformed into a structure, called SGS (Silica-Glucose-Silver NPs), which can effectively and quickly control bacteria, mitigating the risks posed by water pollution.



1. Introduction

One of the issues caused by the impacts of climate change is water pollution and scarcity. Water contamination happens when pollutants from industrial activities, along with various harmful substances like copper, lead, fluorides, and waste, contaminate drinking water, especially during natural disasters such as floods, droughts, and typhoons [1]. Water pollution is critical to study because it directly threatens human health and development and can lead to serious diseases such as typhoid fever and cholera [2, 3]. In places like Vietnam, which I visited in January 2024, frequent natural disasters and untreated wastewater have resulted in severe water shortages despite abundant rainfall. In the village of Tan Hung Ir, for instance, people face water issues in their livelihoods, such as fabric dyeing, because the wastewater is dumped directly into nearby rivers. In Ho Chi Minh City, the canals exceed the limits for E. coli discharge, and drinking water is consumed without purification, leaving the population extremely vulnerable to disease [4]. This issue is not exclusive to Vietnam: it is a global phenomenon affecting naturally water-scarce regions like North Africa and Southeast Asia, with around 2.3 billion people experiencing water shortages [5].

Many studies are being conducted on methods for purifying water using chlorine disinfection, solar energy, and filters. Chlorine disinfection is the most common method, but there are concerns about antibiotic resistance in bacteria from chlorinated drinking water [5]. Filters that use membranes or a mix of sand and iron to remove bacteria and arsenic require special facilities [5, 6]. Solar-based methods include the solar ball, which collects evaporated water, and the Eliodomestico, which distills seawater using solar heat; Solvatten is another method that places contaminated water in black containers to kill bacteria [6-8]. These methods can easily clean and purify water, but they depend on sunny weather. Other methods that do not need special facilities include chlorine disinfection and using plant extracts like Alternanthera sessilis and Moringa oleifera to fight parasites and bacteria [6]. Plant extracts can also help create silver nanoparticles (AgNPs), which are effective against bacteria. These methods work quickly for cleaning water, but because they are single-use, they can leave behind toxic residues [9].

Silver nanoparticles are interesting because they can be easily made with plant extracts, work well against bacteria by affecting their cell walls, and can act as photocatalysts [10]. If AgNPs can be securely attached to a structure, their effectiveness in purifying water could improve. To do this, we plan to use silica and agar to create the structure. Silica (SiO2) is a common compound on Earth. It has pores that range from 0.5 to 300 nm and is resistant to water and chemicals, which makes it useful for enhancing photocatalytic effects, especially when combined with materials like titanium dioxide [11]. Agar is a substance extracted from seaweed; when heated and cooled, it forms a gel, which makes it a good material for bioplastics [12]. By mixing these two materials with glucose, which helps reduce silver ions (Ag+), we can create a gel. When we add silver nitrate, the glucose and Ag+ will react to form silver nanoparticles on the surface and inside the gel. This will create a structure made of silica and agar coated with silver nanoparticles (we will call it Silica-Glucose-Silver NPs, or SGS). This method can boost the photocatalytic effects and antibacterial properties of AgNPs while also solving the problem of how to recover them.

Fig. 1. The process of creating a silica structure coated with AgNPs.



2. Method

2.1 Preparation of SGS (Silica-Glucose-Silver NPs) Structure and E. coli Inhibition

First, 1 gram of silica was mixed with a 10% glucose solution, then 0.1 M AgNO₃ solution was added in a 1:1 ratio, and the mixture was heated in a water bath at 60°C. Afterward, the resulting precipitate was checked, and the AgNP particles were observed using an electron microscope (JSM-7600F). Next, 10 grams of glucose, which acts as a reducing agent for silver ions, were added to 100 mL of a 1.5% agar solution and heated. After that, 10 grams of silica (dental silica, SiO₂·2H₂O), which has good water absorption and resistance, was added. When the silica had mixed with the glucose and agar solution and the mixture became viscous, it was scooped out with a teaspoon and placed into a spherical silicone mold to set. As the temperature dropped below 40°C, the mixture solidified into the shape of the silicone mold due to the gel formation of the agar. The spheres created were then placed in a 0.1 M AgNO₃ solution in a water bath at 60°C to observe any color changes on the surface of the SGS.

Fig. 2 shows the treatment of silica in a solution mixed with glucose and agar.

Fig. 3 illustrates the growth process of the SGS structure, where glucose mixed with silica and agar reacts with Ag+ ions to form AgNPs inside and on the surface. Initially, when the sphere comprising silica, agar, and glucose was placed in the AgNO3 solution, it appeared white. Over time, however, the color changed from light brown to dark brown, indicating the formation of silver. The structure containing AgNPs created from agar and silica will be referred to as SGS (Silica-Glucose-Silver NPs). After creating the SGS, it was treated with an E. coli-inoculated NB solution, and samples from each solution were streaked on NA plates to examine the levels of bacterial growth.

2.2 Strength of the SGS Structure

To test the strength of the SGS, a mixture of silica, agar, and glucose was poured into a silicone mold with a diameter of approximately 20 mm, and a sample for the strength test was made by allowing the mixture to solidify in one part of the mold. After both sides of the mold were filled, one mixture was immediately placed in AgNO3 to form AgNPs (Type 1). Another structure (Type 2) was created by solidifying and drying the mixture before treating it with AgNO3. The diameter of each structure was then measured, and the strength of the SGS was assessed using a force gauge (San Liang Force Gauge NK100).



Fig. 4 depicts the growth process of the SGS structure, where glucose and Ag+ ions react with silica and agar to form AgNPs on both the surface and interior.

2.3 Water Absorption and Antibacterial Effects of SGS Structures at Different AgNO3 Concentrations

It was hypothesized that Ag+ ions leached out of the SGS ball into the NB solution, causing a color change; this indicated that too many AgNPs had formed and accumulated on the surface. As a result, the concentration of AgNO3 used to react with the SGS ball was decreased. Two different mixtures, containing 10 g and 15 g of silica, were mixed with the glucose-infused agar solution and solidified in a silicone mold (1.5 x 2.5 x 1.7 cm). These were then placed in 0.01 M, 0.05 M, and 0.1 M AgNO3 solutions to generate AgNPs and left to dry. After measuring the size and weight of each piece, one piece of SGS was placed in 15 mL of distilled water and shaken in a water bath for 20 minutes. Afterward, each piece was removed, weighed, and combined with 20 μL of E. coli and S. aureus culture solutions in a composition of 0.2 mL of each solution and 0.8 mL of NB solution for cultivation, followed by measuring the absorbance. Similarly, samples were collected after 20, 40, and 60 minutes to observe the antibacterial effects over time.

2.4 Antibacterial Effects of SGS Pieces and Efficacy Under Incandescent Light

The weight of the SGS piece that reacted with 0.01 M AgNO3 was approximately 2.3 g. To check whether antibacterial effects could be observed with a smaller amount, a mixture of agar, glucose, and silica was placed in a petri dish to solidify, creating pieces with a diameter of 1.2 cm and a thickness of 0.5 cm. Each piece was treated with 0.01 M AgNO3 and heated; the resulting pieces weighed about 0.1 g. One SGS piece was placed into 50 mL of distilled water containing 1 mL each of E. coli and S. aureus solutions. The samples were analyzed under indoor conditions (323 lux) and under an incandescent light (1527 lux), and bacteria were collected at one-hour intervals. The absorbance was measured seven days later.

Fig. 5 shows the process of creating and drying the SGS piece.



2.5 Photocatalytic Effect of SGS Pieces and Changes in E. coli, S. aureus, and Chloride Ions

Two SGS pieces weighing 0.08 g, reduced from the previously created 0.1 g pieces, were placed in 10 mL of a 1% Congo red solution. Congo red was chosen because it decomposes only through photocatalytic effects and not under silver ions in the absence of ultraviolet light [13]. The SGS pieces were placed under ultraviolet light (Sankyo Denki G6T5) and incandescent light (BYUL PYO M-50-6-F-HE) for two hours (determined to be the effective window for observing antibacterial effects, which begin from one hour). The SGS pieces were then treated in solutions containing E. coli and S. aureus, and the antibacterial effects were measured by checking absorbance. The number of bacterial colonies was then counted using the single plate-serial dilution spotting (SP-SDS) method, which involved diluting the bacterial solution and spotting 5 μL drops onto the NA media. After two days, the number of colonies formed was determined, and the average values and dilution ratios were used to calculate the number of bacteria.

Finally, to determine how many Ag+ ions were extracted from the SGS pieces, two SGS pieces were placed into a 10 mL sample of tap water containing chloride ions. The change in chloride ions in the solution with the SGS pieces was estimated using the Mohr method. K2CrO4 (5 g) was dissolved in 20 mL of distilled water, and silver nitrate solution was added until a reddish precipitate formed. After waiting for the precipitate to settle, the final volume was adjusted to 100 mL. Then, 10 mL of each sample was treated with the previously prepared K2CrO4 reagent, followed by the addition of 0.01 M AgNO3 solution until the solution turned light yellow. This process was repeated three times to compute the average volume. Lastly, the volume of silver nitrate consumed when distilled water alone was titrated was recorded, and the following quantities were used to calculate the amount of chloride ions: a: volume of the consumed silver nitrate solution (mL); b: volume of the consumed silver nitrate solution when distilled water was used (mL); f: concentration factor for 0.01 M AgNO3 = 1.033.
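The formula itself did not survive the page extraction; the standard Mohr-method expression consistent with the variable definitions above and the 10 mL sample volume would be:

$$\text{Cl}^-\ (\text{ppm}) = \frac{(a - b) \times f \times 0.01 \times 35.45 \times 1000}{10}$$

where 0.01 is the AgNO3 molarity (mol/L), 35.45 is the molar mass of chloride (g/mol), and the factors of 1000 and 10 convert to mg per liter of the 10 mL sample. This reconstruction is our assumption based on the standard method, not necessarily the authors' original equation.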

3. Results

3.1 Observation of Particles When Reacting AgNO3 with Silica and Glucose

When glucose was absent, the precipitate formed was light brown; when glucose was added, the color changed to black, indicating a significant difference in the color of the precipitate. Observations under an electron microscope revealed that the particles formed by treating silica with AgNO3 alone were slightly more aggregated than those formed in the presence of glucose.

Fig. 6 shows the precipitate created by the reaction of silica, AgNO3, and glucose, along with electron microscope images of each precipitate.



3.2 Changes in E. coli Treated with SGS Balls

After producing the SGS balls, they were treated with an E. coli-inoculated NB solution. One day later, when the solution was taken out of the incubator, it appeared cloudy overall. In particular, the solution containing the ball made from silica and agar, as well as the SGS ball that included glucose, became murky and turned brown after three days. This change is thought to be due to the reaction between the AgNPs' Ag+ ions and the NB culture solution or E. coli; it is possible that the bacteria reduced these Ag+ ions, as they can act as reducing agents. Indeed, after taking samples from each solution and streaking them on NA plates to check the levels of bacterial growth, E. coli was not detected in the balls made from silica and agar that were treated with AgNO3, nor in the SGS balls made with glucose.

Fig. 7 displays photos of the mixed silica and agar balls, the balls treated with silica, agar, and AgNO3, and the SGS balls after being placed in E. coli for one day, as well as the cultivation results from each solution after streaking on NA plates.

3.3 Strength Testing of SGS Balls

To create SGS Type 1, a mixture of silica, glucose, and agar was placed directly into AgNO3, while SGS Type 2 was made by allowing the silica, glucose, and agar mixture to solidify and dry before treating it with AgNO3. The sizes and strengths of both types were tested.

Fig. 8 illustrates SGS Type 1 and SGS Type 2, both dried after treatment with AgNO3.

The results indicated that the size and strength of Type 1 were 17.4 mm and 93.75 N, respectively, while Type 2 showed a size and strength of 15.5 mm and 70.5 N. This suggests that treating with AgNO3 before drying resulted in larger and stronger structures. The higher strength of Type 1 is likely because Type 2 underwent a drying process after reacting with AgNO3, which may have weakened the structure. Type 1



also absorbed and reacted with AgNO3 while still in an unfixed state, resulting in less size reduction and minimal deformation of the overall structure.

Table 1 shows the sizes (left) and the strength at which the SGS breaks (right).

3.4 Water Absorption and Antibacterial Effects of SGS Based on Silica and AgNO3 Concentration

The observed color change suggests that Ag+ ions leached out from the SGS ball into the NB solution, likely due to an excess of AgNPs formed on the surface. Consequently, the concentration of AgNO3 used for the SGS ball reactions was lowered. Initial evaluations measured how much water the SGS could absorb. The experimental results indicated that the weight of the SGS structure increased by 71.19% with 10 g of silica; when the amount of silica rose to 15 g, the increase reached 95.94%, approximately 24 percentage points higher.

Table 2 compares the weight of the SGS before and after water absorption.

Fig. 9 illustrates the weight increase of SGS structures treated with 0.01 M, 0.05 M, and 0.1 M AgNO3 based on the previously mentioned ratios of silica.



After treating the structures with E. coli and S. aureus for 20 minutes, the samples were cultured for one day to observe bacterial absorbance. The results confirmed a significant reduction in E. coli and S. aureus, with the SGS made from 10 g of silica showing more effective antibacterial results, remaining effective even at a concentration of 0.01 M.

Fig. 10 presents the changes in absorbance levels after mixing SGS-treated solutions with E. coli and S. aureus cultures.

Samples taken after 20, 40, and 60 minutes showed that exposure to SGS for 60 minutes yielded the greatest reduction in E. coli and S. aureus levels, with S. aureus exhibiting the larger decrease. This difference is believed to be due to the structural differences between the cell membranes of E. coli and S. aureus.

3.5 Changes in E. coli and S. aureus in Solutions Treated with SGS Pieces

The effectiveness of photocatalytic activity was assessed both under indoor conditions and under incandescent light. Under indoor conditions, both E. coli and S. aureus showed minimal growth, and when the incandescent light was turned on, the absorbance of the bacteria was also very low. Thus, with or without incandescent light, adding one SGS piece to 50 mL of water effectively inhibited bacterial growth after one hour.



Fig. 11 shows the changes in bacterial absorbance over time when treated under indoor conditions and with incandescent light.

Table 3 details the changes in absorbance levels of E. coli and S. aureus when treated under indoor conditions and with incandescent light.

3.6 Decomposition of Congo Red and Changes in Chloride Ions in Water Treated with SGS

When the SGS pieces were left for 5 months, there was no sign of decomposition, and their shape remained almost unchanged, indicating that the properties of silica enhanced water resistance.

Fig. 12 shows the appearance of the SGS after 5 months.

To evaluate the photocatalytic effect, SGS pieces were treated with ultraviolet or incandescent light. Two SGS pieces (approximately 0.2 g) were placed in 10 mL of a 1% Congo red solution for 2 hours. Congo red was used because it does not decompose under silver ions without ultraviolet light and breaks down only through photocatalytic effects [13]. After exposure, the UV-treated SGS pieces showed a 16% reduction in absorbance, while the pieces under incandescent light showed only a 2% reduction. This indicates that AgNPs need to be exposed to shorter wavelengths of light, such as ultraviolet light, to function as effective photocatalysts; the wavelengths of incandescent light are insufficient to generate radicals like O2* or OH*.



Fig. 13 illustrates the change in absorbance of the Congo red solution treated with SGS.

Additionally, two pieces of SGS (average weight 0.2 g) were placed in 10 mL of tap water for 2 hours, and the changes in the solution were observed. No color change was noted in the solution, as confirmed by observations from three samples.

Fig. 14 demonstrates the changes observed in the solution after 2 hours of SGS treatment (red arrows indicate no change in color).

Samples were taken to analyze changes in chloride ions. The results revealed that the concentration of chloride ions in the 10 mL of tap water was measured at 3.3 ppm, and after treatment with SGS for 2 hours, the concentration decreased to 2.567 ppm.

Table 4 outlines the amounts consumed when titrating the tap water with SGS against distilled water using AgNO3. The decrease in concentration can be attributed to Ag+ ions leaching out from the SGS into the tap water, reacting with the chloride ions present, suggesting that the amount of Ag+ ions released corresponds to the decrease in chloride ions.



3.7 Changes in Bacteria in Water Treated with SGS

After treating the SGS pieces for 2 hours, samples were taken and tested for bacterial absorbance one day later. Under ultraviolet treatment, E. coli had an absorbance of 0, and S. aureus, at 0.005 under incandescent light and 0.011 under ultraviolet light, also showed very low absorbance. When the CFU (Colony Forming Units) were calculated, the result was 0.44 CFU/mL, which is significantly low. This suggests that the SGS itself inhibits bacteria effectively and more rapidly than relying on photocatalytic effects.
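For readers unfamiliar with SP-SDS counting, the sketch below shows the standard back-calculation from spot counts to CFU/mL. It is a generic illustration under the 5 μL spot volume described in Section 2.5, not the authors' exact computation.

```python
def cfu_per_ml(avg_colonies: float, dilution_factor: float,
               spot_volume_ml: float = 0.005) -> float:
    """Back-calculate CFU/mL from an SP-SDS spot count.

    avg_colonies: mean colony count across replicate spots
    dilution_factor: e.g. 100 for a 10^-2 dilution
    spot_volume_ml: volume plated per spot (5 uL = 0.005 mL in this study)
    """
    return avg_colonies * dilution_factor / spot_volume_ml

# Hypothetical example: an average of 3 colonies from an undiluted 5 uL spot
print(cfu_per_ml(avg_colonies=3, dilution_factor=1))  # 600.0 CFU/mL
```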

Fig. 15 displays the changes in absorbance for E. coli and S. aureus after being exposed to ultraviolet and incandescent light for 2 hours.

Fig. 16 shows the changes in bacterial colonies of S. aureus based on dilution rates after a 2-day incubation.



4. Discussion and Conclusion

AgNPs are typically produced using reducing agents, such as fruit peel extracts, to create AgNP solutions. In this study, glucose was mixed with silica and agar to serve as the reducing agent for Ag+ ions. This material was reacted with AgNO3, resulting in a strong and water-resistant SGS structure owing to the properties of silica and agar. The structure can be produced in various shapes, including spheres, rectangles, and disks, all of which demonstrated the effective formation of AgNPs and the ability to inhibit E. coli and S. aureus. The structures did not easily break under physical force and, despite the silica's capacity to absorb water after drying, did not decompose in water. A minimum of about 0.1 g of SGS structure effectively inhibited bacteria in 50 mL of contaminated water after treatment for at least one hour. When SGS pieces were treated with Congo red solution and exposed to ultraviolet light for 2 hours, clear decomposition occurred, whereas no decomposition was observed under incandescent light. Although exposure to E. coli and S. aureus solutions showed that E. coli tested negative according to Food and Drug Administration drinking water standards, S. aureus appeared at a very low level of 0.25 CFU/mL, compared to a general detection limit of 100 CFU/mL. SGS pieces are found to function effectively as photocatalysts when subjected to shorter wavelengths of light; however, they also exhibit sufficient antibacterial effects from the presence of AgNPs alone. The materials used to prepare the SGS structure for a total of 100 mL consist of 50 mL of 0.01 M AgNO3, 10



References
[1] Hunter, Paul R., Alan M. MacDonald, and Richard C. Carter. "Water supply and health." PLoS Medicine 7.11 (2010): e1000361.
[2] Jo, Su-Heon. "Water Pollution and Health." Health News 18.6 (1994): 34-38.
[3] Zakar, Muhammad Zakria, Rubeena Zakar, and Florian Fischer. "Climate change-induced water scarcity: a threat to human health." South Asian Studies 27.2 (2020).
[4] Le, Linh-Thy, et al. "Investigation of canal water quality, sanitation, and hygiene among residents living along the side of the canals - A cross-sectional epidemiological survey at Ho Chi Minh City, Vietnam." Case Studies in Chemical and Environmental Engineering 9 (2024): 100700.
[5] Zinn, Caleb, et al. "How are water treatment technologies used in developing countries and which are the most effective? An implication to improve global health." Journal of Public Health and Emergency 2 (2018).
[6] Islas-Espinoza, Marina, and Alejandro de las Heras. "Water appropriate technologies." Sustainability Science and Technology: An Introduction, CRC Press, Boca Raton (2014).
[7] Spooner, Eric, and Lisa VanBladeren. "Solar Distillation in Rajasthan, India." (2013).
[8] Isberg, Ulrika, and Karin Nilsson. "Life cycle assessment and sustainability aspects of Solvatten, a water cleaning device." (2011).
[9] Noga, Maciej, et al. "Toxicological aspects, safety assessment, and green toxicology of silver nanoparticles (AgNPs)—Critical review: state of the art." International Journal of Molecular Sciences 24.6 (2023): 5133.
[10] Thirumagal, N., and A. Pricilla Jeyakumari. "Photocatalytic and antibacterial activities of AgNPs from Mesua ferrea seed." SN Applied Sciences 2.12 (2020): 2064.
[11] Luthfiah, Annisa, et al. "Silica from natural sources: a review on the extraction and potential application as a supporting photocatalytic material for antibacterial activity." Science and Technology Indonesia 6.3 (2021): 144-155.
[12] Samateh, Malick, et al. "Unraveling the secret of seed-based gels in water: the nanoscale 3D network formation." Scientific Reports 8.1 (2018): 7315.
[13] Thirumagal, N., and A. Pricilla Jeyakumari. "Photocatalytic and antibacterial activities of AgNPs from Mesua ferrea seed." SN Applied Sciences 2.12 (2020): 2064.

Methodology Reference
- Methods for Confirming Bacterial Count: Single Plate-Serial Dilution Spotting (SP-SDS); General Bacterial Count: 100 CFU/mL.
- Food Safety Standards (Food Standards for Drinking Water in Food Service Facilities): Water Standards: E. coli must not be detected, no standard for Staphylococcus aureus. Food Standards: E. coli 10 CFU/g, Staphylococcus aureus 100 CFU/g.



Water Balance in the Great Lakes

Author Full Name (Last Name, First Name): Tung, Sooyeon
School Name: Seoul Foreign School

Abstract The Great Lakes, a vital freshwater resource, face significant challenges due to climate change, which affects their water levels through altered precipitation patterns and increased evaporation. This study aims to analyze and model the water volume trends of the Great Lakes—Superior, Michigan & Huron, Erie, and Ontario—from January 1950 to December 2019. Using mathematical modeling and iterative differentiation techniques, the research identifies general trends and explores potential management strategies. The findings reveal fluctuating volumes for Lake Superior, stable trends for Lakes Michigan, Huron, and Ontario, and a concerning decline for Lake Erie. The study underscores the need for ongoing monitoring and adaptive management strategies to address the impacts of climate change on these critical water resources. Limitations include the reliance on monthly data and simplifying assumptions, with recommendations for future research to incorporate higher-resolution data and broader climatic factors.

Keywords Great Lakes, water volume trends, mathematical modeling, environmental impact, data analysis



Introduction and Aim

The Great Lakes, situated in North America along the border of the United States and Canada, constitute the largest collection of freshwater lakes globally. They hold approximately 21% of the world's freshwater by volume, covering a total surface area of approximately 244,106 square kilometers and containing 22,617 cubic kilometers of water. These lakes began forming around 14,000 years ago, at the conclusion of the last glacial period. They support a diverse aquatic ecosystem and are integral to the cultural heritage of the continent, sustaining economies reliant on fishing, recreation, and manufacturing.

According to data from the US National Aeronautics and Space Administration (NASA), the average global air temperature has increased by at least one degree Celsius since 1920, indicative of ongoing climate change. One of its most significant impacts is the amplification of the hydrologic cycle, the continuous circulation of water in the Earth-atmosphere system, leading to heightened evaporation rates and altered precipitation patterns, which could substantially affect river flows and freshwater availability. Managing the fluctuating water levels of the Great Lakes is an unavoidable challenge for communities residing in their vicinity. Due to their vast surface area and volume, these lakes exhibit distinct timescales in their water level behaviors compared to most inland bodies of water.

I was intrigued by this topic because the Great Lakes' water dynamics are crucial to understand, given their immense importance as a freshwater resource, not only for the surrounding communities but also globally. Because of their significant volume and surface area, any changes in their water levels can have far-reaching implications for sectors like agriculture, industry, and tourism, not to mention the ecosystems they support. The aim of this exploration is to analyze the volume trends of the Great Lakes—Superior, Michigan & Huron, Erie, and Ontario—from January 1950 to December 2019 and to develop mathematical models of each lake's water volume. Comprehending the water volume dynamics of these lakes is essential for implementing effective management strategies.

Mathematical Modeling

The Great Lakes system comprises five major lakes: Superior, Michigan, Huron, Erie, and Ontario; in this study, Michigan and Huron are treated jointly. These lakes are interconnected, each affecting the others' water flows and levels. Below is a general diagram of the structure of the Great Lakes.

Figure 1. Diagram of the Great Lakes (Andriana, 2024)



Below is a more detailed diagram of the water flow of the Great Lakes.

Figure 2: Schematic of the Great Lakes water network (Chegg). In Figure 2, solid black arrows represent connecting channel flow between lakes, and dashed arrows represent diversions into and out of lakes.

Rate of change of the water balance in the Great Lake The general formula for the rate of change of the water volume of the lakes can be represented as shown. 𝑑𝑉

𝑑𝑡

= 𝑓𝑙𝑜𝑤 𝑖𝑛 − 𝑓𝑙𝑜𝑤 𝑜𝑢𝑡

V: volume of the lake (m³); t: time (months).

Inflows to a lake come from precipitation, water runoff, and flows from diversions or rivers. Outflows come from evaporation and flows leaving through diversions or rivers.

Key assumptions:
1. Precipitation, water runoff, and evaporation are assumed to occur uniformly across the entire surface area of each lake. In reality, variations in factors like sunlight exposure and geography lead to non-uniform distribution, and precipitation and evaporation vary daily with weather, whereas the available data is monthly.
2. The surface area of each lake is held constant at the latest available measurement, although natural phenomena could alter the surface area over time.
3. The model considers only the major inputs and outputs of each lake: precipitation, evaporation, diversions, river flow, and water runoff. This captures the general flow of water, but the omission of smaller terms may introduce some inaccuracy into the modeled data.
4. Flows within each lake are assumed to remain constant throughout each month, as the available data is on a monthly basis.
5. River flow rates are assumed to be consistent over each monthly time step of the observation period.
6. Because Lake St. Clair is much smaller than the other lakes, changes in its volume are disregarded, so the St. Clair River and the Detroit River are taken to carry equal flows.
7. Lake Michigan and Lake Huron are treated as a single combined lake. This simplifies the system; treating them separately would require two distinct systems, each potentially with different flows from Lake Superior.

Five parameters are given:
● Precipitation (P) → Unit: mm per month over the respective lake surface area (m²)
● Water runoff (R) → Unit: mm per month over the respective lake surface area (m²)
● Evaporation (E) → Unit: mm per month over the respective lake surface area (m²)
● Flow out (Fo) → Unit: m³/s
● Diversions (D) → Unit: m³/s

Although the model is effective at showing the general trend of each lake, the assumptions above and the factors not yet included mean these functions will not be fully accurate in real-life applications.
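To make the water balance concrete, the sketch below shows one way the five parameters could be combined into a monthly rate of change, converting the surface terms (mm per month over the lake area) and the channel terms (m³/s) to m³ per month. This is a minimal illustration, not the paper's implementation; the function name, the average month length, and the example values are all assumptions.

```python
# Minimal sketch (not from the paper) of the monthly water balance
# dV/dt = flow in - flow out, using the five parameters defined above.
SECONDS_PER_MONTH = 30.44 * 24 * 3600  # average month length, an assumption

def monthly_dV(P_mm, R_mm, E_mm, F_out, D_net, area_m2):
    """Rate of volume change in m^3 per month.

    P_mm, R_mm, E_mm: precipitation, runoff, evaporation (mm over the lake
                      surface for the month).
    F_out:            channel outflow in m^3/s.
    D_net:            net diversion into the lake in m^3/s (negative if out).
    area_m2:          lake surface area in m^2 (assumed constant).
    """
    surface_terms = (P_mm + R_mm - E_mm) / 1000.0 * area_m2  # mm -> m^3
    channel_terms = (D_net - F_out) * SECONDS_PER_MONTH      # m^3/s -> m^3/month
    return surface_terms + channel_terms

# Illustrative numbers only (area roughly that of Lake Superior):
print(monthly_dV(P_mm=70, R_mm=40, E_mm=45, F_out=2100, D_net=150,
                 area_m2=8.21e10))
```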



Parametrization and Implementation
Some parameters needed for the calculations, namely the surface area and volume of each lake, were obtained from published sources. The data for May 1977 was used, as it was the only month for which both surface area and volume could be found for all of the lakes.

A monthly time step is used for the rate-of-change formulas. To determine whether this time step is suitable, we calculate the standard deviation and mean of precipitation, water runoff, and evaporation for each calendar month for Lake Superior. As the largest of the Great Lakes, Lake Superior strongly influences the water balance trend of the whole system. Comparing the standard deviation to the mean for each month (January, February, March, and so on) lets us evaluate the appropriateness of the time step: if the standard deviation is small relative to the mean, the monthly data sit consistently at similar levels, indicating Lake Superior's stable water flow patterns. Standard deviation is a crucial measure of dispersion, indicating how far, on average, the data points lie from the mean; a small standard deviation means the data points cluster near the mean, while a large one implies greater variability. Statistical measures of the water volume for each month:

Substituting the monthly volume data into the formulas above yields the table below, which displays the calculated mean and standard deviation for each month over 70 years for Lake Superior.



Month        Mean                 Standard Deviation
January      203,686,755,007      73,442,034,189
February     198,935,936,756      73,915,421,971
March        196,057,821,967      74,321,230,228
April        199,065,865,824      75,143,643,227
May          205,646,652,449      76,013,640,746
June         212,963,539,627      76,774,018,316
July         218,029,954,646      76,548,686,976
August       221,011,929,302      76,281,876,376
September    221,934,001,322      76,394,785,548
October      221,224,377,293      76,646,975,950
November     217,766,496,179      76,458,763,168
December     212,082,152,990      76,810,354,129

Averaged across all months, the standard deviation is 35.99% of the corresponding mean. A ratio of about 35.99% is acceptable within the context of this study for several reasons. First, although 35.99% might seem large, it indicates consistent and relatively contained variability: given Lake Superior's vast size and the numerous factors influencing its water levels, such as precipitation, evaporation, and runoff, this level of variability is reasonable, since the lake's immense volume means even substantial absolute variations represent a smaller relative change. Second, the Great Lakes system involves complex, interconnected hydrologic processes, including seasonal variations and inputs and outputs from rivers and diversions; a 35.99% standard deviation suggests the data capture this inherent variability rather than reflecting inaccuracies. Third, the study spans nearly 70 years, during which natural climate fluctuations produce variability in monthly water volumes; the consistency of the standard deviation across months implies stable year-to-year fluctuations, indicating that while there are natural variations, the overall system remains stable. Thus, a monthly time step is suitable for this investigation.
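The check above amounts to computing a coefficient of variation (std/mean) per calendar month and averaging it. The sketch below reproduces that calculation; the data here are synthetic stand-ins on roughly the table's scale, not the study's dataset.

```python
# Sketch of the time-step check: group a monthly volume series by calendar
# month, compute mean and std, and average the std/mean ratio across months.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for 70 years (840 months) of Lake Superior volumes.
volumes = 2.1e11 + 7.5e10 * rng.standard_normal(840)
months = np.arange(840) % 12  # 0 = January of each year

ratios = []
for m in range(12):
    v = volumes[months == m]           # all Januaries, all Februaries, ...
    ratios.append(v.std() / v.mean())  # coefficient of variation per month

print(f"mean std/mean across months: {np.mean(ratios):.2%}")
```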

Iterative Method of Differentiation
An iterative method is a computational approach that repeatedly applies a sequence of operations to approximate solutions to mathematical problems.



Above is the formal mathematical representation of the iterative method. In this investigation, the iterative method is employed to reconstruct historical and predict future lake water volumes. Using May 1977 as a fixed reference point, pre-1977 volumes are obtained by subtracting the modeled rate of change month by month, and post-1977 volumes by adding it, creating a dataset covering both periods. The measured May 1977 volumes correspond to the 330th value in the 840-point dataset, so with the reference point near the middle, the series must be computed in two directions: forward from the reference and backward from it. The method is applicable to both linear and nonlinear datasets with large numbers of variables, and it was used here to determine the lake volumes for all years based on the May 1977 volume data of the Great Lakes.
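Since the paper's formal representation is not reproduced here, the sketch below shows one plausible reading of the scheme: a forward/backward Euler-style walk from the May 1977 anchor. The function name, the fake rates, and the reference volume are all illustrative assumptions.

```python
# Sketch of the two-direction iteration: walk forward and backward in time
# from a single measured anchor volume, accumulating modeled monthly changes.
import numpy as np

def reconstruct_volumes(dV, V_ref, ref_idx):
    """dV[i]: modeled volume change during month i (m^3/month).
    V_ref: measured volume at month ref_idx. Returns the full volume series."""
    n = len(dV)
    V = np.empty(n)
    V[ref_idx] = V_ref
    for i in range(ref_idx + 1, n):       # forward from the anchor
        V[i] = V[i - 1] + dV[i - 1]
    for i in range(ref_idx - 1, -1, -1):  # backward from the anchor
        V[i] = V[i + 1] - dV[i]
    return V

# Illustrative use with fake rates and a fake 1977 anchor volume:
dV = np.random.default_rng(1).normal(0.0, 1e9, size=840)
series = reconstruct_volumes(dV, V_ref=1.21e13, ref_idx=329)  # 330th value
print(series[0], series[-1])
```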



From the graphs, we can observe the volume trend spanning 70 years. In the case of Lake Superior, despite notable fluctuations, the lake appears to regulate its volume naturally, as evidenced by its ability to recover from steep decreases and transition to an increasing trend. However, the recent data indicate a sharp increase in water volume, raising concerns about the consequences of sustained growth. Around 1982, Lake Superior experienced a significant increase in water volume, a phenomenon not mirrored by the other Great Lakes—Erie, Michigan, Huron, and Ontario. Several key factors contributed to this disparity, rooted in geographical, climatic, and hydrological differences. Superior's northernmost position and large watershed brought more direct precipitation and higher inflow from rivers, streams, and rainfall during the late 1970s and early 1980s. Additionally, Lake Superior's location resulted in more winter ice cover and cooler temperatures, which reduced evaporation and allowed more water to remain in the lake. In contrast, the southern lakes, such as Erie, had less ice cover and higher evaporation rates, offsetting any increases in precipitation and producing less noticeable changes in water levels. Nevertheless, a continued rise could lead to flooding, impacting agriculture and the surrounding communities.

As can be seen from the modeled graphs, Lakes Michigan, Huron, and Ontario exhibit a relatively stable and consistent trend, suggesting normalcy. This trend appears plausible: USA Today reported that, despite some fluctuations, water levels in Lakes Michigan, Huron, and Ontario are expected to follow a similar pattern, consistent with the graphs' trend.

Lake Erie experiences a persistent volume decrease over the 70-year period. This presents a critical issue, as future water scarcity may arise, affecting not only Lake Erie but also the interconnected ecosystem of the Great Lakes and the societies dependent on them. The model's plausibility is supported by the U.S. Army Corps of Engineers, which reported in 2020 that water levels on Lake Erie have declined in the past and are expected to continue declining in the near future, mirroring the trend shown in the graph.

The iterative method therefore reproduces a reasonably accurate volume trend for the Great Lakes. However, no more precise volume data for the lakes before or after 1977 was available, which makes fully validating the model's accuracy challenging. Despite this, the similar trends reported in various published articles suggest that the method's results align with trends observed in reality. Nonetheless, several limitations affect the method's precision and reliability; a major one is that water trends can change daily, and the model's time step is too long to allow a finer-grained analysis of the Great Lakes.



Engineering Solution
The modeled graphs reveal issues for both Lake Superior and Lake Erie. For Lake Superior, although water levels are rising continuously, the lake is still recovering from an earlier decrease, so this issue could be set aside for some time. However, if the water level keeps rising, a likely prediction given the graph's consistent increase over roughly 400 months, it could lead to concerning problems: not only flooding but also rising water levels in the downstream lakes, causing significant changes to the surrounding environment and ecosystems.

Lake Erie, on the other hand, presents the greater issue with its constantly decreasing water volume. The graph shows a steady decline over the past 840 months, a clear and ongoing trend that is likely to continue. This is an important issue to address, as a continued decrease in water levels can lead to severe consequences, including disruption of aquatic habitats, degraded water quality, and reduced water availability for surrounding communities and industries. Urgent measures are needed to mitigate these effects and preserve the ecological and economic health of the region. The National Weather Service has likewise observed that Lake Erie's water levels have decreased over the past six decades.

As an extension of the investigation, a model that suggests and illustrates a straightforward engineering remedy for this problem was developed. While many advocate regulating the outflow from Lake Erie into Lake Ontario via the Niagara River as the best solution, this raises potential concerns, particularly the ecological consequences of creating a water deficit in Lake Ontario. Statistical measures of the inflow/outflow data for Lake Erie:

Substituting these values into the formulas above yields the modeled result below.



The engineering model shows the lake's volume stabilizing at near-constant levels while offering localized control that minimizes ecological risk, indicating a workable resolution to Lake Erie's declining water volume.
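The paper's engineering model is presented graphically; as an illustration only, the sketch below implements one plausible reading of such a stabilization scheme: a proportional controller that adjusts a regulated flow each month to pull the volume back toward a target, on top of a slightly negative natural balance. The gain, target, and natural-balance values are placeholders, not the study's figures.

```python
# Toy proportional-control sketch of a regulated-flow stabilization scheme.
def simulate(V0, target, months, k=0.05, natural_dV=-5e8):
    """V0, target in m^3; natural_dV in m^3/month; k is the controller gain."""
    V = V0
    for _ in range(months):
        correction = k * (target - V)   # regulated flow, m^3/month
        V += natural_dV + correction
    return V

# With 0 < k < 2 the volume converges to target + natural_dV / k.
V_final = simulate(V0=4.8e11, target=4.8e11, months=840)
print(f"final volume: {V_final:.3e} m^3")
```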

Conclusion, Limitations, and Extensions to the Exploration
This investigation analyzed and modeled the volume trends of the Great Lakes—Superior, Michigan & Huron, Erie, and Ontario—from January 1950 to December 2019. By employing mathematical modeling and iterative differentiation techniques, it provided insight into the long-term water volume dynamics of these lakes. The findings reveal the fluctuating yet recovering nature of Lake Superior's water volume, the consistent trends observed in Lakes Michigan, Huron, and Ontario, and the persistent decrease in Lake Erie's water volume. The investigation underscores the importance of continuous monitoring and effective water management strategies to ensure the sustainability of these crucial freshwater resources.



However, several limitations affect the accuracy and reliability of the findings. First, the study uses monthly data, which cannot capture daily variations and short-term fluctuations in water levels; finer-grained data would allow a more precise model. Second, simplifying assumptions such as uniform precipitation, uniform evaporation, and constant surface areas may oversimplify the complex interactions and variations across different parts of each lake. Additionally, the reliance on May 1977 as the sole reference point limits the model's accuracy, particularly for periods far before and after this date, given the lack of precise historical volume measurements. The model also excludes factors such as groundwater interactions, human interventions (e.g., water extraction, regulatory changes), and climatic events (e.g., storms, droughts), all of which can significantly affect the water balance. Lastly, the proposed engineering solution of regulating outflows does not fully consider broader ecological consequences and may have unintended effects on downstream ecosystems.

To extend the investigation, future work could incorporate higher-resolution data, such as daily or weekly measurements, to improve the model's accuracy and responsiveness to short-term changes in water levels. Utilizing more advanced hydrological and climatic models that integrate additional variables like temperature, wind patterns, and human activities could further enhance predictive accuracy, and including various climate change scenarios would help in understanding and preparing for future impacts on the Great Lakes' water volumes. Extending the investigation to include detailed ecological assessments of proposed interventions would ensure a holistic approach to managing water levels that protects both human and environmental interests. Developing and simulating different water management policies and strategies (conservation efforts, infrastructure improvements, and cross-border agreements) could provide practical recommendations for stakeholders and policymakers. Finally, creating interactive tools and visualizations based on the models to educate and engage the public and local communities would foster greater awareness of and involvement in sustainable water management. Through these extensions, future research can build on this investigation to provide a more comprehensive understanding of, and more effective management strategies for, the Great Lakes' water resources.

Bibliography
Blust, F.A. "The Water Levels of Lake Erie – Spring 1963." National Museum of the Great Lakes, nmgl.org/the-water-levels-of-lake-erie-spring-1963/.
"Climate Change Indicators: Great Lakes Water Levels and Temperatures." United States Environmental Protection Agency, www.epa.gov/climate-indicators/great-lakes.
Do, Hong X., et al. "Seventy-year long record of monthly water balance estimates for Earth's largest lake system." Scientific Data, 21 Aug. 2020, www.nature.com/articles/s41597-020-00613-z.
"Evidence." National Aeronautics and Space Administration, science.nasa.gov/climate-change/evidence/.
"The Great Lakes Water Level Dashboard." Great Lakes Environmental Research Laboratory, NOAA, www.glerl.noaa.gov/data/wlevels/dashboard/info/.
Gronewold, Andrew D., et al. "Coasts, Water Levels, and Climate Change: A Great Lakes Perspective." Climatic Change, vol. 120, no. 4, 1 Aug. 2013, pp. 697-711, https://doi.org/10.1007/s10584-013-0840-2.
Kayastha, Miraj B., et al. "Future Rise of the Great Lakes Water Levels under Climate Change." Journal of Hydrology, vol. 612, Sept. 2022, p. 128205, https://doi.org/10.1016/j.jhydrol.2022.128205.



Kuhlman, Mary. "Another Record Month for Lake Erie Water Levels." Cleveland Scene, 6 Apr. 2020, www.clevescene.com/news/another-record-month-for-lake-erie-water-levels-32795526.
"Lake Erie High Water Level." National Weather Service, www.weather.gov/cle/event_20190511_Lake_HighWater.
Euler, Leonhard. "Institutionum calculi integralis volumen primum." University of the Pacific, 1768, scholarlycommons.pacific.edu/cgi/viewcontent.cgi?article=1341&context=euler-works.
"Lake Superior Retrospective." University of Michigan, glisa.umich.edu/lake-superior-retrospective/.
Massing, Dana. "Lake Erie water levels continue falling from record highs. Here's every Great Lakes' 2023 forecast." USA Today, 2 Feb. 2023, www.usatoday.com/story/news/nation/2023/02/02/lake-erie-water-levels-down-great-lakes/11170308002/.
"Monthly Bulletin of Great Lakes Water Levels." US Army Corps of Engineers, www.lre.usace.army.mil/missions/great-lakes-information/great-lakes-water-levels/water-level-forecast/monthly-bulletin-of-great-lakes-water-levels/.
Strang, Gilbert, and Edwin Jed Herman. "5.3: The Fundamental Theorem of Calculus." LibreTexts Mathematics, math.libretexts.org/Bookshelves/Calculus/Calculus_(OpenStax)/05%3A_Integration/5.03%3A_The_Fundamental_Theorem_of_Calculus.
"Standard Deviation." National Institutes of Health, www.nlm.nih.gov/oet/ed/stats/02-900.html.
"Report of the International Joint Commission, United States and Canada, on the Preservation and Enhancement of Niagara Falls." International Joint Commission, 1953, ijc.org/sites/default/files/Docket%2064%20Preservation%20%26%20Enchancement%20Final%20Report%201953-05-05.pdf.

APPENDIX



The Impact of Generative AI on Culture and the Impact of Culture on Generative AI

Author 1
Full Name: Yeo, Iksun (Justin) (Last Name, First Name)
School Name: Busan Foreign School

ABSTRACT: This paper presents the results of a study of ChatGPT's responses to food-recipe prompts, designed to determine whether the language of the prompt affects the response. Specifically, the data compare English and Korean prompts. Five American and five Korean recipes were each requested in both languages, for a total of 20 prompts. The results demonstrate that the prompt's language did affect the model's output: when a recipe was prompted in the primary language of its culture of origin, the response was visibly more elaborate, including extra tips and greater detail. However, it is the details in the raw texts, rather than the patterns in the data tables, that support this conclusion.



1. INTRODUCTION
1.1 Background
As AI technology emerges as a cornerstone of the 21st century, it has clearly affected modern society. For instance, ChatGPT is used in numerous fields, such as education, business, and STEM, and countless individuals and companies now use AI models to attract users and streamline their development processes. According to PwC's Global Artificial Intelligence Study, AI will contribute up to $15.7 trillion to the global economy in 2030 [6]. Moreover, according to an article by International Finance, businesses implementing AI chatbots have reported a 30% reduction in customer service costs and improved response times; some companies have achieved up to 80% automation in handling customer queries, significantly enhancing customer experience [7].

AI technology has both positive and negative impacts. In education, it is often viewed negatively: it is noted that in the "23-24 school year, 63 percent of teachers said students had gotten in trouble for being accused of using generative AI in their schoolwork, up from 48 percent last school year" [9]. Oxford University Press' research has also found that 68% of English language teachers in the UK and 69% in Europe say they see benefits for education but are mindful of the risks. Although numerous AI detection tools have been developed, the performance of generative AI continues to draw students away from their intended education [1]. Eric Rosenbaum of CNBC further reports, "When it comes to actual usage, a similar spike occurred, with 46% of teachers and 48% of students saying they use ChatGPT at least weekly, with student usage up to 27 percentage points over last year" [10].

A deeper question, however, is whether responses differ depending on the language of the user's prompt, and whether a prompt written in the language traditionally tied to the subject yields a better response than one written in another language. As AI technology becomes popular worldwide, as numerous quantitative sources show, we examine food recipe prompts by asking ChatGPT in English and Korean to see whether there are any differences.

1.2 Hypothesis
This study hypothesizes that the AI's response in a recipe's primary language will show more complexity and provide additional, unrequested information compared with the response in the secondary language. It is presumed that the AI will adapt to the user and go beyond what was asked, inferring that a user writing in a culture's primary language has more advanced knowledge of that culture. In other words, when the recipe prompt is written in the language of the culture the food belongs to, the AI-generated response will provide richer detail and complexity than when it is written in another language.

1.3 In this paper, we
In this paper, we examine the behavior of an artificial intelligence model to observe whether there is a disparity in its results depending on the prompt's language.

2. METHODS
2.1 AI Model
The AI model we chose to experiment on was ChatGPT by OpenAI, as it is among the most advanced AI models and was expected to generate accurate results. Euronews states, "ChatGPT remains by far the most popular AI tool, trailed by these less-famous but still widely used chatbots, image generators, and writing tools" [2]. We also reasoned that results would be broadly applicable, since a substantial number of people around the globe use ChatGPT, and its level of understanding and consistency would be greater than that of other models.

Figure 1. Block Diagram of Trial

2.2 Prompt Design
The figure above is a block diagram of the whole research process. To avoid introducing discrepancies, we kept the format of each prompt identical, differing only in language (English or Korean). For instance, if we ask ChatGPT for the recipe to make burgers, we must keep to asking for the recipe only, in both languages. As mentioned above, the prompt design should be simple so that any difference between the two languages stands out clearly.

2.3 Coding
Once ChatGPT had answered all the prompts for all the chosen food recipes, we collected the data, compared the outcomes across the two languages, and organized them into tables and graphs. Specifically, we focused on the number of steps, the number of ingredients, and the word count of each response (a sketch of this counting procedure appears below).

2.4 Data & Analysis Method
When creating line charts from the data tables, we organized the data into the variables 'Primary' and 'Secondary,' where 'Primary' denotes the primary language of the food recipe's culture and 'Secondary' the other language, since every recipe was prompted in both languages for comparison. We did this, rather than simply categorizing responses as 'English' and 'Korean,' to make patterns easier to see given that the foods come from two different cultures.
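As a concrete illustration of the counting in 2.3, the sketch below shows one way the step, ingredient, and word counts could be extracted from a response. It is an assumed reconstruction, not the authors' actual script, and the regular expressions presume a typical layout of numbered steps and bulleted ingredients.

```python
# Count numbered steps, bulleted ingredients, and words in a recipe response.
import re

def summarize(response: str) -> dict:
    steps = re.findall(r"^\s*\d+\.", response, flags=re.MULTILINE)
    bullets = re.findall(r"^\s*[-*\u2022]", response, flags=re.MULTILINE)
    return {"steps": len(steps),
            "ingredients": len(bullets),
            "words": len(response.split())}

sample = """Ingredients:
- 1 hot dog bun
- 1 sausage
Instructions:
1. Boil the sausage.
2. Toast the bun.
3. Assemble and serve."""

print(summarize(sample))  # {'steps': 3, 'ingredients': 2, 'words': 22}
```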

3. RESULTS
3.1 Side-by-Side (Raw Text)
Figures 2 and 3 show ChatGPT's response when asked for a hot dog recipe, with Figure 2 asked in English and Figure 3 in Korean. Setting the quantitative results aside and first comparing the specific details of each recipe, the most significant difference is that the English version offers additional topping combinations based on different styles of hot dogs, whereas the Korean version provides no further tips. Given that the primary-language version of the recipe provides details that were not even requested in the prompt, and that the hot dog is an American-originated food, this supports the idea that prompts in different languages produce different responses depending on the food's origin and culture.



In addition, the Korean response lists more ingredients than the English response and includes numerous cooking utensils, while the English version does not. One possible reason is that hot dogs in America are treated as a snack or deli item, while in Korean conventions they are treated as a meal. This may also reflect differing standards of typical food consumption in Korea and the United States. The Korea Herald, a Korean news outlet, reported that the average energy intake of Koreans was 1,853 kcal per day in 2021 [3]. Meanwhile, the U.S. Department of Agriculture (USDA) estimated the average calorie intake in the United States at 3,600 to 3,800 kcal in 2021 [4]. This large difference in average calorie intake helps explain why the two countries hold different standards regarding food.

Figure 2. ChatGPT Response of Hot Dog, English



Figure 3. ChatGPT Response of Hot Dog, Korean

Figures 4 and 5 show ChatGPT's responses to a recipe for dill pickles. Looking first at the end of both responses, the Korean response states only that the food can be stored for two months, while the English response gives explicit details about when the flavor will be best and what circumstances will change the flavor quality.



Figure 4. ChatGPT Response of Dill Pickles, Korean



Figure 5. ChatGPT Response of Dill Pickles, English

Beyond these, there were further small differences between each prompt's two language versions. There was a slight difference in the measurements for seasonings, presumably because the two countries' customary units and measuring conventions differ slightly. Furthermore, in addition to the hot dog recipe noted above, where the English response gave additional tips, the Korean version gave additional tips at the end of a fried rice recipe, while the English version did not.

3.2 Data Table & Average

Figure 6. Data Table of # of Steps

Figure 7. Data Table of # of Ingredients

Figure 6 displays the data table for the number of steps in the primary and secondary languages, and Figure 7 displays the data table for the number of ingredients. The data do not show patterns strong enough for a clear conclusion. The mean number of ingredients is higher for primary-language prompts than for secondary-language prompts; however, the same does not hold for the number of steps, so these data cannot reliably be correlated with the hypothesis.

3.3 Figures (Line Charts)

Figure 8. Line Chart of # of Steps



Figure 9. Line Chart of # of Ingredients

Figure 10. Word Count Line Chart (English and Korean)

Figure 8 reveals that five recipes tended to have more steps in their primary language than in the secondary language, one recipe had an equal number of steps, and four recipes had more steps in the secondary language. This is interesting, as we initially assumed that ChatGPT would have a more extensive database in each recipe's primary language and would therefore respond in greater depth, with more detailed steps. As mentioned in 3.2, the mean values do not show much correlation with respect to language effects.



Figure 9 displays a pattern similar to Figure 8, with four recipes containing more ingredients in their primary language than in the secondary language, one with an equal number of ingredients, and five with more ingredients in the secondary language. Figure 10, however, reveals that all the responses prompted in English contained more words than the Korean-prompted responses. This consistent result is likely due to the difference in the scale of the language data on which ChatGPT was trained. Researcher Byunghyun Ban, on the research-sharing platform arXiv, supports this point, stating, "English-based datasets are sufficient to show off the performances of new models and methods, but a research needs to train and validate the models on Korean-based datasets to produce a technology or product, suitable for Korean processing" [2]. It should therefore be noted that this finding reflects differences between the English and Korean datasets in the AI model rather than cultural effects of the two languages.
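For clarity, the sketch below shows the simple tally behind the readings of Figures 8 and 9: for each of the ten recipes, compare the primary- and secondary-language counts. The values are placeholders chosen to reproduce the 5/1/4 split described above, not the study's actual data.

```python
# Tally how many recipes have more, equal, or fewer counts in the primary
# language than in the secondary language.
def tally(primary, secondary):
    more = sum(p > s for p, s in zip(primary, secondary))
    equal = sum(p == s for p, s in zip(primary, secondary))
    return more, equal, len(primary) - more - equal

steps_primary = [7, 6, 8, 5, 9, 6, 7, 8, 5, 6]    # placeholder values
steps_secondary = [6, 6, 7, 6, 8, 7, 6, 9, 6, 5]
print(tally(steps_primary, steps_secondary))  # (5, 1, 4): more, equal, fewer
```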

4. DISCUSSION
4.1 Errors
During our research, we identified possible sources of error that could have influenced the results. Because of the different word and character structures of Korean and English, we had to translate the Korean responses into English using Google Translate. This process might have affected the results, as the translation may not preserve the text exactly, potentially reducing the word counts. A mitigation could be to ask ChatGPT itself to translate the text, since Google Translate tends to translate texts word by word.

Another area for improvement is the selection of food recipes. There is a vast number of existing recipes, even when narrowed down to American and Korean cuisine; for instance, there are about 200 existing types of kimchi, which is quite surprising for a single type of food [7]. Because we chose which recipes to prompt and analyze, there might have been better options for testing the study's hypothesis.

There were also concerns about how to count the number of steps when collecting quantitative data. ChatGPT's responses displayed numbered steps, but some contained sub-steps, with multiple bullet points in one section, and it was unclear whether each should count as a separate step.

4.2 Contextualization
To conclude, the quantitative data fail to prove the hypothesis; instead, the one-to-one comparison of the raw texts highlights specific differences that partially support it. Although our methods approached the hypothesis from several directions, the results did not meet our expectations and instead answered a research question we had not considered: they delineated a difference between the English and Korean datasets underlying a ChatGPT model.



REFERENCES
[1] "AI in Education: Where We Are and What Happens Next." Oxford University Press, 15 May 2024, corp.oup.com/feature/ai-in-education-where-we-are-and-what-happens-next/.
[2] Ban, Byunghyun. "A Survey on Awesome Korean NLP Datasets." arXiv.org, 6 Dec. 2021, arxiv.org/abs/2112.01624.
[3] Carbonaro, Giulia. "The 10 Most Popular AI Tools - and Yes, ChatGPT Is Still Number 1." Euronews, 2 Feb. 2024, www.euronews.com/next/2024/02/02/these-are-the-10-most-widely-used-ai-tools-and-the-people-who-using-them-the-most.
[4] Eun-byul, Im. "Koreans Eat Less Carbs, More Protein: Report." The Korea Herald, 27 Nov. 2022, www.koreaherald.com/view.php?ud=20221127000079.
[5] "Food Availability and Consumption." USDA ERS, www.ers.usda.gov/data-products/ag-and-food-statistics-charting-the-essentials/food-availability-and-consumption/. Accessed 13 Aug. 2024.
[6] I. F. Desk. "AI to Drive GDP Gains of $15.7 Trillion with Productivity." International Finance, internationalfinance.com/technology/ai-drive-gdp-gains-15-7-trillion-productivity-personalisation-improvements/. Accessed 14 Aug. 2024.
[7] Murray, Lorraine. "Beyond the Cabbage: 10 Types of Kimchi." Encyclopædia Britannica, www.britannica.com/story/beyond-the-cabbage-10-types-of-kimchi. Accessed 16 Aug. 2024.
[8] PricewaterhouseCoopers. "PwC's Global Artificial Intelligence Study: Sizing the Prize." PwC, www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html. Accessed 13 Aug. 2024.
[9] Prothero, Arianna. "New Data Reveal How Many Students Are Using AI to Cheat." Education Week, 26 Apr. 2024, www.edweek.org/technology/new-data-reveal-how-many-students-are-using-ai-to-cheat/2024/04.
[10] Rosenbaum, Eric. "AI Is Getting Very Popular among Students and Teachers, Very Quickly." CNBC, 11 June 2024, www.cnbc.com/2024/06/11/ai-is-getting-very-popular-among-students-and-teachers-very-quickly.html.



Exploring the Role of Rare Gene Variants in Autism Spectrum Disorder: Insights from Songbird Models

Author
Full Name: Yoon, Soyeon (Last Name, First Name)
School Name: Daegu International School

Abstract
Autism Spectrum Disorder (ASD) is a condition that affects roughly 75,000,000 people, typically impacting one's ability to communicate and interact with others1,2. Despite significant research funding, no definitive cause or cure for ASD has been identified3. However, recent findings highlight the role of FOXP1, a protein regulator that influences the nervous system and is associated with behaviors linked to ASD, particularly language acquisition and communication. FOXP1, short for Forkhead Box P1, is a protein responsible for regulating the expression of several genes that influence the functioning of the human nervous system; individuals who lack even a single functional copy tend to show autism traits4. Notably, scholars have found that the Forkhead Box protein and gene are also present in songbirds, namely the zebra finch. This is crucial for conducting accurate animal model studies, as songbirds share a trait with humans that rats, one of the most commonly used animal models for human brain research, do not have: the ability to communicate with others of their species through vocalizations. Since the gene has a significant relationship with language acquisition in both species, studying it may pave the way for new breakthroughs in autism research.

Keywords Biomedical and Health Sciences; Genetics and Molecular Biology of Disease; Autism Spectrum Disorder; Forkhead Box; Songbird Model



Introduction:
The name "autism" is derived from the Greek word "autós," meaning "self"5. It is a direct reference to a major symptom of the disorder: difficulty with social interaction, or "withdrawal within the self." However, autism is a spectrum disorder, meaning it affects a wide range of behaviors; it is not limited to how one communicates with other individuals but can also involve the development of specific interests, seizures, delayed movement skills, and sensitivity to stimuli such as sound and taste6.

The first recorded case of ASD was identified in 1943, involving a patient named Donald Triplett7. It was diagnosed by Austrian-American psychiatrist and physician Leo Kanner after he received a 33-page letter from Triplett's father, Beaman Triplett, who contacted Kanner out of concern for his 5-year-old boy, who had shown signs of "schizophrenia" ever since birth8. Initially, that was the conclusion Kanner came to as well, as autism was not a known condition at the time, and he was unable to produce a specific diagnosis for the child until 1943; by then, the psychiatrist had encountered 10 other cases of children displaying signs similar to Triplett's. Later that year, Kanner published an article titled "Autistic Disturbances of Affective Contact," in which he organized all the symptoms he had observed over his years of practice. Soon, Donald Triplett became recognized as Case 1, Donald T., the first identified individual with autism9.

Although the diagnosis of a previously unknown condition was a major breakthrough in both scientific research and psychiatry, the recency of autism's discovery meant that many lacked resources or background information at the time, leading to the propagation of misconceptions and stigma about the disorder and those affected by it. In some cases, autistic individuals were put through treatments such as shock or chelation therapy, which can be counterproductive and even harmful to the patient10. Additionally, widespread misunderstanding of the condition led to the erroneous belief that autism resulted from a lack of maternal warmth, unfairly placing blame on parents for their children's condition.

Fortunately, such misinformation has been declining as continued advances in technology and medicine have allowed the identification of specific biological and genetic factors contributing to the development of autism. With modern knowledge and equipment, scientists can even identify whether a fetus may later be diagnosed with autism using a common ultrasound, though only from the second trimester of pregnancy onward. According to a study by researchers from the Azrieli National Centre for Autism and Neurodevelopment Research, fetuses that display physical anomalies, especially in the heart and kidney, have a 30% chance of being born with ASD. This is a crucial finding, as previous studies have shown that early diagnosis and subsequent treatment can triple a child's social ability despite the disorder11. Yet the fact remains that such methods are temporary measures meant to mitigate the effects of autism on an individual's life, not a cure that can eradicate it.
Although ASD is classified as a neurological disorder in the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), over 1,000 different gene changes have been attributed to the disorder, so it is difficult to conclude that the condition arises solely from abnormalities in the brain. Ultimately, the challenge in finding a treatment for ASD lies in the numerous unconfirmed factors involved, as each individual may exhibit a unique variation of the disorder depending on the specific genes affected. While genetics are estimated to contribute 40-80% of the risk for developing ASD, both genetic and environmental factors generally play a role in its onset. Previous research suggests that in some patients with ASD, the disorder resulted from environmental causes such as advanced parental age at the time of conception, extreme prematurity, and prenatal exposure to pollution12. Thus, one must be careful not to assume that a single symptom is enough to diagnose someone with autism.



In addition, in 2-4% of autism cases the condition results from rare gene mutations or chromosomal abnormalities (J. M. Fu et al., Nature Genetics 54, 1320-1331 (2022)). As can be observed in the table below, there are over 1,000 different types of gene mutations that can cause ASD, such as those affecting the SCN2A and FOXP1 genes, and each variant contributes to the onset not only of autism but of other complications unique to that variant. The FOXP1 gene ranks as the 7th most commonly mutated gene, even though such mutations are rare overall. A "change" can mean various things, from a slight alteration to the complete elimination of a specific gene; mutations in the FOXP1 gene, for example, are typically associated with language and communication problems14. Interestingly, birds such as the zebra finch also possess this gene sequence, which determines whether a baby songbird can learn to sing from its tutor (in this case, its parent). These findings have the potential to break new ground that may one day lead to a treatment or cure for autism. In this study, the aim is to review the effects of these gene mutations and what animal models, such as songbirds, have demonstrated over years of research in this relatively new field.

Figure 1 - This collection of data shows the results of a study on rare coding variation in genes, published in Nature Genetics15.



Discussion:
Methods
This paper draws on the insights and results of several studies from around the world, ranging from general brain research to work that specifically addressed the relationship between FoxP mutations and autism, including how this can be studied further through a zebra finch animal model. It relied mostly on a narrative synthesis approach, gathering material from websites and online databases such as the National Library of Medicine, Nature, and The Transmitter, all prestigious and well-known organizations. The article centers on the study led by Todd Roberts, associate professor of neuroscience at the University of Texas Southwestern Medical Center in Dallas, on how silencing FoxP1 affected young zebra finches' capacity to learn songs from older songbirds of the same species. To help ensure accuracy and reliability, most of the referenced papers and articles are no more than 20 years old, with a large portion of the information gathered from articles about the 2021 study conducted by the University of Texas team.

Rare gene variants
As rare as the individual genetic variants associated with Autism Spectrum Disorder may be, these mutations are also highly complex, with each anomaly dividing into further categories. Figure 1 illustrates this complexity. The x-axis lists the 72 genes associated with ASD identified at a False Discovery Rate (FDR) of ≤ 0.001, a threshold indicating a low probability that the listed genes are falsely associated with autism. The y-axis records the log10 Bayes factor, a value indicating the strength of evidence that the specific gene is associated with ASD. Each bar is color-coded by the type of genetic variant, such as protein-truncating variants, missense variants (MPC ≥ 2 or 1 ≤ MPC < 2), copy-number-variant deletions, and duplications. The extended TADA model provides an in-depth account of how different factors contributed to the emergence of the variants listed above.

Overall, most variants affect the human genome, the blueprint of the body that manages protein synthesis, cell function, and many other processes humans require to survive. Mutations affecting our genes can therefore be extremely consequential, because even a slight change in a DNA sequence or its expression can be life-threatening, as the four variant types below illustrate, even for a person who does not have autism. First, a protein-truncating variant (PTV) is a mutation that results in a shortened version of the protein being produced16; PTVs are often related to the onset of severe obesity or type 2 diabetes. Missense variants, on the other hand, directly alter the gene sequence: a single base substitution results in an abnormal amino acid being produced in place of the correct one17. Such variants can lead to illnesses like sickle-cell disease, the complication most commonly associated with this type of mutation18. Lastly, duplications or deletions of copy number variants alter gene dosage, the number of copies of a certain gene present in a cell.
Although copy number variants are typically harmless and make up about 12 percent of the human genome, deletions and duplications can interfere with gene dosage, and the resulting changes may trigger diseases of varying severity19. Among the variant types, the figure above shows that the protein-truncating variant (PTV) is the most common type of mutation. Among the five modes of inheritance of the listed variants, the de novo mode (DN) is the most prevalent, with a % log BF value closest to 100%. De novo signifies a mutation in which the offspring, although born to its parents, carries a genetic alteration not seen in previous generations.



Therefore, autistic children affected by this type of mutation are typically the first in their family to carry the disorder, which can result in a delayed diagnosis20.

Figure 2 - A diagram that illustrates the various effects mutations in certain genes can lead to (K. J. Peall, M. J. Owen, J. Hall, Nature Reviews Neurology 20, 7-21 (2024)).

FOXP
FOXP genes are divided into four subgroups, FOXP1 to FOXP4. Typically, they contain a special DNA-binding domain known as the forkhead/winged-helix domain, along with two other structures: a leucine zipper motif and a zinc finger domain22. Although FOXP1 is not inherently a mutated gene, complications involving it are often problematic. Belonging to the forkhead box transcription factor family, the Forkhead Box P1 gene encodes a protein that helps regulate gene-specific transcription, the process in which a DNA strand is converted into messenger RNA. This process is crucial for transferring information within the cell and determining which proteins are made when the mRNAs are later delivered to the ribosomes23. Normally functioning FOXP1 is responsible not only for neural development but also for inhibiting pancreatic cancer growth. Research has observed that de novo FOXP1 mutations lead to symptoms such as motor delay, speech difficulties, intellectual disability, and the expression of autistic features. In the past, abnormal FOXP1 splicing due to truncation was mostly associated with Opitz C trigonocephaly syndrome, another rare genetic condition involving abnormal suture closure of a newborn infant's skull, as demonstrated in Figure 324.

Figure 3 - The image above shows a picture of a baby with Trigonocephaly, a condition that is usually marked by a pointed or triangular-shaped forehead25.



According to a study conducted by Roser Urreizti and team at the University of Barcelona, this abnormal splicing results from the skipping of exon 16 and a premature STOP codon (R. Urreizti et al., Scientific Reports 8, 694 (2018)). If exons are the information transcribed onto the RNA to deliver the blueprint of the necessary protein to the ribosome, the STOP codon acts like a pair of scissors that determines when and where protein synthesis should end. This is why a premature STOP codon can incorrectly truncate the FOXP1 protein, producing the de novo FOXP1 mutation in this case.

Figure 4 - This illustration depicts the process of protein synthesis through transcription and translation of DNA material27.
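The truncation mechanism can be illustrated with a toy example, not drawn from the cited study: translation reads codons until a stop codon, so a single base substitution that creates an early stop ends synthesis prematurely. The sequences and the tiny codon subset below are invented for illustration.

```python
# Toy illustration of how a premature STOP codon truncates a protein.
CODON_TABLE = {"ATG": "M", "GGC": "G", "TTC": "F", "AAA": "K",
               "TAA": "*", "TGA": "*", "TAG": "*"}  # tiny subset of the code

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "X")  # X = codon not in subset
        if aa == "*":          # stop codon: synthesis ends here
            break
        protein.append(aa)
    return "".join(protein)

normal = "ATGGGCTTCAAAGGCTAA"  # M-G-F-K-G, then the normal stop
mutant = "ATGGGCTTCTAAGGCTAA"  # single A->T change: AAA becomes the stop TAA
print(translate(normal), translate(mutant))  # MGFKG  MGF (truncated)
```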

FOXP2, a closely related gene from the same family as FOXP1, is the gene most responsible for managing human speech and language development28. Because of its core function in speech, abnormalities in this gene can trigger Childhood Apraxia of Speech (CAS), a condition that significantly impairs one's ability to speak even though comprehension of conversation remains intact. Collectively, the speech-related disorders caused by abnormal splicing of the FOXP2 gene are called FOXP2-related speech and language disorder, or FOXP2-SLD for short. Much research has been conducted in this area in recent years, but scientists have not yet reached a consensus on clinical diagnostic criteria for FOXP-related syndromes. Usually, medical practitioners rely on observing common symptoms of the disorder, such as reading/spelling impairment, receptive and expressive language impairment, and low average IQ29.

FOXP and birds
Surprisingly, studies have located the same genes, FOXP1 and FOXP2, in a species of songbird known as the zebra finch. In fact, the two species have more in common than many might think. Although past research on the human brain was mostly conducted using rat brains, which are "remarkably similar" to ours, recent findings have confirmed that humans share almost identical FOXP1 and FOXP2 expression patterns with zebra finches, localized to parallel brain structures, enhancing the accuracy of comparative studies using this species. In both humans and songbirds, these genes are found in high concentrations near subcortical structures like the basal ganglia, the brain region responsible for managing voluntary movement22.

In 2021, a group of researchers from the University of Texas Southwestern Medical Center in Dallas studied this gene in a controlled environment through a behavior- and gene-expression-based experiment. Zebra finches were selected as the research subjects because of the advantage they provide for research related to language and speech difficulties. The researchers examined how disrupted expression of the FoxP1 gene influences the stage at which juvenile zebra finches learn to sing from adult finches through cultural transmission of birdsong.

Typically, human babies slowly learn their first words through a series of speech and behavioral milestones. At about 6 months, infants reach the babbling stage, when they can only mimic sounds and cannot be distinguished by language: babies from different parts of the world sound alike at this age. They usually move on to the holophrastic (single-word) stage, forming complete words, at approximately 12 months, and continue developing their language skills until they can eventually speak in complex sentences30. Throughout this process, children have a social model, or "tutor," usually their parents, which guides them along the path to language development and largely determines the baby's first language. Similarly, juvenile songbirds acquire their songs by imitating the vocalizations and behaviors of adult birds. Unlike human infants, who learn language through a broader, more contextual process, young songbirds focus on memorizing specific song motifs and practice them repeatedly until they become proficient. Even if the finches' song learning seems simpler than human language learning, the underlying mechanisms of imitation and learning are very similar: both rely heavily on the firing of mirror neurons while learning the behavior from their respective tutors. In fact, previous research has shown that FoxP1 is abundantly expressed in these mirror neurons as well (F. Garcia-Oscos et al., Sci. Adv. 7, 6 (2021)).

With all of this in mind, the researchers injected an artificially made molecule that silences FoxP1 expression into young male zebra finches that were 35 days old. Selecting this age group and sex was crucial, as this is usually when most birds start to memorize and learn songs from their tutors, and males sing far more than females, which are capable of singing but do so less often32. The silencer was injected directly into a brain region of the finches called the HVC, which is interconnected with another region called Area X; both contain an abundance of FoxP1 neurons. The team then placed the treated birds in the same cage as an adult bird acting as the "tutor," expecting the reduced expression of the gene to hinder the birds' ability to learn to sing. However, the initial trial's results proved otherwise: even though the molecule cut the FoxP1 protein count in half, all of these birds proved fully capable of song learning.
One thing the birds had in common was that they had all already heard the songs of mature birds before receiving the treatment. Based on this, Roberts and his group modified the initial setup before repeating the trial. This time, the researchers ensured that the young finches being tested had grown up in isolation, without any chance of prior exposure to song. When this new group of birds was placed in the same cage as their tutors, the young finches had difficulty memorizing and mimicking the adults' songs, and the songs they did produce were disorganized and unlike those of normal zebra finches.



Conclusion
The conclusions reached by the University of Texas team in their recent study paved the way for further autism research, providing insight that can support future advances in both the understanding and treatment of autism. Although only a small fraction of autistic individuals are affected by mutations in genes like FOXP1 and FOXP2, this finding could still shape understanding of the condition because of the nature of the disorder itself, which typically takes a heavy toll on one's ability to communicate and to mimic standard speech. Given the close parallels between the relevant brain structures of humans and zebra finches, further examination and analysis could prove fruitful, possibly providing a reliable model for finding ways to mitigate the symptoms presented by those with autism. For instance, a study published on the Mayo Clinic website has already shown that autistic children usually respond positively to well-organized, consistent treatment and rehabilitation, enhancing not only their communication skills but their behavior as well33.

Future studies could explore similar experimental designs to investigate the behavior of birds from the corvid family, known for remarkable intelligence that has been compared to that of a two-year-old human child; research on this family may serve as an even better comparison for extending the findings to humans. Despite the study's clear success, it had limitations of its own, the main one being that the scientists tested only FOXP1 at the time, even though FOXP2 is also a prevalent factor in speech-related symptoms such as apraxia. The team later released another article on research into the other FOXP gene, which also provided useful information for autism research34. Although autism is currently considered an "incurable" disorder, as genetic conditions generally are, many researchers continue working in this field in the hope of one day finding the answer to a problem that has kept people searching for years.

References
1. E. Mark. 51 Autism Statistics: How Many People Have Autism? Discovery ABA Therapy. www.discoveryaba.com/statistics/how-many-people-have-autism (2023).
2. Autism Spectrum Disorder. National Institute of Mental Health. https://www.nimh.nih.gov/health/topics/autism-spectrum-disorders-asd (2024).
3. H. Siddique. Study says the cost of autism is more than cancer, strokes and heart disease. The Guardian. https://www.theguardian.com/society/2014/jun/09/autism-costs-more-cancer-strokes-heart-disease (2014).
4. S. Deweerdt. Autism gene interference silences song memory in birds. The Transmitter. https://www.thetransmitter.org/spectrum/autism-gene-interference-silences-song-memory-in-birds/ (2021).
5. A. Mandal. Autism History. News Medical. https://www.news-medical.net/health/Autism-History.aspx (2023).
6. Signs and symptoms of autism spectrum disorder. Centers for Disease Control and Prevention. https://www.cdc.gov/autism/signs-symptoms/index.html (2024).
7. M. Levitt, M. Yu, M. L. Kelly, J. Summers. Remembering Donald Triplett, the first person to be diagnosed with autism. NPR. https://www.npr.org/2023/06/22/1183842725/remembering-donald-triplett-the-first-person-to-be-diagnosed-with-autism (2023).
8. G. D. Fischbach. Leo Kanner's 1943 paper on autism. The Transmitter. https://www.thetransmitter.org/spectrum/leo-kanners-1943-paper-on-autism/ (2007).
9. R. Pallardy. Donald Triplett. Encyclopaedia Britannica. https://www.britannica.com/biography/Donald-Triplett.
10. R. Moller. History timeline of autism. Abtaba. https://www.abtaba.com/blog/history-timeline-autism (2023).
11. A routine prenatal ultrasound can identify early signs of autism, study finds. Science Daily. https://www.sciencedaily.com/releases/2022/02/220209112107.htm (2022).
12. Autism. National Institute of Environmental Health Sciences. https://www.niehs.nih.gov/health/topics/conditions/autism (2024).
13. Autism spectrum disorder: Causes. MedlinePlus. https://medlineplus.gov/genetics/condition/autism-spectrum-disorder/#causes (2024).
14. J. M. Fu, F. K. Satterstrom, M. Peng, H. Brand, R. L. Collins, S. Dong, B. Wamsley, L. Klei, L. Wang, S. P. Hao, C. R. Stevens, C. Cusick, M. Babadi, E. Banks, B. Collins, S. Dodge, S. B. Gabriel, L. Gauthier, S. K. Lee, L. Liang, A. Ljungdahl, B. Mahjani, L. Sloofman, A. N. Smirnov, M. Barbosa, C. Betancur, A. Brusco, B. H. Y. Chung, E. H. Cook, M. L. Cuccaro, E. Domenici, G. B. Ferrero, J. J. Gargus, G. E. Herman, I. Hertz-Picciotto, P. Maciel, D. S. Manoach, M. R. Passos-Bueno, A. M. Persico, A. Renieri, J. S. Sutcliffe, F. Tassone, E. Trabetti, G. Campos, S. Cardaropoli, D. Carli, M. C. Y. Chan, C. Fallerini, E. Giorgio, A. C. Girardi, E. Hansen-Kiss, S. L. Lee, C. Lintas, Y. Ludena, R. Nguyen, L. Pavinato, M. Pericak-Vance, I. N. Pessah, R. J. Schmidt, M. Smith, C. I. S. Costa, S. Trajkova, J. Y. T. Wang, M. H. C. Yu, D. J. Cutler, S. De Rubeis, J. D. Buxbaum, M. J. Daly, B. Devlin, K. Roeder, S. J. Sanders, M. E. Talkowski. A rare coding variant in the NRXN1 gene associated with autism. Nature Genetics. 54, 1320–1331 (2022).
15. Rare coding variation provides insight into the genetic architecture and phenotypic context of autism. Talkowski Lab. https://talkowski.mgh.harvard.edu/2022/09/07/rare-coding-variation-provides-insight-into-the-genetic-architecture-and-phenotypic-context-of-autism/ (2022).
16. Truncating variant. Genomics Education Programme. https://www.genomicseducation.hee.nhs.uk/glossary/truncating-variant/.
17. Z. Zhang, M. A. Miteva, L. Wang, E. Alexov. Analyzing Effects of Naturally Occurring Missense Mutations. NCBI. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3346971/ (2012).
18. A. Hernández. Missense mutation. Osmosis. https://www.osmosis.org/answers/missense-mutation (2022).
19. I. Lobo. Copy number variation and genetic disease. Nature Scitable. https://www.nature.com/scitable/topicpage/copy-number-variation-and-genetic-disease-911/ (2008).
20. De novo mutation. Health in Code. https://healthincode.com/en/patient-information/genetics-and-hereditary-diseases/basic-concepts-of-genetics/modes-of-genetic-inheritance/de-novo-mutation/ (2024).
21. K. J. Peall, M. J. Owen, J. Hall. New insights into genetic factors in neurodevelopmental disorders. Nature Reviews Neurology. 20, 7–21 (2024).
22. I. Teramitsu, L. C. Kudo, S. E. London, D. H. Geschwind, S. A. White. Parallel FoxP1 and FoxP2 Expression in Songbird and Human Brain Predicts Functional Interaction. NCBI. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6730014/ (2004).
23. FOXP1 forkhead box P1 [Homo sapiens (human)]. NCBI. https://www.ncbi.nlm.nih.gov/gene/27086 (2024).
24. Trigonocephaly. Chrysallida. https://chrysallida.com/en/deformities/trigonocephaly/ (2024).
25. Trigonocephaly. HCA Healthcare. https://www.hcahealthcare.co.uk/conditions/trigonocephaly (2024).
26. R. Urreizti, S. Damanti, C. Esteve, H. Franco-Valls, L. Castilla-Vallmanya, R. Tonda, B. Cormand, L. Vilageliu, J. M. Opitz, G. Neri, D. Grinberg, S. Balcells. A De Novo FOXP1 Truncating Mutation in a Patient Originally Diagnosed as C Syndrome. Scientific Reports. 8, 694 (2018).
27. W. Anderson. DNA & mRNA: introns, and exons. Schoolwork Helper. https://schoolworkhelper.net/dna-mrna-introns-and-exons/ (2022).
28. FOXP2. ScienceDirect. https://www.sciencedirect.com/topics/neuroscience/foxp2 (2024).
29. A. Morgan, S. E. Fisher, I. Scheffer, M. Hildebrand. FOXP2-Related Speech and Language Disorder. NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK368474/ (2016).
30. C. Whitlock. Language development stages: Developing language at home. Healthy Young Minds. https://www.healthyyoungminds.com/language-development-stages-developing-language-at-home/ (2024).
31. F. Garcia-Oscos, T. M. I. Koch, H. Pancholi, M. Trusel, V. Daliparthi, M. Co, S. E. Park, F. Ayhan, D. H. Alam, J. E. Holdway, G. Konopka, T. F. Roberts. Autism-linked gene FoxP1 selectively regulates the cultural transmission of learned vocalizations. Sci. Adv. 7, 6 (2021).
32. M. Chao. Songs in the Key of Life: A Closer Look at Why and How Birds Sing. Finger Lakes Land Trust. https://www.fllt.org/songs-in-the-key-of-life-a-closer-look-at-why-and-how-birds-sing/ (2022).
33. Autism Spectrum Disorder. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/autism-spectrum-disorder/diagnosis-treatment/drc-20352934 (2018).
34. L. Xiao, D. P. Merullo, T. M. I. Koch, M. Cao, M. Co, A. Kulkarni, G. Konopka, T. F. Roberts. Expression of FoxP2 in the basal ganglia regulates vocal motor sequences in the adult songbird. National Library of Medicine. https://pubmed.ncbi.nlm.nih.gov/33976169/ (2021).



How does the educational background (of South Korean students) influence their attitude toward North Korean defectors?

Author
Full Name (Last Name, First Name): An, Chaehyeon
School Name: Chadwick International School

RESEARCH PROPOSAL

Variables
Independent Variable: Educational background of South Korean students
Dependent Variable: Attitude toward North Korean defectors
Controlled Variables:
1. Range of age of participants
2. Number of years participants attended their school (with a specific type of education system)
3. Number of years participants lived in Korea
4. Current nationality of participants
5. Participant's possession of dual citizenship
6. Participant's interest in politics
7. Participant's past experience/opportunities to meet North Korean defectors
8. Participant's connection to North Korea (family or peer)

Operationalization of the Independent Variable
How to define "educational background":
Experimental condition #1: South Korean students who attend international schools
Experimental condition #2: South Korean students who attend Korean schools



How to define "South Korean students":
Ethnic components:
i. Being born in Korea
ii. Having a Korean bloodline
iii. Living in Korea for the majority of one's life
Civic components:
I. Maintaining Korean nationality
II. Being able to speak and write in Korean
III. Abiding by the Korean political and legal system
IV. Understanding Korean culture and traditions

Operationalization of the Dependent Variable
Measure #1: How South Korean citizens view North Korean citizens (Kim)
Participants select one out of the five options:
1. One of us
2. Neighbors
3. Stranger
4. Enemy
5. No interest
Measure #2: How accepting South Korean citizens are of North Korean citizens (Kim)
Participants select one out of the three options and provide a short reason for the selection:
1. NK citizens should not be admitted (+reason)
2. NK citizens should be admitted under specific conditions (+reason, what conditions)
3. NK citizens should be admitted (+reason)

Operationalization of the Controlled Variables
Range of age of participants: All participants' ages should range from 16 to 18.
Number of years participants received the specific type of education: Participants should have received the same type of education (either international or Korean) for 6+ consecutive years and should be engaged in that type of education at the time of the survey.
Number of years participants lived in Korea: All participants should have resided in Korea for more than ten years.
Current nationality of participants: All participants' current nationality should be South Korean.
Participant's possession of dual citizenship: No participant should possess dual citizenship.
Participant's interest in politics: All participants will complete a self-report questionnaire rating their interest in politics from 0 to 10 (0 indicating no interest and 10 indicating deep involvement). The rating should be greater than or equal to 7; participants who do not meet this requirement will be excluded.
Participant's past experience/opportunities to meet North Korean defectors: All participants will complete a self-report questionnaire rating their past interactions with North Korean defectors from 0 to 10 (0 indicating no experience and 10 indicating daily engagement).
Participant's connection to North Korea (family or peer): All participants will complete a self-report questionnaire about their personal connection to North Korea, selecting one of three options: none, somewhat, or many.

Literature Review
Keywords: North Korean defectors, South Korea, education, international education, Korean education, attitude, acceptance, etc.
List of sources:
Source #1: https://keia.org/sites/default/files/publications/jiyoon_kim.pdf
Source #2: https://www.tandfonline.com/doi/abs/10.1080/1369183X.2022.2141213
Source #3: https://link.springer.com/article/10.1007/s12144-021-02518-5
Source #4: https://journal.fi/store/article/view/8718
Source #5: https://www.sciencedirect.com/science/article/abs/pii/S014717671630089X
There is a substantial amount of research on the effects of political ideologies on attitudes toward North Korean defectors. The existing literature (Lee; Ha; Jang) examines the correlation between political ideologies and attitudes toward North Korean defectors and suggests that shared identities help citizens in a host state form more favorable attitudes toward immigrants. However, there is a lack of studies that directly focus on how education shifts national identities and thereby influences views on North Korean defectors. Thus, my research question will investigate the ways in which educational background (of South Korean students) influences students' attitudes toward North Korean defectors.

Hypothesis
South Korean students attending international schools (those with a more globally focused educational background) will be more accepting of North Korean defectors than students with a more homogeneous, traditional educational background.

Theory
This research draws on two theoretical frameworks: the Contact Hypothesis (Allport, 1954) and Social Learning Theory (Bandura, 1977). The Contact Hypothesis suggests that direct interactions with diverse groups can reduce prejudice. Social Learning Theory posits that people learn behavior by observing the actions and attitudes of those around them. Both theories are relevant to showing how international education can develop a more accepting attitude toward North Korean defectors. First, being surrounded by teachers and peers from different ethnicities may promote a relatively more positive attitude toward those with contrasting cultural backgrounds; likewise, South Korean students receiving international education may be more accepting of North Korean defectors. Second, South Korean students attending international schools are more likely to be directly influenced by teachers, peers, and class content that emphasize internationalism, empathy, and inclusivity. Observing the behavior and attitudes of such 'role models', or content emphasizing diversity, may lead students to adopt a similar perspective on North Korean defectors and therefore be more accepting of them.



Methodology/Data
1) Sample Selection
Random Sampling: 90 students representing each population group (180 in total)
1. 9 regions of South Korea
2. 10 students per region (for each population group)
3. The 10 students should come from different schools that are randomly selected
4. One student per school is also selected randomly (within the controlled age range)
A minimal sketch of this two-stage sampling procedure is given below.
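To make the two-stage selection concrete, here is a minimal sketch in Python. It assumes hypothetical school rosters keyed by region and population group; the region names, data structure, function name, and seed are illustrative assumptions, not part of the proposal.

```python
import random

# Hypothetical roster: rosters[region][group] maps a school name to the list
# of eligible students (ages 16-18). All names are illustrative placeholders.
REGIONS = ["Seoul", "Gyeonggi", "Incheon", "Gangwon", "Chungcheong",
           "Jeolla", "Gyeongsang", "Busan", "Jeju"]  # the 9 sampling regions

def sample_participants(rosters, group, per_region=10, seed=42):
    """Two-stage random sampling for one population group.

    Stage 1: draw `per_region` distinct schools per region.
    Stage 2: draw one eligible student per chosen school.
    Requires at least `per_region` schools listed for each region.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    selected = []
    for region in REGIONS:
        schools = sorted(rosters[region][group])        # school names
        for school in rng.sample(schools, per_region):  # distinct schools
            student = rng.choice(rosters[region][group][school])
            selected.append((region, school, student))
    return selected  # 9 regions x 10 students = 90 per group

# international = sample_participants(rosters, "international")
# korean = sample_participants(rosters, "korean")  # 180 participants in total
```

Drawing distinct schools first and then one student per school mirrors the design above and limits school-level clustering in the sample.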

2) Data Gathering: Survey

Measure #1: How South Korean citizens view North Korean citizens (Kim)
Type of data: Qualitative
Participants select one out of the five options:
1. One of us
2. Neighbors
3. Stranger
4. Enemy
5. No interest

Measure #2: How accepting South Korean citizens are of North Korean citizens (Kim)
Type of data: Qualitative
Participants select one out of the three options and provide a short reason for the selection:
1. NK citizens should not be admitted (+reason)
2. NK citizens should be admitted under specific conditions (+reason, what conditions)
3. NK citizens should be admitted (+reason)

A sketch of how these responses might be coded and compared appears below.
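The proposal does not prescribe a statistical test, but as one hedged illustration, the categorical responses to Measure #1 could be cross-tabulated by school type and compared with a chi-square test of independence. The DataFrame contents below are invented placeholders, not survey data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented placeholder responses; real data would come from the 180 surveys.
df = pd.DataFrame({
    "school_type": ["international"] * 4 + ["korean"] * 4,
    "measure1": ["One of us", "Neighbors", "Neighbors", "Stranger",
                 "Stranger", "Enemy", "Neighbors", "No interest"],
})

# Cross-tabulate Measure #1 responses by educational background (the IV).
table = pd.crosstab(df["school_type"], df["measure1"])
print(table)

# Chi-square test of independence: do the response distributions differ
# between students from international schools and Korean schools?
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")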

Limitations
■ Limited sample size → harder to generalize results and achieve high reliability
■ Presence of possible extraneous variables → disturbs the causal relationship between the IV and DV
■ Sensitive topic
■ Variance between individual schools



Significance
■ Studying North Korean defectors is of interest because attitudes toward them can serve as a proxy for South Koreans' views of North Korea
■ Changing views on national identity (since South Korea has traditionally valued ethnic homogeneity)
■ Provides insights into how international and Korean education can shape social and political attitudes

References
Ha, S. E., & Jang, S.-J. (2016, October 20). National identity in a divided nation: South Koreans' attitudes toward North Korean defectors and the reunification of two Koreas. International Journal of Intercultural Relations. https://www.sciencedirect.com/science/article/abs/pii/S014717671630089X
Jones, A. G., Whitehead, G. E. K., & Bang, H. (2022, January 8). Exploring the impacts of a South Korean alternative school on North Korean refugees' educational attitudes, satisfaction, and behavior. Current Psychology. https://link.springer.com/article/10.1007/s12144-021-02518-5
Lee, D. (2013, January 1). The influence of North Korean political ideologies on the integration of North Korean defectors in South Korea. Studia Orientalia Electronica. https://journal.fi/store/article/view/8718
National identity and attitudes toward North Korean defectors. (n.d.). https://keia.org/sites/default/files/publications/jiyoon_kim.pdf
National identity, partisanship, and attitudes toward North Korean defectors. (n.d.). https://www.tandfonline.com/doi/abs/10.1080/1369183X.2022.2141213



Enhancing the Effectiveness of Biofeedback Therapy for Headache Relief through Relaxation Techniques: ANS Arousal and Psychological Readiness

Author
Full Name (Last Name, First Name): Chung, Ahyoung
School Name: Laurel Springs School

RESEARCH PROPOSAL

Abstract
Headaches, especially migraines and tension-type headaches (TTH), affect people worldwide and often lead to significant health-related problems and reduced quality of life. Biofeedback therapy offers a non-pharmacological alternative to medication for headache treatment by providing real-time feedback on physiological responses. This study investigates whether combining relaxation techniques with biofeedback therapy can improve treatment outcomes by reducing autonomic nervous system (ANS) arousal and enhancing psychological readiness. By concentrating on these two elements, this study offers a supplementary version of an established biofeedback therapy as well as a potential non-pharmacological treatment.

Keywords
Biofeedback Therapy, Headache, Migraine, Tension-Type Headache (TTH), Relaxation Technique, Autonomic Nervous System (ANS), Psychological Readiness



Introduction
Though not life-threatening, headache disrupts people's daily lives and is highly prevalent worldwide. Affecting approximately 40% of the population, or 3.1 billion people in 2021 (World Health Organization: WHO, 2024), headache is now a major health problem. In 2022, the estimated global prevalence of active headache disorders in high-income countries was 52.0%. Migraine affects approximately 14.0%, tension-type headaches (TTH) occur in 26.0%, and headaches occurring on 15 or more days per month (H15+) affect 4.6% of the population (Stovner et al., 2022). To cope with the pain, patients usually take medications to treat headaches. While medication is a widely acknowledged treatment for headaches, it does not work for every patient. In some cases, patients stop taking preventative medication because they find it ineffective or experience side effects (Groth et al., 2022). Moreover, because medication may negatively impact a baby's development, it is often unsuitable for pregnant patients. Given these factors, there is an urgent need to develop an alternative non-pharmacological treatment. Biofeedback, a technique for gaining voluntary control over physiological functions using auditory or visual signals, has been introduced as a new treatment for headaches. A machine attached to the patient measures a body parameter, such as heart rate or breathing; a computer processes the information and reports it back to the patient. As the patient receives the feedback, the data are analyzed by the doctor or therapist, who then suggests an appropriate therapy session. However, whether biofeedback therapy is an effective treatment remains uncertain and contentious. One study argues that it is merely time-consuming and costly, comparing the change in the mean number of headaches and the mean number of medications over time in patients randomly assigned to biofeedback therapy (William et al., 2009). Others claim that its effectiveness depends on various factors, including vasomotor responses; autonomic nervous system arousal; biochemical changes; and cognitive, mood, and behavioral changes (Park & Yu, 2004). In light of these counterarguments about the efficacy of biofeedback therapy in treating headache, this study aims to develop a treatment that increases the efficacy, and thus the effectiveness, of biofeedback therapy. Among the three different treatments for headache (medication, relaxation techniques, and biofeedback therapy), this study will explore how relaxation techniques can affect biofeedback therapy by reducing ANS arousal and monitoring patients' psychological readiness, both of which are manageable through relaxation techniques.

Figure 1. Distribution of Headache Types



Figure 2. Reasons for discontinuation of preventive treatments

Literature Review
Biofeedback Therapy
Biofeedback is a technique that allows individuals to control certain physiological functions by providing real-time data about their bodies. Mechanisms of biofeedback therapy involve EMG biofeedback and temperature biofeedback. Electromyography (EMG) biofeedback involves monitoring muscle tension, specifically in the frontalis and trapezius muscles. Patients receive visual or auditory feedback that decreases as muscle tension reduces, promoting relaxation. Another mechanism is temperature biofeedback, in which patients learn to increase the temperature of their hands or fingers. The theory is that stabilizing the vasomotor response can help reduce headache symptoms (Park & Yu, 2004). The long-term goal is for patients to internalize control over bodily processes, allowing them to manage stress and prevent headaches without the need for constant monitoring (Park et al., 2006).

Relaxation Technique
A relaxation technique is a method that helps reduce stress and promotes a calm state of mind and body. It is often used as a non-pharmacologic treatment for conditions like headaches and migraines. Types of relaxation techniques include: education in pain theory, basic relaxation training, meditation, self-hypnosis, cognitive therapy, and art and movement therapy (Park & Yu, 2004). By focusing on stress reduction, these techniques lead to muscle relaxation and headache relief (Park et al., 2006).

Autonomic Nervous System Arousal and Headache
The Autonomic Nervous System (ANS) regulates involuntary body functions such as heart rate, blood pressure, respiratory rate, and digestion. Patients with TTH often exhibit lower heart rate variability (HRV), indicating decreased flexibility in the ANS, specifically in parasympathetic function, one of the two branches of the ANS. Two major causes of TTH are stress and muscle tension, both of which are regulated by the ANS. Migraine is also connected to the ANS, as it is associated with dysautonomia, or aberrant ANS function. According to research, therapies that increase ANS flexibility may be able to lessen the frequency and intensity of migraine attacks (Gevirtz, 2022).

Psychological Readiness and Headache
Patients' readiness to engage with biofeedback, including their ability to relax and focus during sessions, plays a crucial role in determining the outcome. Anxiety, stress, and a lack of confidence in the technique can diminish the therapeutic effects of biofeedback (Park & Yu, 2004).

Research Question and Hypothesis
The research question for this study is how relaxation techniques influence the efficacy of biofeedback therapy. It is hypothesized that patients receiving biofeedback treatment combined with relaxation techniques will show a significant reduction in the frequency and severity of migraines and tension-type headaches compared to patients receiving biofeedback therapy alone.

Research Aim 1
To address the research question, the study's primary goal is to ascertain the impact of reducing autonomic nervous system (ANS) arousal during biofeedback treatment.

Research Design and Method
A randomized controlled trial (RCT) will be used to determine how the frequency and intensity of migraines and tension-type headaches are affected by lowering autonomic nervous system (ANS) arousal with biofeedback therapy.

Participant
Participants will include adults aged 18 to 55 diagnosed with migraines or TTH. They should have a minimum one-year history of headache and no history of psychiatric or cardiovascular disorders that could affect ANS regulation. Patients who are taking autonomic-regulating medications or undergoing other treatments for headaches will be excluded. There will be a total of 60 participants, divided into two groups: Group A and Group B. Group A will receive both biofeedback and ANS monitoring, focusing on measuring and reducing ANS arousal using biofeedback therapy. Group B will receive biofeedback therapy alone, without direct ANS monitoring.

Intervention
Following division into groups, ten weekly biofeedback sessions utilizing thermal and EMG biofeedback will be administered to each group. Thermal biofeedback will be used to detect hand temperature, and EMG biofeedback will be used to monitor muscle tension. Throughout the sessions, Group A's ANS responses (HRV, GSR) will also be observed.

Data Collection
ANS arousal will be measured for Group A only, by recording heart rate variability (HRV) and galvanic skin response (GSR). HRV will be measured to assess autonomic balance, as higher HRV indicates lower ANS arousal. GSR will be recorded to measure sympathetic activity during sessions. Headache frequency and severity will be measured for both Groups A and B. Patients will keep headache diaries to record the frequency, intensity, and duration of headaches over time. Intensity will be rated on a scale of 1 to 10, with 1 being subtle pain and 10 being extreme pain.



Data Analysis
Reductions in ANS arousal measures (HRV, GSR, and EMG) will be compared between groups using ANOVA. Regression analysis will be performed to examine whether changes in headache frequency and severity correlate with reductions in ANS arousal. A sketch of this analysis is given below.
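As a rough sketch of this analysis plan (not a prescribed pipeline), the snippet below runs a one-way ANOVA on between-group HRV change and an ordinary least squares regression of headache change on the ANS measures. The file name and column names (`group`, `hrv_change`, `gsr_change`, `headache_change`) are assumptions made for the sketch.

```python
import pandas as pd
from scipy.stats import f_oneway
import statsmodels.formula.api as smf

# Hypothetical per-participant summaries; file and column names are placeholders.
df = pd.read_csv("aim1_outcomes.csv")

# ANOVA: compare the reduction in ANS arousal (here, HRV change) between groups.
group_a = df.loc[df["group"] == "A", "hrv_change"]
group_b = df.loc[df["group"] == "B", "hrv_change"]
f_stat, p_val = f_oneway(group_a, group_b)
print(f"ANOVA on HRV change: F = {f_stat:.2f}, p = {p_val:.3f}")

# Regression: do reductions in ANS arousal predict reductions in headache burden?
model = smf.ols("headache_change ~ hrv_change + gsr_change", data=df).fit()
print(model.summary())
```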

Research Aim 2
The second aim is to determine the effect of psychological readiness on biofeedback treatment.

Research Design and Method
Aim 2 will also employ a randomized controlled trial (RCT) to evaluate the impact of psychological preparedness on the efficacy of biofeedback therapy.

Participant
Participants must meet the same general requirements as in Aim 1: adults between the ages of 18 and 55 who have been diagnosed with TTH or migraines and have had headaches for at least a year. No other inclusion or exclusion criteria are needed. There will be a total of 60 participants, categorized into two groups based on their psychological readiness. Group C will consist of patients with relatively high scores on a validated psychological readiness test, while Group D will consist of patients with relatively low readiness scores.

Intervention
Following categorization, each group will undergo ten weekly biofeedback sessions with an emphasis on thermal and EMG feedback to enhance self-regulation and reduce muscle tension. A Psychological Readiness Questionnaire will be used to measure motivation, stress-management skills, and openness to therapy in both groups following the sessions.

Data Collection
Psychological readiness and engagement will be measured for both Groups C and D. Engagement during sessions will be measured through therapist or doctor observations and ratings, such as attentiveness and willingness to engage in the therapy session. Headache frequency and severity will be measured with a method similar to Aim 1: self-reported headache diaries will be used to track the frequency, severity, and duration of headaches throughout the study period.

Data Analysis
The impact of psychological readiness on treatment outcomes, such as headache reduction and patient engagement, will be analyzed using regression analysis to examine the relationship between psychological readiness scores and the efficacy of biofeedback therapy in reducing headache frequency and severity. ANOVA will be utilized to compare outcomes between the high-readiness and low-readiness groups at baseline. A parallel analysis sketch follows.
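A parallel sketch for Aim 2, under the same caveats as before: the file name and the `readiness_score`, `engagement`, `readiness_group`, and `headache_change` columns are illustrative assumptions, not a prescribed data format.

```python
import pandas as pd
from scipy.stats import f_oneway
import statsmodels.formula.api as smf

# Hypothetical Aim 2 dataset; file and column names are placeholders.
df = pd.read_csv("aim2_outcomes.csv")

# Regression: do readiness scores (and rated engagement) predict headache reduction?
model = smf.ols("headache_change ~ readiness_score + engagement", data=df).fit()
print(model.params)

# ANOVA: compare outcomes between high-readiness (C) and low-readiness (D) groups.
group_c = df.loc[df["readiness_group"] == "C", "headache_change"]
group_d = df.loc[df["readiness_group"] == "D", "headache_change"]
print(f_oneway(group_c, group_d))
```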

Result (Anticipated)
It is predicted that, in comparison to standard biofeedback without relaxation techniques, participants in the combined biofeedback-and-relaxation group will experience greater decreases in headache frequency and severity. A notable drop in ANS arousal is anticipated to correspond with a reduction in headache frequency and intensity, improving therapeutic effectiveness. A higher level of psychological preparedness is also expected to be associated with better treatment outcomes. Together, these two outcomes will demonstrate that ANS arousal and psychological preparedness, two aspects that affect the effectiveness of biofeedback treatment, may be controlled with relaxation methods.



It is anticipated that combining biofeedback and relaxation techniques will have synergistic benefits that improve long-term results and lessen dependency on pharmaceuticals and medical services.

Conclusion
The purpose of this study is to present evidence in support of using relaxation techniques and biofeedback therapy in conjunction to treat tension-type headaches and migraines. This work may lead to the development of more potent non-pharmacological headache treatments by determining the variables influencing biofeedback efficacy. With the potential to lower drug use and healthcare expenses while enhancing patients' quality of life, the findings and outcomes are anticipated to aid in the treatment of patients.

Limitation
There are a number of limitations to this study that could affect how broadly applicable the results are. First, the small sample size of 60 participants makes it difficult to generalize. Furthermore, because the study focuses on a 10-week course of biofeedback therapy, long-term effects on headache management are not fully addressed. The use of self-reported headache diaries raises the possibility of bias in the information gathered. Moreover, excluding people with psychiatric or cardiovascular illnesses limits the scope of the research.

References
World Health Organization: WHO. (2024, March 6). Migraine and other headache disorders. https://www.who.int/news-room/fact-sheets/detail/headache-disorders
Stovner, L. J., Hagen, K., Linde, M., & Steiner, T. J. (2022). The global prevalence of headache: an update, with analysis of the influences of methodological factors on prevalence estimates. The Journal of Headache and Pain, 23(1). https://doi.org/10.1186/s10194-022-01402-2
Groth, M., Katsarava, Z., & Ehrlich, M. (2022). Results of the gErman migraine PatIent Survey on medical Care and prOPhylactic treatment Experience (EPISCOPE). Scientific Reports, 12(1).
William, J. M., Kathryn, H., & Richard, G. (2009). Efficacy of biofeedback in the treatment of migraine and tension type headaches. Pain Physician, 12, 1005–1011.
Park, J.-E., & Yu, B.-H. (2004). Migraine: psychiatric issues and biofeedback. Korean Journal of Headache, 5(1), 10–22.
Gevirtz, R. (2022). The role of the autonomic nervous system in headache: biomarkers and treatment. Current Pain and Headache Reports, 26(10), 767–774. https://doi.org/10.1007/s11916-022-01079-x
Kim, S.-W. (2003). Introduction of biofeedback. Korean Journal of Headache, 4(1), 41–47.



The Reason Individuals Prioritize their Own Profits over Addressing the Climate Crisis: Psychological Instincts

Author
Full Name (Last Name, First Name): Jo, Yunchae
School Name: Yongin Samuel Christian Academy

RESEARCH PROPOSAL

Abstract
The climate crisis has been a significant global concern for a long time, yet both companies and governments often fail to do their utmost to address it. This research investigates why individuals, who comprise these entities, tend to prioritize their own profits over addressing this urgent issue. The problem is significant because it underscores a basic obstacle to effective climate action: our natural tendency to prioritize personal interests in the moment. I hypothesize that governments and corporations, comprised of individuals, prioritize their profits over addressing the climate crisis because of individuals' psychological instincts. The study will examine global climate agreements to see whether they are being followed and analyze ESG companies for signs of greenwashing. In summary, the goal of this research is to reveal the psychological factors that impede effective climate action, leading to a better balance between profit interests and environmental duties.



Introduction
For a long time, the climate crisis has been a serious global issue. Why, then, do companies and governments, composed of individuals, not do their utmost to address it? Individuals driven by their psychological instincts will likely prioritize their own profits over addressing the climate crisis.

(Figure 1: This figure shows the change in global surface temperature compared to the long-term average from 1880 to 2023) source: climate.nasa.gov

(Figure 2: This figure displays the annual greenhouse gas emissions) source: EDGAR, edgar.jrc.ec.europa.eu/report



(Figure 3: This figure shows incidents of greenwashing in ESG Technical Companies) source: RepRisk

Hypothesis
I hypothesize that governments and corporations, composed of individuals, prioritize their profits over addressing the climate crisis, and that this is due to individuals' psychological instincts.

Specific Aims
Specific Aim 1: To determine whether the accords are being adhered to by examining the climate agreements reached globally.
Specific Aim 2: To identify instances of greenwashing by analyzing ESG companies.

Potential pitfalls and alternative strategies
Universal psychological tendencies may change over time; therefore, segmenting surveys by generation can aid in understanding any psychological variances that arise between different age groups.



References
Furer, Mathias. "RepRisk Data Shows Increase in Greenwashing with One in Three Greenwashing Public Companies Also Linked to Social Washing." RepRisk, 3 Oct. 2023, https://www.reprisk.com/research-insights/news-and-media-coverage/reprisk-data-shows-increase-in-greenwashing-with-one-in-three-greenwashing-public-companies-also-linked-to-social-washing. Accessed 29 Sept. 2024.
"GHG Emissions of All World Countries." EDGAR, edgar.jrc.ec.europa.eu/report_2023. Accessed 29 Sept. 2024.
"Global Surface Temperature." NASA, 7 Feb. 2024, climate.nasa.gov/vital-signs/global-temperature/?intent=121.
Lee, Young-ae. "UNEP: 'There Is No Way to Avoid a 1.5 Degree Temperature Rise'... If Things Continue This Way, a 2.8 Degree Rise." DongaScience, 28 Oct. 2022.
Marshall, George. Don't Even Think About It: Why Our Brains Are Wired to Ignore Climate Change. Bloomsbury Publishing, 2015.
"On the Rise: Navigating the Wave of Greenwashing and Social Washing." RepRisk, www.reprisk.com/research-insights/reports/on-the-rise-navigating-the-wave-of-greenwashing-and-social-washing. Accessed 29 Sept. 2024.
Son, Young-ho. "Paris Agreement Goal 'Golden Time' One Year Left, Climate Crisis Deepens Due to Rising Greenhouse Gas Concentrations." Business Post, 8 Apr. 2024.



Developing the Error Correcting Mechanism of DNA Data Storage Through the Intervention of Optical Discs

Author
Full Name (Last Name, First Name): Kim, Rinho
School Name: Gyeonggi Academy of Foreign Languages

RESEARCH PROPOSAL

Abstract
DNA is an attractive next-generation data storage medium, but limitations arise from an approximately 1% error rate per read. These errors might damage the stored data and undermine the role of DNA as a data storage medium. Therefore, it is hypothesized that incorporating DNA and conventional discs into a single data storage unit will significantly decrease errors from encoding and decoding. In this research, the ability of the prototypical hybrid design to reference and recover data will be investigated. The utility of complementary correction between the ECCs (Error Correction Codes) of each section will also be considered.

Introduction
Electricity usage and CO2 emissions today are getting out of control. As AI technology develops rapidly, it demands ever more energy, leading to an estimated 135% increase in data center electricity usage from 2022 to 2026 (Fig 1.). Following that, CO2 emissions by data centers are predicted to rise from 1% to 3% over the same period (Fig 2.). DNA, with its data density, longevity, and sustainability, has therefore risen to the surface as an alternative. However, several papers show that about 1% of sequenced data contain errors (Fig 3.): if 1 million bases are sequenced, about ten thousand bases would be altered. This undermines DNA's benefits as a data storage medium.



Hypothesis
I hypothesize that incorporating DNA and discs into a single data storage unit will significantly decrease errors from encoding and decoding.

Aims
1. To investigate the storage unit's ability to refer to the metadata on the disc in order to fix altered core data in the DNA (a minimal sketch of this referencing scheme follows)
2. To determine the feasibility of complementary correction between ECCs on the disc and ECCs in DNA
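As a minimal sketch of Aim 1's referencing scheme, the snippet below assumes, for illustration only, that the disc holds per-block SHA-256 checksums and a backup copy of each block; a decoded DNA block whose hash mismatches is flagged and recovered. The 2-bit base mapping and block layout are stand-ins, not the proposal's final design.

```python
import hashlib

# Illustrative 2-bits-per-base mapping; the real encoding and ECC scheme are
# left open by the proposal, so everything below is a stand-in.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bytes(seq):
    """Decode a DNA read (2 bits per base, length a multiple of 4) into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def recover_with_disc_metadata(dna_blocks, disc_hashes, disc_backup):
    """Verify each decoded DNA block against the checksum stored on the disc.

    A hash mismatch marks the block as corrupted; here it is simply replaced
    by the disc's backup copy, though it could instead be re-sequenced or
    handed to an error-correction code.
    """
    recovered = []
    for i, seq in enumerate(dna_blocks):
        data = dna_to_bytes(seq)
        if hashlib.sha256(data).hexdigest() == disc_hashes[i]:
            recovered.append(data)            # DNA read verified by disc metadata
        else:
            recovered.append(disc_backup[i])  # fall back to the disc copy
    return b"".join(recovered)
```

If the disc stored only checksums rather than full backups, a mismatch would instead trigger re-sequencing or ECC-based correction, which is where Aim 2's complementary ECCs come in.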



Potential Pitfalls & Alternatives
1. The different ECCs in each section might not work together. One alternative is to let the error correction codes operate outside the unit: the errors can be identified and corrected after sequencing each part of the data (Fig 5.).

2. The metadata might occasionally fail and lose its ability to serve as a reference source. To prevent this, finding ways to cross-reference between the two sections could be valid (Fig 6.).

References

Buckley, Sean. “IEA Study Sees AI, Cryptocurrency Doubling Data Center Energy Consumption by 2026.” Data Center Frontier, 8 Mar. 2024, www.datacenterfrontier.com/energy/article/33038469/iea-study-sees-ai-cryptocurrency-doublingdata-center-energy-consumption-by-2026.

Ceze, Luis, et al. "Molecular Digital Data Storage Using DNA." Nature Reviews Genetics, vol. 20, no. 8, 2019, pp. 456–66, https://doi.org/10.1038/s41576-019-0125-3.

"Explore Illumina Sequencing Technology - Massively Parallel Sequencing with Optimized SBS Chemistry." Illumina, 2022, www.illumina.com/science/technology/next-generation-sequencing/sequencing-technology.html.

Goldman, Nick, et al. “Towards Practical, High-Capacity, Low-Maintenance Information Storage in Synthesized DNA.” Nature, vol. 494, no. 7435, Feb. 2013, pp. 77–80, https://doi.org/10.1038/nature11875.



L, Jennifer. "The Carbon Countdown: AI and Its 10 Billion Rise in Power Use." Carbon Credits, 28 Feb. 2024, carboncredits.com/carbon-countdown-ais-10-billion-rise-in-power-use-explodes-data-center-emission/.

Hare, Jason. “What Is Metadata and Why Is It as Important as the Data Itself?” Opendatasoft, 25 Aug. 2016, www.opendatasoft.com/en/blog/what-is-metadata-and-why-is-it-important-data/.

Notman, Nina. "Is DNA the Future of Digital Data Storage?" Chemistry World, 15 July 2024, www.chemistryworld.com/features/is-dna-the-future-of-digital-data-storage/4019749.article.



Enhancing Education through AI mentoring: Exploring Benefits and Challenges of Personalized Learning Support

Author
Full Name (Last Name, First Name): Lee, Byoung Ju
School Name: Global Vision Christian School Mungyeong Campus

RESEARCH PROPOSAL

Abstract
This research investigates the educational effects of AI mentoring through personalized learning support. As technology develops rapidly, AI is becoming an important tool in education. AI mentoring has the potential to give students feedback that is fitted to them individually. This research aims to find out whether AI mentoring positively affects the achievement of educational goals and the enhancement of self-directed learning skills. Despite the promising prospects, challenges such as over-reliance on AI and data security concerns may arise. To handle these challenges, AI will be combined with teacher oversight to encourage independent learning and critical evaluation; data usage should also be transparent, with stringent protection measures in place. This research aims to contribute to the growing body of literature on AI in education by providing empirical evidence of its benefits and proposing strategies to overcome potential pitfalls. By leveraging AI's capabilities responsibly, we can unlock its full potential to transform educational experiences, fostering improved learning outcomes and greater student autonomy. This study offers valuable insights for educators, policymakers, and technology developers seeking to enhance the effectiveness of AI mentoring in educational settings.



Introduction
Technology is developing rapidly as time progresses. Networks, for example, have improved from 4G to 5G, creating a better environment for digital learning. AI is one of the most prominent fruits of this advancing technology, and it creates tremendous benefits. Among AI applications, AI mentoring creates educational benefits through individualization. Implementing AI mentoring in educational settings may significantly enhance students' learning outcomes and self-directed learning skills by providing personalized feedback. Aim 1 is to research the effect of AI mentoring on achieving educational goals. Aim 2 is to research the effect of AI mentoring on self-directed learning. Potential pitfalls are over-reliance on AI and data privacy: students may become too dependent on AI tools, reducing critical thinking and problem-solving skills, and collecting and using student data raises concerns about privacy and data security. Alternative strategies for these pitfalls are combining AI mentoring with human oversight to encourage independent learning and critical evaluation, and implementing robust data protection policies with transparent communication to students and parents about data usage.

References
1. Kram, K. E. (1985). Mentoring at work: Developmental relationships in organizational life. Academy of Management Journal, 26(4).
2. Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education. Boston, MA: The Center for Curriculum Redesign.
3. Castillo, C. (2019). Algorithmic bias in rankings. In Companion Proceedings of the 2019 World Wide Web Conference (May 2019), San Francisco, CA.
4. Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans.
5. Palmer, C. (2019). How to mentor ethically. Monitor on Psychology, 50, 70.
6. Chine, D. R., Gupta, S., & Koedinger, K. R. Personalized Learning2: A human mentoring and AI tutoring platform ensuring equity.
7. Chine, D. R., Gupta, S., & Koedinger, K. R. (2022). Development of scenario-based mentor lessons: An iterative design process for training at scale. In Proceedings of the Ninth ACM Conference on Learning @ Scale.



Drug-Based Epigenetic Modulation of Aging: Therapeutic Approaches and Mechanistic Insights

Author
Full Name (Last Name, First Name): Lee, Handong
School Name: Global Vision Christian School Mungyeong Campus

RESEARCH PROPOSAL

Abstract
Aging has recently been proposed by some researchers to the WHO for inclusion as a disease in the 11th Revision of the International Classification of Diseases (ICD-11). Aging and its consequent diseases, including cancer and cardiovascular and neurodegenerative disorders, are major global health challenges. Epigenetic modifications, such as DNA methylation and histone modification, are key contributors to the aging process. This proposal addresses the urgent need to develop therapeutic interventions, especially drug-based treatments, targeting these epigenetic changes to treat aging and age-related diseases. We hypothesize that drug-based modulation of epigenetic markers can effectively slow aging and reduce the incidence of associated diseases. To test this, we will first develop and evaluate novel drug candidates using AI-driven deep-learning modeling to identify compounds that regulate epigenetic modifications. Next, we will investigate the mechanisms through which these drugs affect gene expression and cellular aging, and assess their efficacy in models of Alzheimer's disease and cancer. By overcoming potential challenges in pinpointing specific epigenetic effects, this research has the potential to lead to transformative therapies for aging and age-related diseases.



Introduction
The World Health Organization (WHO) recently received suggestions from several researchers that aging and old age should be classified as diseases in the 11th Revision of the International Classification of Diseases (ICD-11). This reflects a growing recognition of aging as a pathological process rather than a mere passage of time. Among the many causes of aging, one significant factor is the accumulation of environmental influences over a lifetime, leading to changes in gene expression referred to as epigenetic modifications. Epigenetic modifications, such as DNA methylation, histone modifications, and chromatin remodeling, play a key role in regulating gene expression without altering the underlying DNA sequence. These changes accumulate over time and result in the loss of normal gene expression, and hence in susceptibility to diseases such as cancer and cardiovascular and neurodegenerative disorders. These alterations affect cellular function, disrupt tissue homeostasis, and promote the onset of age-related diseases. Given the central role of epigenetic modifications in the aging process, therapeutic strategies that target and modulate these changes hold great potential for slowing aging and treating associated diseases. Drugs controlling or suppressing epigenetic modifications could offer significant advances in rejuvenation technologies and disease therapies. This proposal explores drug-based interventions that modulate epigenetic changes to extend lifespan and prevent age-related diseases.

Specific Aims
The first aim is to develop novel drug candidates targeting DNA methylation inhibitors and histone modification regulators. According to previous studies, cannabidiol, an active ingredient in cannabis, regulates DNA methylation in brain regions. Through AI techniques that have recently gained attention as effective means of drug candidate discovery, such as deep-learning generative models and molecular modeling tools, it should be possible to extract compounds with structures similar to already verified ones, or substances with potential effects (a sketch of such a similarity screen is given below). The candidates will then be tested in cell and animal models to evaluate their efficacy. If candidates prove effective in these models, their long-term impacts on humans will be investigated so that they can be utilized in the clinical stage in the future.
The second aim is to identify the mechanism by which drug candidates regulate gene expression in aged cells damaged by epigenetic modifications and, based on this, to verify the therapeutic effect on aging-related diseases such as Alzheimer's disease and cancer. We will thoroughly investigate and analyze various mechanisms, such as whether the candidates affect the way cell signaling pathways work or which components are produced in the body, and use correlational experiments and causal analyses for identification. Furthermore, we plan to build on the verified systems to achieve even more advanced medical technologies.
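As a hedged sketch of the structure-similarity half of this idea (the generative modeling is not shown), the snippet below uses the open-source RDKit toolkit to rank candidate molecules by Tanimoto similarity to a verified reference compound. All SMILES strings and the 0.4 cutoff are illustrative stand-ins; the study would substitute its verified epigenetic modulators (e.g., cannabidiol) and tuned thresholds.

```python
# A similarity screen assuming the open-source RDKit toolkit; molecules and
# cutoff are illustrative stand-ins, not the study's actual compounds.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

reference = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # stand-in reference molecule
candidates = {
    "candidate_1": "CC(=O)Oc1ccccc1C(=O)OC",  # invented example structures
    "candidate_2": "c1ccccc1O",
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(reference, 2, nBits=2048)
for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    similarity = DataStructs.TanimotoSimilarity(ref_fp, fp)
    if similarity >= 0.4:  # arbitrary cutoff for this sketch
        print(f"{name}: Tanimoto similarity {similarity:.2f} -> keep for testing")
    else:
        print(f"{name}: Tanimoto similarity {similarity:.2f} -> discard")
```

Compounds passing such a screen would still need the cell- and animal-model validation described above before any mechanistic work.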

Pitfalls and Alternatives, and Future Research
One potential pitfall is the intertwined complexity of epigenetic changes, which makes it difficult to pinpoint the impact of specific modifications on aging treatment. As an alternative strategy, we will develop specific disease models to analyze the particular influence of epigenetic changes in detail, allowing us to refine our approach. This proposal is intended to treat aging and related diseases with a pill formulated for oral administration. To do so, it is necessary to discover specific substances, identify their mechanisms, and determine whether they can be manufactured into a medication. After that, a candidate must pass virtual (in silico) and clinical trials before it can be used directly. To facilitate this process, we suggest future research on methods for accelerating clinical trials and on AI-driven techniques for the precise exploration of drug candidates and synthesis of ingredients.



Figure 1: Illustration of epigenetic modification mechanisms. Source: Libertas Academica (https://www.flickr.com/photos/libertasacademica/29846750431), Licensed under CC BY 2.0.

Figure 2: Graph of potential substances' ability to modulate DNA methylation.



References
1. Ahn Y. The regulation in the ubiquitination machinery for H2B ubiquitination [Master's thesis, Kangwon National University]. http://www.riss.kr/link?id=T13854541
2. Dutchen S. Loss of Epigenetic Information Can Drive Aging, Restoration Can Reverse It. Harvard Medical School News & Research. January 12, 2023. https://hms.harvard.edu/news/loss-epigenetic-information-can-drive-aging-restoration-can-reverse
3. Sales AJ, Guimarães FS, Joca SRL. CBD modulates DNA methylation in the prefrontal cortex and hippocampus of mice exposed to forced swim. Behav Brain Res. 2020;388:112627. doi:10.1016/j.bbr.2020.112627
4. Saul D, Kosinsky RL. Epigenetics of Aging and Aging-Associated Diseases. International Journal of Molecular Sciences. 2021;22(1):401. https://doi.org/10.3390/ijms22010401
5. Shin D. Molecular mechanism of cellular aging. Korean Zoological Society 1998 19th Biological Science Symposium Lecture. Published online October 1998:25–30. https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE09497676
6. Wang K, Liu H, Hu Q, et al. Epigenetic regulation of aging: implications for interventions of aging and diseases. Signal Transduction and Targeted Therapy. 2022;7(1):1–22. https://doi.org/10.1038/s41392-022-01211-8
7. Zhang J, Wang S, Liu B. New Insights into the Genetics and Epigenetics of Aging Plasticity. Genes. 2023;14(2):329. https://doi.org/10.3390/genes14020329




