Summer Research Scholars 2016


2016 SUMMER RESEARCH SCHOLARS


SCHOOL OF BUSINESS 2016 Summer Research


Science Learning Tools: Online and Hands-On
Student: James P. Noeker | Partnering Faculty: Carolyn Predmore, Ph.D.
School of Business, Manhattan College, New York, New York
"By the care you take of your students, show that you have a real love for them." – St. Jean-Baptiste de la Salle (Med 80.3 – feast of St. Nicholas)

Abstract: There is a great deal of discussion about online education versus traditional face-to-face classroom courses. As colleges venture into the burgeoning world of online education, there are multiple choices to make. In the June 24 edition of The Chronicle of Higher Education, UT-Austin reported great success with having a select group of students attend class in person while the remainder of the class attends online synchronously (Blumenstyk, 2016). This paper explores the opportunity of creating learning tools made by folding or manipulating paper, and of creating videos that illustrate the use of the tool(s) to support, explain, and create an interaction between the student and the concept, further solidifying the student's understanding. Since there is increasing emphasis on science and technology in education, this paper looks at elementary science concepts as a base for learning-tool creation.

Our Goals:
• Create an interactive learning tool to assist in STEM education among elementary-age students (K-6).
• Use video to simulate a "push-in" science teacher, which would satisfy Common Core requirements, using an interactive demonstration.
• Design a video that uses low-cost materials (such as paper and other common classroom items) to lead hands-on demonstrations of science topics, so that budgeting factors do not affect quality of education or retention of information.
• Unite subjects to create relevance – "Why can't students boost literacy while simultaneously learning science?"
• Make learning fun for students.
• Our long-term ambition is to develop a learning service with content sourced from our Lasallian network to assist underprivileged schools and districts in enhancing STEM education.

What we accomplished:
• Our work output consisted of a script, an accompanying video, and a teacher's guide to assist in teaching science subjects to elementary-age students (using the video). Now that we have proven its feasibility, we can proceed to produce more videos, create a website to host the videos and other learning resources, and distribute them.

References
Blumenstyk, G. (2016). "Same time, many places: Online courses return to origins," The Chronicle of Higher Education, LXII(39), June 24, p. 11.


"All the world is made of faith, and trust, and pixie dust.” – J.M. Barrie, Peter Pan “The best way to find out if you can trust somebody is to trust them.” ― Ernest Hemingway

Background

Introduction:
• Real-world decisions are frequently mixed-motive and interdependent in nature – a decision maker's choice affects outcomes for others, and others' choices influence the decision maker's outcomes.
• One common example is social dilemmas, where individuals and organizations acting rationally in their own best interest make everyone worse off (Hardin, 1968).
• In typical social dilemmas, individuals can choose to act in their self-interest (defect) or for the collective benefit of all (cooperate).

Past Research:
• Recognition that the decision is interdependent and concern about the Other's outcome influence the decision maker's choice (Arora et al., 2015), as do expectations about how the Other might act (Bogaert et al., 2008).
• Social Value Orientation (SVO) – an individual trait measuring how much a person is concerned about others (Murphy & Ackermann, 2013) – predicts choice in social dilemmas.
• Arora et al. (2012) suggest that social context influences whether or not social dilemmas are viewed as interdependent decisions.
• Trust of the Other may also be a contextual moderator: Joireman et al. (1997) found that level of trust moderates cooperative behavior of prosocials. (Figure 1: The four types of SVO)

Thus SVO (an individual variable), social context (a situational variable), and trust of the Other (an individual and/or contextual variable) have all been shown to influence the decision to cooperate or defect to some extent. This may, in part, be by changing how much the decision maker is concerned with the mutual outcome of all impacted by the decision.


Motivating Questions for This Research:
1. Beyond individual variables (SVO), how do contextual variables (like economic framing and trust of the Other) influence concern for Others?
2. Are there differences in the influence exerted by these variables in a high-trust culture (e.g., the US) compared with a low-trust culture (e.g., India)?

Study 1: Variables affecting concern for the Other in the US

Design & Method:
• MTurk study with 375 participants (ages 18-69, 47% male) from the US
• Participants completed the SVO scale (Murphy) and the Tightness-Looseness Scale (Gelfand et al., 2011)
• They were randomly assigned to one of three economic frames: loss, break-even (neutral), or gain
• Read a resource dilemma and answered questions about their level of trust for the Other and concern for the Other's outcome
• Made a decision to defect or cooperate in the dilemma

Results: Table 3: Differences in predicting Concern for the Other between the US and India: Estimated Coefficient (Standard Error); ***: p<0.001, **: p<0.01, *: p<0.05

Table 1: Predicting Concern for the Other as a Function of SVO, Trust, and Context in the US: Estimated Coefficient (Standard Error); ***: p<0.001, **: p<0.01, *: p<0.05

Model 1: SVO is a significant predictor of concern for the Other's outcome. Model 2: Introducing economic context shows that it is perhaps a stronger predictor than SVO, though both significantly predict concern for the Other. Model 3: The complete model with trust shows that although context continues to predict concern for the Other's outcome, SVO ceases to be a significant predictor. The high significance of contextualized trust as a predictor suggests that trust may underlie SVO. We posit that contextualized trust captures the prosocial (proself) orientation of a person while including a situational component by focusing on a specific Other. Thus it is the combination of economic context and the individual variable operationalized as specific trust that appears to be the vital predictor of concern for the Other.
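As an illustration of how such a nested sequence of models can be fit, a minimal sketch follows. The column names (svo, frame, trust, concern) and the data file are illustrative assumptions, not the authors' code.

```python
# Sketch of the three nested regression models described above (Models 1-3).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study1_us.csv")  # hypothetical data file with one row per participant

m1 = smf.ols("concern ~ svo", data=df).fit()                      # Model 1: SVO only
m2 = smf.ols("concern ~ svo + C(frame)", data=df).fit()           # Model 2: + economic frame
m3 = smf.ols("concern ~ svo + C(frame) + trust", data=df).fit()   # Model 3: + contextualized trust

for label, m in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3)]:
    print(label, m.params.round(3).to_dict(), "R2 =", round(m.rsquared, 3))
```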

What is less well understood, however, is how these three variables (SVO, context, and trust) collectively influence choices in social dilemmas. Trust is also a cultural construct, and systematic variation in levels of trust across cultures is well documented (Gunia et al., 2011).


Figure 2: Factors influencing Concern for the Other in the US

Study 2: Cross-cultural Comparison to India

Model 1: SVO continues to be a significant predictor of concern for the Other in India, while economic context does not matter. The presence of contextualized trust as a significant predictor for both cultures, however, suggests that it is the expected social interaction with the Other that determines concern for the Other. Model 2: Adding strictness of cultural norms as a predictor greatly improves the model for India, due to the stronger social norms and expectations than in the US. Thus Indians state greater concern for the Other because that is what is socially expected. Arora et al. (2012 & 2015) showed that concern for the Other predicts cooperation in the US. Paradoxically, in India this does not seem to hold true, as higher concern does not lead to more cooperation in comparison to the US (Table 4). As contextualized trust is actually the dominant factor, the concern for the Other required by social scripts in India may just be cheap talk, as it does not translate into more cooperation. In the end, it is the level of specific trust for the Other that determines cooperative behavior.

Design & Method:

Table 2: Comparison US to India

• Americans tend to be more prosocial than Indians. This may appear to contradict the commonly held view of India as a collectivistic culture, but SVO actually measures concern for Others in general and not necessarily for the in-group, which is more meaningful in collectivistic cultures.
• The lower Trust for the Other is congruent with India being a low-trust culture in general.
• Paradoxically, the self-reported Concern for Others in India is greater than in the US. Perhaps since India has stricter cultural norms, it is the norms that determine what is explicitly stated, but this may serve only as cheap talk (Gelfand et al., 2011).

Table 4: Comparison of Decisions Made, US and India

Conclusion

• MTurk study with 370 participants (ages 18-81, 67% male) from India
• Participants followed a procedure identical to that in Study 1.

Results:

Figure 3: Factors influencing Concern for the Other in India

• In the US, concern for the Other in an interdependent decision can be thought of as a person-by-situation interaction in which contextualized trust plays a significant role.
• In India, concern for Others is mainly driven by tight social norms, which raise the level of self-reported concern for the Other.
• Concern for the Other is a significant predictor of cooperative action in the US. Thus specific trust determines concern for the Other, which in turn determines cooperation in a social dilemma.
• In India, however, concern for the Other does not determine cooperation, as the stated concern may just be socially expected cheap talk. It is specific trust that determines cooperative action in a social dilemma.

Future Research Directions:
• Extension of the study to Argentina (low trust, low norms) and Japan (high trust, high norms)
• How can we increase trust in interdependent decisions across cultures?

Contact Information: Marc Stefan Hoeller (mhoeller01@manhattan.edu)
This research was supported by the Jasper Summer Fellow Scholarship of Manhattan College, NSF-CNH Grant 1211613, and IAI Grant CRN2031.


From Followers to Consumers: Examining Return on Investment of Marketing Strategies in the Age of Social Media


Patrick Faccas & Dr. Grishma Shah

Abstract

Social media and email marketing are becoming increasingly vital to an organization’s marketing and communications strategies. This research explores the return on investment of both time and capital in email and social media marketing. Data was collected and analyzed to determine the best techniques for building a small business through email marketing and engagement on social media. Results suggest that authentic relationship building and engagement lead to market transactions and better returns on time invested in marketing.

Background

The advancement of technology has changed the way consumers purchase products and interact with corporations across the world. Consumers can now contact a corporation or interact with top executives simply by logging into one of the social media networks. This is an incredible opportunity for corporations to build relationships and generate revenue (Liu & Lopez, 2014). Nonetheless, there is some debate over the value of social media marketing and its various extensions, such as email marketing. Corporations are beginning to see the value in having a social media presence and its ability to engage multiple stakeholders, but the marketing strategy to employ, along with its return on investment, is debatable. Corporations want to know that investment in social media will lead to a sizable return on investment, brand awareness, engagement, and word-of-mouth interactions. Many strategies are being tested because technology has made measuring return on investment easier for marketing departments (Hoffman & Fodor, 2010). Nonetheless, the more important question of how to get the best return on investment for the substantial capital and, more importantly, time invested in social media marketing remains unexplored.

Research Questions
• What techniques turn email marketing and social media engagement into revenue?
• What methods provide the most engagement and create loyalty among prospective consumers?
• How can brand identity be generated on social networks, and revenue generated based on that identity?

Methodology
• Acquire individuals' emails in exchange for a free gift.
• Implement various email marketing strategies such as indoctrination, engagement, ascension, segmentation, and re-engagement once the email is acquired.
• Target subscribers with blogs, affiliate products, and personal-life emails.
• Target active users on Instagram of similar businesses by following them, then engaging the user with praise techniques for engaging with the account.

Findings

[Findings compared before vs. after: opt-in form, emails, indoctrination emails, link clicks, and a final Instagram analysis.]

Initial Revenue: Ebook $0; Amazon $0; ClickBank $0; Total $0
Final Revenue: Ebook $7.67; Amazon $4.12; ClickBank $67.10; Total $78.89

Conclusions

Relationship building in social media is vital to the success of the business. The relationship should be based on honest interaction that creates active engagement with consumers. Language use is an important factor in higher engagement in both emails and social media posts: the more "real" or intriguing the copywriting, the higher the overall engagement. Consistency in emails and social media posts is required to grow the consumer base. Active interaction allows for a greater return on investment. Prospects of small businesses require more time spent fostering relationships because the brand is not as well known.

Select References

Deiss, R. (2015). Invisible Selling Machine. Digital Marketing Lab, LLC.
Hoffman, D. L., & Fodor, M. (2010). Can you measure the ROI of your social media marketing? MIT Sloan Management Review, 52(1), 41–49.
Liu, Y., & Lopez, R. A. (2014). The impact of social media conversations on consumer brand choices. Marketing Letters, 27(1), 1–13. http://doi.org/10.1007/s11002-014-9321-2
Brunson, R. (2015). Dotcom Secrets. New York, NY: Morgan James Publishing.


A Repository for Social Media Twitter Analytics Nasri Yatim Computer Information System Supervisor: Musa Ja’far, Ph.D. Manhattan College

Introduction: The Twitter world is an "ocean of ephemera, a library of Babel"1 – a spur-of-the-moment, real-time window into our thoughts, beliefs, emotions, and desires, and the spawning ground of many social media networks and the forking of multiple competing communities. The content does not conform to traditional natural language processing rules; it is free flowing.

Results: A well-structured, normalized MySQL database and a Python application that meets the requirements.

1. http://www.nybooks.com/daily/2013/01/16/librarians-twitterverse/

Project Objectives:

• Build a well-normalized, well-structured MySQL database schema to store Twitter data bundles.
• Use Python & the Python-Twitter API to populate the repository with data.
• Decouple the Social Media Analytics process from the Data Collection process.
• Facilitate Social Media Analytics by making the content available in forms & formats that allow the non-programmer to perform ad-hoc, interactive, visual analytics on the data and across multiple dimensions.

Identify: What is a tweet? What is the profile of a user?
Analyze: the structure; the environment.
Implement.

Twitter Schema: A MySQL EER

The mentions, the hashtags, the timelines, the Twitterer profile, the circle of friends, the social networks, the geo-locations, and the apps used, when bundled together, become a corpus of rich content for digital humanities researchers. However, the current state of social media analytics technology requires very strong programming skills in multiple languages and platforms.

Twitter Schema: A UML Diagram


Proposed Project: Build an application that uses Visual Studio Enterprise 2015, Python 3.4 Tools for VS, a Twitter Developer Account, and MySQL Connector for Python to authenticate and stream Twitter data, parse the data, and store it into the MySQL database schema.
• get_tweets_by_[screen_name|content|geo_location]
• get_user[profile | friends]
• extract[hashtags|mention|urls|symbols|geoloc|url] from the tweets
• upload[user|tweet|hashtags|mentions|friends|…] to the database
• Using MySQL Workbench, the collected data are then retrieved and displayed using SQL queries.

The Environment:

• Visual Studio Enterprise 2015, Python 3.4 Tools for VS
• Twitter API 1.1 & tweepy 3.5
• MySQL 5.7.x & MySQL Workbench
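A minimal sketch of the collection step described in the proposed project is shown below. The credentials, the table name `tweet`, and its column layout are illustrative assumptions; the actual schema is the EER model shown in the poster.

```python
# Minimal sketch: pull a user's recent tweets with tweepy and store them in MySQL.
# Credentials, the table name `tweet`, and its columns are illustrative assumptions.
import tweepy
import mysql.connector

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

db = mysql.connector.connect(host="localhost", user="root",
                             password="secret", database="twitter_repo")
cur = db.cursor()

for status in api.user_timeline(screen_name="ManhattanEdu", count=50):
    # Store the raw tweet; hashtags, mentions, and friends would be extracted
    # into their own tables so analytics stay decoupled from collection.
    cur.execute(
        "INSERT INTO tweet (tweet_id, user_id, created_at, text) "
        "VALUES (%s, %s, %s, %s)",
        (status.id, status.user.id, status.created_at, status.text),
    )

db.commit()
db.close()
```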


The Relationship Between Financial Literacy and Financial Behavior
Evan Perotto | Faculty Advisor: Aileen Lowry Farrelly, CPA, MS
Manhattan College, School of Business

Introduction
Being financially literate is very important because it often leads to a successful and stable financial future. The research conducted attempts to evaluate the relationship between financial literacy and financial behavior.

Objectives
-Analyze the existing literature and survey results evaluating financial literacy and financial behaviors.
-Evaluate whether there is a relationship between financial literacy and financial behavior.
-Determine how to educate those who are not financially literate.

Data Analysis
• People who are financially literate tend to have more wealth accumulation, less debt, and better preparation for retirement.
• Retirement planning is now highly complicated, and many people are reluctant to participate due to the complexity.
• Many individuals don't understand basic concepts regarding compound interest, inflation, and risk diversification.
• Topics to be covered are compound interest, inflation, investing, risk diversification, and retirement planning.
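As a quick illustration of the compound-interest arithmetic covered in the packet, the figures below are made-up example numbers, not survey data.

```python
# Compound vs. simple interest on a hypothetical $1,000 deposit at 5% for 10 years.
principal, rate, years = 1000.00, 0.05, 10

simple = principal * (1 + rate * years)        # interest earned only on the principal
compound = principal * (1 + rate) ** years     # interest earned on interest as well

print(f"Simple interest:   ${simple:,.2f}")    # $1,500.00
print(f"Compound interest: ${compound:,.2f}")  # $1,628.89
```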

Data Results

Questions                                                                  Correct   Incorrect   Don't Know
Q1. Main function of the stock market                                      71.50%    20.20%      8.30%
Q2. Knowledge of mutual fund                                               63%       13.60%      23.30%
Q3. Relation between interest rate and bond prices                         31.60%    43.80%      24.60%
Q4. What is safer: company stock vs. mutual fund                           71.40%    4%          24.50%
Q5. Which is riskier: stocks vs. bonds                                     80.20%    5.40%       14.50%
Q6. Highest return over long period: savings account, bonds or stocks      62.30%    27.50%      10.20%
Q7. Highest fluctuations: savings account, bonds or stocks                 88.30%    4.50%       7.10%
Q8. Risk diversification                                                   74.90%    18.40%      6.80%

Source: Annamaria Lusardi and Olivia Mitchell, "How Ordinary Consumers Make Complex Economic Decisions: Financial Literacy and Retirement Readiness," nber.org, September 2009, p. 27.

Conclusions
-There is a relationship between successful financial behavior and being financially literate.
-The cost of attaining financial literacy causes a level of inequality between certain populations.
-We have created a packet to address the financial literacy issues noted from analyzing the survey results.

Future Plans
-Test the packet created to see if we have closed the gaps that were noted from the analyzed results.

References
-Maarten van Rooij, Annamaria Lusardi, and Rob J. Alessie, "Financial Literacy, Retirement Planning, and Household Wealth," nber.org, August 2011.


SCHOOL OF EDUCATION & HEALTH 2016 Summer Research


Why do Students Hate Math?

Making Math Fun

Results

Too many math classrooms lend themselves to a lecture-practice format where students are taught a concept and then instructed to complete practice problems. This method does little to capture the excitement and attention of students and does not help correlate what they are learning to their everyday life. No wonder so many people dislike Math!

In this study I sought to test how incorporating engineering activities in the classroom would help students to better understand mathematical concepts. I created two lesson plans: one with an engineering activity (Group A) and one without (Group B), both on the same topic: the line of best fit. I used the Barbie Bungee Jump activity for Group A to help students connect the line of best fit with a concrete example. I started off by teaching both groups the different ways to find the line of best fit:
1. The eyeballing method
2. The average method
3. The 2-point form
I then carried out the following steps for each group:


Math Kills My Vibe

Teachers need to change their approach to teaching math. With each passing generation, more and more students are developing a distaste for the subject, despite its relevance to their lives. If we wish to continue advancing in technology, science, and medicine we need more mathematicians - we need more students who love math!

What Have Others Found? •Presenting students with real-world problems encourages them to think critically, to take risks, to learn from their mistakes, to communicate their thoughts, and to make connections between math and their everyday life. (James R. Town & Alison M. Espinosa, 2015) •Using a calculus-based activity to question the shape of a tuna can made students less anxious and led them to score "on average…20 percent higher on the final exam for this topic." Students also found the lesson more interesting and relevant. (Antonella Cupillari, 2015)

Group A – Engineering Activity
• Students were put into groups and gathered their materials: rubber bands, a Barbie doll, and a measuring tape.
• Students measured the distance their doll fell with each rubber band they added to her ankles.
• Students recorded the data in the chart from the worksheet.
• Students created a scatter plot by transferring their data from the chart to a graph in the form of points. The x-values corresponded to the number of rubber bands and the y-values corresponded to the distance fallen.
• Students used one of the 3 methods taught to them to predict how many rubber bands they would need in order to successfully drop their doll off the stairwell. The goal was to get as close to the bottom step as possible without hitting it.
• Students tested their predictions and then reflected as a group and as a class on their results.

Group B – Lecture-Practice Format
• Students practiced the 3 methods as a class with the teacher.
• Students practiced the 3 methods by completing problems from a worksheet.

[Example data used with the EYEBALLING, AVERAGE, and 2-POINT form methods:]

Sandwich                        Total Fat (g)   Total Calories
Hamburger                       9               260
Cheeseburger                    13              320
Quarter Pounder                 21              420
Quarter Pounder with Cheese     30              530
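As a quick numerical check of the "line of best fit" for the data above, the sketch below uses a least-squares fit, which is one way to formalize the three hand methods; it is not part of the original lesson materials.

```python
# Least-squares line of best fit for calories (y) as a function of total fat (x).
import numpy as np

fat = np.array([9, 13, 21, 30], dtype=float)             # grams of total fat
calories = np.array([260, 320, 420, 530], dtype=float)   # total calories

slope, intercept = np.polyfit(fat, calories, deg=1)      # y = slope*x + intercept
print(f"calories ≈ {slope:.1f} * fat + {intercept:.1f}")  # calories ≈ 12.7 * fat + 149.9
```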

In the end, both groups were given the exact same assessment. By doing so, I could gain a better idea as to which group profited more from which method of teaching.

                       Group A (13 students)   Group B (6 students)
Class Average Score    62%                     42%
Highest Score          100%                    65%
Lowest Score           15%                     15%

The results show that, on average, the group who participated in the activity did better than the group who followed the traditional lecture-practice format. Most of the students did not gain a firm grasp of the concept. However, those from Group A understood better what the line of best fit is and what it is used for. On the other hand, Group B had a stronger knowledge of the 3 methods.

What Does this Mean? There is a lot more to the data than meets the eye. Although the results coincide with my hypothesis, there were several outside factors which affected the final outcome. •Students were not ready for the content being presented to them and I was misinformed in this matter •Changes were made last minute as to which students were to be taught •The classroom was not well secured and managed by the supporting teachers for group A •Materials were not supplied as originally agreed upon

Next Steps
1. Use required Field Hours to repeat the experiment
2. Investigate engineering designs for middle and high school math topics
3. Create additional math lessons with engineering designs
4. Work with a middle/high school math teacher to incorporate these approaches more scientifically

References
Town, J. R., & Espinosa, A. M. (2015, October). Racing Toward Algebra and Slope. Retrieved August 21, 2016, from http://www.jstor.org.www.library.manhattan.edu/stable/pdf/10.5951/mathteacmiddscho.21.3.0169.pdf
Cupillari, A. (2015, February). Math in a Can. Retrieved August 21, 2016, from http://www.jstor.org.www.library.manhattan.edu/stable/pdf/10.5951/mathteacher.108.6.0434.pdf


The Effects of Carbohydrate Swishing on Anaerobic Endurance and Muscular Strength Tedd Keating, PhD, CSCS*D, Lauren Dougherty, BS. PPT, Devin Prant, BS, PPT

Jasper Summer 2016 Research Scholar Program

Results

Background

Our results revealed no significant difference (p<.01) when we compared the power of the placebo (sugar-free Powerade) and the carbohydrate beverage (regular Powerade). There also was no significant difference between the grip strength values (p<.01).

The technique of carbohydrate rinsing is an innovative fitness technique that is being used in replacement of ingesting carbohydrates for immediate energy. Previous research has suggested that there may be unidentified oral receptors in the mouth that are solely activated in the presence of a carbohydrate. This suggests a direct mouth-to-brain connection (Chambers). It has also been shown to improve high-intensity endurance exercise performance (Jeukendrup).

Purpose

The purpose of this study is to compare the effects of carbohydrate swishing on lower-body anaerobic endurance and grip strength.

Methods

Sixteen recreationally fit college students (ages 19-24) volunteered to be subjects in our two-day experiment. Upon signing informed consent, the students were shown how to do the Manham Step Test (using a 17-inch Reebok Step) and how to use the grip strength dynamometer. Once they had mastered these movements, the students began their two-minute warm-up. We then handed out the unknown drinks, which were either the placebo (sugar-free Powerade) or the carbohydrate beverage (regular Powerade). The beverages were presented in a double-blind fashion and in a counterbalanced sequence. The students swished the beverage they were handed for ten seconds and then performed the grip strength test by squeezing the dynamometer for 5 seconds to assess isometric strength. Using the same drink (either sugar-free or regular Powerade), they rinsed it in their mouth for an additional 10 seconds and then immediately performed the Manham step test. This entails repeatedly stepping up and down on risers without stopping for one minute. Two days later, we repeated this trial with the same volunteers using the alternative drink. We recorded the number of steps they took during the Manham Step Test and their grip strength to compare the difference between the two days.
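A minimal sketch of how the two conditions in this crossover design could be compared statistically is shown below; it is a paired comparison across the two days, and the numbers are placeholders, not the study's data.

```python
# Paired t-test on step counts for the same 16 subjects under placebo vs. carbohydrate rinse.
# The arrays below are placeholder values, not the measured data.
import numpy as np
from scipy import stats

placebo_steps = np.array([52, 48, 55, 60, 47, 50, 53, 58, 49, 51, 56, 54, 50, 57, 52, 55])
carb_steps    = np.array([53, 47, 56, 59, 48, 51, 52, 60, 50, 50, 57, 55, 49, 58, 53, 54])

t_stat, p_value = stats.ttest_rel(carb_steps, placebo_steps)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p above the chosen alpha -> no significant difference
```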

Figure 3: Standard error of the means for power and grip strength, placebo vs. carbohydrate beverage.

Conclusion

Figures 1 and 2: Participants perform the Manham step test.

References
"Jeukendrup - Trusted Sports Nutrition Advice & Exercise Science News." Jeukendrup - Trusted Sports Nutrition Advice & Exercise Science News. N.p., n.d. Web. 26 Aug. 2016.
Chambers, E. S., M. W. Bridge, and D. A. Jones. "Carbohydrate Sensing in the Human Mouth: Effects on Exercise Performance and Brain Activity." The Journal of Physiology. Blackwell Science Inc, 15 Apr. 2009. Web. 26 Aug. 2016.

We can conclude from our two-day trial that carbohydrate swishing does not have an immediate effect on anaerobic endurance and muscular strength. Although our research, along with other studies, has not found statistical differences between the placebo and non-placebo, the pendulum still swings in a favorable direction. Many studies have shown that carbohydrate swishing seems to have an effect on longer-duration (30-75 min) activities rather than our short-duration one-minute step-up test and five-second grip strength test (Jeukendrup). Our research may suggest that the "unidentified oral receptors" have a reaction time more favorable to longer-duration aerobic activities.


An Experimental and Modeling Study to Optimize the Performance of a Knee Brace Veronica Valerio, Kelsey Thomas, Dr. Parisa Saboori, and Dr. Lisa Toscano Department of Mechanical Engineering and Department of Kinesiology

INTRODUCTION

Anterior cruciate ligament (ACL) injuries appear more frequently in women athletes than in male athletes. Anatomically, women compared to men have smaller intercondylar notches, restricting the movement of the ACL, thereby causing the femoral condyle to more easily damage the ACL. The pinching that this ligament undergoes then results in tears and ruptures. Also, women rely more on the ligaments in the knee for stability when landing after a jump by turning their feet inwards to compensate for a larger bending moment; and this improper landing style makes the pinching of the ACL even more likely. Many knee braces exist to resist this foot movement, but there is no study that compares the effectiveness of their ability to reduce ligament loading.

PROCEDURE

Scoring: 25-30 = Low-risk athlete, 11-24 = Potential-risk athlete, 7-10 = At-risk athlete


Anatomy of Knee

Last summer, a team at Manhattan College developed a finite element model of the leg that included the femur, knee, tibia, fibula, and associated muscles and ligaments. This model used a preexisting leg brace that was designed to implement a proper jumping stance to determine the stress the knee ligaments would undergo. The team was then able to create a new brace that resulted in better positioning during a jump.

OBJECTIVE

The goal of this project is: •  To examine the knee and the directions that it can move without producing excessive torque, and thereby help to provide a better understanding of which movements tear the ACL •  To develop a knee brace and determine if bracing can be used during training sessions to increase the awareness of safe movements •  To determine if the new brace will reduce stress on the ACL, while also ensuring that the brace does not lead to a new, different injury, and seeing if this brace can be used to help train safe jumping techniques •  To optimize the design by using the results from objective one to identify the parameters that affect ACL stress and thereby adjust these parameters to minimize the likelihood of injury.

METHODS AND MATERIALS

Jump Evaluation Form – Pre and Post Test: Screening the Athlete (Braced and Non-Braced Condition). A score is recorded for the left and right legs on each test.

• Single-leg stance (hold for 30 seconds): Flat feet (1 point); High arch (2 points); Normal (3 points)
• Single-leg squat: Knee tracks over inside foot (1 point); Knee tracks over first toe (4 points); Knee tracks over second and third toe (6 points)
• Jumping technique (10 reps): Knee travels inwards (1 point); Knee tracks over first toe (2 points); Knee tracks over second and third toe (3 points)
• Landing technique (10 reps): Knees bend but only slightly (1 point); Knees bend >30 degrees but track inward (4 points); Knees bend >30 degrees and track over second and third toe (6 points)
• Single-leg hop: Knee tracks over inside foot (1 point); Knee tracks over first toe (4 points); Knee tracks over second and third toe (6 points)
• Hamstring curls: Less than 40% body weight (1 point); Equal to 40% body weight (2 points); Greater than 40% body weight (3 points)
• Pelvis lift: Cannot lift one knee and keep pelvis level (1 point); Can lift one knee and keep pelvis level but low back muscles work (2 points); Can lift one knee and keep pelvis with no stress on back (3 points)

The program NX 9 was used to design the assembly of brace components and to simulate the motion of the knee.
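A small sketch of how the screening score could be tallied against the risk bands above follows; it is illustrative only, and the example item scores are made up.

```python
# Tally a jump-evaluation screening and map it to the risk bands used on the form
# (25-30 low risk, 11-24 potential risk, 7-10 at risk). Item scores are per-leg inputs.
def risk_category(item_scores):
    total = sum(item_scores)
    if 25 <= total <= 30:
        return total, "Low-risk athlete"
    if 11 <= total <= 24:
        return total, "Potential-risk athlete"
    return total, "At-risk athlete"

# Example: stance=3, squat=4, jump=2, landing=4, hop=4, hamstring=2, pelvis=2
print(risk_category([3, 4, 2, 4, 4, 2, 2]))  # (21, 'Potential-risk athlete')
```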

CONCLUSIONS

When the valgus deformity is present, the magnitude of the stress on the ACL is significantly larger than when the valgus is not present. The results of the screening test will determine whether the risk of an ACL tear is lowered with the use of the brace.

FURTHER RESEARCH

•  Further work will be conducted using a larger sample group to validate the findings. •  This study will be replicated with an emphasis on gender differences. •  More studies will be conducted to enlarge the data set associated with the degree to which bracing reduces ACL injuries in all populations.


ACKNOWLEDGEMENTS

• Dr. Parisa Saboori
• Dr. Lisa Toscano
• Dr. Graham Walker
• Dr. Rani Roy and the Center for Graduate School and Fellowship Advisement
• This project was funded by a grant from the Lasallian Research Scholars Program

REFERENCES

[1] K. Donald Shelbourne, Thorp J. Davis, Thomas E. Klootwyk. "The Relationship Between Intercondylar Notch Width of the Femur and the Incidence of Anterior Cruciate Ligament Tears: A Prospective Study". The American Journal of Sports Medicine (2016), 22(2).
[2] Robert McAlindon. "ACL Injuries in Women." Hughston Clinic.
[3] Mary Lloyd Ireland. "Why Are Women More Prone Than Men to ACL Injuries?" Hughston Clinic.
[4] Karl F. Orishimo, M. Liederbach, I. J. Kremenic, M. Hagins, E. Pappas. "Comparison of Landing Biomechanics Between Male and Female Dancers and Athletes, Part 1". American Journal of Sports Medicine, May 2014, vol. 42, no. 5, 1082-1088.
[5] Mark D. Tillman, Chris J. Hass, Denis Brunt, and Gregg R. Bennett. "Jumping and Landing Techniques in Elite Women's Volleyball". Journal of Sports Science and Medicine (2004), 3, 30-36.
[6] Bobbie S. Irmischer, Chad Harris, Ronald P. Pfeiffer, Mark A. Debeliso, Kent J. Adams, and Kevin G. Shea. "Effects of a Knee Ligament Injury Prevention Exercise Program on Impact Forces in Women". Journal of Strength and Conditioning Research (2004), 18(4), 703–707.
[7] Loraine Piccorelli, Erin Hamm, Lisa Toscano, Parisa Saboori. "An Experimental and Modeling Study of Injury Prevention Knee Brace", 2015 Symposium on Lasallian Research, 2015, MN.
[8] Hugh Herr. "The New Bionics That Let Us Run, Climb, and Dance". Dir. Hugh Herr. Perf. Hugh Herr, Adrienne Haslet-Davis. TED Talks, March 2014.
[9] Tyson, Alan and Ben T. Cook. Jumpmetrics. Champaign, IL: Human Kinetics, 2004. Print.

This is a collaborative work between the Mechanical Engineering and Kinesiology Departments.


SCHOOL OF ENGINEERING 2016 Summer Research


Saw Mill River: An Assessment of Water Quality in an Urbanized Stream Liliana M. Calix, Michael W. Frugis, Jessica M. Wilson PhD, Kevin J. Farley PhD Department of Civil and Environmental Engineering, Manhattan College

Introduction

Saw Mill River Daylighting

Time Trends: 2009-2016

The Saw Mill River is located in Westchester County, NY. The river is approximately 24 miles long and flows through largely suburban areas before reaching the City of Yonkers. Large stretches of the Yonkers portion of the river flow underground or are channelized by concrete walls. Past research has shown that the river had high levels of bacterial contamination in the Yonkers portion due to stormwater and illegal wastewater discharges (Derderian and Carbonaro, 2012).

Results from upstream and downstream sites were compared to previously collected data from 2009 to 2011. In comparison to these data, the fecal coliform concentrations upstream at S1 are similar throughout 2009-2016. Downstream at S5, there is a decrease in fecal coliform bacteria concentrations during dry weather events. This is likely due to the removal of waste discharges upstream of this site. Unfortunately, we were not able to construct time trends for the daylighted section of the river because data were not collected at S6a and S6b before the river was uncovered in 2012.

In recent years, a series of corrective actions were taken in Yonkers to improve the environmental quality of the Saw Mill River. These include daylighting portions of the river, redirecting stormwater, and identifying and removing waste discharges.


The objective of this research was therefore to determine if improvements on the Saw Mill River have resulted in improved water quality.

Methods

Total phosphorus increased for both dry and wet weather conditions as the river flows into Yonkers. This is due to urban runoff into the river as well as other wastewater sources from the City of Yonkers.

Sampling Sites (map courtesy of Groundwork Hudson Valley)

Sample Site   Name                                       Daylight Phase
S1a           Lawrence Street Upstream, Ardsley          N/A
S1b           Lawrence Street Downstream, Ardsley        N/A
S2            Hearst St., Yonkers                        N/A
S3            Odell Ave., Yonkers                        N/A
S4            Walsh Road, Yonkers                        N/A
S5            Mill Street, Yonkers                       Phase 2
S6a           Van der Donck Park Upstream, Yonkers       Phase 1
S6b           Van der Donck Park Downstream, Yonkers     Phase 1

[Figure: Fecal coliform (counts/100 mL) by sample site for the four 2016 surveys – 7/12/2016 (dry), 7/19/2016 (wet), 7/26/2016 (wet), 8/9/2016 (dry) – with dry- and wet-weather geometric means and rainfall]

Recommendations Since fecal coliform concentrations in the Saw Mill River are still above the New York State Water Quality Standards of 200 counts/100mL, further corrective actions should be considered by the City of Yonkers.

[Figures: Fecal Coliform Bacteria at Site 5, 2009-2016 (counts/100 mL, with precipitation); Total Phosphorus 2016 (mg/L PO4-P) by sample site; Fecal Coliform and Enterococcus geometric means by sample site, dry vs. wet]

Summer 2016 Results

In addition, fecal coliform and enterococcus bacteria concentrations were found to decrease between stations S6a and S6b during dry weather events. Since bacteria die-off at faster rates in sunlight, we believe that the concentration decreases are in part due to daylighting in this section of the river.

Fecal coliform and enterococcus bacteria also increase for both dry and wet weather conditions as the river flows into Yonkers, indicating stormwater and wastewater loads.

Two wet weather and two dry weather water quality surveys were conducted during the summer of 2016. A total of eight stations were sampled, starting in Ardsley and ending in Yonkers just before the Saw Mill discharges into the Hudson River. Samples were tested for various water quality parameters including pH, temperature, conductivity, total phosphorus, and nitrate. In addition, fecal coliform and enterococcus bacteria were measured as indicators of disease-causing organisms.
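The dry- and wet-weather geometric means reported in the figures can be computed as below; this is a sketch with made-up counts, not the survey data.

```python
# Geometric mean of fecal coliform counts (counts/100 mL) at one site.
# The counts below are placeholders for illustration.
import numpy as np

dry_counts = np.array([800, 1200, 450, 950])         # dry-weather samples
wet_counts = np.array([15000, 42000, 9800, 27000])   # wet-weather samples

dry_geomean = np.exp(np.mean(np.log(dry_counts)))
wet_geomean = np.exp(np.mean(np.log(wet_counts)))
print(f"Dry geomean: {dry_geomean:.0f} counts/100 mL")
print(f"Wet geomean: {wet_geomean:.0f} counts/100 mL")
```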

[Figure: Fecal Coliform Bacteria at Site 1, 2009-2016 (counts/100 mL, with precipitation)]

Sampling should be continued to document improvements in water quality.


A focused sampling study should be performed for the daylighted portion of the river to evaluate the effects of die-off and sedimentation in reducing fecal coliform and enterococcus bacteria concentrations.


Acknowledgments


This project was funded by the John D. Mahony Fund for Undergraduate Research and the Blasland, Bouck and Lee Endowment. The help of Ms. AnnMarie Mistroff (Groundwork Hudson Valley) and Mr. Andrew Grundy (PS&S) is gratefully acknowledged. We would also like to thank John Abbatangelo, Xiao Lin, Kathy Ammari and Nelson da Luz from the Environmental Engineering Research Laboratory for all of their help.


Testing and Application of a Modified Normalized Sediment Load Function for the Tidal, Freshwater Hudson River Nelson da Luz, Kevin J. Farley Ph.D. Department of Civil and Environmental Engineering, Manhattan College

[Table: Site-specific NSL regression parameters (non-flood log a1 and b1, flood log a2 and b2, Slog LN, break point) and drainage areas for the six tributaries – Hudson at Waterford, Mohawk at Cohoes, Rondout at Rondout, Catskill Creek at Catskill, Kinderhook at Rossman, and Roeliff Jansen at Linlithgo – along with the global fit: Global Fit (NSL G) -1.945 0.929 -1.976 2.339 0.332 0.951]

Research Objectives

The objectives of this study were threefold:

1. Use NY-USGS monitoring data to develop and test a modified version of the Normalized Sediment Load (NSL) function for relating daily flows to sediment loads.

2. Construct a 15-year continuous record of sediment loads from Lower Hudson tributaries.

3. Compare the sediment load record to observed sediment loads 83 miles downriver at Poughkeepsie, NY to evaluate trapping of sediment in the tidal, freshwater section of the river. Sediment trapping in the river may have important implications with respect to PCB contamination in the Lower Hudson River.

A Global NSL fit was developed using 13,959 normalized data points from all six tributaries, as shown below. Data were binned according to ranges of QN values. [Figure: Global NSL fit, log LN vs. log QN, with the frequency distribution of normalized flows]


The total mass percent error for a given tributary was examined for both site-specific (SS) and global (G) NSL parameters. Average errors for each tributary (along with the range of minimum/maximum error associated with individual flow bins) are shown to be less than ±35% for most tributaries. [Figure: SS Error and G Error by tributary]

[Figure: Observed loads at Poughkeepsie, total loads above Poughkeepsie, and Upper Hudson percent contribution, October 1999 - October 2014]


Observed suspended sediment loads at the USGS monitoring station below Poughkeepsie were compared to the estimated loads from tributaries that entered the Lower Hudson above Poughkeepsie to evaluate sediment trapping in the tidal, freshwater section of the river. The 90 day rolling average of sediment load contribution to the total above Poughkeepsie from the Upper Hudson is also shown.


Model Testing

Measured sediment loads were compared to calculated sediment loads using both site-specific (SS) and global (G) NSL parameters. The total mass percent errors were calculated for individual flow bins. Bins with more than 30 sediment load data measurements are represented by the filled symbols. [Figure: Mass percent error for SS and G parameters by flow bin]

[Figure: Cumulative loads (10^6 tons), October 1999 - October 2014 – Total Loads Above Poughkeepsie, USGS Measured Loads, NSL Estimated Loads]

Model Calibration

Paired observations of suspended sediment loads and flows were normalized and fit using separate regression lines for non-flood and flood conditions for data from six tributaries.

Although monitoring of sediment loads has been extensive, many gaps exist in the data records. In this study, a modified Normalized Sediment Load Function is evaluated as a means of filling in the data gaps for sediment loads in the tidal, freshwater Hudson.

Projections: Daily estimates of log LN for non-flood and flood conditions were determined directly from the regression equations. Since this calculation is based on NSL regressions that were developed in log space, computed log LN values correspond to the median or 50th percentile value of the probability distribution. Median log LN values were therefore converted into arithmetic means as follows:

$$L_N = 10^{\left(\log L_N + \frac{2.303}{2}\, S^2_{\log L_N}\right)}$$

Model Projections

NSL regressions (SS) were used to estimate sediment loads for periods with missing sediment load data. NSL estimated loads, along with measured loads and total sediment loads above Poughkeepsie, are presented below as cumulative loadings over a 15-year period.


Non-flood: $\log L_N = \log a_1 + b_1 \log Q_N$
Flood: $\log L_N = \log a_2 + b_2 \log Q_N$

Calibration: Determination of BP, log a1, b1, and b2 values was accomplished by minimizing the sum of the squared residuals about the regression lines for non-flood and flood conditions.


Drainage area was used to transform daily flows and sediment loads (Qd, Ld) into normalized flows and sediment loads (QN, LN):

$$Q_N = \frac{Q_d}{DA} \qquad L_N = \frac{L_d}{DA}$$

QN and LN were then fit using separate regression lines for non-flood and flood flow conditions assuming log-log relationships:

where BP is the break point (i.e. the delineation between non-flood and flood flow conditions).


The Hudson River drainage basin consists of three main subareas: (1) the upper Hudson from Mt. Marcy to Troy, (2) the Mohawk from Rome to Troy, and (3) the lower Hudson from Troy to New York Bay (Levinton and Waldman 2011). For the purpose of this study, the USGS monitoring site on the Hudson River below Poughkeepsie is considered the downstream limit of the tidal, freshwater portion of the estuary.

Method: The Normalized Sediment Load (NSL) function (HydroQual, 1996) is a six-parameter regression method used to estimate daily suspended sediment loads for rivers with limited or no sediment load data. A modified version of the NSL function is described here.

The intercept of the regression equation for flood conditions (log a2) was fixed and set as:

$$\log a_2 = \log a_1 + (b_1 - b_2)\,\log(\mathrm{BP})$$
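A minimal sketch of the normalization and two-segment log-log fit described above is given below. The column names, data file, drainage area, breakpoint value, and the simple least-squares routine are illustrative assumptions; the actual calibration minimizes the squared residuals while also optimizing BP and enforcing the intercept constraint.

```python
# Sketch of a modified-NSL style fit: normalize by drainage area, split at a breakpoint BP,
# and fit log-log regressions for non-flood and flood conditions. Illustrative only.
import numpy as np
import pandas as pd

df = pd.read_csv("tributary_loads.csv")   # hypothetical file with daily flow Qd and load Ld
DA = 1185.0                               # assumed drainage area (units as in the source data)
BP = 0.33                                 # assumed breakpoint between non-flood and flood flows

logQ = np.log10(df["Qd"] / DA)            # log10 of normalized flow QN
logL = np.log10(df["Ld"] / DA)            # log10 of normalized load LN

nonflood = logQ <= np.log10(BP)
# Non-flood segment: log LN = log a1 + b1 log QN
b1, log_a1 = np.polyfit(logQ[nonflood], logL[nonflood], 1)
# Flood segment: log LN = log a2 + b2 log QN
b2, log_a2 = np.polyfit(logQ[~nonflood], logL[~nonflood], 1)

# Continuity constraint at the breakpoint (as in the modified NSL function):
log_a2_constrained = log_a1 + (b1 - b2) * np.log10(BP)

print(f"log a1={log_a1:.3f}, b1={b1:.3f}, b2={b2:.3f}, constrained log a2={log_a2_constrained:.3f}")
```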


The Lower Hudson River receives sediment loads from the Upper Hudson and Mohawk Rivers at Federal Dam (Troy NY), and several other tributaries along the tidal, freshwater section of the river. The New York U.S. Geologic Survey (NY-USGS) has conducted extensive monitoring of sediment loads from the Upper Hudson, the Mohawk River, Catskill Creek, Rondout Creek and several smaller tributaries over the past several years.

Modified NSL Function


Introduction

The results of this analysis suggest that up to 7 million tons of sediment may have been retained in the tidal, freshwater section of the river. This is equivalent to an average accumulation rate of approximately 4 mm/year. Over the 15 year period, the Upper Hudson contributes approximately 30% of the total daily sediment load entering the Hudson above Poughkeepsie. The contribution of sediment load from the Upper Hudson has important implications on PCB contamination in the Lower Hudson River.

Acknowledgements This project has been funded through the Blasland Bouck & Lee Endowment.


Automated Quantitative Analysis of Terminal Tree Branch Similarity by 3D Registration
By: Joseph Brucculeri

Acknowledgement: The author is grateful to the Catherine and Robert Fenton Endowed Chair to Dr. L.S. Evans for financial support for this research.

The purpose of this project is to develop a method to quantitatively compare branch terminals of several tree species to determine if a reiterative branching pattern exists. A program in MATLAB will be written, tested, and modified to accomplish this task. To quantify the results, the Root-Mean-Square Error (RMSE) will be calculated between the branches. Up to eighty-five terminals from five tree species were tested.

Start MATLAB Program: Point Branch Comparison – User inputs information

Sample Terminal Branches

Y Branches

Y+1 Branches

User enters the Excel file, two cells containing the points describing the geometric structure of the branches, and decides whether or not to scale.

Write Point Information and Connection Matrices

[Sample Point Information Matrix: Point #, XYZ Coordinates, Point Tags]

Sample Connection Matrix

Terminal Y+1 Branches are first compared without the +1 (side branch below the Y).

Read Excel file and Write a Connection Matrix for each branch

Results and Discussion

Terminal branches were successfully compared quantitatively. Pairs were compared with and without the use of scaling. By scaling the branches to a similar size, their geometric features can be compared more accurately. Simple bifurcated terminals (Y branches) all had RMSE values of less than 1.5, which indicates that the branches were similar. When an additional side branch is considered (Y+1 branches), RMSE doubled for most species, indicating a decrease in similarity (Table 1).

Table 1
RMSE        Brief Description
< 0.5       Identical
0.5 - 1.0   Very Similar
1.0 - 1.5   Similar
1.5 - 2.0   Fairly Similar
2.0 - 3.0   Barely Similar
> 3.0       Distinct

Begin to write a Point Information Matrix for each branch:
• Store the point number in the first column
• Store XYZ coordinates in columns 2:4
• Tag each point by level and order and store in columns 5:6 of the Information Matrices

[Figure: Sample terminal branches, scaled (S) and unscaled (U), Y and Y+1, for Cornus florida, Zelkova serrata, Tilia americana, Acer palmatum, and Morus rubra]

Prepare Branches for Comparison by Adding Artificial Points

[Example level and order tags]

Both tags are vital for an accurate branch comparison.

Scan the first branch for 0-Tag Points and for each:
• Find the two whole-number order Tag Points (A & B) it lies in between
• Find its relative location between Points A & B
• Find Points A & B on the second branch, and all the points that make the path between them
• Find the XYZ coordinates of a new point along the path such that it is in a relative location of the 0-Tag Point on the first branch
• Add the XYZ coordinates as a new point with number and tags of zero to the second information matrix
Repeat the process, but scanning the second branch and placing the new point in the first branch's information matrix.

An equal number of points is needed to quantitatively compare the branches. Artificial points are added in relative positions to compare more accurately.

The figures below highlight two terminal branches. Figure 1 quantifies two simple Y branches and Figure 2 quantifies two Y+1 branches.

Scaling of Branches to better compare

Scaling? Yes

Calculate Scale Factor by finding the ratio between the branch lengths

• Multiply the first branch by the scale factor
• Register the branch point matrices with the rot3dfit MATLAB function
• Transform the Connection Matrix of the first branch
• Calculate the Root-Mean-Square Error (RMSE) to quantify dissimilarity
• Plot the registered branches for observable comparison
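For readers without MATLAB, the registration and RMSE step can be sketched as below with a generic Kabsch-style rigid alignment in Python; rot3dfit itself is the MATLAB routine the program actually uses, and the coordinates here are placeholders.

```python
# Rigid 3D registration (Kabsch algorithm) of two matched point sets, then RMSE.
# A generic sketch of the registration step; not the program's rot3dfit code.
import numpy as np

def register_and_rmse(P, Q):
    """Align point set P onto Q (both n x 3, points in matching order) and return RMSE."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)      # center both sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)                  # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T              # optimal rotation
    P_aligned = Pc @ R.T + Q.mean(axis=0)                # rotate and translate onto Q
    return np.sqrt(np.mean(np.sum((P_aligned - Q) ** 2, axis=1)))

# Example with two small, roughly similar "branches" (placeholder coordinates):
P = np.array([[0, 0, 0], [0, 0, 1], [0.4, 0, 1.8], [-0.4, 0.1, 1.7]], float)
Q = np.array([[0, 0, 0], [0, 0.1, 1], [0.5, 0, 1.9], [-0.3, 0.1, 1.8]], float)
print(round(register_and_rmse(P, Q), 3))
```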

Scaling? No: Set the scale factor to 1.

[Figure 1: Registered simple Y branches; Figure 2: Registered Y+1 branches. RMSE values shown on the plots: 0.46, 1.01, 3.76, 3.82.]


Determining the Structures of ZSM-18 and SUZ-9 by Eric A. Castro


Department of Chemistry and Biochemistry, Manhattan College Introduction to Zeolites

Conclusion

The ATOMS Computer Program

Zeolites are hydrated aluminosilicate minerals made from inter-linked tetrahedra of alumina (AlO4) and silica (SiO2) [1]. There are about 40 natural zeolites, and more than 150 zeolites have been synthesized. The most commonly mined forms include chabazite, clinoptilolite, and mordenite [2]. – They are solid materials, containing pores and cavities. The pores allow zeolites to store water molecules, thus making them hydrated. – Their porous structure allows them to be utilized as molecular sieves for industrial separation and adsorption processes [3]. – Zeolites possess catalytic properties which can be applied to petroleum processing, petrochemicals, and pollution control [4].

Research on SUZ-9: It is highly probable that SUZ-9 is the largest in a family of 12-ring zeolites (Table 1). All members of the family have hexagonal symmetry and similar chemical adsorption properties consistent with 12-ring pores. By studying the five different zeolites within this family (see Table 1), it was discovered that there are only five basic building units that combine to assemble the five known zeolites.

Table 1. Family of Zeolites

LTL, CAN, d6R, GEM, PAU

Experimental Methods

The GSAS-II Computer Program

How do we determine the topology and structure of a new zeolite? Framework topology refers to the geometrical array in three-dimensional space of the basic tetrahedral structural units [5]. I was able to construct the building unit PAU (shown in Diagram 1) by looking into the Zeolite Database and discovering it was an eight-ring structure. Once I had constructed the building unit PAU, Christine Schmidt was able to construct her most successful model (illustrated in Figure 1). Through working with framework topology, research students gain a geometric understanding of the spatial relations of a structure and how its constituent parts are interrelated or arranged.

We installed and learned how to use GSAS-II [8], an up-to-date integrated collection of the most powerful crystallographic programs for powder X-ray diffraction data. It handles all the steps in diffraction analysis, such as data reduction, peak analysis, indexing, Pawley fits, small-angle scattering fits, and structure solution in addition to structure refinement. It can be used with large collections of related datasets for repeated refinements and for parametric fitting to these results. The two main reasons we were eager to use GSAS-II are the ability to use Charge Flipping [8] and Monte Carlo/Simulated Annealing [8] techniques for solving crystal structures.

Introduction to Simulated Annealing


Figure 1. CS3 Model With New PAU Units

Using X-ray Powder Diffraction Data: X-ray diffraction is a common technique for the study of crystal structures and atomic spacing. Its primary application is the identification of unknown crystalline materials. X-ray diffraction can be implemented through the powder diffraction or single-crystal approach. Synchrotron powder diffraction data for SUZ-9 and ZSM-18 were provided by Dr. J.M. Bennett. By analyzing powder diffraction data, the unit cell, and possibly the symmetry, can be determined for an unknown structure. Decomposing the powder pattern into integrated intensities (shown below in Figure 2) usually allows the space group, and possibly the structure, to be determined by powerful crystallographic programs [6].

Figure 2. SUZ-9 powder X-ray pattern


ATOMS [7] is a sophisticated computer program designed for displaying structural results.

Simulated Annealing is an algorithm that is able to find a good-enough solution in a reasonable amount of time. Because we cannot realistically expect to find the optimal solution within a sensible length of time, we settle for something that is close enough within a short timeframe. The simulated annealing algorithm was originally inspired by the process of annealing in metalwork. Annealing involves heating and cooling a material to alter its physical properties due to the changes in its internal structure. In our work, we hope to anneal building units (shown in Diagram 1) to produce the framework topology of SUZ-9.

Simulated Annealing in GSAS-II
The Simulated Annealing routine in GSAS-II is combined with Monte Carlo techniques. We are learning how to use Monte Carlo/Simulated Annealing (MC/SA) because other methods, including Charge Flipping, did not produce a complete solution for the SUZ-9 structure. The steps for this method begin with peak selection and fitting, then indexing to identify the lattice parameters. A Pawley refinement is used to obtain a set of structure factors needed for the MC/SA runs. We are just now learning how to use the MC/SA program.
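A generic simulated-annealing loop looks like the sketch below. This is illustrative Python for an abstract "energy" function; the actual MC/SA runs are performed inside GSAS-II against the Pawley-extracted structure factors.

```python
# Generic simulated annealing: accept uphill moves with probability exp(-dE/T)
# while the "temperature" T is gradually lowered. Illustrative only.
import math
import random

def simulated_annealing(initial, energy, neighbor,
                        t_start=1.0, t_end=1e-3, cooling=0.95, steps=100):
    state, e = initial, energy(initial)
    best, best_e = state, e
    t = t_start
    while t > t_end:
        for _ in range(steps):
            candidate = neighbor(state)
            ce = energy(candidate)
            de = ce - e
            if de < 0 or random.random() < math.exp(-de / t):  # Metropolis acceptance
                state, e = candidate, ce
                if e < best_e:
                    best, best_e = state, e
        t *= cooling  # cool down
    return best, best_e

# Toy example: find the minimum of a 1-D "energy" landscape.
result = simulated_annealing(
    initial=5.0,
    energy=lambda x: (x - 2.0) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
)
print(result)
```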

At present, physical model building combined with hints from powerful theoretical structure solving programs have only given hints about the structure of SUZ-9. It is imperative to understand the experimental methods behind any research project. In the case of determining unknown structures such as SUZ-9, it is not certain that one method may work or not. Backtracking steps and looking into new approaches can allow for new results to emerge as in the case with applying the Monte Carlo/Simulated Annealing.

References

1) IBRAHIM, SITI AIDA BINTI. “SYNTHESIS AND CHARACTERIZATION OF ZEOLITES FROM SODIUM ALUMINOSILICATE SOLUTION. “UNIVERSITI SAINS MALAYSIA INSTITUTIONAL REPOSITORY 1, no. 1 (2007): 2-8 2) Ober, Joyce A. "Mineral Commodity Summaries 2016." Mineral Commodities Summaries (2016): 1-2. Jan. 2016. Web. 8 Aug. 2016 3) Roberts, C.W. “Molecular Sieves for Industrial Separation and Adsorption Application” The Property and Applications of Zeolites: The Proceedings of a Conference organized jointly by the Inorganic Chemicals Group of the Chemical Society and the Society of Chemical Industry, The City University, London, 18th-20th April 1979. Ed. R.P. Townsend London: Chemical Society, 1980. Print 4) Vaughan, D.E.W. “Industrial Uses of Zeolite Catalysts” The Property and Applications of Zeolites: The Proceedings of a Conference organized jointly by the Inorganic Chemicals Group of the Chemical Society and the Society of Chemical Industry, The City University, London, 18th-20th April 1979. Ed. R.P. Townsend London: Chemical Society, 1980. Print 5) Breck, D.W. “Potential Uses of Natural and Synthetic Zeolites in Industry” The Property and Applications of Zeolites: The Proceedings of a Conference organized jointly by the Inorganic Chemicals Group of the Chemical Society and the Society of Chemical Industry, The City University, London, 18th-20th April 1979. Ed. R.P. Townsend London: Chemical Society, 1980. Print 6) David, W. I. F. Structure Determination from Powder Diffraction Data. Oxford: Oxford UP, 2002. Print. 7) Ravel, Bruce. "ATOMS : Crystallography for the X-ray Absorption Spectroscopist." J Synchrotron Radiat Journal of Synchrotron Radiation J Synchrotron Rad 8.2 (2001): 314-16. Web. 8) Toby, Brian and Von Dreele, Robert (2013). J. Appl. Cryst. 46, 544

Acknowledgements Financial support from The Camille and Henry Dreyfus Foundation Senior Scientist Mentor Program, and participation in the Summer Research Scholars Program at Manhattan College is gratefully acknowledged. Thanks to Dr. R. Kirchner, mentor, and Chrissy and Gertrude for allowing me to work with great individuals.


Biochemical Methane Potential of Sludge and Food Waste for an Existing Wastewater Treatment Plant’s Co-Digestion Upgrade

Introduction 

Anaerobic digestion of sewage sludge is a well known and often used method of solid stabilization

Biogas production, rich in methane, can be collected and used to reduce a wastewater treatment plant’s (WWTP) operating cost

Co-digestion is the addition of organic substrates with the goal of increasing methane production while reducing landfill waste

Tim Conway, T.J. Bolen, Hossain Azam Civil & Environmental Engineering Department, Manhattan College, hossain.azam@manhattan.edu

Materials & Methods

A New York state WWTP is upgrading to perform Co-digestion and an evaluation of expected performance is required

The Biochemical Methane Potential Assay is a simple and reproducible analysis used to investigate the co-digestion of various substrates

Anaerobic Digestion  Series of processes, that occur in an oxygen free environment, in which microorganisms break down organic matter  Methane (CH4) and CO2 gases are produced as a result of these processes

Objectives 

Operate 2 lab-scale digesters, one experimental and one control, to evaluate the co-digestion of food waste with biomass

Feed the experimental digester various substrates along with the usual primary sludge to optimize methane production

Perform Biochemical Methane Potential (BMP) assays to determine the optimal mixture of various substrates

Examine the kinetics of biogas production for the mono- and co-digestion of various substrates

BMP Results

Co-digestion wastes include cheese whey, grease interceptor waste, food processing byproducts, and pulped food waste. Measurements of TS, VS, TSS, and VSS were performed according to Standard Methods. Total COD, soluble COD, ammonia, orthophosphate, alkalinity, and volatile acids were determined using HACH kits.  With the exception of Total COD, all samples were filtered through a 0.45 µm membrane before analysis

Digester Experiment

 The gas produced by each digester travels through a wet tip gas meter  Each tip is calibrated to 100 mL of gas produced  Tips are recorded by HOBO data-logging pendants  Digesters are run at a Solids Retention Time (SRT) of 30 days  All substrates fed are at a total solids concentration of 2.5-3%  Operating temperature is maintained at 35°C

Digester Results  The reactors showed similar gas production while both being fed Primary Sludge  The addition of Waste Activated Sludge (WAS) to Reactor 2’s feed caused a significant reduction in the volume of biogas produced

BMP Experiment

 Substrates: Primary Sludge, Waste Activated Sludge, and Cheese Whey  Inoculum: Digestate from WWTP  Each bottle contains a specific mixture of substrate and inoculum based on their volatile solids content  A set of control bottles containing only digestate and secondary effluent were used to account for the methane produced from organic matter within the digestate

BMP Experiment  The bottles were flushed with nitrogen gas and sealed before they were placed in an incubator at 35°C  The methane production was measured daily by passing the gas through a 2N NaOH solution to capture the CO2 and H2S which are also produced during the process

 Cheese Whey has a high volumetric methane potential due to its high volatile solids content. It has potential to be a favorable substrate for co-digestion.

 The co-digestion of cheese whey and WAS produced the most methane per gram of volatile solids added.

Conclusion

 The performance of Reactor 2 during the co-digestion of waste activated sludge and primary sludge was representative of the results achieved from the Biochemical Methane Potential assays.  Waste activated sludge proved to be an inefficient additive substrate for co-digestion, with a specific methane yield of 182 mL CH4 per g VS added; when combined with cheese whey, however, it showed significantly better biogas production.  The results of the BMP assays showed the potential for cheese whey to be a viable additive substrate for co-digestion, with a specific methane yield of 563 mL CH4 per g VS added.

Future Work  Investigate the Biochemical Methane Potential of other waste streams such as grease interceptor waste, food processing by-products, and pulped pre-consumer food waste

BMP Results

 The methane production over time shows the kinetics for the bio-degradability of different substrates as well as the specific methane yield  The microorganisms need more time to break down certain organic matter, as shown by the substrates containing cheese whey
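Cumulative methane curves like these are often summarized with a first-order model, B(t) = B0*(1 - exp(-k*t)). The poster does not state which kinetic model was used, so the sketch below, with invented data points, is only one plausible way such a fit could be made.

# Hypothetical first-order fit to cumulative methane data (all numbers are illustrative).
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0, 2, 5, 10, 15, 20, 30], dtype=float)
methane = np.array([0, 90, 200, 330, 420, 470, 510], dtype=float)  # mL CH4 per g VS (made up)

def first_order(t, b0, k):
    """Cumulative methane: ultimate yield b0 and first-order rate constant k."""
    return b0 * (1.0 - np.exp(-k * t))

(b0, k), _ = curve_fit(first_order, days, methane, p0=[500.0, 0.1])
print(f"B0 ~ {b0:.0f} mL CH4/g VS, k ~ {k:.2f} 1/day")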

 Analyze the co-digestion of primary sludge and various waste streams for the experimental digester  Obtain microbial analyses for BMP assays and Reactors

Acknowledgments The author thanks the Manhattan College Faculty Start-Up Grant and John Mahony Summer Scholarship for funding this research.


Results and Conclusion

Background

For many years humans have admired frogs for their physical behaviors, abilities and biomechanics. Humans have even adopted certain behaviors of frogs, such as the frog jump (i.e. leapfrog) and the breaststroke in swimming. The natural abilities of frogs are therefore worth imitating and researching. Using robots to imitate animal behaviors is a valuable tool for further studying and understanding animal movements and operations.
Figure 1a) Motion of the model frog legs.

Materials and Methods

The simplified physical model of the frog legs involves creating two different pairs of legs using the software NX9: one short pair and one long pair. Through the use of compression springs, the short legs simulate the opening of the legs and the contraction phase of the breaststroke (Figure 4). The long pair of legs with tension springs simulates the extension and closing of the legs during the breaststroke (Figure 5).

Figure 1b) Breast stroke demonstration for comparison.

Figure 8 Graph associated with the Force test

Figure 9 Graph produced by theory

A mathematical model for the system (Fig. 11) was developed by using Newton’s second law to determine the system force (Fig. 10).

Much research has been conducted on robotic animal simulations such as fish and birds; however, little to no work has been done to study the biomechanics of the way in which a frog swims. The breaststroke is an energy-efficient way of swimming; frogs, as well as people, can swim for long periods because it does not use as much energy as other strokes (Davidson, 2014). Therefore, due to its many advantages, the breaststroke is a commonly used swimming technique.

Figure 10: Derivation of force equation

The purpose of this experiment is to research the biomechanics associated with the swimming motion of a frog in an effort to identify features that might be applicable to engineering scenarios. It also aims to integrate fluid mechanics and robotics in order to create a model for a robot that imitates the mechanisms involved in the breaststroke, or the frog swim. The first motion of the frog's leg kick was examined by analyzing and testing a simplified physical model (Figure 2 and Figure 3). Then a mathematical simulation of the experimental system was developed to allow the dynamics of the model to be studied in more depth. This research is important because studying the mechanisms involved in a frog's swimming capabilities could lead to improvements in swimming techniques, the development of future water vehicles, and the design of prostheses for amputees or injured animals who wish to swim.

Figure 11 system for mathematical model

The theoretical (Fig. 9) and experimental (Fig.8) forces are within the same order of magnitude. Deviations between the two could have resulted from drag forces and water motions that were unaccounted for during the experimental trial.

Introduction

Figure 2: Long legs model to simulate the extension and closing phase of the "frog kick".

The forward force produced by the long legs (Fig. 5) is due to the large body of water between the legs, when the legs are open, being forced out into the surrounding area, when the legs are closed. This works because of Newton’s second law of motion; the forward force is associated with the rate of change of linear momentum of the water being expelled. Equally, the short legs (Fig 4) produced a small backwards force because as the legs open more water enters into the space between the legs and as the water rushes in a small backwards thrust results. The force test allowed this force to be measured experimentally (fig. 8).
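As a rough numerical illustration of this momentum argument (not a result of the force test), treating the closing legs as expelling a slug of water of mass m over a stroke time dt at speed v gives an average thrust of about F = m*v/dt. All numbers below are assumptions chosen only to show the arithmetic.

# Illustrative thrust estimate from the rate of change of momentum of expelled water.
rho_water = 1000.0      # kg/m^3, density of water
expelled_volume = 2e-4  # m^3 of water pushed out from between the legs (assumed)
exit_speed = 0.5        # m/s, speed of the expelled water (assumed)
closing_time = 0.3      # s, duration of the leg-closing stroke (assumed)

mass = rho_water * expelled_volume
thrust = mass * exit_speed / closing_time   # F = d(mv)/dt averaged over the stroke
print(f"average thrust ~ {thrust:.2f} N")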

What's Next

The next step for this experiment is to build an actual robotic model to simulate the "frog kick" used in swimming. This will involve designing a system that employs two prismatic joints acting in unison to simulate the extension and retraction of the legs, while two revolute joints will simulate the opening and closing of the legs (Figure 1a). The legs will be designed using NX9 (Figure 12) and then 3D printed. Subsequently, the system will be controlled using an Arduino controller. The system will then be tested to evaluate its performance with respect to the power and speed of the system for different design and operating parameters (e.g. leg lengths and actuator cycle times).

Figure 4: Short leg experiment demonstrating the small backward motion produced by the opening and retraction of the legs during the breaststroke. Figure 5: Long leg experiment demonstrating the forward motion produced by the extension and closing of the legs during the breaststroke.

A force test was also conducted on the long legs. The force test involves using a PASCO Xplorer GLX, a portable data logging device, and a force sensor to obtain an experimental value and a graph of the force produced by the legs (Figure 7).

Figure 12 NX9 design of the robotic model legs with prismatic joints.

In addition, a program with a fluid-solid interaction (FSI) capability as part of

Figure 3: Short legs model to simulate the retraction and opening phase of the "frog kick". Figure 6: PASCO Xplorer GLX and force sensor.

Student: Kathia Coronado. Advisor: Dr. Graham Walker, Mechanical Engineering Department. Acknowledgements: Dr. Rani Roy, the Center for Graduate School and Fellowship Advisement, and the Lasallian Research Scholars Program.

Figure 7 Demonstration of the force test on the long legs model.

its computational fluid dynamics (CFD) multiphysics software package (COMSOL) will be used to create a computer simulation of the physical model of the system to study the fluid dynamics associated with the motion of the system as it travels through water. The purpose of this simulation is to identify the dynamics associated with the fluid motion produced by the leg motion. The fluid flow motion will also be used to validate the simulation model by comparing the results to the experimental data. Finally, subsequent simulations will attempt to determine the optimal system configuration to maximize the model velocity and to find the minimum system power needed to operate the physical model.


Feasibility of Remote Sensing for Comprehensive Assessment of Water Quality of Inland Lakes in New York Student: James Curra Advisor: Dr. Kirk Barrett Manhattan College’s 2016 Jasper Summer Research Scholars Introduction

• New York, as well as other states, devotes great effort to assessing the quality of its inland lakes. However, the costs of conventional water sampling and analysis are too large to permit a comprehensive spatial and temporal assessment of such waters. • Remote-sensing (RS) methods enable comprehensive monitoring because of their low cost, large spatial extent, and detailed spatial resolution. • Methods for using satellite images, such as NASA's Landsat Thematic Mapper (TM), to assess water quality have been developed to some degree. • Waterbodies reflect wavelengths that are recorded by various band sensors on the satellites (Figure 1). • The reflectance, or brightness, measured by the bands should correlate to parameters such as Secchi disk depth (SD).

Methods

1. Collect ground measurement data.
2. Collect Landsat images taken +/- 7 days from the ground measurement, as per Hellweger et al., and with <10% cloud coverage, as per Kloiber et al. About 10 images were used per lake, or as many as could be obtained.
3. Input Landsat images into image processing software (ArcGIS).
4. Extract spectral data from a pixel with no disturbances (haze, clouds, reflections in water).
5. Use MS Excel to calculate the correlation between in-situ data and satellite measurements for each lake individually and for all lakes together.
6. Calculate a, b, and c for each lake individually and for all lakes together.
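Step 6 amounts to an ordinary least-squares fit of lnSD = a(Band1) + b(Band1/Band3) + c. The study performed this in MS Excel; the sketch below shows the same fit in code with invented reflectance and Secchi values, purely for illustration.

# Least-squares fit of the Kloiber-form equation to hypothetical data.
import numpy as np

band1 = np.array([60.0, 55.0, 70.0, 52.0, 65.0])     # made-up Band 1 reflectances
band3 = np.array([30.0, 25.0, 40.0, 22.0, 33.0])     # made-up Band 3 reflectances
secchi_m = np.array([3.2, 4.1, 1.8, 4.6, 2.4])       # made-up in-situ Secchi depths (m)

# Design matrix: [Band1, Band1/Band3, 1] so the solution is [a, b, c].
X = np.column_stack([band1, band1 / band3, np.ones_like(band1)])
a, b, c = np.linalg.lstsq(X, np.log(secchi_m), rcond=None)[0]
predicted_sd = np.exp(X @ [a, b, c])
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")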


Figure 5: Plots of R2 by trophic state for a) Kloiber equation, b) Blue (1) band equation, c) Red (3) band equation, and d) 1/3 equation. Median appears in red. Oligotrophic N = 4. Mesoligotrophic N = 7. Mesotrophic N = 11. Mesoeutrophic N = 4. Eutrophic N = 9. All N = 35. RMSE for All: a) 0.48, b) 0.43, c) 0.43, d) 0.46.

Project Description

Table 1: Lakes Used in Model

Results (cont.)

Results

Figure 1: Landsat TM band specifications.

• An equation correlating the band reflectances to in-situ measurements of Secchi disk depth will be created. This equation will be made to apply to all of New York's lakes. Kloiber et al. created a widely accepted equation, lnSD = a(Band1) + b(Band1/Band3) + c, with a, b, and c changing as necessary. • Bands 1 (Blue) and 3 (Red) are the most crucial bands for this assessment, as band 1 penetrates surfaces better than other bands (but is most susceptible to atmospheric scattering) and vegetation absorbs nearly all red wavelengths. • Although previous studies have been done on the feasibility of remote sensing for SD, no large-scale studies have been conducted on New York, and most studies focus on creating equations per image. • In this study, 35 lakes of varying trophic states will be assessed throughout all of New York. As many lakes on NY DEC's 303(d) Impaired Waters list with available data as possible were chosen (Table 1).


Figure 2: Correlation between predicted Secchi disk equations and actual Secchi disk depth.

Conclusion/Discussion

• Remote-sensing of water quality may not be as feasible as anticipated. • Overall, a slight correlation between the spectral data and the ground measurements exists (Figure 2, Figure 3), but the correlations for individual lakes range from very high to nearly non-existent. • Geography or region does not appear to influence the success of the equation proposed by Kloiber (Figure 4). • There is no apparent correlation between the trophic state of a lake and the success of this model, although there is insufficient evidence due to a lack of data (Figure 5).

Figure 3: Predicted SD (m) via the Kloiber equation vs. actual SD (m) via ground measurements. N = 352. R2 = 0.40. RMSE = 1.507. SEE = 0.96.

Acknowledgements • Manhattan College’s Center for Graduate School and Fellowship Advisement • New York State DEC/CSLAP

References
• Hellweger, F.L., Schlosser, P., Lall, U., Weissel, J.K. (2004). Use of satellite imagery for water quality studies in New York Harbor. Estuarine, Coastal and Shelf Science, 61(3), 437-448.
• Kloiber, S.M., Brezonik, P.L., Olmanson, L.G., Bauer, M.E. (2002). A procedure for regional lake water clarity assessment using Landsat multispectral data. Remote Sensing of Environment, 82, 38-47.

Figure 4: Map of research area divided by physiographic region and R2.


Reducing Steel Corrosion with Mine Tailings Students: Felipe DeMelo and Feksi Basha Faculty Advisor: Dr. Nossoni

Introduction

Over time, steel rebars in reinforced concrete experience corrosion due to chloride ions entering the system from seawater and deicing salts. Chloride ions are single-handedly the largest factor in corrosion on highways and bridges (Lambert). According to NACE (the National Association of Corrosion Engineers), the United States spends around $8.3 billion per year repairing highways and bridges due to corrosion (Virmani). The goal of this research is to take advantage of a byproduct of mine processing operations and use it in reinforced concrete to diminish corrosion, thus increasing the lifetime of the concrete as well as reducing the amount of chat piles accumulated over time. Several tests were performed in order to understand the impact of adding chat to the concrete. In accordance with standard ASTM methods, testing for acid-soluble chloride in concrete measures the total chloride content, whereas testing for water-soluble chloride in concrete determines the free chloride content. The difference between total chloride and free chloride gives the bonded chloride, also known as binding chloride, in concrete, and this indicates how much chloride content the concrete can withstand before the chloride infiltrates to the steel bars. A Toxicity Characteristic Leaching Procedure (TCLP) was conducted in order to determine the concentration of leachable heavy metals such as lead and zinc. The non-steady-state chloride migration coefficient, determined by the Nordtest Method, provides a measure of the resistance of the tested material to chloride penetration. A compression test was also performed in order to determine the compressive strength of the concrete.

Methods and Materials Acid-Soluble Chloride Test Procedure

1. Weigh 10 grams of concrete sample and transfer to a 250 mL beaker.
2. Add 75 mL of water to the sample.
3. Carefully disperse the sample with 25 mL of dilute (1+1) nitric acid and stir with a glass rod.
4. Pipet 3 mL of hydrogen peroxide if the smell of hydrogen sulfide is heavily present.
5. Add 3 drops of methyl orange indicator and stir.
6. Use a watch glass to cover the beaker and let the sample stand for 1 to 2 minutes.
7. Note the color of the solution above the settled concrete sample. Slowly stir and add drops of nitric acid if the color of the solution is NOT a pink or reddish color.
8. Add 10 additional drops of nitric acid once the solution has turned a pink or reddish color.
9. Cover the beaker and allow it to boil for a few seconds.
10. Remove the beaker from the hot plate and filter the sample through a coarse-textured filter paper in a 500 mL Buchner funnel and a filtration flask using suction.
11. Transfer the solution back to the 250 mL beaker and allow it to cool to room temperature.
12. Add 2 mL of 0.05N NaCl solution (2.9222 grams of NaCl per 1 liter of water).
13. Set the beaker on top of a magnetic stirrer and let the solution mix with a stir bar.
14. Obtain a millivolt reader.
15. Immerse the electrodes of the millivolt reader into the solution.
16. Do not allow the stir bar to strike the electrodes.
17. Fill a burette with 0.05N AgNO3 solution.
18. Gradually titrate in 0.20 mL increments.
19. Continue adding 0.05N AgNO3 until the large changes on the millivoltmeter stop.
20. Once the solution passes the equivalence point, the change per increment will decrease.
21. Create a blank determination using 75 mL of water in place of the sample.

Water-Soluble Chloride Test Procedure
1. Weigh 10 grams of concrete sample and transfer to a 250 mL beaker.
2. Add 50 mL of water to the sample.
3. Cover the sample with a watch glass and allow it to boil for 5 minutes.
4. Leave the sample for 24 hours.
5. Filter the sample through a coarse-textured filter paper in a 500 mL Buchner funnel and a filtration flask using suction.
6. Transfer the solution back to the 250 mL beaker.
7. Add 3 mL of dilute (1+1) nitric acid and 3 mL of 30% hydrogen peroxide solution.
8. Repeat steps 9-20 of the acid-soluble test procedure above.

Acid and Water Soluble

Discussion

Results

Table 1: Acid-Soluble Chloride Test
Batch | mL AgNO3 | N      | %Cl     | ppm
F0    | 10.827   | 0.0231 | 0.04696 | 47.0
F25   | 11.570   | 0.0216 | 0.04963 | 49.6
F50   | 9.323    | 0.0268 | 0.04023 | 40.2
F75   | 13.139   | 0.0190 | 0.05429 | 54.3
F100  | 12.899   | 0.0194 | 0.0536  | 53.6

Table 2: Water-Soluble Chloride Test
Batch | mL AgNO3 | N     | %Cl   | ppm
F0    | 8.636    | 0.029 | 0.036 | 36.4
F25   | 9.391    | 0.027 | 0.041 | 40.6
F50   | 7.705    | 0.032 | 0.030 | 30.1
F75   | 10.187   | 0.025 | 0.044 | 44.3
F100  | 10.232   | 0.024 | 0.045 | 44.5

When performing the acid- and water-soluble tests, we determined that there is little to no change in the amount of bonded chloride found within the different samples, with the biggest change between samples being 1.5 mg/L. In addition, the compressive strength of the concrete decreased as the percent of fine chat increased: there was approximately a 4,100 psi difference between the concrete mix that contained no chat and the mix that contained 100% chat. As for the coefficient of migration test, using the Nordtest method, the coefficient of migration decreased as the percent of fine chat increased; the greater change in the coefficient occurred from zero percent chat to twenty-five percent chat, as shown in Figure 6. While attempting to obtain a concentration of lead for each batch, the AA machine (Atomic Absorption Spectroscopy) did not provide a reading. In this case, the concentration of lead for all batches is so low that the machine is not sensitive enough to detect it. On the other hand, as the percent of fine chat increased, the zinc concentration increased; as a result, 100% chat contained approximately 0.25 mg/L of zinc.

Table 3: Bonded Chloride in Concrete
Batch | Acid Soluble (ppm) | Water Soluble (ppm) | Difference (ppm)
F0    | 47.0 | 36.39 | 10.6
F25   | 49.6 | 40.58 | 9.1
F50   | 40.2 | 30.07 | 10.2
F75   | 54.3 | 44.34 | 9.9
F100  | 53.6 | 44.53 | 9.1
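The bonded-chloride column in Table 3 is simply the acid-soluble (total) chloride minus the water-soluble (free) chloride; reproducing the table rows in code:

# Bonded (binding) chloride = total (acid-soluble) minus free (water-soluble) chloride, in ppm.
acid_soluble = {"F0": 47.0, "F25": 49.6, "F50": 40.2, "F75": 54.3, "F100": 53.6}
water_soluble = {"F0": 36.39, "F25": 40.58, "F50": 30.07, "F75": 44.34, "F100": 44.53}

for batch in acid_soluble:
    bonded = acid_soluble[batch] - water_soluble[batch]
    print(f"{batch}: bonded chloride ~ {bonded:.1f} ppm")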

Conclusion

There are some potential benefits of including chat in the concrete, even though it slightly decreases the compressive strength of the concrete. The lower the coefficient of migration, the harder it is for chloride to penetrate through the concrete. In addition, the concentrations of lead and zinc are low enough that they would not be considered hazardous for the environment, according to EPA water quality standards.

References

Figure 3: Pink/Reddish solution after the methyl orange is added

Figure 4: Adding 0.05N AgNO3 solution to the sample.

Compressive Strength of Concrete: Concrete Strength (psi) vs. Percent Fine Chat

Abstract

The waste left over from mining operations, known as mine tailings (chat), was tested with the intention of using it to reduce steel corrosion in reinforced concrete. Chat, which contains several heavy metals, mainly lead and zinc, could have potential benefits for the concrete industry. The goal of this research is to fully understand the effect chat would have in concrete.


Acknowledgements

I would like to take a minute to thank Dr. Nossoni, Dr. Mahony, Feksi Basha, Umar Miah, Daniel Hussey, and all the students in the environmental lab for helping me throughout this research. I couldn't have done it without you, thank you!

Chloride Migration Coefficient (NT Build 492): Migration Coefficient (x10-12 m2/s) vs. Percent Fine Chat

Figure 2: Equipment needed to perform the acid- and water-soluble tests

NT Build. "NT Build 492 Concrete, Mortar and Cement-Based Repair Materials: Chloride Migration Coefficient From Non-Steady-State Migration Experiments." NT Build, 199 1-8. Virmani, Paul. "The United States Cost of Corrosion." Cost of Corrosion Study Overview. NACE International, 2016.


Figure 5: Percent of Chat vs. Axial Load
Figure 1: Mine tailings waste piles

Lambert, Paul. "Steel Reinforced Concrete - Corrosion of the Reinforcing Steel." Azom Materials. Corrosion Protection Association, 13 Mar. 2002.


Figure 6: Percent of Chat vs. Migration Coefficient



Surface Water Discharges of Disinfection By-products from Wastewater Treatment Plant Effluent

Fiona Brigid Dunn, Jessica M. Wilson Department of Civil and Environmental Engineering, Manhattan College Abstract

Wastewater treatment plants in New York City are combined systems that treat the precipitation and runoff from wet weather events along with municipal wastewater flow. To protect surface water sources, the effluent is disinfected with chlorine before it is discharged into surface water. During disinfection, chlorine reacts with organic matter in the effluent, resulting in the formation of carcinogenic disinfection by-products (DBPs). If DBPs are present in the effluent, the general public can be exposed to these surface water contaminants through recreational use of the receiving waters. The objective of this work was to study the formation of regulated and emerging disinfection by-products in two wastewater treatment plant effluents. Wastewater effluent samples were collected and batch tests were conducted under different chlorination conditions (dose and contact time). Samples were collected at different time intervals from each test and analyzed for DBPs in the following groups: regulated species such as trihalomethanes (THMs) and haloacetic acids (HAAs), as well as species that are not currently regulated, including halogenated acetonitriles (HANs) and chlorinated solvents.

Materials & Method

Results: High Chlorine Dose (Wet Weather Events), Plant A: THMs and Chlorinated Solvents

Haloacetic Acids

Results: High Chlorine Dose (Wet Weather Events), Plant B: HANs and Chlorinated Solvents

Haloacetic Acids

4 mg/L Chlorine

4 mg/L Chlorine

6 mg/L Chlorine

6 mg/L Chlorine

Table 1. Summary of all DBPs that were analyzed.

• EPA Method 551.1 to analyze THMs, HANs, and chlorinated solvents. • EPA Method 552.3 to analyze HAAs. • Wastewater was sampled from two different plants in NYC. • Dry weather samples were collected after secondary treatment. • Wet weather samples were collected from primary influent to simulate the disinfection of combined sewer overflow. • Bench-scale chlorination tests were conducted using different initial chlorine doses. • Calibration curves were made for THMs, HAAs, HANs, and chlorinated solvents to identify DBP compounds in the wastewater samples. • Samples were analyzed using a gas chromatograph with electron capture detection (GC/ECD).
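As an illustration of how a calibration curve converts detector response into concentration (the standards, peak areas, and unknown below are hypothetical, not the study's data):

# Linear calibration: fit response vs. standard concentration, then invert it for an unknown.
import numpy as np

standards_ugL = np.array([1.0, 5.0, 10.0, 25.0, 50.0])        # standard concentrations (ug/L), assumed
peak_areas = np.array([120.0, 610.0, 1190.0, 3020.0, 6050.0])  # GC/ECD responses (arbitrary units), assumed

slope, intercept = np.polyfit(standards_ugL, peak_areas, 1)    # calibration line: area = slope*conc + intercept
unknown_area = 2400.0                                           # response of a hypothetical unknown sample
unknown_conc = (unknown_area - intercept) / slope
print(f"estimated concentration ~ {unknown_conc:.1f} ug/L")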

8 mg/L Chlorine

8 mg/L Chlorine

•  HANs did not form and chlorinated solvents formed at relatively low concentrations. •  Bromodichloromethane (from THM class) was observed only under high chlorination conditions. •  HAAs were found at much higher concentrations than chlorinated solvents.

Results: Low Chlorine Dose Plant A Chlorinated Solvents

Haloacetic Acids

1 mg/L Chlorine

Experimental Conditions

•  Plant A and Plant B are wastewater treatment plants that serve NYC and discharge into NYC waterways. •  Low chlorine dose tests were run on standard wastewater samples. •  High chlorine dose tests were run on wet weather samples. •  Chlorination tests were run in 3 liter glass beakers. •  60 mL samples were taken before chlorination and after different chlorine contact times. •  Total residual chlorine was monitored during the test according to EPA Method 334.0 using DPD Total Chlorine Reagent Accuvac Ampules.

2 mg/L Chlorine

Chlorine Dose (mg/L) | Chlorine Contact Time (minutes)
1  | 15, 30, 60
2  | 15, 30, 60
4* | 15, 30, 60
6* | 15, 30, 60
8* | 15, 30, 60

*Wet weather samples were collected during periods of heavy precipitation.

• HAAs showed higher concentrations than HANs under low chlorination conditions. • THMs were not observed, and HANs formed but at concentrations near the method detection limit. • Some chlorinated solvents (trichloroethylene, 1,1,1-trichloroethane) were found in the wastewater prior to chlorination and remained at low concentrations during disinfection.

•  DBP concentrations increase with increasing chlorine dose. •  At higher chlorine doses, some species of HANs and chlorinated solvents (BCAN and Trichloroethane) increase with increasing chlorine contact time. •  Samples taken at 60 minutes during the 6 and 8 mg/L chlorine tests did not meet quality control standards based on the percent recovery of the surrogate and internal standard. •  The total residual chlorine decreased with increasing contact time. •  The residual chlorine decreases because the chlorine is being used to kill the bacteria found in wastewater (fecal coliforms, enterococcus, etc.). •  Under high chlorination conditions, there is an elevated level of residual chlorine which can be harmful to aquatic life in the receiving waters.

Discussion

• The dominant species that form in chlorinated wastewater are HAAs. • Concentrations of HAAs are highest at the highest initial chlorine dose. • Trichloroacetic acid is present in all wastewater samples prior to chlorination, possibly due to the synthesis of trichloroethylene and chloral hydrate. • HANs and chlorinated solvents show similar results to HAAs at 4 mg/L and 6 mg/L. • Certain HAN and chlorinated solvent species increase with increasing contact time during the 8 mg/L chlorine test. • The NYC Department of Environmental Protection monitors receiving waters for bacteria, turbidity, temperature, and dissolved oxygen to test the effectiveness of the wastewater treatment plants.1 • Wastewater treatment plants using high chlorine doses should monitor their effluent, as higher concentrations of DBPs will be released into surface waters.

Acknowledgements & References

• John D. Mahony Fund for Undergraduate Summer Research • Erin McGovern and Mohamed Diallo for Research Assistance
1. New York City Department of Environmental Protection, How New York City Protects Its Water Environments, http://www.nyc.gov/html/dep/html/wastewater/wwsystem-protects.shtml.


Precipitation, Inhibition, and Dissolution Characteristics of Struvite in Wastewater Systems

Sebastian Gerlak, Arvind Kannan, Richard F. Carbonaro, and Hossain M. Azam Civil and Environmental Engineering Department, Manhattan College, hossain.azam@manhattan.edu

Experimental Approach

XRD Analysis

 Modeling, using MINEQL+ and PHREEQC, is being employed to understand the behavior of the minerals under varying environmental conditions.  Microcosm experiments are being conducted based on modeling outcomes to validate the effects of chelating agents on struvite formation, inhibition, and dissolution.  The precipitate characterization of struvite has been confirmed using XRD (X-ray diffraction) techniques.  The precipitated struvite crystal structure was microscopically analyzed and compared to commercially acquired struvite.

Mineral Name | pKso | Solubility | Color | Sp. Density
Struvite | 12.6 - 13.15 | Slightly soluble, dehydrates in dry, warm air | Colorless, white, yellowish, brownish, light grey | 1.7

MINEQL+ Modeling

Dissolution Experiment with EDTA: concentration (mM) vs. time (hours); series: PO4 + EDTA, Mg + EDTA, PO4-Control, Mg-Control.

Microscopic and XRD Results

Conditions for solids:

Solid phase calculations:

Saturation Index (SI) = log(Q/Ksp), where Q = ion product for the solid and Ksp = solubility product constant for the solid.

a) SI > 0; Solution is oversaturated with respect to the solid b) SI = 0; Solution is in equilibrium with respect to the solid c) SI < 0; Solution is undersaturated with respect to the solid
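A minimal numeric illustration of this saturation-index definition for struvite is given below. Free-ion concentrations are taken equal to the nominal totals (which overstates Q, since MINEQL+/PHREEQC would compute speciation and activities properly), and the pKso used is an assumed value within the 12.6-13.15 range quoted in this poster.

# Saturation index SI = log10(Q/Ksp) for struvite, MgNH4PO4.6H2O.
import math

mg, nh4, po4 = 5e-3, 5e-3, 5e-3   # mol/L, nominal experimental concentrations (totals, not free ions)
pKso = 13.0                        # assumed value within the quoted 12.6-13.15 range

Q = mg * nh4 * po4                 # ion activity product, crudely approximated by concentrations
SI = math.log10(Q) + pKso          # log10(Q/Ksp) with Ksp = 10**(-pKso)
print(f"SI ~ {SI:.1f}  (SI > 0: solution is oversaturated with respect to struvite)")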

Microscopic Analysis

By performing laboratory bench scale experiments and modeling studies, four aspects of struvite minerals are being investigated:


 Experimental struvite crystals that were precipitated are the same size as the commercially acquired struvite.  Both struvite samples exhibit the same orthorhombic crystal structure.  The XRD peaks observed for the precipitates match perfectly with the characteristics of struvite, which suggests that no other crystalline mineral has formed.  Commercial and experimental samples of struvite were matched with database peaks of struvite.

Figure: Saturation indices for Struvite at 5 mM Mg2+, 5 mM NH4+, 5 mM PO4-P, and 100mM NaCl in presence of different chelating agents.

 Struvite is under-saturated in the presence of chelating agents.  The SI of struvite decreases as the concentration of chelating agents (EDTA and NTA) increases.  The effects of chelating agents: EDTA > NTA > Citrate.  Saturation indices vary significantly in the presence of different types and concentrations of chelating agents (EDTA, NTA, Citrate).

Experimental Results Precipitation Experiment with Different CHES Buffer Concentrations

Charts: Magnesium, Phosphorus, and Ammonia concentration (mM) vs. time (hours) for 25 mM, 50 mM, and 100 mM CHES buffer.

Experimental Struvite (10x Zoom)

Commercial Struvite (40x Zoom)

Experimental Struvite (40x Zoom)

Experimental Conclusions
Precipitation:
 Significant Mg2+ and PO43- removal was observed for all three experimental CHES buffer concentrations. However, maximum removal was observed with 100 mM CHES.
 pH remained constant with the 100 mM CHES buffer, while the maximum pH drop was recorded with the 25 mM CHES buffer.
 The pH drop can affect the amount of struvite precipitated.
 Maximum removal observed with the 100 mM CHES buffer: Mg2+ (4.63 mM), NH4+ (3 mM) and PO43- (3.75 mM).
Inhibition/Dissolution:
 Based on experimental data, it was observed that the addition of EDTA inhibited the formation of struvite.
 There was no formation of any solid precipitate in the experiment with EDTA, indicating that EDTA is effective at inhibiting struvite formation.
 The lack of solid precipitate indicated that all the soluble magnesium was bound to the chelating agent.
 5 mM struvite was targeted in the dissolution experiment.
 After 24 hours, no visible struvite was observed in the dissolution experiment with EDTA as the chelating agent.

Future Work

Commercial Struvite (10x Zoom)

Exp. Cond.: Concentrations set as 5 mM Struvite, 25mM EDTA, and 100mM NaCl run at 22 ºC, 100 RPM, 9.5 pH


A. Formation potential and precipitation kinetics of struvite under different environmentally relevant conditions (e.g. turbulence, temperature) and parameters (e.g. pH, alkalinity, ionic strength, etc.).
B. Inhibition characteristics and kinetics using different chelating agents (e.g. EDTA, NTA, etc.).
C. Dissolution potential of the minerals under phosphonate-based and other types of chelating agents (e.g. DTPMP, NTMP, etc.).
D. Precipitation/inhibition/dissolution characteristics and kinetics of struvite in anaerobically digested centrate.

Effect of Different Chelating Agents on the Inhibition of Struvite

MINEQL+ Results

Objectives


Exp. Cond.: Concentrations set as 5 mM Mg2+, 5 mM NH4+, 5 mM PO43-, 25mM EDTA, and 100mM NaCl run at 22 ºC, 100 RPM, 9.5 pH

Formation of struvite: Mg2+ + NH4+ + PO43- + 6H2O → MgNH4PO4·6H2O (s)

Conceptual Model


Inhibition Experiment with EDTA

Struvite (NH4MgPO4·6H2O) is an important phosphate mineral found in natural and engineered systems which precipitates and crystallizes when magnesium, ammonium, and phosphate react in a 1:1:1 ratio. Due to the significant need for wastewater reuse, distribution, and resource recovery, the potential for struvite formation requires detailed investigation. Struvite scale buildup in pipes, valves, and pumps will slow the flow of sludge at the plant, greatly diminishing the operating efficiency of the plant as well as increasing maintenance costs. Current methods for removing struvite are destructive to equipment, time-consuming, inconvenient, expensive, and require downtime. Although struvite is a problem when left uncontrolled and unmonitored, it can lead to economic benefits for treatment plants when recovered. Struvite crystals obtained through controlled precipitation and collection can be sold on the commercial market as a slow-release fertilizer, providing an environmentally sustainable source of phosphorus, a scarce mineral nutrient that is essential for all life on Earth.


Introduction


Exp. Cond.: Concentrations set as 5 mM Mg2+, 5 mM NH4+, 5 mM PO43-, and 100mM NaCl run at 22 ºC, 100 RPM, 9.5 pH

 Further modification and optimization of the MINEQL+ and PHREEQC modeling will be performed to represent different conditions of water and wastewater treatment and distribution systems.  Inhibition and dissolution experiments will be run using other chelating agents such as IDA, NTA, HEDTA, citric acid, acetic acid, NTMP, DTPMP, etc.  Ca2+, Fe3+, and Fe2+ will be added to the precipitation and inhibition experiments to observe what other minerals form, as well as to track their concentrations throughout the experiment.  Industrial magnesium waste will be recycled as a source of magnesium for centrate experiments.

Acknowledgment: We thank the Manhattan College “Start Up Grant” and MC Lasallian Summer Research Scholarship for Financial Support. We also acknowledge Dr. Kirchner and Dr. Santulli for their help with XRD Analysis.



Kinetic Studies to Optimize Chemical Dissolution and Inhibition of Common and Exotic Oilfield Scales

• Mineral scales form when produced water elevated in divalent metals (e.g., Ba2+, Sr2+, Pb2+) reacts with water containing high concentrations of anions (e.g., SO42-, S2-). • The resulting scales are insoluble in water and require treatment before well production can resume.

• Mineral scales can be chemically treated by addition of chelating agents to complex the metal and can be prevented from reforming after treatment by addition of phosphonate-based compounds.

Materials and Methods

• Bench scale experiments are being conducted in closed Teflon flasks with a magnetic stirrer at a constant stirring rate. • Chelating agents and their metal-ligand complex concentrations are being analyzed using capillary electrophoresis (CE).

MINEQL+ Modeling

• The chemical equilibrium problem is essentially a non-linear algebraic problem. • Each algebraic equation corresponds to a mass balance expression for an individual chemical component.

• Free metal concentrations are being analyzed using inductively coupled plasma atomic emission spectroscopy (ICP-AES) and/or atomic absorption spectroscopy (AAS). • Free anion concentrations are being analyzed using ion chromatography (IC).


Conditions for Solids:

Mass action (species concentrations): Ci = Ki · Π_j Xj^(Ai,j), for i = 1, m
Mass balance (component totals): Yj = Σ_i (Ai,j · Ci) − Tj = 0, for j = 1, N
where Xj = the concentration of component j, Ci = the concentration of species i, Ki = the equilibrium constant for species i, Tj = the total concentration of component j, Ai,j = the stoichiometric coefficient of component j in species i, Yj = the mass balance equation for component j, m = the number of species, and N = the number of components.
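A toy numerical illustration of this mass-action/mass-balance formulation is sketched below for a hypothetical two-component system (Sr2+ and EDTA4- forming a single SrEDTA2- complex). The stability constant is an assumed round number, and activity corrections, EDTA protonation, and solid phases are all ignored; this is not the MINEQL+ input used in the study.

# Toy mass-action / mass-balance solver for two components and one complex (illustrative only).
import numpy as np
from scipy.optimize import fsolve

log_K = 8.6                    # assumed log K for Sr2+ + EDTA4- = SrEDTA2-
K = 10.0 ** log_K
T_Sr, T_EDTA = 5e-3, 25e-3     # total component concentrations, mol/L

def mass_balances(log_x):
    """Yj residuals: species summed into each component total Tj."""
    sr, edta = 10.0 ** log_x               # free component concentrations Xj (solved in log space)
    sr_edta = K * sr * edta                # mass action: Ci = Ki * prod(Xj^Aij)
    return [sr + sr_edta - T_Sr,           # Y_Sr
            edta + sr_edta - T_EDTA]       # Y_EDTA

# Initial guess: EDTA is in excess, so assume essentially all Sr is complexed.
guess = np.log10([T_Sr / (K * T_EDTA), T_EDTA - T_Sr])
sr_free, edta_free = 10.0 ** fsolve(mass_balances, guess)
print(f"free Sr2+ ~ {sr_free:.2e} M, free EDTA ~ {edta_free:.2e} M")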

Formation of SrSO4, PbS, and ZnS: Sr2+ + SO42- → SrSO4(s); Pb2+ + S2- → PbS(s); Zn2+ + S2- → ZnS(s)

where, Q = ion product for the solid Ksp = solubility constant for the solid


Conceptual Model SrSO4 (s) PbS (s) ZnS (s)

Concentration of Sr2+ in presence of EDTA

Saturation Index (SI) = Log (Q/Ksp)


Table 2. Task 1 (Dissolution) experimental conditions.

a) SI > 0; Solution is oversaturated with respect to the solid b) SI = 0; Solution is in equilibrium with respect to the solid c) SI < 0; Solution is undersaturated with respect to the solid

(Sr2+ dissolution chart: concentration (mg/L) vs. time (hours); series: Duplicate 1, Duplicate 2)

Concentration of Sr2+ from precipitation of SrSO4: concentration (mg/L) vs. time (hours), at 40°C and 70°C.

Inhibition:


Conclusion


• Based on the MINEQL+ results, SrSO4 is undersaturated at high pH (11-12) in the presence of chelating agent (EDTA).


• Saturation indices of SrSO4 decreases as the concentration of chelating agent (EDTA) increases.



Table 4. Task 3 (Preliminary studies on chemical interference effects on scale formation, dissolution, and inhibition) experimental conditions.


Saturation Indices for Strontium Sulfate (SrSO4)

Table 3. Task 2 (Inhibition) experimental conditions.


MINEQL+ Results

Table 1. Common and exotic oilfield scales and associated solubilities and solubility products.

Dissolution:

Solid Phase Calculations:

Mass Balance Calculations:

• While the treatment of common scales (e.g., BaSO4) has been well researched, the use of chelating agents to treat exotic scales (e.g., PbS, ZnS) that form under high temperature and high pressure conditions has not been well explored.


Batch Experiments: Results

Program for computing equilibrium chemical speciation in homogeneous and heterogeneous aquatic systems.


• The formation of insoluble mineral scales during oil production significantly affects well efficiency and can damage the well and production equipment.


Introduction

Goldie N. Gunawan, Fiona Brigid Dunn, Jessica M. Wilson, Hossain Azam Department of Civil and Environmental Engineering, Manhattan College

Figure 2 series: SrSO4 (No Chelating Agent), SrSO4 (EDTA = 5 mM), SrSO4 (EDTA = 25 mM), plotted as Saturation Index vs. pH.

Figure 2. Saturation indices for SrSO4 at 5 mM Sr2+, 5 mM SO42-, 100 mM NaCl in presence of EDTA.

XRD Analysis

• The XRD analysis shows that commercial SrSO4 salt matched perfectly with the database peaks for SrSO4.

• From the dissolution batch experiment, the concentration of Sr2+ increases throughout the duration of the experiment. • The slight peak at hour 1 of duplicate #2 might be caused by the formation of the metal-ligand complex Sr-EDTA, which was not measured during this experiment. • The inhibition batch experiment shows that the concentration of Sr2+ decreases more rapidly at 70°C.

• A white precipitate (SrSO4) formed at both temperatures (40°C and 70°C).

Future Work

• Further modification and optimization of inhibition and dissolution of oilfield scales with chelating agents and phosphonate-based chemicals. • Precipitation of ZnS and PbS to be conducted in the laboratory for bench scale experiments of inhibition and dissolution. • Collection and characterization of produced water and its microcosm studies with different scales and optimized chelants/inhibitors. • MINEQL+ results will also be supplemented with PHREEQC to observe the kinetics of SrSO4, PbS, and ZnS formation, dissolution, and inhibition under different chelating agents and inhibitor chemicals.

Figure 1. Conceptual model of proposed research.

Objectives By performing laboratory bench scale experiments and modeling studies, three aspects of common and exotic oilfield scales are being investigated : A. Dissolution potential and kinetics of the oilfield scales using chelating agents (EDTA and DTPA). B. Effective combination of scale dissolver and inhibitor using phosphonate-based chemicals (NTMP, HEDP, DTPMP, and BHPMP).

C. Effectiveness and optimization of chelating agents and phosphonate-based chemicals under different produced water dilution conditions for field applications.

Modeling Approach

Modeling, using MINEQL+, is being employed to understand the behavior of the minerals under varying environmental conditions.

Batch experiments are being conducted based on modeling outcomes to validate the effects of chelating agents and phosphonate-based chemicals on oilfield scales formation, inhibition and dissolution. The precipitate characterization of the oilfield scales are being confirmed using XRD (X-ray diffraction) techniques.

Acknowledgements
Figure 3. Comparison of commercial SrSO4 salt with database peaks of SrSO4.

• American Chemical Society Petroleum Research Fund • Manhattan College Jasper Summer Research for summer housing funding • Stephanie Castro for Research Assistance



Development of a method of modeling tree branch stresses with variable leaf placements and leaf characteristics. Jesse Jehan

This project expands upon previous research done at Manhattan College. Previously, branches were modeled with leaves and petioles represented by a point load. This project developed a model that added real leaves with petioles that eventually provided more accurate estimates of branch stresses (D).

Preliminary petiole testing

Petiole and leaf data collection to obtain accurate mechanical properties and dimensions

Leaf model testing and validation

Before petioles could be added to Abaqus models, mechanical properties needed to be determined. This was done by treating petioles as a cantilever beam. Bending tests could then be performed to get the mechanical properties required by Abaqus.
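For reference, treating the petiole as an end-loaded cantilever of length L with tip deflection d under load F gives E = F*L^3/(3*d*I). The sketch below back-calculates an effective modulus from made-up measurements; the circular cross-section assumption and every number are ours, not values from the project's actual bending tests.

# Back-calculate an effective Young's modulus from a cantilever bending test.
# E = F*L^3 / (3*delta*I), with I = pi*d^4/64 for a circular cross-section.
import math

force_N = 0.05          # applied tip load (assumed)
length_m = 0.04         # petiole length (assumed)
deflection_m = 0.004    # measured tip deflection (assumed)
diameter_m = 0.002      # petiole diameter (assumed)

I = math.pi * diameter_m ** 4 / 64.0
E = force_N * length_m ** 3 / (3.0 * deflection_m * I)
print(f"effective E ~ {E / 1e6:.0f} MPa")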

Future Research: The author plans to integrate these results into Immediate-Tree, a Matlab program designed to reduce the modeling process on Abaqus down to 2 minutes, as it currently can take over 8 hours to model one small branch with leaves. Also, the author hopes to explore the possibility of using the developed model in future research to investigate stresses along branches if various leaf shapes and sizes are used.

Full Stress analyses using developed model and comparison to previous studies


The picture above shows an enlarged section of the sample branch. Note the physical differences between modeling with a point load (B) and modeling with 3D leaves (D). The closer the model is to actuality, the more accurate the results will be.

Stresses are shown along the branch from base (x=0) to tip (x=.275). Curve A shows the branch without leaf loads. Note the higher stresses when point loads (B) were added, when petioles (C) were added and when petioles with leaves (D) were added.

Acknowledgements: The author is grateful to the Catherine and Robert Fenton Endowed Chair and to Dr. L.S. Evans for financial support for this research. Also, the author would like to thank Dr. Zahra Shahbazi, Dr. L.S. Evans and Katherine Petrizzo for their contributions and guidance in this project.


Imaging Techniques for Earlier Detection of Lung Cancer Author: Kevin Lynch Advisor: Dr. George Giakos Manhattan College Electrical/ Computer Engineering Department


Cancer claims 1 in 7 deaths worldwide. 14% of new cancers are in the lungs.

Each year more people die from lung cancer than colon, breast and prostate combined.

The purpose of this study is to develop an efficient, low-cost lung cancer screening technique. This was accomplished by examining histopathology slides of healthy lung tissue, in-situ carcinoma lung tissue, and Stage I carcinoma lung tissue, using digital polarimetric imaging techniques.

Part 1: Polarizer Calibration


We found the polarizers' maximum and minimum power output, which were separated by 90°. This allowed us to set up both perpendicular (or cross) and parallel polarization. We need these for calculating the DOLP of the different lung tissues later on.


Research Background

We set up a polarimetric imaging system, varying the target’s angle.

The transmitter branch (Generator) and receiver branch (Analyzer) consisted of a light source and power analyzer, coupled to two linear polarizers, respectively.

Angular measurements were performed under parallel-polarized and cross-polarized geometries. This was all in an attempt to find the Degree of Linear Polarization of light against different stage cancer cells. While attempting the angular measurements, we found interesting results for the variances of the samples under cross-polarized light.


Part 2: Cancer Cell Analysis

Polarizer 1 is set to its maximum degree, while Polarizer 2 is set to its minimum degree first and then its maximum degree. The generator is aimed at the target containing the lung tissue, the readings are taken, and the DOLP is calculated. The formula for DOLP can be seen below, where Imax stands for the intensity of light under parallel polarization and Imin stands for the intensity of light under cross polarization.

• DOLP = (Imax - Imin) / (Imax + Imin)
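In code form the computation is a one-liner; the intensity readings below are placeholders, not measured values from this study.

# DOLP = (Imax - Imin) / (Imax + Imin), with Imax the parallel-polarized intensity
# and Imin the cross-polarized intensity at a given target angle.
def dolp(i_max, i_min):
    return (i_max - i_min) / (i_max + i_min)

print(round(dolp(i_max=0.82, i_min=0.11), 2))   # placeholder intensity readings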

Data Analysis

• We utilized two different geometries for our Generator and Analyzer setup.
• After finding the output for both parallel and cross polarization, we used data mining to find trends in the variance of the polarization readings; this was the novelty of the experiment.
• Below are the histograms for cross-polarization variance versus degree.
CP Variance


Below is a histogram of the DOLP of the different lung tissues versus the degree at which the angular measurement was taken.

(Chart axis: Degrees; series: Normal, In Situ, Stage I)


DOLP of Preferred Samples vs. Degrees


Introduction


(Cross-polarization variance series: Normal CP, In Situ CP, Stage I CP)

Conclusions


The DOLP vs. Degree plot highlights the discrimination potential our proposed technique has among different lung tissues. The variance in cross-polarization readings for in-situ tissue is much higher than for both Stage I carcinoma and healthy tissue. This can lead to exciting new discrimination techniques for in-situ carcinoma tissue.


SOFTWARE DEFINED RADIO BASED RF TEST ENVIRONMENT Researcher: Stephen Miller

Research Mentor: Dr. Brent Horine

INTRODUCTION

TEST ENVIRONMENT

CONCLUSIONS

Research Intent

TEST ENVIRONMENT CHAIN

Results

• The goal of this research was to construct a software defined radio based radio frequency (RF) test chain to provide an environment in which to perform a great variety of tasks such as hardware modeling, digital signal processing, and allowing for a direct comparison of sent and received signals. • The secondary goal of this research was to begin to model the non-linear characteristics of a wideband power amplifier using this testing environment.

Influence

Digital vs. Analog Signal Processing

• Digital signal processing has many advantages over analog signal processing, such as the accuracy of digital filters

Test environment chain (block diagram): Transmitted Digital Signal, Zynq Processor, Driver Power Amplifier, Power Amplifier, Gain Stage 1, Amplifier Under Test, Directional Coupler, Attenuator, Spectrum Analyzer, Attenuator, SDR Board.

SOFTWARE CONTROLLER
• This setup utilizes a program to directly write to the SDR's transmit ports, and read from its receivers.
• Provides a direct means of controlling transmitted and received data, and allows for the transmission of complex signals not easily sent using an analog signal generator.

Software controller flow: input digital samples (from file) → write samples into TX buffer → write from TX buffer to RF TX port (DAC) → system under test → read from RF RX port → write from RF RX port to RX buffer → write from RX buffer to file → compare transmitted and received samples.

AMPLIFIER CHARACTERIZATION

(Filter comparison figure: ideal, real, analog, and digital filter responses.)

• This setup provided an efficient means of transmitting and receiving any user-defined signals. • The signal chain, in the absence of any analog signal modulation or amplification, produced a relatively high fidelity reception of the signal that was transmitted. • The transmitted and received signals that were sent and sampled through the software controller are shown below. • Approximately 10 dBm of signal power was lost. A majority of this attenuation was presumably the result of line loss within the analog domain. However, only slight distortion, particularly out-of-band distortion, was introduced. This contributes to a more accurate reception of the transmitted signal.

Transmitted Signal

Received Signal


• This setup can be used as a key analysis tool for various hardware and digital signal processing testing. • The usage of software defined radio (SDR) simplifies a great deal of pressing issues that are present in purely analog systems. • SDRs are extremely versatile and can be tailored to numerous unique applications. • Digital signal processing provides many significant advantages such as a lower cost of implementation, simpler and more accurate filtering, and ultimately a higher signal to noise ratio.

ANALOG DOMAIN

DIGITAL DOMAIN

• The SDR board serves as a precisely controlled platform to perform various hardware and signal analysis, such as amplifier characterization shown below.
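As a minimal sketch of the kind of transmit/receive comparison this chain enables (not the actual controller software), the following passes a swept amplitude through a toy memoryless power-amplifier model and extracts its AM/AM gain-compression curve; the saturation model and every number are illustrative assumptions.

# Toy AM/AM characterization of a hypothetical memoryless amplifier model.
import numpy as np

def amplifier(x, gain=10.0, sat=1.0):
    """Toy memoryless nonlinearity: linear gain with soft saturation."""
    return gain * x / np.sqrt(1.0 + (gain * np.abs(x) / sat) ** 2)

amp_in = np.linspace(0.01, 1.0, 20)       # swept input amplitudes ("transmitted")
amp_out = np.abs(amplifier(amp_in))       # device output amplitudes ("received")
gain_db = 20.0 * np.log10(amp_out / amp_in)
print(f"small-signal gain ~ {gain_db[0]:.1f} dB, gain at full drive ~ {gain_db[-1]:.1f} dB")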

• Characterization of the amplifier under test has been shown to be accurate, since it corresponds with the factory specifications. Analog characterization is nearly as accurate, but proves to be more difficult to tune precisely than through SDR. • This testing environment decreased the likelihood of misleading characterizations commonly caused by long analog signal chains.

FUTURE APPLICATIONS

• This test environment provides a solid basis for any future development of power amplifier modeling, digital signal processing, and other hardware testing. • Various modern modulation techniques, such as digital predistortion, can be tested within this system. These techniques often seek to improve an RF amplifier’s efficiency, and this setup provides the fundamental chain to accomplish this. • This platform can be utilized to prototype various products under development. Its versatility allows for a very mobile range of test applications


Bridge Scour Monitoring Applications for Autonomous Underwater Vehicles Presenter: Amy Sniffen – Adviser: Dr. Brent Horine Manhattan College Summer Research Program

Introduction: • Bridge scour occurs when flowing water around bridge pilings removes sand and rock, compromising the integrity of the structure. Evidence of scour can be seen in Figure 1. • Autonomous Underwater Vehicles (AUVs) are a cost effective solution for detecting bridge scour. AUVs can be designed to maneuver around bridge piers while simultaneously scanning for evidence of scouring. • An AUV currently used by Duro UAS onto which this bridge scour application can be loaded is shown in Figure 2.

Figure 1: Evidence of Scouring found near piling in Platte River in Maxwell, Nebraska

Objectives: • To investigate the feasibility of scour assessment using an AUV • To propose a method of pier detection and navigation that will enable an AUV to assess a bridge pier while also avoiding collision through the use of hierarchical mode declarations • To determine the factors and issues involved with deploying an AUV in simulation and in real time • To create a simulation that will demonstrate the path the AUV will take once deployed. An example simulation screen is shown in Figure 3.

Figure 2: Duro UAS AUV

Figure 4: State Diagram of Hierarchical Mode Declarations
• Once the AUV is deployed, it immediately enters SENSE Mode. In SENSE Mode, the AUV checks for any objects that may be present, whether they are bridge piers or obstacles.
• When an object is detected, the AUV exits SENSE Mode and enters SCOUT Mode. Once in SCOUT Mode, the AUV moves in a rotational pattern around the object until it can determine whether the object is a pier or an obstacle.
• If the object is determined to be a pier, the AUV begins its PIER ASSESSMENT navigational pattern. The PIER ASSESSMENT navigational pattern is shown in Figure 5.
• If the object is determined to be an obstacle, the AUV returns to SENSE Mode.

Conclusion:
• AUVs are a feasible solution for monitoring bridge pier scouring.
• The simulations have displayed the adaptability of the vehicles while also demonstrating their usefulness in practical applications.

Figure 3: Example Simulation Screen
Figure 5: PIER ASSESSMENT Navigational Pattern

As shown above, the AUV enters a gradually increasing circular pattern that enables the vehicle to get a complete picture of the bridge pier. The AUV changes direction after a complete rotation to ensure that each angle of the pier is monitored.
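To make the hierarchical mode logic concrete, a minimal sketch of the SENSE → SCOUT → PIER ASSESSMENT transitions is given below. It is only an illustration of the state diagram described above, not the actual Duro UAS control software, and the function and mode names are placeholders.

# Toy state machine for the AUV's hierarchical modes (illustrative only).
def next_mode(mode, object_detected=False, object_is_pier=None):
    """Return the next mode given the current mode and sensor conclusions."""
    if mode == "SENSE":
        return "SCOUT" if object_detected else "SENSE"
    if mode == "SCOUT":
        if object_is_pier is None:        # still circling; classification pending
            return "SCOUT"
        return "PIER_ASSESSMENT" if object_is_pier else "SENSE"
    return mode                            # PIER_ASSESSMENT continues its pattern

mode = "SENSE"
mode = next_mode(mode, object_detected=True)   # detected something -> SCOUT
mode = next_mode(mode, object_is_pier=True)    # classified as pier -> PIER_ASSESSMENT
print(mode)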


SCHOOL OF LIBERAL ARTS 2016 Summer Research


Prosecuting Rape: Sexual Violence During Armed Conflict in the Former Yugoslavia and Rwanda

Former Bosnian Serb Military Leader Ratko Mladic Appears at The Hague Accused of War Crimes

Summary

Recommendations

Rape of Sabine Women

Church Memorial Sites of Massacres During Rwandan Genocide


Police/Community Relations in Reference to the Black Lives Matter Movement Kelsey Cannamela Department of Psychology, Manhattan College Under the supervision of Nuwan Jayawickreme, Ph.D.

INTRODUCTION • Recent media coverage has portrayed an increasingly hostile relationship between the African-American community and police officers, stemming from police encounters that have resulted in death. In the wake of these deaths came an eruption of protests from members of the community and the formation of the Black Lives Matter movement (Yancy & Butler, 2015). • The goal of the current study is to identify the different variables that predict police officers' attitudes towards the Black Lives Matter movement. With the formation of the Black Lives Matter movement and the protests that followed came an influx of varying opinions and viewpoints of the community concerning police/community relations. • Understanding police officers' attitudes towards the movement is important because their individual attitudes can affect the way they perform their job and interact with the community they police. We examined the degree to which these viewpoints and changes in behavior are predicted by tension with the community (experienced either by the officers themselves or by other officers in their precinct), the ethnicity of the officers, the sources from which they get their news, and their moral attitudes.


RESULTS • Each survey item was grouped into one of four groups: violence, viewpoints, impact, or personal. The items and the groupings can be found in Table 1. • There were 6 significant positive correlations amongst the survey items, shown in Table 2. • There were significant differences between conservative and neutral sources for the survey item shown in Table 3, such that participants who watch conservative news outlets tended to have higher ratings for the survey item describing a decrease in verbal hostility. • Participants who watch a combination of only conservative and neutral media outlets were found to be more likely to view the Black Lives Matter movement as having a negative impact on police/community relations compared to participants who watch a range of conservative, liberal, and neutral sources. This finding approached significance (see Figure 1 and Table 4). • Hispanic police officers were more likely than Caucasian officers to perceive the Black Lives Matter movement as legitimate (see Figure 2). (An illustrative analysis sketch follows.)
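For readers who want to see how comparisons like these are computed, the sketch below shows an item-to-item Pearson correlation and a one-way comparison across media-source groups. The data frame and column names are hypothetical stand-ins, not the study's actual data or analysis script.

import pandas as pd
from scipy import stats

# Hypothetical survey responses; column names are illustrative only.
df = pd.DataFrame({
    "decrease_verbal_hostility": [2, 1, 3, 2, 1, 4, 2, 1],
    "views_police_more_positively": [2, 1, 3, 2, 2, 4, 1, 1],
    "top_media_source": ["conservative", "neutral", "conservative", "liberal",
                         "neutral", "conservative", "liberal", "neutral"],
})

# Pearson correlation between two survey items (the kind of result reported in Table 2).
r, p = stats.pearsonr(df["decrease_verbal_hostility"], df["views_police_more_positively"])
print(f"r = {r:.3f}, p = {p:.3f}")

# One-way comparison of an item across media-source groups (the kind of result in Table 3).
groups = [g["decrease_verbal_hostility"].values for _, g in df.groupby("top_media_source")]
F, p_group = stats.f_oneway(*groups)
print(f"F = {F:.2f}, p = {p_group:.3f}")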

Table 1: Categorical Groupings of Black Lives Matter Movement Items
(A six-point scale was used for all of the items: 1 = Strongly Disagree, 6 = Strongly Agree)

Hostility from Community
• Since the start of the Black Lives Matter movement, I have seen an increase in verbal hostility from members of the community that I serve (M=4.68, SD=1.43)
• Since the start of the Black Lives Matter movement, I have seen a decrease in verbal hostility from members of the community that I serve (M=1.84, SD=.96)
• Since the start of the Black Lives Matter movement, I have seen an increase in physical hostility from members of the community that I serve (M=4.06, SD=1.52)
• Since the start of the Black Lives Matter movement, I have seen a decrease in physical hostility from members of the community that I serve (M=2.04, SD=.99)

Attitudes towards Police
• I feel that the people I serve view police officers more negatively after the start of the Black Lives Matter Movement (M=4.70, SD=1.27)
• I feel that the people I serve view police officers more positively after the start of the Black Lives Matter Movement (M=2.15, SD=1.22)

Impact of Black Lives Matter Movement
• I believe the Black Lives Matter movement has had a positive impact on police/community relations (M=1.54, SD=.95)
• I believe the Black Lives Matter movement has had a negative impact on police/community relations (M=5.26, SD=.96)

Police Viewpoints of Community
• Since the beginning of the Black Lives Matter Movement, I feel more anxious when approaching dangerous situations in the community that I serve (M=3.64, SD=1.57)
• Since the start of the Black Lives Matter Movement, my strategies when responding to emergency calls have changed for the better (M=3.46, SD=1.60)
• Since the start of the Black Lives Matter Movement, my strategies when responding to emergency calls have changed for the worse (M=2.00, SD=1.05)

Legitimacy of Movement Motives
• I believe that the concerns of the Black Lives Matter Movement are legitimate (M=2.58, SD=1.30)

Measures: The following surveys were administered to determine police officers' viewpoints of the Black Lives Matter Movement and the psychological variables that influence that viewpoint: 1) demographics (age, race, gender, time on the force), 2) the media sources they used, 3) attitudes towards the Black Lives Matter movement, and 4) the Moral Foundations Questionnaire, which measures the following moral foundations: Harm/Care, Fairness/Reciprocity, In-group/Loyalty, Authority/Respect, and Purity/Sanctity (Giammarco, 2016).

Table 3: Ratings of the item "Since the start of the Black Lives Matter movement, I have seen a decrease in verbal hostility from members of the community that I serve," by top-rated media source
• Conservative: Mean 2.27, 95% CI 1.80-2.75*
• Liberal: Mean 1.72, 95% CI 1.38-2.06
• Neutral: Mean 1.38, 95% CI 1.10-1.65*

Table 4: Ratings of the item "I believe the Black Lives Matter movement has had a negative impact on police/community relations," by combination of media outlets watched
• Conservative + Neutral media outlets: Mean 5.83, 95% CI 5.40-6.26*
• Liberal + Neutral media outlets: Mean 4.79, 95% CI 3.96-5.61
• Neutral media outlets only: Mean 5.25, 95% CI 3.73-6.77
• Conservative + Liberal + Neutral media outlets: Mean 5.22, 95% CI 4.93-5.50*

Figure 1: Ideology of News Sources

Figure 2: Race Difference Amongst Survey Item


METHODS
Participants: The sample included 68 police officers in total. Officers were recruited from the Perth Amboy Police Department in New Jersey and the Tuckahoe Police Department in New York. Police officers of all ranks were included in the sample, as well as both full-time and retired officers.


Table 2: Significant Correlations for Black Lives Matter Survey Items
• Increase in Verbal Hostility; Community Views Police More Negatively: r = .322 (p = .021)
• Decrease in Verbal Hostility; Community Views Police More Positively: r = .287 (p = .041)
• Decrease in Physical Hostility; Community Views Police More Positively: r = .278 (p = .048)
• Feeling More Anxious in Dangerous Situations; Strategies Changed for the Better: r = .304 (p = .030)
• Community Views Police More Negatively; Strategies Changed for the Better: r = .311 (p = .027)
• Strategies Changed for the Better; Progressivism: r = .327 (p = .019)

CONCLUSIONS • Police officers who primarily watch conservative news outlets were more likely to perceive a decrease in verbal hostility, and participants who watch conservative and neutral news outlets were more likely to view the movement as having a negative impact on police/community relations. Additionally, Hispanic officers were more likely than White officers to view the concerns of the movement as legitimate. • A possible explanation for why the other hypotheses were not supported may be the limited representation of different police stations and the minimal sample size. • The complicated nature of our results indicates that police officers – as with all groups – defy easy stereotyping. Police officers may wrongly assume black Americans are more violent or dangerous compared to white Americans; the general public may wrongly assume that all police officers are looking to harm or kill certain groups of people. Both sets of assumptions demonstrate the danger of stereotyping and the erroneous beliefs that can result from it. Statistically based evidence is required to target the inaccuracies of common societal convictions.

REFERENCES
Giammarco, E. A. (2016). The measurement of individual differences in morality. Personality and Individual Differences, 88, 26-34.
Yancy, G., & Butler, J. (2015). What's wrong with 'All Lives Matter'. The New York Times.
For questions and/or comments: kcannamela01@manhattan.edu


Merging Disciplines and Deconstructing the Self: The Effects of Psychosis on Identity
Gregory Inzinna, Dr. Maeve Adams, Dr. Nuwan Jayawickreme
Branigan Scholars Grant 2016, Manhattan College

Adulthood/Onset of Schizophrenia

Early adolescence/Early Adulthood

In 1957, my grandfather took my grandmother to his high school prom. She said that he rented a red corvette convertible and drove 100 mph to the beach for after-prom.

In the early 70s, my grandfather was a teacher in poor areas of the South Bronx. He liked working with children because they listened to the ideas he had to share.

As a young college student, my grandfather was a lifeguard. In his school's journal he wrote an article, "And the Lord Came By…", about a child who had recently drowned in his pool. It was the beginning of a lifelong contemplation of what it means to die. "It happened so quickly and was so final."

In order to analyze my grandfather’s condition as a man with schizophrenia, I start with a question: What specifically caused my grandfather to spontaneously begin campaigning for the presidency during what seemed to be a very stable, middle-adulthood stage of his life?

Attached to each of the numerous copies of The Freedom Manifesto, he included a stamp of his own face because he was afraid of his intellectual property being stolen.

In 1978, he became a bishop of The Universal Life Church and began to preach about his own theory of a “common sense” religion.

The Freedom Manifesto (1976) was the platform of my grandfather’s presidential campaign:

“The Black Balloon” (left) by my grandfather, Pasquale Muccigrosso, was created in 1975 using oil on canvas. My grandfather then altered the piece to create “The Black Balloon (revised)” (right) in 2005 using permanent marker and glitter. The original was painted during the early stages of his schizophrenia while the alteration occurred during the final stages of his illness and his life. The alteration represents the chaos of thoughts running through his mind.

“Any insane coward that uses any authority in any area (Religion, Politics, Education, Psychiatry, Government, Economics) to convince you that you must let him control your thoughts and actions (natural behavior), is trying to make you his slave. A slave is less than human. Just learn to be yourself. You have a right to your own bodies” (p.2).

Explore this question through research:

Draw meaningful implications:

During middle adulthood, a narrative identity starts to form (McAdams, 2013). Thus, he began to see his negative experiences with schizophrenia not as static events that are bound to the past, but as dynamic ideas that could be used for good. In this way, they are both part of his story and the stories of others.

Hannah Arendt, in The Human Condition, shows us that by becoming part of the stories of others, he might have gained a kind of immortality. If people continue to talk about his ideas after his death, his influence on others can live on. "Contemplation is the word given to the experience of the eternal" (Arendt, 20).


My grandfather saw a relationship between two problems: the act of diagnosis that imposes an inflexible medical conception of health and well-being, and the ways in which the wider world we live in conditions our lives through the imposition of normative conceptions of selfhood, sociability, sexuality, gender, authority, intelligence, mortality, and freedom. With the archive of his artifacts, I wish to retell his story because the resistance he staged not just to his diagnosis, but to diagnosis in general, has much to tell us about the way that we try to understand a life like his--a life that, without the archive, might simply seem plagued and defined by mental illness.


http://wagingnonviolence.org/feature/a-place-where-its-easier-to-be-good/

Dorothy Day’s Witness and Manhattan College Alannah Boyle | Dr. Kevin Ahern | Manhattan College

Dorothy Day Background

Dorothy Day was a journalist and activist from the age of 18. She wrote for socialist newspapers and went to prison in 1917 for protesting for women’s right to vote. In the 1920s, she began a common law marriage and became pregnant. At age 30 after the birth of her daughter Tamar, Day converted to Catholicism. Soon after her conversion, in 1933, Day met Peter Maurin and he encouraged her to start a newspaper, The Catholic Worker, as a response to the suffering and violence of the time. Together, they transformed the paper into a movement with houses of hospitality, farms, and publications across the country. Day served as the leader of this movement, living in Catholic Worker Communities, until her death in 1980. While hailed by many for her deep Catholic spirituality and commitments to the corporal works of mercy, Day and her movement faced strong opposition for their strict stance of pacifism (including during World War II), their willingness to engage in civil disobedience, and their strong opposition to nuclear weapons and the “warfare state.”

Lasallian Roots of the Catholic Worker
• Peter Maurin, the co-founder of the Catholic Worker movement, was a spiritual mystic, poet, teacher, handyman, and farmer.
• He was born to a peasant family in Oultet, France in 1877. At 16, he entered the Brothers of the Christian Schools and was inspired by St. John Baptist de la Salle's commitment to the poor and intellectual engagement.
• In 1902, he left the brothers after the French government closed many Christian schools and became involved in Le Sillon, a Catholic lay movement committed to social engagement.
• He moved to Canada and eventually to New York, living and working with the poor.
• As Day recounts, he was the main source of inspiration to her. She outlined his vision in 1977:

"Round-table Discussions, Houses of Hospitality and Farming Communes–those were the three planks in Peter Maurin's platform. There are still Houses of Hospitality, each autonomous but inspired by Peter, each trying to follow Peter's principles. And there are farms, all different but all starting with the idea of the personalist and communitarian revolution...He had given everything he had and he asked for nothing, least of all for success. He gave himself, and–at the end–God took from him the power to think." - Dorothy Day on Peter Maurin

The influence of the Lasallian charism on Maurin, as he formed his ideals and eventually brought them to Day, seems clear when examining the parallels between the goals of the De La Salle Brothers and the goals of The Catholic Worker.

Where is the Catholic Worker Now?

Images from www.catholicworker.org, www.manhattancollege.edu, and www.ignatianspirituality.com.

The Lasallian Mission and The Catholic Worker Movement
• Lasallian Star: Inclusive Community -- Catholic Worker: Hospitality
• Lasallian Star: Respect for All Persons -- Catholic Worker: Works of Mercy
• Lasallian Star: Concern for the Poor and Social Justice -- Catholic Worker: Voluntary Poverty and Commitment to Peace

● Both are rooted in the Catholic tradition and affirm a commitment to Catholic social teaching and social justice. ● Both include the celebration of diversity and welcoming all persons into their community regardless of race, sex, or religion. ● Both recognize the basic dignity that all human beings have. ● Both work to be in solidarity with those who are suffering, and work to advocate for those without a voice.

Dorothy Day and Manhattan College ● Day engaged the Brothers in the NY area in several ways, including celebrating Mass with them in their chapels near St. Mary’s House in NYC or the Peter Maurin Farm (near Barrytown). ● She visited and spoke at Manhattan College on several occasions and worked with several professors, including Joseph Fahey, a leading scholar in the Catholic peace and labor movements. ● In 1974, she was awarded the Saint La Salle Medal by the College.

A Challenge to Manhattan College ● ROTC

❖ Manhattan College officially houses and supports an Air Force ROTC program. ❖ As a pacifist, Day opposed all wars, as well as the preparedness for war. ❖ In a 1936 article in The Catholic Worker titled Pacifism, Day wrote, "We oppose, moreover, preparedness for war, a preparedness which is going on now on an unprecedented scale and which will undoubtedly lead to war." ❖ Day strongly disapproved of Catholic schools having ROTC programs and of preparing students for an inevitable war.

● Access to Campus

❖ Manhattan College has many resources, including facilities and programs, that could benefit the poor and working class in the Bronx and Yonkers. ❖ For example, it pays prominent speakers to present to only a handful of students and faculty. ❖ In line with the school's Lasallian Mission, Day's vision of hospitality, and Maurin's vision of intellectual engagement with the poor, campus events and resources should be more open to the local community, especially those who are poor.

• The above map shows all of the Catholic Worker hospitality houses currently inhabited and serving the poor throughout the United States. • There are currently 216 hospitality houses in the United States and 32 houses internationally, spanning across 5 continents. • To discover the community closest to you, visit: http://www.catholicworker.org/communities/directory.html

Methodology • Research and understanding of Dorothy Day's vision and ideas were collected with a focus on primary sources. • Much of the research was collected through conversation and visits to Catholic Worker communities. Information was collected through conversations with persons who knew Day personally and have dedicated their lives to continuing her vision. • Articles written by Day and published throughout all points of her life were tracked down in various libraries; oftentimes these sources could only be used in those libraries. • All readings were either written by Day or written by someone who knew her personally. • Research will be developed this semester.

Selected Bibliography / Works Cited Writings by Dorothy Day • “Pacifism” Dorothy Day. 1936. The Catholic Worker. (Catholicworker.org) • “The Aims and Means of The Catholic Worker,” (Catholicworker.org) • “Peter Maurin 1877-1977” May, 1977 The Catholic Worker (Catholicworker.org) • The Long Loneliness: the Autobiography of Dorothy Day. San Francisco, Harper & Row, 1981 c.1952. • Ellsberg, Robert (ed). Dorothy Day, Selected writings: By little and by little. Maryknoll, NY, Orbis Books, c.1992. Writings About Dorothy Day or Peter Maurin • Forest, Jim. All Is Grace: A Biography of Dorothy Day. New York, Orbis Books, 2011. • Ellis, Mark. Peter Maurin: Prophet in the Twentieth Century. New York, Paulist Press, c.1981.191p. • Sheehan, Arthur T. Peter Maurin: Gay Believer. Garden City, NY, Hanover House, 1959 Manhattan College and the Lasallian Mission • Salm, Luke, The Work Is Yours: The Life of Saint John Baptist de La Salle, Christian Brothers Publications, 1996 • Salm, Luke, “Lasallian Values in Higher Education,” La Salle University Address, March 18, 1993 • “Manhattan College Mission and Values” (Manhattan.edu)


AN EXAMINATION OF THE HUMANITARIAN CONCERNS SURROUNDING REFUGEE CRISES
THE EDWARD BRANIGAN SCHOLARS GRANT

By Rachel Gerard, Advisor Dr. Jordan Pascoe

ABSTRACT

REFUGEES AND COSMOPOLITAN HOSPITALITY

In my research I aim to redefine the obligations of developed nations and society as a whole in response to humanitarian crises by use of Immanuel Kant's theory of Cosmopolitan Hospitality. Additionally, I hope to emphasize the importance of human beings over geographic borders, specifically in the case of Syrian refugees, who I argue are trapped in a barbaric state of nature due to their lack of citizenship, identity, and personhood as a result of the ongoing volatile situation in their homeland. Because of their intolerable surrounding state of nature, millions of Syrians are forced to seek asylum from their oppressive circumstances. For many, this escape has led to immense poverty and poor conditions in overcrowded and underfunded refugee camps. For others, this exodus has led to their destruction: the very condition with which Kant legitimizes a person's right to hospitality. At the same time, millions of Syrians also remain internally displaced within the ruins of what was once a functioning state. For these reasons, I argue that only responses of justice -- that is, fundamental changes in international law -- can have an effective impact on the current situation in Syria, as opposed to small-scale acts of charity, which have begun to dwindle immensely at this time. I frame these ideas through first-hand accounts and images of Syrian experiences of various forms of injustice, because media and journalism have played an integral and pervasive role in the responses to the crisis, and these media have provided a platform to further analyze and convey the dire conditions of Syria. I plan to explain these harsh realities through the images and stories of the very people they have affected and continue to affect. Finally, in my research, I underscore the importance of reframing humanitarian conflicts with a universal approach and fundamentally and epistemically shifting the way in which human beings conceive of global problems that seem to recur throughout history.

When Kant commences his discussion of the right to Hospitality, he immediately sets the groundwork for the context of this issue when he states, "we are concerned here with right not with philanthropy" (8:358). Thus he immediately disqualifies hospitality as a matter of altruism and instead deems it a right, or an issue of justice. The connotations of refuge having to do with right, as opposed to virtue, are significant because Kant consequently identifies refuge as a matter of just and juridical importance. Accordingly, refugees are deprived of actualizing their rights, including overall freedom, the acquisition of property, and the ability to develop projects to carry out their rights. This issue of injustice is wrong according to Kant because refugees are hindered from actualizing both their internal freedom and external freedom in the way they are forced to act out of desperation and fear, as a result of conditions and, often, the laws in their own country.

A HOSPITABLE COSMOPOLITAN RESPONSE AND INTERNATIONAL LAW In line with Kantian thought, because Syria itself is not a state by Kant's definition and instead represents a state of nature, it is rightful for other countries to intervene in this situation, not only to liberate Syrians from injustice but also because an existing state of nature poses a threat to universal Cosmopolitan law. Furthermore, Syrian refugees who suffer from an extension of a barbaric state of nature in refugee camps and often desolate poverty also warrant this response. According to Kant, "The state of nature (status naturalis) is not a state of peace among human beings who live next to one another but a state of war, that is, if not always an outbreak of hostilities, then at least the constant threat of such hostilities" (8:349). In other words, not only is a state of nature inherently dangerous to the region in which it manifests itself, but it is moreover a threat to the world in its entirety, because it has the potential to spread violence and threatening conditions in the form of war and hostility. In accordance with this rationale, we can justify the necessary response of other countries on the basis of moral Cosmopolitanism: to cease and inhibit the spread of a state of nature, domestically for the sanctity of its residents as well as for the very condition of international justice.

Because Kant qualifies asylum as a right, we can identify his qualification as stemming from the imperative that every individual possesses freedom. According to Kant, freedom is our only "innate right," or entitlement based on reason. With this sentiment, Kant also suggests that asylum corresponds to the right to actualize freedom in the face of persecution and, at the same time, indicates that human life supersedes geographic distinctions and certain states' rights.

JUSTICE VS. CHARITY

A possible reason for the lack of funding and intervention in Syria is the emphasis on grappling with this crisis through the response of charity rather than attempting to fundamentally reestablish the international guidelines concerned with justice. I suggest not that autonomy be taken away from states, but that states be required to bend certain laws to extraordinary circumstances that transcend their autonomy and threaten humanity as a whole, such as the relocation of persons in crises as immense as the contemporary Syrian refugee crisis.

The United Nations High Commissioner for Refugees defines a refugee as follows: "Refugees are persons fleeing armed conflict or persecution… Their situation is often so perilous and intolerable that they cross national borders to seek safety in nearby countries, and thus become internationally recognized as 'refugees' with access to assistance from States, UNHCR, and other organizations. They are so recognized precisely because it is too dangerous for them to return home, and they need sanctuary elsewhere. These are people for whom denial of asylum has potentially deadly consequences."

SYRIA AS A STATE OF NATURE

The conditions causing this lack of freedom often result from a state of nature environment that represents a unilateral will, accompanied by such poor conditions that the overall rights which citizens, as Kant argues, are by nature inclined to pursue are compromised. As a result, citizens are forced to seek solace from civil states in order to escape this life-threatening, uncivil, and barbaric condition in search of a civil condition -- thus becoming refugees.

Refugees near the border of Ben Gardane, March 2011. Paolo Pellegrin, reproduced from The New York Times

INTRODUCTION: COSMOPOLITAN HOSPITALITY

Immanuel Kant, in his 1795 work Toward Perpetual Peace, outlines the various necessary conditions of the "perpetual peace" he argues in favor of. The most effective condition he presents in terms of human rights concerning refugees is his idea of Cosmopolitan Hospitality. In the third definitive article of his work, Kant argues that hospitality, or the absence of hostility towards a stranger arriving on the soil of a foreign country, is a right warranted to the stranger. Kant legitimizes this right on the basis that all human beings have an equal claim to the common possession of the earth.
According to Kant, because initially all human beings had equal claim to the world, every person is entitled to benevolence everywhere so long as they themselves act nonviolently; accordingly, he illustrates that this type of social interaction cannot be avoided. However, Kant concedes that a stranger may be turned away from a state foreign to them in accordance with a state's rights, unless this person is facing destruction. If the individual is otherwise confronted by his or her death, then they are not to be turned away; thus these individuals are entitled to an implicit extension of hospitality because, in essence, they have no other choice than to be on this foreign soil. Numerous scholars have discussed the ambiguity of Kant's sentiment, questioning the role of negotiation between nations, whether this act of hospitality is an instance of altruism or whether Kant is suggesting that asylum constitutes a moral claim pertaining to human rights, and what specific obligations both the local and the "visitor" have in this arrangement. In my research, my goal is to reconcile these points and offer insight into how to grapple with these tensions from the framework of justice, while at the same time emphasizing the obligations we owe refugees on an international scale using Kantian groundwork.

As a result of these unbearable conditions, millions fled their homeland with little money, hoping for asylum in other regions; however, many of these individuals remain waiting in the poor conditions of underfunded and overcrowded refugee camps in surrounding countries. Others have perished in their attempt to seek asylum in Europe, while at the same time millions of Syrians remain trapped within the total war environment of Syria with no weapons, medical care, or protection, and certainly no solace in this ongoing catastrophe. In a way, these various conditions in existence for Syrians parallel the different states of nature Kant presents. Overall, the ongoing infraction of human rights within Syria constitutes a Kantian state of nature when considering the lack of a juridical system and the ubiquitous evidence of an existing unilateral will without regard for the external freedom of others. For this reason, according to Kant, it is in the best interests of Syrians, with regard to moral law, to exit this state -- which many have attempted to do.

Accordingly, in order to enforce the freedom of some individuals as being compatible with the freedom of everyone, the international law regarding human rights should be clarified to specifically indicate the obligations of each state or collective group of peoples capable of contributing, based on analyzing a variety of matters such as economic factors, population, and so on. This way, in the event of humanitarian catastrophe all states are balanced in their obligations, and subsequently the response to global instability can be improved. Although, in a sense, each country loses the ability to refuse participation in certain emergencies, because of the nature of humanitarian crises and the human rights that are endangered, securing the life of some individuals outweighs the inconvenience of being the agent to do so. Refusing to do so would mean denying the right to Hospitality for refugees and consequently violating Cosmopolitan law.

A United Nations report following the Syria crisis in 2014 described the situation in this region as "the phenomenon of alienation emerging from an environment rife with violence that has systematically empowered inhuman conduct in the execution of heinous crimes and violations that are contrary even to the rules of war. This state of exception has shocked the collective consciousness and entrenched a culture of fear, futility, indifference, passivity and subordination" (U.N. 2014). Here, the savage and barbaric conditions present in the state of nature Kant warns about are all too evident. For so many, the absence of freedom and access to justice has manifested itself in the various travesties these individuals have faced.


Impact of Religious Expression on Perceptions of Job Applicants Presenter: Sherin Shaju – Advisor: Dr. Arno Kolz Manhattan College Jasper Summer Research Scholars Program

Introduction: Previous research shows that job applicants who indicate a religious affiliation receive a negative hiring bias, with Muslim applicants facing the most discrimination (Greenhouse, 2010).
• An experiment created profiles for job candidates on social networks, and job applications were sent out to employers. When comparing interview initiations for a Muslim versus Christian candidate, the Muslim candidate received a 13% lower callback rate (Acquisti & Fong, 2015).
• A study involved confederates dressed in Muslim-identified or nonreligious attire applying for retail jobs; confederates were assigned to conditions in which they did/didn't provide stereotype-inconsistent information. Interactions were shorter and rated more interpersonally negative when applicants wore Muslim attire and did not provide stereotype-inconsistent information in comparison to applicants who wore nonreligious attire (King & Ahmad, 2010).
• Another experiment sent out fictitious resumes to advertised job openings. Resumes were altered to indicate an affiliation with one of seven religious groups or a control group. Results showed applicants who indicated a religious identity in general were 26% less likely to receive a response from employers. Of the seven religious groups included in this study, Muslims suffered the highest levels of discriminatory treatment, Catholics experienced moderate levels, and Jews received no evident discrimination (Wallace, Wright, & Hyde, 2014).
The purpose of the present study is to understand the impact that outward religious expression has on the perceptions of job applicants.

Methods: 97 participants were asked to carefully review a resume and complete two surveys. The selection process was restricted to only include participants who are working a full-time job and have experience reviewing resumes. These participants were randomly assigned to one of six conditions. Six resume conditions were constructed as stimuli. These resumes were identical in content except for the name of the job applicant and the picture that appeared on the resume. The name of the applicant varied to represent a religious affiliation: Aaron Goldstein (Jewish), Mohammad Hussein (Muslim), and Gabriel Thomas (Christian). Three out of six resumes indicated a low degree of religious expression. These resumes were identical, including the picture (a Caucasian male wearing a dress shirt), except for the differing names listed above. The other three resumes indicated a high degree of religious expression. These resumes were identical in content, but the pictures and names varied. Aaron Goldstein's resume was paired with a picture of the same Caucasian male wearing a yarmulke. Mohammad Hussein's resume was paired with a picture of the same Caucasian male wearing a taqiya. Gabriel Thomas's resume was paired with a picture of the same Caucasian male wearing a cross. The pictures paired with these resumes varied to represent varying degrees of religious expression. Although adding photos to resumes is not common practice, it was necessary for the purposes of this study. Negative reactions participants had to the mere presence of a photo were not included in the analysis.
A survey was created which had seven questions to assess participants' willingness to hire the job applicant. Question 1 was "Overall, how qualified for a job do you think this person is?" (QUALIFIED), scored on a numerical scale from 1 (not at all) to 4 (very qualified). Question 2 was "Would you consider hiring this person for a job?" (HIRING), scored from 1 (definitely not hire) to 4 (definitely hire). Question 3 was "What is your general impression of this person?" (IMPRESSION), scored from 1 (very bad) to 4 (very good). Question 4, an open-ended question, asked "Why do you feel this way about this person?". Question 5 was "How professional does this candidate appear?" (PROFESSIONAL), scored from 1 (not at all professional) to 4 (very professional). Question 6 was "How dedicated does this candidate seem to his career?" (DEDICATED), scored from 1 (not at all dedicated) to 4 (very dedicated). Question 7, an open-ended question, asked "If you could change something about this candidate, what would it be?". The open-ended questions were scored by adding up the number of negative (NEGATIVE COMMENT) and positive comments (POSITIVE COMMENT) the participant made. At the end of the survey, there was a question asking whether the participant believed the job applicant was affiliated with a particular religion, which was meant to confirm whether the participant had noticed the religious expression. A second survey asked for participants' demographics.

Results: Significant main effects were found for religion on HIRING (F=4.81, p< .01), DEDICATED (F=3.25, p< .04), NEGATIVE COMMENT (F=6.29, p< .003), and POSITIVE COMMENT (F=.618, p< .003). Inspection of marginal means indicated that in all cases, the Muslim job applicant was viewed the most favorably and the Christian job applicant the least favorably. Significant main effects for religious expression were found for HIRING (F=5.31, p< .024), IMPRESSION (F=4.47, p< .037), and NEGATIVE COMMENT (F=7.34, p<.008). Examination of marginal means revealed that in all cases those who did not express a religious preference were viewed more favorably. Finally, significant interactions were found for HIRING (F=4.07, p< .02) and QUALIFIED (F=2.99, p< .05). In both of these cases, Muslim applicants were judged more favorably when they expressed a religious preference while Christian and Jewish applicants were viewed less favorably when they expressed a religious preference.
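The design is a 3 (religion) x 2 (religious expression) factorial, so main effects and interactions like those reported above can be obtained from a two-way ANOVA. The sketch below shows one way to run such an analysis; the data frame and column names are hypothetical, and this is not the study's actual analysis script.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical ratings for the HIRING item; the real study had 97 participants.
df = pd.DataFrame({
    "hiring":     [3, 4, 3, 2, 3, 4, 3, 2, 4, 4, 3, 3],
    "religion":   ["Christian", "Jewish", "Muslim"] * 4,
    "expression": ["No"] * 6 + ["Yes"] * 6,
})

# Two-way ANOVA with main effects for religion and expression plus their interaction.
model = ols("hiring ~ C(religion) * C(expression)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))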

Discussion: A theme emerged of participants consistently evaluating job candidates negatively as the candidates' degree of religious expression increased. However, unexpectedly, when expressing religious preferences, Muslim applicants were viewed more favorably in comparison to Christian and Jewish applicants. Even when solely examining religion, Muslim applicants were viewed more favorably in comparison to Christian and Jewish applicants. These results can be explained by the social desirability bias, which causes participants to provide survey answers that they deem acceptable, i.e., non-discriminatory responses (Grimm, 2010). Another explanation is the positive feedback bias, which occurs when people provide more praise and less criticism for an individual because they are aware they have a bias against the individual and are overcompensating for it, i.e., evaluating Muslim candidates more favorably to overcompensate for an existing bias against Muslims (Harber, 2004). Limitations of this research study included a small, narrow sample which lacked diversity in both age and gender, with the sample mainly consisting of young females from the New York area.

Table 1. Means and Standard Deviations for All Dependent Variables by Experimental Condition (SD in parentheses)

No Religious Expression
• Christian: Hiring 3.13 (.640), Qualified 3.36 (.633), Impression 3.27 (.704), Professional 3.07 (.799), Dedicated 3.27 (.799), Negative Comment 0.64 (.497), Positive Comment 0.64 (.497)
• Jewish: Hiring 3.35 (.493), Qualified 3.24 (.437), Impression 3.53 (.514), Professional 3.29 (.588), Dedicated 3.65 (.493), Negative Comment 0.18 (.393), Positive Comment 0.82 (.393)
• Muslim: Hiring 3.29 (.469), Qualified 3.07 (.267), Impression 3.36 (.633), Professional 3.14 (.663), Dedicated 3.43 (.514), Negative Comment 0.29 (.469), Positive Comment 1.00 (.000)

Religious Expression
• Christian: Hiring 2.75 (.452), Qualified 3.15 (.801), Impression 2.92 (.760), Professional 2.85 (.899), Dedicated 3.08 (.954), Negative Comment 0.83 (.389), Positive Comment 0.58 (.515)
• Jewish: Hiring 2.75 (.683), Qualified 3.19 (.834), Impression 2.94 (.854), Professional 2.88 (.957), Dedicated 3.25 (.683), Negative Comment 0.69 (.479), Positive Comment 0.56 (.512)
• Muslim: Hiring 3.45 (.596), Qualified 3.59 (.590), Impression 3.41 (.590), Professional 3.55 (.596), Dedicated 3.77 (.528), Negative Comment 0.36 (.492), Positive Comment 0.91 (.294)

Table 2. Marginal Means and Standard Deviations for Religion and Religious Expression (SD in parentheses)

Religion
• Christian: Hiring 2.96 (.587), Qualified 3.26 (.712), Impression 3.11 (.737), Professional 2.96 (.838), Dedicated 3.18 (.867), Negative Comment 0.73 (.452), Positive Comment 0.62 (.496)
• Jewish: Hiring 3.06 (.659), Qualified 3.21 (.650), Impression 3.24 (.751), Professional 3.04 (.805), Dedicated 3.41 (.617), Negative Comment 0.42 (.502), Positive Comment 0.70 (.467)
• Muslim: Hiring 3.39 (.549), Qualified 3.39 (.549), Impression 3.39 (.599), Professional 3.39 (.645), Dedicated 3.64 (.543), Negative Comment 0.33 (.478), Positive Comment 0.94 (.232)

Religious Expression
• Yes: Hiring 3.06 (.682), Qualified 3.35 (.744), Impression 3.14 (.749), Professional 3.16 (.857), Dedicated 3.43 (.755), Negative Comment 0.58 (.499), Positive Comment 0.72 (.454)
• No: Hiring 3.26 (.535), Qualified 3.22 (.471), Impression 3.39 (.614), Professional 3.17 (.677), Dedicated 3.46 (.622), Negative Comment 0.36 (.484), Positive Comment 0.82 (.387)

References:

Acquisti, A., & Fong, C. (2015, July 17). An Experiment in Hiring Discrimination Via Online Social Networks. Retrieved June 10, 2016.
Greenhouse, S. (2010). Muslims report rising discrimination at work. The New York Times, 23.
Grimm, P. (2010). Social desirability bias. Wiley International Encyclopedia of Marketing.
Harber, K. D. (2004). The Positive Feedback Bias as a Response to Out-Group Unfriendliness. Journal of Applied Social Psychology, 34(11), 2272-2297.
King, E. B., & Ahmad, A. S. (2010, November 02). An Experimental Field Study of Interpersonal Discrimination Toward Muslim Job Applicants. Personnel Psychology, 63(4), 881-906.
Wallace, M., Wright, B. R., & Hyde, A. (2014). Religious Affiliation and Hiring Discrimination in the American South: A Field Experiment. Social Currents, 1(2), 189-207.


Examining the Effects of Socioeconomic Status on News Media Consumption in the Bronx By Anthony Capote Research Adviser: Dr. Cory Blad

Research Questions
How do people of varying socioeconomic statuses consume news? What sorts of news are most important to people on the basis of social class? How do people get their news?

Key Assumptions
• Poorer people will more often read tabloids and watch local broadcasts
• Wealthier people are more likely to read broadsheets and watch national news
• Wealthier people consume news more often overall

Data Analysis • News media companies and scholars might have seriously misinterpreted the consumption habits of their intended audiences • Internet will likely play a major role in how scholars understand news consumption • Conventional ideas of what kinds of news matter to the poor—as well as to the wealthy and middle class—are generally wrong Conclusions • Much of the conventional wisdom and beliefs surrounding the culture of news media consumption are seriously outdated and, in many cases, dead wrong • The poor are not more inclined to read tabloids or watch local news


Methods • Survey sample • 57 respondents



Searching for the Historicity of Arthurian Legends
By John Evans Advisor: Dr. Jennifer C. Edwards

In the year 1190, the monks of Glastonbury Abbey made a remarkable claim. According to the historian Gerald of Wales, Abbot Henry had discovered the lost grave of King Arthur. Influenced by dreams of the legendary King, Glastonbury's monks dug between two large stone structures that had worn away with time. Sixteen feet under these stone markers they found a large stone and a lead cross with an inscription: "Here lies buried the famous King Arthur with Guinevere his second wife in the isle of Avalon." Digging further, the monks located a large oak coffin containing two skeletons: one belonged to an enormous man with a skull wound visible; the other belonged to a woman with preserved blonde hair. Gerald relates that a monk reached greedily into the grave to snatch up the hair in a state of ecstasy. But upon clasping the golden locks, the hair disintegrated in his grasp. This was no matter. The monks had unearthed what they had been looking for. Their archeological find received royal patronage almost at once. Pilgrims flocked to the otherwise unpopular and destitute Abbey. Arthur, the Once and Future King, was home. According to Gerald, a great stone tomb was constructed for the royal couple and placed in one of the Abbey's chapels.

Primary Sources
• Gildas (6th c)
• Jordanes (6th c)
• Bede (7th c)
• Nennius (9th c)
• Annales Cambriae (10th c)
• Geoffrey of Monmouth (12th c)
• William of Malmesbury (12th c)
• Gerald of Wales (13th c)
• Sir Thomas Malory (15th c)

The Legendary Arthur

The King Arthur as we know him derives from many sources, each built upon a patchwork of contradictory lore. Much of the Arthur of legend can be traced back to Sir Thomas Malory. This fifteenth-century author was by no means the original transmitter of the epic. On the contrary, much of his book, "Le Morte D'Arthur," draws from previous sources such as the embellished work of Geoffrey of Monmouth's "The History of the Kings of Britain." It is Malory who assembles the current framework of Arthurian romance, giving us the chivalrous knight in shining armor, the sword and the stone, the adultery of Guinevere, the quest for the Holy Grail, the wizard Merlin, and the regal castle of Camelot.

For centuries the bones of Arthur slept peacefully in the heart of Glastonbury undisturbed by the machinations of Medieval Britain. But all of that would change when Henry VIII broke with Rome and dissolved the monasteries. The Glastonbury Abbey was sacked, its relics spoiled, and its secrets burnt to the ground. All that remains of the relics discovered in 1190 are the shattered remnants of Arthur’s second regal tomb. But was this truly the last resting place of King Arthur? If so, who was he? Where might we find him in contemporary medieval sources?

Ambrosius Aurelianus

Gildas, Bede, Nennius, and Geoffrey all refer to this Romano-British general who lived during the late 400s. Each successive author accords Ambrosius a lesser and lesser role in the defense of Britain. In our earliest source, Gildas, he occupies center stage as the Romano-Briton "left alone" to protect the island against the Saxon horde. We are told that his parents were "adorned with the purple" and that they were likely slaughtered in the Saxon advance. We are also told that the British won victory at Badon Hill, a turning point in the conflict with the Saxons. The question remains: was Ambrosius at Badon? The victory at Badon seems to be associated with Ambrosius in Gildas and Bede. Yet Nennius depicts Badon as Arthur's greatest victory, with the slaughter of over 900 men in one day.

Riothamus

The sixth-century historian Jordanes recorded the life of the "King of the Britons" called Riothamus. Unlike other candidates for Arthur, we have written evidence of his existence preserved in a letter addressed to him from the Gallic senator Sidonius Apollinaris. The letter is striking for its portrayal of Riothamus as someone deserving respect. The name Riothamus may be translated as "high king" and may be a title, which leaves open the possibility that this figure bore the name Arthur. It is unclear whether this king was overlord of Britain or of Brittany, a French province where Britons had migrated and preserved their ancient customs. Riothamus apparently left his power base to fight the Visigoths, who wished to seize France from the Romans. He may have disappeared near a town named Avalon, a name that appears as the last resting place for Arthur in Geoffrey's epic.


FACTORS INFLUENCING PATIENTS’ MEDICAL FACILITY DECISIONS Madison Swenton, Maria Maust-Mohl Department of Psychology, Manhattan College

Abstract

Results

Discussion & Conclusion

The goal of this study was to better understand which factors influence medical facility choice and satisfaction. Participants (n = 97) were recruited through social media and e-mail to respond to the anonymous online survey created and distributed on Qualtrics. The results demonstrated that primary care physician’s offices were the most recently visited and most preferred facility choice. The top factors for recently visiting a primary care physician included a previous visit, insurance coverage, and an established relationship with a doctor. A majority of participants recently visited the facility they preferred and were satisfied with their care. These findings can help inform improvements in the care being offered at various medical facilities.

The primary care physician's office was the most preferred (n=74 participants, 76.3%) and most recently visited facility choice (n=61 participants, 62.9%). Urgent care was the second most preferred and visited facility, followed by hospitals, nursing homes, and other facilities (Figure 1). The survey items were reliable (Cronbach's α = 0.676). Using a cross tabulation analysis, the top factors for most recently visiting a primary care physician were a previous visit to that office, insurance coverage, and an established relationship with a doctor (Table 1). Overall, participants were satisfied with the quality of the care they received at any facility. The highest satisfaction ratings were reported for primary care, followed by urgent care and hospitals. Satisfaction appeared to be influenced by the duration of care and the communication skills of medical staff.
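As a rough illustration of how the reliability and cross-tabulation figures reported here can be computed, the sketch below implements Cronbach's alpha from its standard formula and a simple cross tab in pandas. The item data and column names are invented for illustration and are not the study's variables.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical Likert responses (1-5) to three survey items.
items = pd.DataFrame({
    "previous_visit":      [5, 4, 4, 2, 5, 3],
    "insurance_coverage":  [4, 4, 3, 2, 5, 3],
    "doctor_relationship": [5, 3, 4, 1, 5, 4],
})
print(round(cronbach_alpha(items), 3))

# Cross tabulation of facility choice against one decision factor (hypothetical columns).
responses = pd.DataFrame({
    "facility": ["primary care", "urgent care", "primary care", "hospital", "primary care"],
    "had_previous_visit": ["yes", "no", "yes", "no", "yes"],
})
print(pd.crosstab(responses["facility"], responses["had_previous_visit"]))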

The results of this study showed that primary care physicians were the most preferred (76.3%) and most recently visited (62.9%) by patients. Urgent care was the second most preferred (17.5%) and most recently visited (16.5%), followed by hospitals, other facilities, and nursing homes (Figure 1). The top factors for recently visiting primary care were previously visiting that location, insurance coverage, and an established relationship with a doctor (Table 1). Anticipated wait time (87.50%) and proximity to home (81.25%) appear to play a large role in participants' decisions to visit urgent care. These results did not support my hypothesis, but suggest that participants were more familiar and comfortable with their primary care physician and were thus more likely to visit those facilities. Previous studies support the significance of the established patient-doctor relationship, which plays a major role in greater satisfaction (Cleary & McNeil, 1988; Boquiren, Hack, Beaver, & Williamson, 2015). Urgent care clinics tend to involve quick and flexible service, but lack the patient-doctor relationship that exists in primary care offices.

Introduction


Healthcare is an important aspect of every American's life, yet there are constant concerns about quality of service, insurance policies, convenience, and expenses. Patient satisfaction surveys can be used to obtain feedback from people utilizing different medical services. The results of these surveys can allow for comparisons among facilities and doctors, provide incentives to improve patients' experiences, and help other patients make healthcare decisions. Inquiries into patient satisfaction have been used as a technique since the consumer movement of the 1960s (Boquiren, Hack, Beaver, & Williamson, 2015), but inconsistent methods have resulted in challenges in evaluating the quality of medical care. More recent studies of patient satisfaction have tended to focus on specific departments, such as oncology (e.g., Lis, Rodeghier, & Gupta, 2009). However, little research has looked at the more general aspects of patients' medical care choices. A few studies have identified factors that may impact medical care choices, such as proximity to home, severity of illness/symptoms, waiting time, and more (e.g., Andersen & Newman, 2005). Investigating these factors and how they relate to patient satisfaction can help further improve the United States healthcare system.


Goal of this study: To focus on patients' recent choice of medical facility, satisfaction at that particular facility, and whether it differs from the patient's preferred facility. We predict that urgent care clinics will be the most frequently visited facility, with good feedback about quality of care. We believe the private physician's office will be the least visited by patients, mainly due to wait time.

Method
Participants: We recruited 97 participants (67 females, 24 males, 6 unknown, ages 18 and older) through email, Facebook, and Twitter, as well as students in summer courses in Psychology at Manhattan College.
Procedure: The anonymous survey was created and distributed online through Qualtrics. The survey consisted of 10 demographic questions and 23 statements pertaining to the most recently visited facility choice and satisfaction at that particular facility (scale: 1 (Strongly disagree) to 5 (Strongly agree)). The last question addressed whether the preferred facility differed from the most recently visited facility choice and why that might be. All data were analyzed using Microsoft Excel and SPSS.


Figure 1. Participants' most preferred vs. most recently visited medical facility choice
Table 1. Primary factors and reasons for satisfaction provided by participants who selected the primary care physician's office as their most recently visited facility (% based on 61 out of 97 participants)

TOP FACTORS (n and % of 61 participants)
1) Previous visit to that location: Strongly Agree 21 (34.43%), Agree 28 (45.90%), Neutral 3 (4.92%), Disagree 7 (11.48%), Strongly Disagree 2 (3.28%)
2) Insurance coverage: Strongly Agree 14 (22.95%), Agree 31 (50.82%), Neutral 8 (13.11%), Disagree 5 (8.20%), Strongly Disagree 3 (4.92%)
3) Established relationship with a doctor: Strongly Agree 8 (13.11%), Agree 37 (60.66%), Neutral 7 (11.48%), Disagree 5 (8.20%), Strongly Disagree 4 (6.56%)

SATISFACTION (n and % of 61 participants)
1) Duration of care: Strongly Agree 18 (29.51%), Agree 39 (63.93%), Neutral 3 (4.92%), Disagree 0 (0%), Strongly Disagree 1 (1.64%)
2) Communication skills of staff: Strongly Agree 25 (40.98%), Agree 32 (52.46%), Neutral 3 (4.92%), Disagree 1 (1.64%), Strongly Disagree 0 (0%)
3) Satisfaction with overall experience: Strongly Agree 21 (34.43%), Agree 35 (57.38%), Neutral 3 (4.92%), Disagree 1 (1.64%), Strongly Disagree 1 (1.64%)

Regardless of the type of medical facility, the participants were satisfied with their overall experience at their most recently visited facility. However, participants appeared to be more satisfied with primary care and urgent care, than hospitals. For the most part, people reported high satisfaction at each facility or remained neutral. A majority of participants seemed to feel that being comfortable was the most important part of healthcare and that is why they return to and prefer their primary care physician’s office. This finding was consistent with other results about patient satisfaction. People tend to be more satisfied when the care is more personal (Cleary & McNeil, 1988). These results suggest that people perceive comfortability and familiarity as good quality of care, resulting in higher satisfaction. Although we live in a fast paced world, there are certain factors connected with the quality of care that appear to have greater influence on people’s choice of medical facilities they visit. Further investigations of the driving factors for patients’ choices and experiences are needed and should include a more diverse sample. Studying how these factors may vary from different cultural backgrounds may allow for new healthcare policies to be developed. Evolving patient satisfaction surveys will help the healthcare field grow with the changing needs of people.

References
Andersen, R., & Newman, J. F. (2005). Societal and Individual Determinants of Medical Care Utilization in the United States. The Milbank Quarterly, 83(4), 1-28.
Boquiren, V. M., Hack, T. F., Beaver, K., & Williamson, S. (2015). What do measures of patient satisfaction with the doctor tell us? Patient Education and Counseling, 98, 1465-1473.
Cleary, P. D., & McNeil, B. J. (1988). Patient Satisfaction as an Indication of Quality Care. Inquiry, 25(1), 25-36.
Lis, C. G., Rodeghier, M., & Gupta, D. (2009). Distribution and determinants of patient satisfaction in oncology: A review of the literature. Patient Preference and Adherence, 3, 287-304.


Does high self-efficacy in a particular life domain offset the negative impact of low self-efficacy in other life domains? A study of college students and workers. Raymond Caridi Department of Psychology, Manhattan College Under the supervision of Dr. Nuwan Jayawickreme


Introduction Self-efficacy is the belief in one’s own capability to perform a task (Gist, 1987). There is a positive relationship between self-efficacy and overall well-being (Tong and Song, 2004) People can have distinguishably different levels of self-efficacy in different domains of life (Lent et al., 1997). In the current study , we tested the hypothesis that high selfefficacy in one domain (e.g., at school) will moderate the relationship between low self-efficacy in a second domain (e.g., the workplace) and overall subjective well-being, such that high self-efficacy in one domain reduces the negative impact of low self-efficacy in the other domain on subjective well-being.

Methods
• Data was collected from 84 Manhattan College students and 34 full-time workers employed as maintenance workers or debt collectors.
• These participants were recruited through snowball sampling and were asked to fill out surveys which contained scales measuring self-efficacy at work, self-efficacy at school, self-efficacy at home, and overall satisfaction with life.
• Academic self-efficacy was measured using the Patterns of Adaptive Learning Survey (PALS; Midgley et al., 2000). Familial and work self-efficacy were measured using an adaptation of the General Self-Efficacy scale (Schwarzer & Jerusalem, 1995). Overall well-being was measured using the Satisfaction with Life scale (Diener, Emmons, Larsen & Griffin, 1985).
• A moderation analysis was run using the PROCESS Macro for SPSS in order to analyze interactions among the variables (Hayes, 2016). A minimal sketch of an equivalent interaction model appears below.
• Mahalanobis distances, Cook's distances, and centered leverage values were measured in order to identify any outliers.
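The PROCESS macro runs inside SPSS or SAS; an equivalent moderation model can also be fit as an ordinary regression with an interaction term between mean-centered predictors. The sketch below illustrates that approach with invented data and hypothetical variable names; it is not the study's actual analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scale scores; in the study these came from the PALS, the adapted GSE, and the SWLS.
df = pd.DataFrame({
    "well_being":  [22, 25, 18, 30, 27, 15, 29, 21, 24, 26],
    "academic_se": [3.1, 3.8, 2.5, 4.5, 4.0, 2.0, 4.2, 3.0, 3.6, 3.9],
    "work_se":     [2.9, 3.5, 2.8, 4.1, 3.9, 2.2, 4.4, 2.7, 3.3, 3.8],
})

# Mean-center the predictors so the interaction coefficient is interpretable.
for col in ("academic_se", "work_se"):
    df[col + "_c"] = df[col] - df[col].mean()

# Moderation model: does work self-efficacy moderate the academic self-efficacy -> well-being link?
model = smf.ols("well_being ~ academic_se_c * work_se_c", data=df).fit()
print(model.summary().tables[1])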


Results
• Academic self-efficacy (r = .40, p = .000), familial self-efficacy (r = .35, p = .000), and work self-efficacy (r = .29, p = .002) had significant positive correlations with well-being.
• There was no overall moderation effect of the different types of self-efficacy on each other when predicting well-being.
• However, at different levels of self-efficacy, there was some evidence of moderation (see Table 1).

Table 1
• If family self-efficacy is low AND work self-efficacy is low, then academic self-efficacy significantly predicts well-being (p = .0116)
• If family self-efficacy is average AND work self-efficacy is low, then academic self-efficacy significantly predicts well-being (p = .0005)
• If family self-efficacy is average AND work self-efficacy is average, then academic self-efficacy approaches significance (p = .0695)
• If family self-efficacy is high AND work self-efficacy is low, then academic self-efficacy significantly predicts well-being (p = .0114)
• If family self-efficacy is high AND work self-efficacy is average, then academic self-efficacy significantly predicts well-being (p = .0183)
• If work self-efficacy is low AND academic self-efficacy is average, then familial self-efficacy approaches significance (p = .0702)
• If work self-efficacy is average AND academic self-efficacy is average, then familial self-efficacy approaches significance (p = .0617)
• If work self-efficacy is average AND academic self-efficacy is high, then familial self-efficacy significantly predicts well-being (p = .0076)
• If work self-efficacy is high AND academic self-efficacy is high, then familial self-efficacy significantly predicts well-being (p = .0207)

Conclusion
• The results suggest that, as opposed to domains of self-efficacy moderating one another, they actually have their own individual effects on well-being (the more domains one has high self-efficacy in, the greater their life satisfaction).
• Academic self-efficacy predicts well-being when one of the other domains is low or average.
• Familial self-efficacy seems to predict well-being more significantly when the other domains increase.
• Work self-efficacy does not appear to predict well-being.
• A study with a larger sample size could potentially find an overall moderation effect.

References
• Diener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The satisfaction with life scale. Journal of Personality Assessment, 49, 71-75.
• Gist, M. E. (1987). Self-Efficacy: Implications for Organizational Behavior and Human Resource Management. The Academy of Management Review, 12(3), 472-485.
• Hayes, A. F. (2016). The PROCESS Macro for SPSS and SAS. Retrieved from http://processmacro.org/index.html
• Lent, R. W., Brown, S. D., & Gore Jr., P. A. (1997). Discriminant and Predictive Validity of Academic Self-Concept, Academic Self-Efficacy, and Mathematics-Specific Self-Efficacy. Journal of Counseling Psychology, 44(3), 307-315.
• Midgley, C., Maehr, M. L., Hruda, L. Z., Anderman, E., Anderman, L., Freeman, K. E., et al. (2000). Manual for the Patterns of Adaptive Learning Scales. Ann Arbor, MI: University of Michigan.
• Schwarzer, R., & Jerusalem, M. (1995). Generalized Self-Efficacy scale. In Measures in health psychology: A user's portfolio. Causal and control beliefs (pp. 35-37).
• Tong, Y., & Song, S. (2004). A Study on General Self-Efficacy and Subjective Well-Being of Low SES College Students in a Chinese University. College Student Journal, 28(4), 637-642.


SCHOOL OF SCIENCE 2016 Summer Research


Development of the New Small Wheel for the ATLAS Experiment- Micromegas Construction Alexander Karlis ● Manhattan College ● 4513 Manhattan College Pkwy, Bronx, NY 10471 Givi Sekhniaidze ● CERN ● Building 154, Route de Meyrin 385, 1217 Meyrin, Switzerland The Micromegas Detector cont…

LHC Overview

• Proton-proton collisions at 13 TeV
• Circumference: 27 km
• Depth: 50-175 m
• Proton speed: 0.99999999 of the speed of light
• 600 million collisions every second
• World's largest fridge: T = 1.9 K
• Ultra-high vacuum: P = 10^-13 atm
• Magnetic field: 9 T
• Cost: $10 billion to build, not including operational costs
• Countries: 80
• Universities: 500
• Scientists and students: 7,000

3.) After the gel coat has dried, eight layers of carbon fiber are laid. To harden the carbon fiber, a vacuum bag is created over it and epoxy is applied while the vacuum is on.

The ATLAS Detector and the ‘New Small Wheel’

Charged particles traversing the drift space ionize the gas; the electrons liberated by the ionization process drift towards the mesh. The electron avalanche takes place in the thin amplification region, immediately above the readout electrode.

Development of Vacuum Tables


A significant part of the muon trigger rate in the end-caps is background: an analysis of 2012 data demonstrates that approximately 90% of the muon triggers in the end-caps are fake. The NSW will have to operate in a high-background-radiation region while reconstructing muon tracks with high precision and furnishing information for the Level-1 trigger, and it must confirm online the existence of track segments found by the muon end-cap middle station (Big Wheels).

Development of the Vacuum Tables cont…

Vacuum tables are the tools used to facilitate the building of the detector panels for the New Small Wheel, including readout panels and drift panels.

4.) After the vacuum tables have been created, their surfaces are tested for flatness while the vacuum is on. If there are any deformations in the surface, holes are drilled in the general vicinity and glue is injected into the holes to fill any air gaps that may have arisen in the table.

1.) Preparation for the gel coat involved cleaning the granite table three times.

5.) Completed vacuum tables being put to use.

The Micromegas Detector

2.) The gel coat was then applied to the clean granite table with a certain thickness.

'Micromegas' stands for 'micro-mesh gaseous structure'. MM detectors consist of a planar (drift) electrode, a gas gap a few millimeters thick acting as the conversion and drift region, and a thin metallic mesh, typically 100–150 µm from the readout electrode, creating the amplification region.

Acknowledgements I would first like to thank Dr. Konoplich for allowing me to join his trip to Geneva and to learn from such an amazing experience. I would also like to thank Givi Sekhniaidze for being a great mentor and boss while I worked on this project. And last, but not least, I would like to thank the CERN community for being so welcoming and helpful in my quest to become a physicist.


Dynamics of Flowering Branches of Artemisia tridentata Ismael Pena, Biology Department

Purpose:
• The purpose of this study was to determine growth characteristics of branches, vegetative and flowering, using stem samples of Artemisia tridentata from June 2015 to November 2015.

Background:
• Stems of Artemisia tridentata show eccentric growth that inhibits their ability to grow tall.
• In contrast, each plant produces 20-50 stems per year and each stem has 100s of seeds per year.

Date        Cumulative Length of All Branches (mm)   Total Number of Branches   Total Length of New Growth   Total Length of Stem (mm)
06/04/15    307.94                                   14                         126.209                      302.1
09/10/15    366.2                                    17                         210.155                      259.507
10/01/15    1260.55                                  22                         226.93                       282.935
11/23/15    1260.05                                  22                         289.80                       311.10

Figure: Cumulative branch length of samples per branch (cumulative branch length, mm, versus number of branches, for the 06/04, 09/10, 10/01, and 11/23 samples).

Material & Methods: 1. Stem samples of Artemisia tridentata were collected from June–November 2015 and sent to us from Thistle, Utah. 2. Each sample was laid out and the branches were marked and removed from the main stem. 3. The main stem was then cut into smaller sample pieces, which were put into various alcohol solutions. 4. Samples were put in wax, made into blocks, then cut with a microtome to make microscope slides.

Results: • Our results show that as time passes, from June to November, the branches change from vegetative to flowering. • As time passes we observed an increase in the cumulative branch lengths as well as the number of branches per stem. • The total length of new growth also increases. The author is grateful to the Catherine and Robert Fenton Endowed Chair to Dr. L.S. Evans for financial support for this research.



Tree Branches as Fractals
Christina Hibner

Abstract

Results

Samples of terminal branches from five tree species were studied to determine if their geometric similarities resemble fractals. Three generally accepted procedures were used to determine if branch terminals were fractals: box counting, self-similarity, and the Newman method. With box counting and self-similarity techniques, fractal values of 1.15 to 1.42 were obtained for whole branches. Similar fractal values were obtained from simple bifurcation-like terminals and from samples when additional side branches were added. Newman analysis showed very poor results. Terminal branches show some fractal characteristics.

Fig.2 Platanus occidentalis (SYC)

The next method of segmenting the branches for analysis is shown in Figure 3 and mainly tests for self-similarity. First, all forked branches resembling the letter "Y" were compared against each other. Next, a single branch was added to the previous images, titled "Y+1," and run through FracLac. Then another single branch was added to the previous "Y+1" image, titled "Y+2," and so on in this fashion (Fig. 3). All the Y+# branches across species have a similar structure, so their fractal dimension numbers should all be about the same no matter which branch was added (Table 2). The results obtained support this expectation: all fractal dimension numbers are around 1.2, which is the fractal dimension of that pattern.

Background

Methods and Materials For this study, images of actual tree branches were traced as line drawings with Paint.NET. Thereafter, the backgrounds were removed so that the background is devoid of pixels. These line drawings were analyzed using three different approaches, the first two using ImageJ FracLac to get a fractal dimension number. ImageJ FracLac is an open-source program which uses box counting to analyze fractal patterns and is a widely accepted method for fractal analysis. The box-counting program superimposes a grid of boxes over the pattern to be analyzed (Fig. 1). FracLac measures how much space a pattern takes up by using increasingly smaller box side lengths, counting the number of boxes that overlap the pattern for each box size, and then taking the ratio. The fractal dimension is the slope of the box count over the box size. Several different methods were used to section off the branch drawings to obtain data points that trend linearly. First, samples were segmented at their nodes down the main branch, then cumulatively added node after node until the entire branch was analyzed (Fig. 2). As a general trend, as more of the branch is added to the analyzed image, the calculated fractal dimension number increases linearly (Table 1). This is to be expected because, as the branch becomes more and more complete, it approaches the total fractal dimension number.
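FracLac performs the box counting internally. Purely as an illustration of the idea (not the FracLac implementation), the count-and-fit step can be sketched as follows, assuming the traced branch drawing is available as a binary NumPy array; the function name and box sizes are illustrative choices.

```python
import numpy as np

def box_count_dimension(image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate a box-counting fractal dimension of a binary image.

    image: 2-D boolean array, True where the traced branch has pixels.
    Returns the slope of log(number of occupied boxes) vs. log(1 / box size).
    """
    counts = []
    for size in box_sizes:
        # Trim so the image divides evenly into size x size boxes.
        h = (image.shape[0] // size) * size
        w = (image.shape[1] // size) * size
        trimmed = image[:h, :w]
        # Reshape into blocks and mark boxes that contain any pattern pixel.
        blocks = trimmed.reshape(h // size, size, w // size, size)
        occupied = blocks.any(axis=(1, 3))
        counts.append(occupied.sum())
    # Fractal dimension = slope of the log-log fit of box count vs. inverse box size.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope
```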

Table 1. Fractal dimension by cumulative node number (box counting)
Species                        1      1-2    1-3    1-4    1-5    1-6
Acer palmatum (JAP)            1.26   1.26   1.31   1.32   1.30   1.32
Cornus florida (DOG)           1.27   1.24   1.24   1.33   1.29   1.33
Cornus sericea (OSI)           1.17   1.22   1.20   1.23   1.24   1.26
Lagerstroemia indica (CRA)     1.30   1.32   1.26   1.29   1.30   1.31
Crataegus monogyna (HAW)       1.16   1.16   1.18   1.17   1.19   1.17
Morus rubra (RED)              1.16   1.17   1.18   1.20   1.19   1.20
Platanus occidentalis (SYC)    1.27   1.27   1.29   1.30   1.31   1.29
Zelkova serrata (ZEL)          1.17   1.18   1.25   1.25   1.28   1.32

When two first-order branches converge, they form a second-order branch. When two second-order branches converge, they form a third-order branch, and so on. However, when a lower-order branch runs into a higher-order branch, the following branch takes the higher-order number (Fig. 4, Table 3).

A fractal is a simple, infinitely repeating pattern resulting in very complex shapes. The word 'fractal' comes from the Latin word fractus, meaning 'broken'; it was coined by Benoit Mandelbrot in 1975 and popularized in his book "The Fractal Geometry of Nature." Mandelbrot defined a fractal as a rough geometric shape that can be split into repeating parts that are identical, just smaller; the term 'self-similarity' also describes this phenomenon. These shapes are commonly found in nature and are as infinite as reality will allow. For example, a binary tree branch starts at one branch, then goes to two, four, eight, sixteen, thirty-two, and so on; however, environmental factors and the limits of biology prevent trees from being true fractals. These structures have a unique geometrical property in that they are self-similar [Hibner].

Table 2. Fractal dimension by branch segment (self-similarity method)
Species                        Entire   Y      Y+1    Y+2    Y+3    Y+4
Acer palmatum (JAP)            1.32     1.26   1.22   1.22   1.26   1.23
Cornus florida (DOG)           1.42     1.26   1.27   1.23   1.24   1.26
Cornus sericea (OSI)           1.29     1.22   1.25   1.22   1.23   1.22
Lagerstroemia indica (CRA)     1.35     1.27   1.25   1.21   1.21   1.20
Crataegus monogyna (HAW)       1.15     1.17   1.17   1.17   1.17   1.18
Morus rubra (RED)              1.26     1.25   1.22   1.20   1.22   1.19
Platanus occidentalis (SYC)    1.30     1.29   1.24   1.27   1.28   1.29
Zelkova serrata (ZEL)          1.32     1.26   1.17   1.17   1.18   1.18



Another method was found in a paper by W. I. Newman, in which the fractal dimension is hand-calculated from the outside in, using ratios of the number of exterior (first-order) branches and their lengths versus the number of second-order branches and their lengths (Table 3).
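For illustration only, one common way such a ratio-based hand calculation is organized (a Horton-ratio-style estimate; this is an assumption about the bookkeeping, not a reproduction of the poster's exact Newman calculation) is sketched below, with made-up branch counts and mean lengths.

```python
import math

# Hypothetical branch statistics for one traced branch system:
# counts and mean lengths by branch order (order 1 = exterior branches).
n_first_order, n_second_order = 24, 6          # number of branches per order
mean_len_first, mean_len_second = 3.0, 7.5     # mean branch lengths (cm)

# Horton-style ratios: how quickly counts fall and lengths grow with order.
branching_ratio = n_first_order / n_second_order
length_ratio = mean_len_second / mean_len_first

# A common ratio-based estimate of the fractal dimension of a branching network.
fractal_dimension = math.log(branching_ratio) / math.log(length_ratio)
print(f"Estimated fractal dimension: {fractal_dimension:.2f}")
```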

Conclusion The fractal dimension increases linearly as the number of side branches increases. Considering more side branches makes the branch more complex, so the linear addition of branches eventually approaches the fractal dimension for that species. As expected, species with more side branches had higher fractal dimension values than species with fewer side branches. The self-similarity approach (Y method) used here gave values similar to box counting. As with the box-counting approach, considering additional side branches eventually gives a relatively constant fractal value for each species. Newman analysis gave very poor results, both within each species and among species.

Acknowledgements Dr. Evans, Dr. Roy, Dr. Liby, Joe Brucculeri and Jesse Jehan, Research Scholars, Manhattan College. The author is grateful to the Catherine and Robert Fenton Endowed Chair to Dr. L.S. Evans for financial support for this research.

References "FracLac for ImageJ" "FracLac for ImageJ" Web. 11 July 2016. <https://imagej.nih.gov/ij/plugins/fraclac/FLHelp/Introduction.htm>. Hibner, Christina “Fractal patterns of diffraction and interference patterns.” Manhattan Scientist, 2015 Newman, W. I. "FRACTAL TREES WITH SIDE BRANCHING." Fractals, Vol. 5, No. 4 (1997) 603–614 World Scientific Publishing Company. Web. <https://www.math.purdue.edu/~agabriel/tree.pdf>. "Paint.NET - Free Software for Digital Photo Editing." PaintNET RSS. N.p., n.d. Web. 21 July 2016. <http://www.getpaint.net/index.html>


Distribution of Secondary, Tertiary and Quaternary Areas of Plant Leaves By Jorge Gonzalez, Biology Department

Purpose: The purpose of this study was to determine if secondary, tertiary and quaternary areas were scaled to total leaf area for leaves of several species. Materials and Methods 1. Six leaves were chosen from the Manhattan College campus. 2. The leaves were put into alcohol to decolorize the pigments and then stained. 3. Secondary, tertiary and quaternary areas were photographed and measured with ImageJ (National Institutes of Health).

Leaf areas are surrounded by veins.

Figure: Determination of tertiary areas.
Figure: Determination of quaternary areas.

Species                   Total Leaf Area (cm2)   Secondary Areas, mean (cm2)   Tertiary Areas, mean (cm2)   Number of Tertiary Areas   Quaternary Areas, mean (cm2)   Number of Quaternary Areas
Hydrangea macrophylla     92.2                    1.83                          0.39                         236                        0.004                          23,050
Fraxinus pennsylvanica    94.8                    1.61                          0.43                         220                        0.002                          47,400
Quercus alba              101.4                   1.86                          0.12                         845                        0.002                          50,700
Viburnum lentago          124.8                   3.3                           0.29                         430                        0.015                          8,320
Tilia americana           128                     6.85                          0.36                         356                        0.002                          64,000
Populus deltoides         141.2                   6.7                           0.29                         487                        0.004                          38,162

Tertiary areas are similar for six species.
Quaternary areas are similar for six species.
Secondary leaf areas are scaled to total leaf areas.

Future Research: Is there a relationship between the number of xylem cells and the areas of leaves that the xylem cells will provide water? For example, quaternary leaf areas should require fewer xylem cells to provide water compared to secondary leaf areas.

The author is grateful to the Catherine and Robert Fenton Endowed Chair to Dr. L.S. Evans for financial support for this research.

Vein order        Number of xylem cells
Primary vein      163
Secondary vein    49
Tertiary vein     15


Xylem Conductivity Characteristics in Grass Plants Humberto Ortega, Biology Department

Background

Methods

Plants need water to grow. Xylem cells conduct water from roots to stems and leaves. Xylem conductivity (McCulloh, et al. 2003) is a measure of the ability of a tissue to transport water. Water in stems must be conducted to leaves. What percentage of water in stems is conducted to the leaves?

Purpose The hypothesis of this experiment was that tropical grass species would have a higher percentage of stem xylem conductivity contributed to their leaves than temperate grass species. The rationale for this hypothesis is that tropical plants live in warm, humid environments in which leaf transpiration rates should be much higher than for temperate plants, which live in cooler, less humid environments. Water travels from the stem into the leaf sheath, which then feeds some of the water into the leaf blade.

http://archive.gramene.org/species/oryza/images/japonica_tiller_molly_labeled.jpg

http://facweb.furman.edu/~lthompson/bgy34/plantanatomy/leaf_monocotxs_large_labeled.gif

As seen in the image to the left and results on the right, there are many more vascular bundles in the stem compared to the relatively few bundles found in leaves.

https://s-media-cache-ak0.pinimg.com/736x/77/1d/12/771d1222a6a348db2ea10f6c305229b1.jpg

Reference: McCulloh, K. A., J. S. Sperry, and F. R. Adler. 2003. Water transport in plants obeys Murray’s law. Nature 421: 939-942.

The purpose of these experiments was to determine the xylem conductivity characteristics of tropical and temperate grasses. Tropical grass samples were obtained in Hawaii in 2014, while temperate samples were obtained near Manhattan College in 2014 and 2015. Tissue samples were prepared for histology and viewed under a microscope. Xylem conductivity of stems was compared with the xylem conductivity of the leaves those stems supply with water. The number of bundles in both stems and leaves was determined, and xylem conductivity (a method of estimating the ability of xylem cells to transport water) was calculated.
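As a simple arithmetic illustration of the "percent to leaf" quantity reported in the Results table, the helper below is a sketch (the function name is illustrative); the Phragmites australis stem and leaf conductivities are the values from that table.

```python
def percent_to_leaf(stem_conductivity, leaf_conductivity):
    """Share of stem xylem conductivity that is passed on to the leaf, in percent."""
    return 100.0 * leaf_conductivity / stem_conductivity

# Phragmites australis (tropical group), from the Results table:
# stem conductivity 4.86 and leaf conductivity 0.373 give about 7.7% to the leaf.
print(round(percent_to_leaf(4.86, 0.373), 1))
```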

Results

The results of this experiment show that, on a percentage basis, tropical grass species contribute about twice as much xylem conductivity to their leaves as temperate grass species. The data also show that the number of bundles per leaf and the xylem conductivity per bundle were not different between the two grass groups. Since xylem conductivity per bundle was not different between the two groups, we interpret the data to mean that water conductivity in tropical plants is fundamentally different from that in temperate plants.

Xylem conductivity (stem and leaf), percent of stem conductivity contributed to the leaf, number of bundles, and mean xylem conductivity per bundle

Species                       Conductivity   Conductivity   Percent    Bundles   Bundles   Conductivity/bundle   Conductivity/bundle
                              (stem)         (leaf)         to leaf    (stems)   (leaves)  (stems)               (leaves)
Tropical species
Phragmites australis          4.86           0.373          7.7        57.0      31.0      0.0853                0.0120
Zea mays                      92.43          5.881          6.4        444.0     45.0      0.2082                0.1307
Polypogon interruptus         0.19           0.015          7.8        39.0      9.0       0.0049                0.0017
Saccharum officinarum         12.47          2.306          18.5       110.0     27.0      0.1134                0.0854
Digitaria insularis           3.53           0.347          9.9        189.0     30.0      0.0187                0.0116
Cenchrus agrimoniodes         0.33           0.009          2.8        74.0      13.0      0.0045                0.0007
Axonopus fissifolius          0.08           0.003          4.3        36.0      5.0       0.0022                0.0007
Mean                          16.27          1.276          8.2        135.57    22.857    0.0624                0.0347
Standard deviation            33.87          2.189          5.1        145.97    14.334    0.0781                0.0520
Standard deviation/Mean       2.08           1.72           0.62       1.08      0.63      1.25                  1.50
Temperate species
Pennisetum hameln             0.09           0.019          22.1       23.0      9.0       0.0037                0.0021
Miscanthus 'morning light'    6.20           0.472          7.6        126.0     21.0      0.0492                0.0225
Sphenopholis intermedia       0.13           0.013          10.0       28.0      9.0       0.0046                0.0014
Calamagrostis acutiflora      0.13           0.014          10.9       22.0      9.0       0.0057                0.0015
Hordeum vulgare               0.02           0.006          27.4       13.0      9.0       0.0016                0.0006
Miscanthus sinensis           1.61           0.352          21.9       103.0     47.0      0.0156                0.0075
Dactylis glomerata            0.46           0.065          14.2       72.0      17.0      0.0064                0.0038
Poa pratensis                 0.03           0.003          9.4        20.0      9.0       0.0014                0.0003
Mean                          1.08           0.118          15.4       50.88     16.250    0.0110                0.0050
Standard deviation            2.14           0.185          7.4        43.64     13.264    0.0161                0.0074
Standard deviation/Mean       1.97           1.57           0.48       0.86      0.82      1.46                  1.50
T-test probability            0.28           0.21           0.044      0.18      0.37      0.13                  0.18


Methods to Compare the Rates of Bark Formation for Individuals of a Population of 599 Saguaro Cacti


Lauren Barton

Previous Study:
Involved manipulation of a generalized logistic curve to shift and fit data for 900 cacti and acquire a synchronized curve. The curve was used to calculate average rates of bark formation between surfaces.

Figure: General logistic curve showing the initial, manifest, and morbid stages (Initial: open circles; Manifest: gray circles; Morbid: black circles).

Results

ABSTRACT: Bark formation occurs on saguaro cacti and other species of columnar cacti, and it eventually leads to cactus death. Average bark formation rates have been determined for a population of saguaros (599 cacti). The current research shows that the rates of bark formation of individual cacti can be compared with this 'average' rate of bark formation. Understanding the process for individual cacti will provide information on the morbidity and mortality processes.

Previous study (continued): From these logistic curves, we were able to obtain an average rate of bark formation for 900 cactus plants. From this, we were able to determine the average time delay of bark formation between the surfaces. For example, the North Right surface will show bark 15 years after the South surface starts to show bark formation.

Procedure 1: Time Shift General. A MATLab program used to determine average rates of bark formation for a specific data set by generating the same logistic curves mentioned in the previous study. Procedure 1 generates an average curve similar to the curve created in the previous study; the logistic curve is an average, but it can include an additional data point (2014 or 2015) and is calculated for a smaller set of data. In theory, Procedure 1 should generate time delay values similar to those found in the previous study. (Panels: A) 2014 data; B) 2015 data.)

Procedure 2: Nonlinear Mixed Effects. A MATLab program used to determine rates of bark formation for individual cactus plants in a specific data set. Logistic curves are generated for each individual cactus as well as for the average cactus. (Panels: A) 2014 data; B) 2015 data.)

Comparisons of methods and data sets:
• Time Shift General uses all data points for each cactus in a data set to find an average logistic curve and is very similar to the method used in the previous study.
• Nonlinear Mixed Effects generates logistic curves for each cactus individually as well as for the average cactus.
• Both programs can be used to determine rates of bark formation.
• The 2014 data set is comprised of cactus plants that are almost completely covered in bark.
• The 2015 data set is comprised of cactus plants that are just beginning to show bark formation on their surfaces.
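The fitting itself was done with the MATLab programs described above. Purely as an illustrative sketch of fitting a generalized logistic curve to bark-coverage data (all numbers, parameter names, and the four-parameter form below are assumptions, not the authors' code):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, lower, upper, rate, t_mid):
    """Generalized logistic curve: fraction of a surface barked versus time."""
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (t - t_mid)))

# Hypothetical observations: cactus age (years) vs. fraction of a surface covered in bark.
years = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)
bark_fraction = np.array([0.02, 0.05, 0.15, 0.45, 0.75, 0.92, 0.97])

# Fit the curve; p0 gives rough starting guesses for the four parameters.
params, _ = curve_fit(logistic, years, bark_fraction, p0=[0.0, 1.0, 0.1, 40.0])
print("Fitted midpoint (years):", params[3])
```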

Conclusion & Future Analysis

Procedure 2 allows us to track individual cacti in a specific data set. Each colored line represents an individual cactus and shows how each cactus is different and, as such, has a different rate of bark formation. We can follow these individuals and look for outside variables acting on a cactus that cause a different rate of bark formation.

The current study will be used to better understand rates of bark formation for individual cacti. These additional methods will be further analyzed to look for correlations between bark formation and other outside effectors. In addition, these new methods will be used to compare saguaro cactus plants that are already completely covered in bark with cactus plants that are just starting to show bark formation on their surfaces. The author is grateful to the Catherine and Robert Fenton Endowed Chair to Dr. L.S. Evans for financial support for this research.


Background

Chromium (VI) is a pollutant found in groundwater and, even in microscopic amounts, can be fatal. Chromium (VI) is categorized as a human carcinogen; when ingested it can cause lung cancer and damage to the kidneys and intestines. Chromium (VI) is formed by the erosion of chromium deposits in nature; it can also be formed through industrial processes such as welding on stainless steel, paints, textile dyes, and leather dyeing. Ascorbic acid, also known as Vitamin C, is known to reduce chromium (VI) to chromium (III). Chromium (III) is an essential dietary element found in most foods. However, ascorbic acid is soluble in water; therefore the retrieval of its oxidized form is not possible. The goals of this project are to load ascorbic acid onto an insoluble absorbent, use this complex to reduce chromium (VI) to chromium (III), and recycle this complex.

Reducing Chromium (VI)

Chromium (VI) is able to be reduced by the method of rotating the GAC-Ascorbic Acid Complex (GAC-AAC) with the Chromium (VI) in increments of 10 mL per 5 minutes. Using 0.50 g of GAC-AAC, Chromium (VI) is able to be reduced until 40 mL is used, which is when the concentration of Chromium (VI) is shown to surpass the EPA’s recommended concentration for safe drinking water. When Chromium (VI) is left overnight soaked in GAC-AAC, it is able to reduce 100 mL of Chromium (VI) to safe drinking concentrations.

Recycling the GAC-AAC

Recycling GAC-AAC involves the reduction of dehydroascorbic acid. To do this, the used GAC-AAC must be preserved in hydrochloric acid as quickly as possible. Half an hour later, 20 molar equivalents of glutathione relative to ascorbic acid are added to the GAC-AAC, which is left overnight; however, the glutathione can be left soaking in the GAC-AAC for as little as 30 minutes. Lastly, the newly activated GAC-AAC is filtered and washed generously with water. GAC-AAC has been recycled once, and work is in progress to recycle it multiple times.

What is GAC?

GAC is short for Granulated Activated Carbon. Water industries use this to remove organic wastes in groundwater, non potable water, and processed waters. This carbon is designed to withstand multiple uses and its reactivation.

Loading Ascorbic Acid Onto GAC

The standard procedure for loading ascorbic acid onto GAC is to dissolve ascorbic acid in water and then, using a 1:1 weight ratio of ascorbic acid to GAC, add the GAC to the ascorbic acid solution. This sits overnight. It is then filtered and washed generously with water. This complex can be dried at temperatures no higher than 120 °C.

Table One: Reduction of Cr(VI)
Material    Volume of 200 µM Cr(VI)    Concentration of Cr(VI)
GAC         5 mL                       27.5 µM
GAC         10 mL                      29.8 µM
GAC         20 mL                      51.02 µM
GAC-AAC     10 mL                      5.04 µM
GAC-AAC     20 mL                      6.14 µM
GAC-AAC     30 mL                      8.08 µM
GAC-AAC     40 mL                      10.6 µM
EPA's recommended concentration: 8.62 µM
Conclusion: GAC-AAC is 40 times more effective than GAC alone.

Table Two: Recycled GAC-AAC
Volume of 200 µM Cr(VI)    Concentration of Cr(VI)
10 mL                      27.4 µM*
20 mL                      3.90 µM
30 mL                      2.17 µM
40 mL                      6.07 µM
50 mL                      9.88 µM
EPA's recommended concentration: 8.62 µM
*Rotation and equipment error causes values to fluctuate.
Conclusion: Glutathione is effective as a reducing agent and can also assist in reducing Cr(VI).

Conclusion

Ascorbic Acid is able to be loaded onto GAC by simply soaking the GAC in an ascorbic acid solution for 24 hours. This complex proves to be more effective in reducing Chromium (VI), which is just under 500 times more concentrated than EPA’s standard, than GAC alone. This complex is also able to be reduced by soaking with a reducing agent in an acidic environment.

Acknowledgements

School of Science for financial support The Department of Chemistry and Biochemistry Dr. John Regan as mentor

References

Dayan, A.D.; Paine, A.J. Mechanisms of Chromium Toxicity, Carcinogenicity and Allergenicity: Review of the literature from 1985 to 2000. Human and Experimental Toxicology. 2001, 20, 439-451. Hawley, E.L.; Deeb, R.A.; Kavanaugh, M.C.; R.G, J.J. Treatment Technologies for Chromium (VI). Chromium (VI) Handbook; CRC Press LLC 2004; 8, 273-308. Kazmi, S.A.; Rahman, M.U. Kinetics and Mechanism of Conversion of Carcinogen Hexavalent Cr(VI) to Cr(III) by reduction with Ascorbate. Journ.Chem.Soc.Pak. 1997. Vol.19, No.3.


Organic Molecules That Aid in Removing Cr(VI) From Water Douglas Huntington Manhattan College

Abstract

Results

Chromium (VI) metal is a designated priority pollutant of the US Environmental Protection Agency. In humans this metal, even in trace amounts, has been linked with genotoxic carcinogenicity and lung cancer if continuous exposure persists. Chromate, which contains Cr(VI), can be removed inefficiently from water using a large granulated activated carbon (GAC) filter. An alternative method of chromate removal was explored over the summer: reacting chromate with organic compounds of varying structure to form a chromate ester complex that could be readily absorbed onto GAC. This was accomplished by reacting chromate and various organic compounds together, then adding GAC to determine how well the chromate ester complexes that formed absorbed onto GAC, comparing their λmax values using UV/Visible spectroscopy. I demonstrated that compounds with aromatic rings, such as catechol and 1,2-diaminobenzene, remove more chromate from solution than diol compounds such as propanediol and ethylene glycol.

The most effective compounds at removing Cr(VI) from solution at 3 to 1 molar equivalents of compound to chromate were compounds that contained aromatic rings, such as catechol and 1,2-diaminobenzene. Catechol had an average Cr(VI) removal of 86% and 1,2-diaminobenzene had an average Cr(VI) removal of 83%. The non-cyclic diols that were tested (ethylene glycol, propanediol, and pinacol) had average Cr(VI) removal lower than the aromatic compounds, with all of their removal values around 66%. The cyclic diols that were tested, cis- and trans-1,2-cyclohexanediol, were similar to the non-cyclic diols, with Cr(VI) removal values around 66%. The Cr(VI) removal for trans-1,2-diaminocyclohexane was 66%, on par with the other diols. We wanted to see if we could remove all of the Cr(VI) from solution, so we changed the molar equivalence from 3 to 1 to 6 to 1. Using our three best compounds (catechol, 1,2-diaminobenzene, and 1,3-propanediol), we ran the experiment again. Catechol removed 96%, 1,2-diaminobenzene removed 91%, and 1,3-propanediol removed 86% of the Cr(VI) in solution.

Methods and Materials

Table 1: Molar equivalence of 3 to 1 for Cr(VI) removal

Table 2: Molar equivalence of 6 to 1 for Cr(VI) Removal

In order to minimize error from constantly weighing out chemicals for different trials, stock solutions of the various compounds were made. From these, I calculated how many milliliters were needed for the various molar equivalents tested and simply pipetted out the desired amount. This procedure was also followed for the 2.0x10-2 M chromate stock solution that was made. The two solutions were then pipetted into a 15 mL beaker and a small stir bar was added. The solution was allowed to stir for 15 minutes. After that, 0.6 g of GAC was added to the solution, which was stirred for another 15 minutes. After the stirring was completed, the solution was diluted to a concentration of 2.0x10-4 M; the dilution was necessary because the UV/Vis spectrometer could not get an accurate reading on a concentrated solution. The diluted compound solution was then compared, using the UV/Vis spectrometer, to a standard solution of 2.0x10-4 M chromate and to a 2.0x10-4 M chromate solution with 0.6 g GAC, in order to show that GAC alone was not removing all of our chromate. This gave the lambda max of the solutions, from which we could calculate the percent removal of chromate from solution.

Conclusions

Figure 1: How 1,2-diaminobenzene removes chromate from solution

Compounds that contain an aromatic ring and two hydroxyl groups are better at removing Cr(VI) from solution than their diol or non aromatic counterparts. The reason for this is probably due to the stabilization of the aromatic pi electrons, which increases the chance of the chromate ester to be absorbed onto GAC. At three molar equivalents, around 83 to 86% of Cr(VI) in solution is removed. At six molar equivalents, above 91% of Cr(VI) in solution is removed. Further investigation will occur to see if we can successfully remove all of the organic compound in solution next semester.

Future Research The stability of these chromate esters that form as a result of the reaction between chromate and the various compounds are interesting among themselves. Dr. Capitani will calculate the stability of the various chromate esters and will compare the stability calculations with the percent removal of chromate from solution data I calculated. Dr. Capitani can theoretically calculate the stability of these chromate esters and I will experimentally determine the stability of these chromate esters. Hopefully we will determine which ester is the most stable and focus on that during the semester. We will be seeking publication for the data on the stability of these chromate esters in February 2017.

Percent removal of chromate from solution = [(λmax of standard chromate + GAC) − (λmax of compound)] / (λmax of standard chromate + GAC)
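A worked instance of this formula is sketched below; the two readings are invented for illustration, not measurements from this study, and the function name is an assumption.

```python
def percent_removal(reading_standard_with_gac, reading_compound):
    """Percent of chromate removed, from readings at lambda max."""
    return 100.0 * (reading_standard_with_gac - reading_compound) / reading_standard_with_gac

# Hypothetical readings for the chromate + GAC standard and a compound solution.
print(round(percent_removal(0.95, 0.13), 1))  # about 86.3% removed
```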

Contact Information Douglas Huntington Manhattan College dhuntington01@manhattan.edu

Discussion The chromate species and the organic compound react to form a chromate ester, which absorbs onto GAC better than regular chromate. With the selection of compounds I tested, we were able to compare various structural elements to determine whether they had an influence on Cr(VI) removal. Stereochemistry was investigated by comparing Cr(VI) removal for cis- and trans-1,2-dihydroxycyclohexane; removal was similar for both compounds (68% and 69%), so stereochemistry was determined to have no effect on Cr(VI) removal. Nucleophilicity was tested, given that nitrogen is more nucleophilic than oxygen, and was shown to have a negligible effect, as seen with 1,2-diaminocyclohexane: the Cr(VI) removal for this compound was 66%, comparable to the other diols. Another factor examined was the ring size of the chromate ester that formed; the chromate ester forms either a 5- or 6-membered ring with the Cr(VI) species. Ring size was found not to have a large effect, as shown by ethylene glycol, which forms a 5-membered ring, and 1,3-propanediol, which forms a 6-membered ring: Cr(VI) removal for these compounds was 69% and 70%. As a result, ring strain does not have a major influence on the formation of these esters. The one factor determined to have an effect on Cr(VI) removal was the presence of an aromatic ring in the compound. Both catechol and 1,2-diaminobenzene had larger percentages of Cr(VI) removal than their counterparts. One possible reason is stabilization of the chromate ester by the pi electrons that the aromatic ring possesses, which helps with the ester's absorption onto GAC.

References
1. Nakajima, Akira, and Yoshinari Baba. "Mechanism of Hexavalent Chromium Adsorption by Persimmon Tannin Gel." Water Research 38.12 (2004): 2859-2864.
2. Corey, E. J., Einte-Paul Barrette, and Plato A. Magriotis. "A New Cr(VI) Reagent for the Catalytic Oxidation of Secondary Alcohols to Ketones." Tetrahedron Letters 26.48 (1985): 5855-5858.
3. DeSilva, Frank. "Activated Carbon Filtration." Water Quality Products Magazine (January 2000).

Acknowledgements I would like to thank the Manhattan College Science department for this opportunity. I would also like to thank Dr. Regan for his assistance and insight pertaining to my research.


Remediation of Water Containing Chromium VI Using Ascorbic Acid Mary Cacace Department of Chemistry and Biochemistry

Abstract

Project Goals

At least 74 million Americans in forty-two states consume a Group 1 carcinogen, as classified by the International Agency for Research on Cancer (IARC). Chromium (VI), one of the United States' top pollutants, is produced by certain industries such as stainless steel and textile dye manufacturing, among many others. Due to its mobility in soil, this chromium (VI) byproduct has migrated from groundwater into drinking water. Ascorbic acid, commonly known as Vitamin C, is known to reduce chromium (VI) to chromium (III), a more beneficial form of chromium found in daily multivitamins. Our goal is to produce an insoluble ascorbic acid derivative that can both perform the chemical reduction and subsequently be recycled for additional uses.

Synthesis (Cont’d) IV.  Tosylation

V.  Ring closing of Hydroxytosylate (Epoxide Formation)

Background Chromium is a metallic element with many chemical properties and multiple oxidation states, making it useful in an industrial setting. Industries that use chromium include but are not limited to: stainless steel, textile dyes, chrome plating, fungicides, wood preservatives, ceramics, and leather tanning. In employing chromium, these industries produce Cr(VI) as a harmful byproduct. From the soil and aqueous effluent of industrial sites, Cr(VI) moves into the groundwater in the form of chromate (CrO42-) and dichromate (Cr2O72-) ions. According to WHO, the safest level of exposure to chromium ions is less than 0.5mg/L. The adverse effects of chromium (VI) exposure, both in the long and short term, are extensive. It is regarded as a carcinogen by the Environmental Protection Agency, World Health Organization, and the International Agency for Cancer Research. Current remediation methods to remove Cr(VI) from groundwater are physical, biological, and chemical. Ascorbic acid, commonly known as Vitamin C, has been known to reduce Cr (VI) to Cr(III), with dehydroascorbic acid as the byproduct [see reaction below]. Because both ascorbic acid and dehydroascorbic acid are soluble in water, a problem is encountered in regards to recycling the chemical compound once the reduction takes place. Therefore, to compromise the water solubility of ascorbic acid, syntheses were performed to attach hydrophobic groups to its structure so that it could be recycled from the water after the reduction of Cr(VI).

Reduction of Cr(VI) to Cr(III) with ascorbic acid as the reducing agent. The byproduct of this reaction is dehydroascorbic acid.

VI.  Formation of the Naphthyl Ether

The goal is to modify the structure of ascorbic acid to obtain the epoxide. The epoxide intermediate will give us a range of options for adding hydrophobic groups to the structure so that ascorbic acid is water insoluble.

Synthesis
I.  Synthesis of Ketal-Protected Ascorbic Acid

II.  Formation of Benzyl Ethers to protect reduction site (C2 & C3)

III.  Cleaving the Ketal to form the Diol

Results All synthetic steps up to and including the tosylation were successful. The ketal-protected ascorbic acid was successfully synthesized from L(+)-ascorbic acid. The benzyl addition to C2 and C3 of ascorbic acid was also successful; however, TLC analysis showed the presence of benzyl bromide in the product, so there was a need to purify this product. The ketal formed at C5 and C6 was cleaved in order to form a diol. A tosyl group was added to C6 of the diol successfully, although there was a need to purify the product via HPLC. The ring closing of the hydroxytosylate to form the epoxide allows for many possibilities in regards to adding hydrophobic groups. One approach used 2-naphthol and sodium hydride. After allowing the 2-naphthol and sodium hydride to react and the sodium naphthoxide to form, the epoxide was added in the hope of opening it to obtain a naphthyl ether. This reaction is a work in progress.

Acknowledgments
• Manhattan College School of Science for financial support
• Dr. John Regan as mentor


Characterizing abnormal protein expression in Sense mutant zebrafish (Danio rerio) as a link to amyotrophic lateral sclerosis James LiMonta and Quentin Machingo, Ph.D. Department of Biology, Manhattan College, Riverdale, NY 10471

Abstract

The Sense mutation creates an ALS-like phenotype in zebrafish. This mutation occurs in the 3'-untranslated region of the tardbp gene, which codes for the protein TDP-43. Because mutations that occur in the 3'-untranslated region do not cause a direct change in protein sequence, the goal of my research is to show a comparative difference in the TDP-43 levels and mRNA levels of wild-type and Sense mutant zebrafish. To achieve this goal, Western blotting and real-time PCR analyses were used.

Introduction

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that destroys neurons throughout the central nervous system (CNS), most notably the motor cortex. Every year over 6,000 Americans are diagnosed with ALS, and the average life expectancy of this fatal disease is two to five years from the time of diagnosis. There are two types of ALS in humans: familial, which is genetically based, and sporadic, which is non-genetically based. Familial ALS comprises only 5 to 10% of ALS cases diagnosed, while sporadic ALS comprises 90 to 95% of ALS cases diagnosed. Some risk factors for ALS include age, sex, smoking, and lead exposure, but the hereditary factor has recently drawn a lot of interest. Research has shown that there are four major genes that, when mutated, account for most familial ALS cases. One such gene is tardbp, which codes for TDP-43 and when mutated, it has been shown to cause the development of ALS in zebrafish as well as in humans.

Previous research done by Manhattan College students has suggested that a mutation, named Sense, is located in the 3'-untranslated region (3'-UTR) of the tardbp gene and causes the ALS-like phenotype observed in the affected zebrafish. The 3'-UTR is responsible for post-transcriptional gene regulation through microRNA (miRNA)-mediated regulation. miRNAs bind to a complementary RNA sequence on the 3'-UTR of a target mRNA and can result in altered mRNA function by either directly degrading the mRNA or inhibiting its translation. Since the Sense mutation is in the 3'-UTR, we cannot directly observe a change in the mRNA's coding sequence and ultimately a change in the amino acid sequence of TDP-43. The main purpose of my research is to show that the mutation in the 3'-UTR of the tardbp gene causes a change in TDP-43 production. This change in TDP-43 production will almost certainly be caused by the miRNA regulating the mRNA sequence responsible for translation into TDP-43. I also would like to determine how the miRNA accomplishes this task: either by directly degrading the mRNA or by inhibiting the mRNA's translation.

Hypothesis I

The mRNA levels in the Sense mutant embryos will decrease due to the mutation that will cause microRNA to degrade or inhibit its translation.

Hypothesis II

A relative decrease in overall TDP-43 levels in the mutant embryos will be observed comparative to the wild-type embryos; due to the mutation on the 3’-Untranslated region of the TARDBP gene.

Figure 1: DNA sequence of the 3'-untranslated region of the tardbp gene.

Results

Methods & Materials

Protein was isolated from embryos and analyzed by immunoblotting to determine TDP-43 levels in wild-type and mutant embryos. mRNA was isolated from wild-type and mutant embryos at various developmental stages. The relative mRNA levels were then determined using a real-time polymerase chain reaction (RT-PCR) thermocycler.
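The poster does not state how relative mRNA levels were computed from the real-time PCR runs. For orientation only, the commonly used 2^-ΔΔCt calculation is sketched below; the Ct values and the reference gene are hypothetical and are not data from this study.

```python
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """2^-ddCt relative expression of a target gene versus a control group."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to a reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # compare mutant to wild-type
    return 2 ** (-dd_ct)

# Hypothetical Ct values: tardbp in a Sense mutant vs. wild-type, each normalized
# to a reference gene; a result below 1 means lower relative mRNA in the mutant.
print(round(relative_expression(26.0, 18.0, 24.5, 18.0), 2))
```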

Discussion

The RT-PCR analysis has shown a substantial difference in mRNA expression between the wild-type 4 dpf embryo and the Sense mutant 4 dpf embryo. The results show a decrease in mRNA levels in the mutant embryos when compared to the wild-type. These data support my hypothesis that there would be a decrease in mRNA levels in the Sense mutant when compared to the wild-type. Although I did not get results from the immunoblotting to determine the levels of protein in the wild-type and Sense mutant, the RT-PCR results reinforce my hypothesis that there will be a decrease in protein expression in the Sense mutants when compared to the wild-type. I think the mRNA results reinforce my hypothesis because mRNA is a protein precursor.

Future Direction

I would like to continue this research during my senior year at Manhattan College. My major goal would be to further optimize my immunoblotting protocol to produce clearer, more conclusive bands. After I accomplish that task, I would like to compare the protein levels of the wild-type and the Sense mutant to see if my hypothesis is correct.

Acknowledgements

I would like to thank my mentor Dr. Machingo for his guidance and unyielding patience this past year. I would also like to thank Dr. Theodosiou for funding my research.


REDESIGNING AND IMPROVING THE MULTISTEP SYNTHESIS OF ZINGERONE AND DEHYDROZINGERONE DOMINICK RENDINA, JAMES V. MCCULLAGH, DEPARTMENT OF CHEMISTRY AND BIOCHEMISTRY, MANHATTAN COLLEGE, RIVERDALE, NY 10471-4098

BACKGROUND CONTINUED

ABSTRACT The multi-step synthesis of zingerone is an experiment used at this school in the second-semester Organic Chemistry lab. It begins with the aldol condensation of vanillin and acetone to form dehydrozingerone, followed by transfer hydrogenation with sodium formate to give zingerone. The experiment has had varying results with students, and for the spring semester of 2016 not a single transfer hydrogenation was successful. The focus of this research project is to find out what went wrong for the students and to see if there was any error within the procedure. Multiple tests were run on the second stage of the experiment to vary the reaction conditions, to find what may cause failure, and to find ways to improve the procedure to give better synthesis results.

BACKGROUND INFORMATION Dehydrozingerone and zingerone are two components of the ginger plant, a commonly used food spice. The ginger plant has been used for many kinds of ailments, including motion sickness, diarrhea, loss of appetite, and inflammation. Dehydrozingerone contributes to the pungent taste of ginger and has been studied as a potential chemotherapeutic agent for colon cancer¹. Zingerone has been studied as an antioxidant against peroxynitrite, which causes tissue and cell damage in humans, leading to strokes².

Stage 2: Transfer hydrogenation of Dehydrozingerone

2. pH conditions: The acidity and basicity of the solvent was tested in the transfer hydrogenation stage to see how it would affect the reaction rate. Sulfuric acid, potassium hydroxide, potassium carbonate, and pure methanol were tested.
3. Addition order: The addition order was tested for the transfer hydrogenation stage to see how the reaction behaved. The original addition adds DHzingerone and sodium formate into the solvent first, then the catalyst. Reverse addition adds the catalyst into the solvent first, then DHzingerone and sodium formate.
4. Quality of starting materials: For the transfer hydrogenation stage, the quality of each starting material was tested to see whether, and how much, it would hinder the reaction, with all other conditions of the reaction kept the same. The following starting materials were changed:

Starting Material       Sample Conditions
Dehydrozingerone        High quality vs Low quality
Sodium Formate          Clean & New vs Dirty & Old
Pd/Alumina Catalyst     I. Regular vs Pre-reduced; II. Old vs New Regular Catalyst


To find out what went wrong, multiple different conditions and research methods were applied to the procedure of the multi-step synthesis of zingerone. To see exactly how certain reaction conditions can make or break the formation of the desired product, the following variables were altered:
1. Reaction time: The original procedure calls for a 1 hour reflux for both the aldol condensation and the transfer hydrogenation. To see how fast the reaction was, the reflux time was cut down to 30 minutes, 15 minutes, or 5 minutes.

Stage 1 reaction scheme: vanillin + acetone, 15% KOH, 120 °C reflux for 1 hour / 30 min / 15 min → dehydrozingerone.

Stage 1: Effects of Time and Yield of DHzingerone
Time Refluxed   Recrystallization Method                 Percent Yield
1 Hour          Ice bath                                 71.58%
30 Minutes      Ice bath                                 50.87%
30 Minutes      Extraction, then ice bath                78.94%
15 Minutes      Silica gel filtration, then ice bath     35.75%

Old and Dirty Sodium Formate

85.00%

15.56%

Older non-reduced regular catalyst

72.00%

Stage 2.1: Acidic vs Basic Conditions using the regular non-reduced catalyst, 30 minute reflux
Solvent used                 Conditions         Percent Conversion
0.25 M H2SO4 in methanol     Strongly acidic    99.99%
0.25 M KOH in methanol       Strongly basic     78.26%
Sat. K2CO3 in methanol       Weakly basic       90.90%
Pure methanol                Neutral            69.56%

Stage 2.2: Acidic vs Basic Conditions using the pre-reduced catalyst, 30 minute reflux
Solvent used                 Conditions         Percent Conversion
0.25 M H2SO4 in methanol     Strongly acidic    77.78%
0.25 M KOH in methanol       Strongly basic     81.81%
Sat. K2CO3 in methanol       Weakly basic       97.56%
Pure methanol                Neutral            80.00%


Vanillin and acetone were added to a round bottom flask. The potassium hydroxide solution was added to the flask and the solution was stirred at reflux for either an hour, 30 minutes, or 15 minutes. Once reflux ended the solution was left to cool, water was added, and the solution was neutralized with weak acid. Three variations for recrystallizing the crude product were performed here: one where extraction took place before the precipitate was vacuum filtered, another where silica gel filtration took place before the precipitate was vacuum filtered, and one where the precipitate was directly vacuum filtered and collected. The product collected was a dark yellow, damp powder.

Stage 2.3: Original Addition vs Reverse Addition, strongly acidic conditions 30 minute reflux Order Of Addition

Pd/Alumina Catalyst used

Percent Conversion

Original

Non-reduced regular variant

99.99%

Original

Pre-reduced variant Non-reduced regular variant

77.78%

Pre-reduced variant

69.23%

Reversed Reversed

Stage 2.4: Quality of starting materials in weakly basic conditions, regular addition, 30 minute reflux Reagent Changed Percent Conversion Low quality Dehydrozingerone


Stage 1: Aldol condensation of Vanillin and Acetone into Dehydrozingerone


RESULTS The success of the experiment was judged by percent conversion and/or the final yield obtained. NMR spectroscopy was used to determine whether our product was in fact dehydrozingerone or zingerone.
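For reference, the percent yield bookkeeping for stage 1 looks like the sketch below. The gram amounts are made-up example inputs, not the quantities used in the lab, and vanillin is assumed to be the limiting reagent in a 1:1 stoichiometry.

```python
# Molar masses (g/mol): vanillin C8H8O3 and dehydrozingerone C11H12O3.
MW_VANILLIN = 152.15
MW_DEHYDROZINGERONE = 192.21

def percent_yield(vanillin_g, product_g):
    """Percent yield of dehydrozingerone, assuming vanillin is the limiting reagent."""
    moles_vanillin = vanillin_g / MW_VANILLIN
    theoretical_g = moles_vanillin * MW_DEHYDROZINGERONE   # 1:1 stoichiometry
    return 100.0 * product_g / theoretical_g

# Hypothetical amounts: 2.00 g vanillin giving 1.50 g of dehydrozingerone.
print(round(percent_yield(2.00, 1.50), 1))  # about 59.4%
```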

PROCEDURE

Dehydrozingerone, sodium formate, and the Pd/Alumina catalyst were added to a round bottom flask with methanol (multiple variations of this step occurred). The solution was refluxed for 30 minutes, 15 minutes, or 5 minutes and then filtered through silica gel. MTBE was passed through the silica gel into the filtrate and the solution was extracted. The organic layer was boiled off on a sand bath, and a light yellow oil was left behind.


52.38%

CONCLUSION In stage one, extraction and silica gel filtration do not remove the yellow color and they cause product loss. In addition, the less time the reaction refluxes, the less product is obtained. In stage two, the pre-reduced Pd/Alumina catalyst is more effective than the regular Pd/Alumina catalyst in all conditions except for strongly acidic conditions with original addition; it is unclear why this is. Although the pre-reduced catalyst is more effective, the regular catalyst still works well and is significantly cheaper. Reverse addition harms the reaction and causes a decrease in percent conversion, and lower quality starting materials do affect the rate of the reaction and cause a less pure product to form. Despite all of these conditions, none of the handicaps presented in stage two were crippling enough to cause the reaction to fail as it did for the Spring 2016 students. While it is unclear what went wrong for the students in the Spring 2016 semester, it is clear that the procedure works: all cases gave a significant percent conversion at only half the original procedure's reflux time.

REFERENCE LINKS
1. Shingo Yogosawa, Yasumasa Yamada, Shusuke Yasuda, Qi Sun, Kaori Takizawa, and Toshiyuki Sakai. "Dehydrozingerone, a Structural Analogue of Curcumin, Induces Cell-Cycle Arrest at the G2/M Phase and Accumulates Intracellular ROS in HT-29 Human Colon Cancer Cells." Journal of Natural Products 2012, 75 (12), 2088-2093.
2. Sang-Guk Shin, Ji Young Kim, Hae Young Chung, and Ji-Cheon Jeong. "Zingerone as an Antioxidant against Peroxynitrite." Journal of Agricultural and Food Chemistry 2005, 53 (19), 7617-7622.


Monte Carlo Studies of 5 Junction Comb Polymers John Stone

Background:
● A polymer is defined as a chemical compound consisting essentially of repeating structural units (Merriam-Webster).
● Monte Carlo is defined as relating to, or involving, the use of random sampling techniques, and often the use of computer simulation, to obtain approximate solutions to mathematical or physical problems, especially in terms of a range of values each of which has a calculated probability of being the solution (Merriam-Webster).
● This study examined 11- and 14-branch, 5-junction comb polymers in both 2D and 3D.

Results:

Method:
● The simulation programs for the polymers were written in C and compiled using the Linux GCC compiler. In these programs, different growth algorithms were implemented to generate different random structures. The structures are grown on either a 2D square lattice or a 3D cubic lattice. The lattice provides a simplified model of reality, allowing for either 4 growth directions in 2D (north, south, east, west) or 6 possible growth directions in 3D (up, down, north, south, east, west). These simplified models allow computing times to be reduced to manageable durations and allow comparisons with analytical calculations. The algorithms guide the programs in creating a polymer in which self-intersections are allowed.
● (1) All polymer growth starts at the coordinates (0,0) in 2D and (0,0,0) in 3D. This serves as the global origin of the entire polymer.
● (2) Growth is done several branches at a time, based around a local origin. In the first stage of growth, the local origin and global origin are the same. In all following stages of growth, the local origin is the location of the last unit grown in the previous set of branches.
● (3) Each branch is grown until it contains the desired number of units. The program determines the end of a branch by keeping a running total of all grown units and checking whether that index is evenly divisible by the number of units per branch.

The entire process (1-3) is repeated for as many iterations as the user specified at the beginning of execution. Property values are maintained between iterations. Once the sampling is complete, a series of averages is taken to provide the characteristics of the generated polymer. The properties calculated include the radius of gyration <S2>, the asphericity <A>, the scattering form factor S(K), and the g-ratio. They are defined below, where λ1, λ2, and λ3 are the dimensions of the ellipsoid enclosing the comb polymer, K is the scattering vector, and N is the number of units.

Beads: 7001. Samples: 160,000.

Multi-junction comb polymers have been used to package and deliver drugs to specific targets. This is an extension of research on combs with 2,3, and 4 junctions.

<S2> = λ1 + λ2 + λ3

<A> = [(λ1 − λ2)² + (λ1 − λ3)² + (λ2 − λ3)²] / [2(λ1 + λ2 + λ3)²]

Dij = <Xj − Xi, Yj − Yi, Zj − Zi>

S(K) = (1/N²) Σi Σj exp(−K · Dij), for i ≠ j or Dij ≠ 0

g-ratio = <S2>comb / <S2>linear
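The production code was written in C. Purely as an illustrative sketch of the lattice-growth and radius-of-gyration bookkeeping described above (not the authors' program; the branch counts, branch length, and junction layout below are toy assumptions), a single random structure might be generated like this:

```python
import random

# The six allowed growth directions on a 3D cubic lattice.
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow_branch(origin, units_per_branch):
    """Grow one branch of a given length from a local origin; self-intersections allowed."""
    path, (x, y, z) = [], origin
    for _ in range(units_per_branch):
        dx, dy, dz = random.choice(DIRECTIONS)
        x, y, z = x + dx, y + dy, z + dz
        path.append((x, y, z))
    return path

def radius_of_gyration_sq(units):
    """<S^2> of one configuration: mean squared distance from the center of mass."""
    n = len(units)
    cx = sum(u[0] for u in units) / n
    cy = sum(u[1] for u in units) / n
    cz = sum(u[2] for u in units) / n
    return sum((u[0] - cx) ** 2 + (u[1] - cy) ** 2 + (u[2] - cz) ** 2 for u in units) / n

# One toy structure: three branches grown from the global origin, then three more
# from the last unit of the previous set (a crude stand-in for the junction scheme).
units = [(0, 0, 0)]
for _ in range(3):
    units += grow_branch((0, 0, 0), 50)
local_origin = units[-1]
for _ in range(3):
    units += grow_branch(local_origin, 50)
print("S^2 for this sample:", radius_of_gyration_sq(units))
```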

Monte Carlo simulations were performed to generate 11- and 14-branch, 5-junction comb polymers. From these simulations various properties were calculated. The computer results agreed excellently with the theoretical predictions.

References: 1. P.G. de Gennes, Scaling Concepts in Polymer Physics, (Cornell University Press, Ithaca, 1979).

2. E.F. Casassa and G.C. Berry, J. Poly. Sci. A-2, 4, 881 (1966).
3. C. von Ferber, M. Bishop, T. Forzaglia and C. Reid, Macromolecules, 46 (6), 2468 (2013).
4. G. Zajac and M. Bishop, Comp. Educ. J., 5(4), 107 (2014).
5. C. von Ferber, M. Bishop, T. Forzaglia, C. Reed and G. Zajac, J. Chem. Phys., 142, 024901 (2015).
6. G. Zajac and M. Bishop, Comp. Educ. J., 6(1), 44 (2015).
7. A.J. Barillas, T. Borgeson and M. Bishop, Comp. Educ. J., 6(3), 108 (2015).
8. R. de Regt, C. von Ferber, M. Bishop, A.J. Barillas, T. Borgeson, Physica A, 458, 391 (2016).

Typical output from the simulation:
Quantity    Average        Standard Deviation
Lambda 1    371.263609     0.505076
Lambda 2    123.971230     0.114808
Lambda 3    59.300125      0.050033
s^2         554.534964     0.552718
A           0.255418       0.000373
L1/s^2      0.669504       0.000312

The data for different N values have been extrapolated to predict results for infinite N values.

             3D Monte Carlo Extrapolated    Theoretical
11-Branch
  A          0.299(1)                       0.29919(9)
  g-ratio    0.617(1)                       0.61683(0)
14-Branch
  A          0.255(2)                       0.25566(6)
  g-ratio    0.475(2)                       0.47521(9)

Scattering factor vs X where X^2 = K^2 * <S^2> for both the Monte Carlo (denoted MC) and the theoretical predictions in 3D. The Debye plot is for linear chains (no branching) and the S(K) is for the branching chains.

\( \langle S^2 \rangle = \lambda_1 + \lambda_2 + \lambda_3 \)

Figure: the structure of the 14-branch comb polymer.
Figure: the structure of the 11-branch comb polymer; in this figure there are 3 units per branch.

Conclusions:

Combs with 11 branches contain 2201 to 5501 units, whereas 14-branch combs have 2801 to 7001 units. 160,000 samples have been obtained.

A plot of theoretical predictions and data generated by Monte Carlo vs X

Acknowledgments: Dr. Marvin Bishop Dr. Rani Roy Mrs. Elen Mons Jasper Summer Research Program Stackoverflow.com (For code references and troubleshooting)


A Study in Nutrigenomics: How do Dietary Variations Influence Epigenetic Status?
Presenter: Tiffany M. Rodriguez | Faculty Advisor: Dr. Bryan Wilkins, Dept. of Chemistry and Biochemistry, Manhattan College, Riverdale NY 10471
Jasper Summer Research Scholars Program

Abstract

Nutrigenomics is the study of nutritional control of gene expression. We aim to understand how nutrition regulates gene expression and identify epigenetic markers of diet related diseases. Epigenetics relates gene expression levels to chromatin-associated deviations that do not include DNA mutations. Many of the “switches” that promote epigenetic changes are controlled by posttranslational modifications (PTMs) on the chromatin specific histone family of proteins. We determine how changes in chromatin PTMs are influenced by dietary alterations. Specifically we want to delineate how epigenetic marks change with diet and how they correlate with altered chromatin structure. We utilize a technique that allows for the site-directed encoding of unnatural amino acids (UAA) into protein, in vivo, in yeast. We express histone proteins that contain the UAA, p-benzoylphenylalanine, which contains a photoactivatable crosslinking R-group that can create protein-protein crosslinks. Using this probe we scan histone proteins, in living cells for protein interaction partners. Using calorie restrictions (CR), we mimic dietary changes and monitor variations in crosslinking patterns. Positions that yield crosslinking variants provide the first clue to diet influenced epigenetic markers. Histone PTMs are well documented and we can associate the positions of crosslinking deviations with known localized modification sites. We conclude that CR drastically changes the nucleosomal interactome. When cells are stressed with limited glucose intake, crosslinking patterns are altered, verifying that protein interactions are changed as compared to wild-type cells. These changes verify that protein regulation is altered at the nucleosomal level, most likely in response to altered PTM patterns.

Figure panels: Role of Aminoacyl-tRNA Synthetase; Role of tRNA.

The UAA contains a benzophenone group that is a photoactive crosslinker. When it is irradiated with 365 nm UV light, the benzophenone group produces a free radical and forms a covalent bond with a protein interaction partner.

Introduction
Nutrigenomics is the emerging field that reveals the role of nutrition in gene expression, bringing together the sciences of bioinformatics, nutrition, molecular biology, genomics, epidemiology, and molecular medicine.1 This field of research predominantly focuses on the effect of nutrients on the genome, proteome, and metabolome.2 By studying these effects we are able to comprehend the relationship between specific nutrients and nutrient regimes and human health. Nutrigenomics' overall approach consists of: demonstrating which genes are switched on/off at any given moment; understanding how gene/protein networks cooperate; and determining the influence of nutrients on altered protein expression levels. Specifically, this field aims to understand how nutrition alters the epigenetic control of gene expression.1-2

Discussion

B . Unnatural Amino Acid – p-benzoylphenylalanine5

RNA Codon Wheel

There are several intriguing examples of how nutritional state affects chromatin structure in Saccharomyces cerevisiae and other model organisms.7 For instance, Robyr et al. demonstrated that altering the type and concentration of sugar in the medium led to changes in the histone deacetylation states of known carbon-catabolic enzymes and related metabolic functions.7 The accessibility of chromatin for transcription is affected by the modification state of histones, specifically acetylation and methylation.

E. Genetic Code Expansion and the Incorporation of Crosslinker Unnatural Amino Acid in Histones of living yeast

Chromatin active state vs. chromatin inactive state7:
1. Acetylation increases → fewer crosslinks → active chromatin
2. Methylation increases → more crosslinks → inactive chromatin
3. Inhibition of type I and II HDACs → acetylation increases → active chromatin
4. Active type III HDACs → deacetylation increases → inactive chromatin

Utilizing calorie restriction we mimicked dietary changes and monitored variations in crosslinking patterns. Histone positions that demonstrate crosslinking variants provide the first clue to diet-influenced epigenetic markers; therefore, the positions of crosslinking variations can be associated with known histone PTMs. We compared crosslinks in cells growing under normal conditions (2% glucose) versus cells grown in 1.5% and 1% glucose. The reduction in glucose percentage is considered dieting; therefore the cells are under calorie restriction. Calorie restriction (CR) was first tested on H2AA61, and it was the first position to demonstrate that CR could induce variations in crosslink patterns. In Figure 1C, crosslinks decreased as the percentage of glucose (glc) decreased from 2% to 1%. However, as crosslinks decreased at 1.5% glc, new crosslinks appeared at new positions. The same trend was observed at the positions shown in Figures 2A and 2D. Since acetylation is attributed to the decrease of crosslinks, it can be inferred that CR induces acetylation on H2AA61, H2AY58 and H2AA20. Acetylation could be activated by the inhibition of type I and II HDACs, which in turn produces an active, open chromatin state and makes gene expression more accessible. In Figure 2B, position H2BK30, crosslinks increase at 1% glc rather than following the general trend of crosslinks decreasing when glc decreases. In Figure 2C, position H2AS17, the crosslinks begin to increase as glc decreases rather than decreasing as previously observed. Since methylation is attributed to the increase of crosslinks, it can be inferred that CR induces methylation on H2BK30 and H2AS17. Methylation could be activated by HMTs, which in turn produces an inactive, closed chromatin state and makes gene expression less accessible. In Figure 2E, position H3A29, crosslinks decrease at 1.5% but increase at 1% glc; perhaps CR induces both acetylation and methylation at this position. We could create genomic mutations at the sites of the potential PTMs to determine whether the mutants create the same variation in crosslinking. We could then correlate the specific PTM with nutritional changes and infer its role in genetic control.

Procedure (briefly):
1. Yeast cells containing the plasmids for the pBPA translational system and a histone gene containing the amber (TAG) mutant were grown in 2% glucose and 1 mM pBPA (WT cells). Cells grown in 1.5% and 1.0% glucose were dietary mimics. The histone proteins of interest were tagged with a C-terminal HA-tag for antibody detection.
2. Cells were allowed to grow to saturation, allowing pBPA to be translated into the histone protein of interest and, in turn, incorporated into the nucleosome.
3. Whole cells were irradiated with 365 nm light for 30-45 min.
4. Whole-cell lysates were prepared and proteins separated by electrophoresis on SDS-PAGE gels.

A) Eukaryotic chromosome organization. B) Histone octamer.2 C) Histone modification and gene expression.

Nutrigenomics is a revolutionary way of utilizing food as a pharmaceutical, with the ability to reverse disease and hinder the rigors of ageing.2 This field involves finding epigenetic markers of the early phases of diet-related diseases. Epigenetics relates altered gene expression levels to chromatin-associated deviations that do not include changes in the DNA sequence. Epigenetic changes are controlled by posttranslational modifications (PTMs), either on DNA or on the chromatin-specific histone family of proteins. For instance, histone acetylation, methylation, and/or phosphorylation can adjust chromatin structure to allow genes to be more, or less, accessible. This work focuses on histone modifications and how they mediate chromatin rearrangement in response to dietary changes. It is known that calorie restriction extends the lifespan and health span of multiple model organisms.3 In addition, CR has been shown to delay or even reduce various age-related diseases such as cancer and diabetes.4 However, the mechanism by which CR extends lifespan is still uncertain. It would be interesting to understand, at the chromatin level, how CR promotes longevity in mammals. We will investigate how protein interactions (at the histone level) change in response to dietary restriction. By studying these factors, we will be able to infer structural and mechanistic details about how diet influences the control of gene expression.

5. The proteins were then transferred to a PVDF membrane using western blotting techniques. Then the histone proteins and crosslinks were visualized using anti-HA antibodies.

Results
Figure 1: The Changes of Protein-Protein Interactions of Histone H2AA61 in Response to Calorie Restriction


Genetic Code Expansion and the Incorporation of Crosslinker Unnatural Amino Acid in Histones5

Future Research


Further research consists of testing a library of histone positions under calorie restrictions (CR) to monitor variation in crosslinking patterns. In addition, we will utilize antibody detection methods to indicate acetylation and phosphorylation on histone positions that demonstrated crosslink changes in response to CR. Amino acid restriction is linked to CR and shown to aid in the increased lifespan of organisms. Therefore, we will utilize a drug that induces amino acid restriction to observe how it affects crosslink patterns.


Figure 1: We are interested in how the chromatin structure and the chromosomal dynamics in living cells change in response to dietary changes. Therefore, we decided first to test how protein-protein interactions change on the nucleosome in response to diet changes. Yeast cells grown in 2% glucose are considered wild-type (WT). We compared crosslinks in cells growing under normal conditions (2% glucose) versus cells grown in 1.5% and 1% glucose. The reduction in glucose percentage is considered dieting; therefore the cells are under calorie restriction. Calorie restriction was first tested on histone H2A, position alanine 61; three replications were carried out in order to verify the consistency of the results observed in set 1 of H2AA61. A) Set 1 H2AA61: crosslinks decrease as the percentage of glc decreases from 2% to 1%. B) Set 2 H2AA61: the same pattern in crosslinks was observed in set 2 as in set 1; however, in 1.5% glc, instead of the crosslinks completely disappearing as observed in 1% glc, new crosslinks appeared at different positions vs. WT. C) Set 1 vs. set 2: replication, demonstrating the same results. D) Set 3 H2AA61 vs. set 2: 1.5% glc of set 3 does not follow the same pattern of crosslinks as in set 2; however, it does follow the trend observed in sets 1 and 2 of crosslinks disappearing as the percentage of glc decreases from 2% to 1%.


The translational suppression system (figure above) uses an evolved orthogonal tRNA/tRNA-synthetase pair to encode the unnatural amino acid p-benzoylphenylalanine at the amber stop codon. The genetic incorporation of the UAA relies on an evolved translational system that utilizes an amber (UAG) stop codon to introduce new chemistries into proteins. Therefore, the expanded genetic code of Saccharomyces cerevisiae requires both an evolved suppressor tRNA and an evolved aaRS (aminoacyl-tRNA synthetase) that were engineered to recognize, and translate, the UAA of interest. The aaRS charges its cognate tRNA with the UAA. Since the evolved tRNA is an amber suppressor, an amber codon can be introduced into our gene of interest and it feeds the expression system the UAA. This results in full-length protein expressed with a site-specifically placed UAA.


Expanding the Genetic Code
In our research we apply a synthetic biology technique that allows for the site-directed encoding of unnatural amino acids (UAA) into protein, in vivo, in Saccharomyces cerevisiae (yeast).5 By utilizing this technique we are able to express histone proteins that contain the UAA p-benzoylphenylalanine (pBpa). pBpa contains a benzophenone group that is a photo-active crosslinker. Therefore, when the histone protein is irradiated with 365 nm ultraviolet light, the benzophenone group produces a free radical that readily recombines to form a covalent bond with a protein interaction partner. This results in a protein-protein crosslink that can be utilized to scan histone proteins in living cells. Encoding pBpa at different positions throughout the histone protein allows us to analyze the protein-protein interactions that occur on the surface of the nucleosome. In addition, by working under in vivo conditions, a better understanding of protein-protein interactions under true physiological conditions can be achieved.


Post-translational modifications (PTMs) of histones are a vital step in the epigenetic regulation of a gene.6 The N-terminal tails of histones are the most accessible regions of these peptides, because they protrude from the nucleosome and possess no specific structure. The N-terminal tails of histones are subject to various modifications, such as acetylation, methylation, phosphorylation, and ubiquitination, by the histone PTM writers.6 In general, PTMs are believed to function in a combinatorial pattern known as the histone code. This process alters the expression states of associated loci in multiple ways, thus enabling gene regulation.6

Overview of the chromatin-level regulation of gene expression7


Figure 2: The changes of protein-protein interactions of histones H2AY58, H2BK30, H2AS17, H2AA20 and H3A29. A) H2AY58: crosslinks decrease as glc decreases from 2% to 1%; however, at 1.5% glc new crosslinks appear, but they are absent at 1% glc. B) H2BK30: crosslinks increase at 1% glc rather than following the general trend of crosslinks decreasing when glc decreases; in addition, a crosslink increases at 1.5% glc but decreases at 1% glc. C) H2AS17: the crosslinks begin to increase as the percentage of glc decreases, rather than decreasing as previously observed. D) H2AA20: crosslinks decrease as glc decreases, except for one crosslink that remains the same at every glc percentage. E) H3A29: crosslinks decrease at 1.5% but increase at 1% glc.

Acknowledgements
The author wishes to take this opportunity to express her appreciation to the School of Science Research Scholars Program, and Dr. Rani Roy for funding this project. The author also thanks the Department of Chemistry and Biochemistry at Manhattan College for providing the materials to conduct this research. Finally, this project would not have been possible without the guidance and dedication of her mentor, Dr. Bryan Wilkins.

References 1. Neeha, V. S.; Kinth, P. J Food Sci Technol Journal of Food Science and Technology 2012, 50 (3), 415–428. 2. Choi, S.-W.; Friso, S. Advances in Nutrition: An International Review Journal 2010, 1 (1), 8–16. 3. Mei, S.-C., and Brenner, C. Calorie Restriction-Mediated Replicative Lifespan Extension in Yeast Is NonCell Autonomous. PLOS Biology PLoS Biol 13, (2015). 4. Lin, S.-J. Calorie restriction extends yeast life span by lowering the level of NADH. Genes & Development 18, 12–16 (2004). 5. Chin, J. W. et al. An expanded eukaryotic genetic code. Science 301, 964–967 (2003). 6. HIstome: The Histone Infobase http://www.actrec.gov.in/histome/ptm_main.php (accessed Aug 20, 2016). 7. Garfinkel, M. D.; Ruden, D. M. Nutrition 2004, 20 (1), 56–62.


Consequences of Chytridiomycosis and Urbanization faced by Red-Backed Salamanders in Lower New York State Paul Roditis, Gerardo Carfagno PhD

INTRODUCTION:

• Amphibians globally are suffering from habitat loss, pollution, disease, and climate change.
• The International Union for Conservation of Nature lists about 30% of all amphibian species in the world as threatened (Hof et al. 2011).
• Chytridiomycosis is a major disease affecting amphibians globally and is caused by Batrachochytrium dendrobatidis (Bd), a chytrid fungus (Global Invasive Species Database 2006).
• Urbanization may also have adverse effects on salamander populations by altering important environmental factors such as canopy cover, moisture, and temperature.
• Our goals were to determine 1) if this population is a carrier for Bd and 2) if urban populations are forced to occupy less suitable microhabitats.

Manhattan College, Dept. of Biology, Riverdale NY

RESULTS: DNA Analysis:

Ecological Analysis:

Figure 1

Figure 2

Lanes: 1: 100 Bp Marker 2: (+) Control 3: (-) Control 4-20: Samples

MATERIALS & METHODS:

Sampling: • 17 amphibians were sampled from field sites (Van Cortlandt Park, Saxon Woods Park and Teatown Park) through the summer months over a 2yr period. • Red-back Salamander, Plethodon cinereus, was the species focused on when sampling, however some other amphibian species were also swabbed for Bd. • Salamanders were swabbed for Bd and cutaneous microflora and had physical measures recorded. • At each salamander location, several ecological variables were measured, including; • distance to the nearest tree • diameter of the tree • length and width of cover object • canopy and ground coverage • air and ground temperature. • Same measurements were taken at a paired site at a random distance and direction for comparison.

Data Analysis: • DNA extracts from the 17 amphibians were analyzed using conventional end-point PCR, and viewed under UV light after a gel electrophoresis was run (Figure 1). • Ecological data was analyzed using either T-tests or Chi-square tests to examine differences among groups.

Figure 3

DNA Analysis: • The positive control was successful in demonstrating the efficacy of our primers and thus testing positive for Bd. • None of the amphibians sampled tested positive for the fungus. • Bacterial swabs from the skin of salamanders were successfully grown in culture and the various bacterial colonies were observed and compared. Ecological Analysis: • Salamanders tended to be found closer to mature trees (Figure 2), and underneath shorter cover objects (Figure 2) compared to the available microhabitat. • Ground temperature at salamander sites tended to be cooler than surrounding locations (Figure 4). • Our results show that salamanders are choosing microhabitats based on certain characteristics. • However, there were no significant differences found between amphibian habitats in urban locations and more rural sites. • This shows that salamanders are able to find ideal microhabitats even in an urban park (Van Cortlandt), so conservation of these populations is possible!

Figure 4

ACKNOWLEDGEMENTS:

• We thank Dr. Theodosiou and the School of Science Summer Research Scholar's Program for financial support, and Andrew Paramo, Mary Portes and James LiMonta for assistance in data collection.


Exploring Chromatin Dynamics Within the DNA Damage Response Pathway in Living Cells Bright Shi and Bryan J. Wilkins, Manhattan College, Department of Chemistry and Biochemistry

Abstract Genetic information is stored in the form of chromatin, consisting of DNA, histones and other essential proteins. Histone proteins mediate all aspects of chromatin function and are regulated by sets of posttranslational modifications (PTMs). Modification patterns dictate differential pathways dependent upon cellular cues. This dynamic behavior is at the heart of all chromatin-related processes, such as replication, transcription and repair. Unfortunately, DNA is inherently susceptible to damage. There are numerous forms of damaging factors, and several DNA damage pathways collectively protect the genome from life-threatening mutations that have direct links to both cancer and aging. Therefore it is crucial that methods are developed that allow us to study chromatin processes to better understand DNA damage pathways. We are using a synthetic biology approach that can trap histone-protein interactions in living cells, using unnatural amino acids. Comparing histone-protein interactions that are altered due to DNA damage will help us resolve the mechanisms that reshape chromatin structure under damaging stress. Many factors recognize and repair different types of damage but the orchestration of their function is still largely unknown. DNA damage signaling promotes broad changes in histone PTMs, and how the modifications control interactions at the nucleosomal interface during the response pathway is elusive. We can monitor histone PTMs across the cell cycle and correlate their influence on histone-protein interactions during damage pathways. We aim to expose nucleosomal repair protein-protein interactions and the mechanistic details of repair dynamics in yeast.

Introduction

Methods

Chromosomal stability is contingent upon localized chromatin domain reorganization in order to allow access to impaired nucleotides by cellular repair machinery. There remains an insufficient amount of data regarding the complicated molecular interactions that occur on the nucleosomal surface, particularly in the context of the living cell, during the DNA damage response. It is very difficult to assess chromatin dynamics in living cells. There do exist methods for studying chromatin in vivo, however the biochemical resolution of these techniques cannot completely clarify mechanistic details. Most chromatin related studies rely on the reconstitution of nucleosomal arrays in solution that cannot fully recapitulate real physiological conditions. In order to enhance our understanding of chromatin behavior in cells we require a technique that illuminates the molecular contacts that occur between the nucleosome and chromatin associated proteins in their native environment.

Our approach uses methyl methanesulfonate (MMS) and hydrogen peroxide (H2O2) as the DNA damaging agents. MMS is an alkylating drug that specifically methylates guanine and adenine DNA bases, lesions that cause DNA double-strand breaks as well as replication problems. H2O2 causes oxidative stress that can lead to mutations in DNA by creating an abasic site (loss of the base from the nucleotide). We used these reagents to monitor how UV-crosslinks from histone H2A changed when cells were stressed with DNA damage. We used plasmid-borne histone H2A genes with amber mutations at the codon of interest for the site-specific installation of pBPA. The coding sequence also contained a region for a short peptide fusion tag (human influenza hemagglutinin, "HA" tag) for antibody detection and visualization. The pBPA-containing H2A protein was expressed and allowed to incorporate itself into the native chromatin landscape. Cells were then exposed to UV light and the cell lysates were analyzed by western blotting techniques, with the crosslinks visualized via anti-HA antibodies.3 Control cells were not treated with DNA damaging agents.

The advent of an expanded genetic code has made it possible to express full-length protein harboring site-specific incorporation of unnatural amino acids (UAA).1 These amino acids are synthetic and possess side chain chemistries that can act as unique chemical handles. The power of this system resides in the ability to manipulate the endogenous translational machinery to read a stop codon as a sense codon. This requires the directed evolution of an aminoacyl-tRNA synthetase (aaRS) that is paired to a suppressor tRNA that recognizes the UAG (amber) stop codon. The substrate recognition site of the aaRS is evolved to accept only an UAA of interest. The aaRS/tRNA pair acts orthogonal to the host system creating a mechanism by which the UAA can be delivered to the ribosome and properly added to a growing chain of peptides. By introducing plasmid borne expression vectors containing the amber mutated coding sequence of a protein of interest plus the evolved aaRS/tRNA pair, live cells suppress the genetically installed stop codon with an UAA.

Approach

Grow yeast cells; add H2O2 and Methyl methanesulfonate (MMS) as DNA damaging agent.

2. Cells were grown for an additional hour in the presence of the damaging agent.
3. Cells were collected and then exposed to UV light (365 nm) for 45 min.
4. Whole-cell protein lysates were prepared via trichloroacetic acid (TCA) precipitation.
5. Proteins were separated by electrophoresis using SDS-PAGE gels.
General scheme for pBPA installation into the chromatin landscape and crosslinking approach.

Results

pBPA molecule and mechanism for radical formation and recombination.

6. Proteins were transferred to a blotting membrane and crosslinked proteins were then detected with antibodies.

Discussion
Figure 4, row A, shows that increasing the DNA damaging agent reduces the signal strength, suggesting that while DNA is repairing itself there is much less contact between the histone and that protein. Comparing the effects of MMS and H2O2, MMS is more effective at preventing crosslinking, because in all rows even 0.03% MMS has a greater effect on the signal. For the histone-protein complex in row C, H2O2 has little or no effect on crosslinking, while it is clear that increasing amounts of MMS reduce the signal strength. To further analyze the western blot, mass spectrometry is necessary to determine which protein is actually crosslinking with the histone at H2A A61. But this assay shows that we can monitor physical contacts at the nucleosomal level in living cells.

General scheme for the introduction of UAA into yeast protein of interest.

We use the amino acid p-benzoylphenylalanine (pBPA) because it contains a benzophenone moiety that can form a diradical with low-energy UV exposure (~365 nm), allowing for hydrogen abstraction and radical recombination with neighboring proteins within a distance of 0.4 nm.2 We install pBPA into histone proteins and capture protein-protein interacting crosslinks via exposure of living cells to UV light.

1. Grow yeast cells to logarithmic phase, normalize the cell count, and then add MMS or H2O2. • 5 mM and 10 mM H2O2 • 0.03%, 0.05% and 0.1% MMS

Figure 4. Western Blot for H2A A61

Figure 5. Western Blot for H2A Y58

Row D in Figure 4 represents the concentration of histone molecules alone. The denser the mark, the stronger the signal released by the histone or the histone-protein complex. A strong signal is equivalent to a higher quantity of histone or histone-protein complex. From C to A, the size of the histone-protein complex increases. The minus-UV lane shows no crosslinked protein, as expected. When cells are exposed to UV, crosslinked proteins become apparent, as seen in rows A-C. The same samples were assayed for acetylation levels using an antibody that recognizes histone H4K16 acetylation (lower blot in each figure). Figure 5 represents the same type of assay but crosslinking from a different position on the histone. Interestingly, the crosslinking patterns are different, suggesting that alternative proteins bind at these two different sites. For each, as more DNA damaging reagent is added, the crosslinking signal is reduced. H4K16 acetylation also shows the same pattern for each position assayed, as expected. While the intensities of the H4K16ac signal are different in each set of samples, the signal is most intense at the lowest level of the damaging agent. For example, in each sample, H4K16ac goes down as more H2O2 is added, as well as when increasing levels of MMS are used.

H4K16 acetylation blotting was used to predict active chromatin. Increased acetylation means increased accessibility of chromatin for binding proteins. The H4K16ac western blot for H2A A61 shows that a minute amount of DNA damaging agent significantly increases acetylation, suggesting that proteins are coming in to repair DNA damage at a significant rate. For both H2O2 and MMS, as the amount increases, the acetylation rate decreases. This suggests that the yeast cells' self-repairing process may not be sufficient to repair the damage done (high concentrations of the damaging agent may be killing the cells). The crosslinking from Y58 shows a different crosslinking pattern, meaning that different proteins bind to that position (which is to be expected). However, it is interesting to note that even though the protein interactions differ, the crosslinking efficiency decreases as more damaging agent is added. Although we are not sure of the identity of the proteins observed in these assays, we suggest that the protein is not a player in the damage pathway because it loses contact as the damage pathway is initiated. We will monitor more positions and begin to use mass spectrometry to identify the interacting protein. Only when the structures and functions of these proteins are known can the mechanisms be understood.

Reference 1. Chin, J. W. et al. An expanded eukaryotic genetic code. Science 301, 964–967 (2003). 2. Dorman, G. et al. Benzophenone photophores in biochemistry. Biochemistry 33 (19), 5661-5673 (1994). 3. Wilkins, B. J. et al. A cascade of histone modifications induces chromatin condensation in mitosis. Science 343, 77–80 (2014).


Modification of Coffee Oil Feedstock and Heterogeneous Catalyst for Biodiesel Synthesis Thérèse A. Kelly and Dr. Yelda Hangun-Balkir Department of Chemistry and Biochemistry & Department of Biology Manhattan College, Bronx, NY 10471

INTRODUCTION

RESULTS

Biodiesel is a renewable and environmentally friendly alternative to its petroleum counterpart, due in part to its lower emissions of CO2 and particulate matter and its lack of aromatics (1). It can be produced from a variety of triglyceride-rich feedstocks through a catalyzed transesterification reaction (Fig. 2), which yields fatty acid methyl esters (FAMEs) and glycerol. The purification steps associated with biodiesel production are necessitated by the commonly used homogeneous catalyst KOH, which causes the formation of soap and water effluents that lower biodiesel yields and require further chemical processing (2). The heterogeneous catalyst CaO is a sustainable alternative to KOH because it lacks the aforementioned drawbacks, performs at mild reaction temperatures, and is reusable (3). Coffee oil is a promising feedstock for biodiesel production, though its high free fatty acid (FFA) content contributes to its unsuitability for a CaO-catalyzed transesterification. In the present work, purchased coffee oil underwent a Fischer esterification pretreatment to lower its % FFA content. The catalyst Ca-glyceroxide was synthesized from CaO (Fig. 3) and used because of its higher in situ reactivity and stability.

MATERIALS & METHODS Synthesis of Ca-glyceroxide Catalyst

Calcined CaO, glycerol, and methanol were heated and stirred in a round-bottom flask for three hours. The solid product was isolated and dried via vacuum filtration, and stored in an air-tight vial to prevent contamination by ambient air.

Pretreatment of Coffee Oil Feedstock to Lower Free Fatty Acid (FFA) Content

In order to lower moisture content and viscosity of the feedstock, coffee oil was stirred at ~110 °C for one hour and cooled overnight. The two-step pretreatment reaction was carried out at 60 °C in a round bottom flask set in a sand bath. A mixture of methanol (CH3OH) and 1% v/v sulfuric acid (H2SO4) was carefully added to preheated oil, and the reaction stirred for one hour. The cooled reaction mixture was added to a separatory funnel, and the resultant bottom layer underwent the pretreatment step a second time. The bottom layer in the second separation step was heated to remove any remnant methanol or water.

CONCLUSIONS

• The synthesis of Ca-glyceroxide was confirmed by comparing its XRD pattern (Fig. 4) with the pattern described by Reyero et al. (2014).

Figure 4. XRD patterns of CaO ( ) and Ca-glyceroxide ( ). Ca-glyceroxide was synthesized in the laboratory with CaO, glycerol, and methanol.

• The pretreated coffee oil underwent a color change from dark blue-green to dark brown, following its final heating at 110 °C.

Figure 5. (L to R) Before and after final separation and heating of pretreated coffee oil.

• Ca-glyceroxide successfully catalyzed the transesterification of pretreated coffee oil to yield FAMEs (biodiesel). The peak at 1444.41 cm-1 in the biodiesel spectrum in Fig. 6 indicates methyl ester (RCOOCH3) formation. The broad peaks at ~3400 cm-1 indicate that methanol or water was still present following coffee oil pretreatment and transesterification.

Figure 8. Crude coffee biodiesel.

• The spectrum in Fig. 7 shows biodiesel synthesized with KOH, which is the most commonly used homogeneous catalyst in large-scale biodiesel production.

Figure 6. FTIR spectra of pretreated coffee oil and biodiesel synthesized with Ca-glyceroxide. The arrow indicates the presence of fatty acid methyl esters.

Figure 7. FTIR spectrum of biodiesel synthesized with untreated coffee oil and KOH. The arrow indicates the presence of fatty acid methyl esters.

CaO + 2 C3H8O3 → Ca(C3H7O3)2 + H2O
Figure 3. Ca-glyceroxide synthesis reaction equation.
Figure 2. Transesterification reaction equation.
Figure 1. Experimental setup for biodiesel synthesis.

In order to further assess the catalytic performances of Ca-glyceroxide and CaO, it is necessary to compare their percent yields of FAMEs following the transesterification of pretreated coffee oil. The application of HPLC and GC-MS would allow for the identification, separation, and quantification of FAMEs produced via transesterification. A titration method to determine the approximate % FFA content of the coffee oil must also be designed, in order to create a more precise FFA pretreatment method. Alternatively, glycerolysis could be pursued as an alternative to the acid-requiring pretreatment that was used in this project. Following further study of the aforementioned parameters, this project could explore the use of more sustainable raw materials. Waste shells and waste coffee oil are potential sources of heterogeneous catalyst and feedstock respectively, and together, their use would address the issues of costly biodiesel synthesis and waste disposal.

REFERENCES

Transesterification of Coffee Oil

The transesterification of coffee oil was carried out at 60 °C in a round bottom flask set in a sand bath. Approximately 2% w/w Ca-glyceroxide was added to heated coffee oil prior to the addition of methanol, and the reaction mixture stirred for three hours. The reaction product was recovered via vacuum filtration and added to a separatory funnel. To analyze the conversion of coffee oil to FAMEs (biodiesel), Fourier Transform Infrared Spectroscopy (FTIR) was performed on samples of the product.

Ca-glyceroxide successfully catalyzed the transesterification of coffee oil into FAMEs (biodiesel), and according to Reyero et al., can retain its high catalytic performance through 10 cycles of reuse (3). The synthesis and use of Ca-glyceroxide allowed for earlier introduction of the active catalytic phase into the transesterification of coffee oil (4). The active phase has higher basicity and FAME selectivity than CaO (3,4), and typically does not form until after the onset of transesterification. The presence of free fatty acids negatively affects the performance of homogeneous and heterogeneous catalysts alike, and it was determined through literature searches and trial experiments with untreated coffee oil that a free fatty acid pretreatment was necessary for the feedstock. Although % FFA values for coffee oil vary, a pretreatment method for Madhuca indica oil (19 % FFA) was applied to the coffee oil (5).

(1) Patil, P.D.; Gude, V.G.; Deng, S. Ind. Eng. Chem. Res. 2009, 48 (24), 10850-10856. (2) Di Serio, M.; Cozzolino, M.; Giordano, M.; Tesser, R.; Patrono, P.; Santacesaria, E. Ind.Eng. Chem. Res. 2007, 46, 6379. (3) Reyero, I.; Arzamendi, G.; Gandía, L. M. Chemical Engineering Research and Design 2014, 92 (8), 1519–1530. (4) León-Reina, L.; Cabeza, A.; Rius, J.; Maireles-Torres, P.; Alba-Rubio, A. C.; Granados, M. L. Journal of Catalysis 2013, 300, 30–36. (5) Ghadge, S.V.; Raheman, H. Biomass and Bioenergy 2005, 28,601-605.

ACKNOWLEDGMENTS Manhattan College Summer Fellows Program Manhattan College School of Science


Temporal Variation in the Prevalence of Human Intestinal Parasites in Three Bivalve Species Collected from Orchard Beach, NY

Abstract
Bivalve mollusks, as filter-feeders, play an important role in the integrity of the estuarine ecosystem. Bivalves have been shown to be infected with human intestinal parasites such as Cryptosporidium, Toxoplasma gondii and Giardia lamblia. Cryptosporidium causes cryptosporidiosis in humans and other vertebrates, Toxoplasma gondii causes toxoplasmosis in humans, and Giardia lamblia causes giardiasis in humans and dogs. As a result of their association with these parasites, bivalves could be used as biosentinels for human parasites in aquatic environments. Four bivalve species were collected at low tide from Orchard Beach, New York, in September 2014 and October 2015. The species surveyed were the soft-shell clam (Mya arenaria), the Atlantic oyster (Crassostrea virginica), the ribbed mussel (Geukensia demissa), and the blue mussel (Mytilus edulis). We have previously reported on the prevalence of C. parvum and G. lamblia in these four bivalve species from Orchard Beach, NY. For this study, we will focus on two of the four, Mytilus edulis and Mya arenaria. The goal of this study was to determine the temporal variation in C. parvum, T. gondii, and G. lamblia in bivalves collected from Orchard Beach, New York, using the polymerase chain reaction (PCR). We found that the prevalence of C. parvum in the 2014 samples was 1% in Mytilus edulis, 16% in Geukensia demissa and 50% in Mya arenaria. However, in 2015, T. gondii, G. lamblia and C. parvum were not detected in Mya arenaria and Geukensia demissa. In contrast, G. lamblia had a 28.75% prevalence in Mytilus edulis and an 89% prevalence in Crassostrea virginica. These results indicate that bivalves can be used to assess water quality.

Fig. 1. Lifecycle of T. gondii


Goals and Objectives • Determine the temporal variation in prevalence of human intestinal parasites in four bivalve species from Orchard beach, NY. • Determine the use of bivalves as biosentinels for human intestinal parasites.

Figure 3. Collection Site and collected bivalves from Orchard Beach, N.Y.


Acknowledgements


Figure 2. Prevalence of Cryptosporidium parvum in the bivalve collected from Orchard Beach New York

• Dr. Michael Judge, for help with the collection of bivalves.
• Dr. Ghislaine Mayer, for directing the overall research.
• Manhattan College Department of Biology for financial support.
• Manhattan College School of Science Summer Research Scholars program for funding.

References


• Bivalves are excellent model systems to monitor the presence of human intestinal parasites in marine environment.


•There is marked difference in the prevalence of human intestinal parasites in bivalves collected in 2014 and 2015.


Conclusions


•Parasite DNA was detected in the digestive gland, the foot, the mantle, and the siphon.


Discussion


• Infection of the bivalves could have been caused by either humans or animals.
• The decrease in the prevalence of C. parvum in 2015 could be due to the cleaning of the beach.


Figure 5. Tissue distribution of Giardia lamblia in the bivalves collected from Orchard Beach New York


• Four species of bivalves were collected from Orchard Beach, NY, during low tide. The species collected were Mya arenaria (8 in 2014, 17 in 2015), Crassostrea virginica (10 in 2014, 9 in 2015), Geukensia demissa (44 in 2014, 64 in 2015) and Mytilus edulis (97 in 2014, 80 in 2015).
• The gills, siphon, mantle, digestive gland, abductor muscle and foot were dissected from each mussel.
• DNA was extracted from each dissected tissue using the Qiagen Tissue DNA kit (Qiagen, Valencia, CA). The purified DNA was quantitated using a UV spectrophotometer.
• PCR was performed using parasite-specific primers to detect T. gondii, C. parvum, and G. lamblia DNA in the samples.
• Amplicons were run on agarose gels stained with ethidium bromide.


Fig. 3. Lifecycle of C. parvum


Materials and Methods

Figure 1. Prevalence of Giardia lamblia in the bivalve collected from Orchard Beach New York

Fig. 2. Lifecycle of G. lamblia


Results


Figure 4. Tissue distribution of Cryptosporidium parvum in the bivalves collected from Orchard Beach New York


Freda Fafah Ami Tei, Steven Kowalyk, Joseph Annabi, Christopher Annabi, Mohamed Fazeem and Dr. Ghislaine Mayer
Department of Biology, Manhattan College, Riverdale, NY 10471


Figure 3. Prevalence of Toxoplasma gondii in the bivalve collected from Orchard Beach New York


Rochelle, P.A. 1996. Comparison of primers and optimization of PCR conditions for detection of Cryptosporidium parvum and Giardia lamblia in water. American Society for Microbiology. 63: 106-114.
Villegas, E. 2014. Using bivalves as biosentinels to detect Cryptosporidium spp. and Toxoplasma gondii contamination in aquatic environments. The 89th Annual Meeting of the American Society of Parasitologists. 89: 117-18.
Lalle, M., Pozio, E., Capelli, G., Bruschi, F., Crotti, D., & Caccio, S. M. (2005). Genetic heterogeneity at the β-giardin locus among human and animal isolates of Giardia duodenalis and identification of potentially zoonotic subgenotypes. International Journal for Parasitology, 35(2), 207-213.
Tei, F.F., Kowalyk, S., Reid, J.A., Presta, M.A., Yesudas, R., and Mayer, D.C.G. (2016). Assessment and molecular characterization of human intestinal parasites in bivalves from Orchard Beach, NY. Int. J. Environ. Res. Public Health, 13(4), E381. doi: 10.3390/ijerph13040381.


Autonomous Remote Memory for Virtual Machines and Linux Containers
Student: Emmanuel Sanchez | Student Colleague: Steven Romero
Faculty Advisor: Dr. Kashifuddin Qazi
Manhattan College School of Science Summer Research Scholars
1. Abstract

Virtual Machines (VM) and Linux Containers are the core components of cloud computing. Granting VMs and containers the ability to use more memory than available is essential to cloud computing. This feature can be implemented with the aid of remote memory. Using remote memory from other physical systems (PM) within a cluster at random can result in an overload in a system and imminent failure. This project presents a framework that autonomously predicts memory usage of all PMs in a cluster and chooses a suitable PM that has available memory to share some with the failing system. This framework is potentially critical for various data center features such as load balancing, load consolidation, and resource overcommitting


2. Background
 VMs and containers on the same system share system resources.
 During memory overload, the system resorts to swap space.
 Default swap space on all systems uses a dedicated HDD partition.
 Hard disk swap is slow; remote RAM is much faster.
 Reactively selecting a PM based on current system requirements is prone to overloads.
 It is possible to predict RAM use within a cluster.

 Datacenter memory loads are predictable  Load prediction mechanisms have been used to successfully achieve other goals

4. Methodology

6. Conclusion

Experimental analysis of the framework reveals that even using the average of current loads leads to a 25% reduction in overloaded PMs in contrast to a reactive approach. Using a prediction mechanism such as Fourier Transform results in a 33% reduction. Further, with the framework approach, the number of overloads lasting greater than 10 minutes at a time is reduced by 85% compared to the reactive approach. These results demonstrate the potential of the framework in lowering the chance of a hardware failure in memory overcommitted data centers.

5. Results

HDD vs Remote Swap Results

PM specs: 8 GB RAM i5 Quad Core Processor


7. Future Work

Cluster Simulation Results

PM specs: 4 hosts with 8 GB RAM, each running 8 VMs with 8 GB each. Memory loads from NASA website data traces.

3. Goal

• A framework that periodically predicts prospective memory usage
• In case of potential overload, it intelligently decides whether a VM/container should:
• Migrate to a system in the cluster with all RAM available
• Share RAM with a system that has some RAM available
• Use HDD swap as a final resort (a simplified sketch of this decision step follows)
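A hypothetical C sketch of that periodic prediction-and-decision step is shown below. All names, thresholds, and the window-average predictor are illustrative assumptions, not the authors' implementation (the poster also evaluates an FFT-based predictor).

```c
/* Illustrative sketch only: predicts per-PM memory use and picks a remedy. */
#include <stdio.h>

#define HISTORY 12                      /* recent memory-usage samples per PM (assumed) */

struct pm {
    const char *name;
    double total_mem_gb;
    double usage_gb[HISTORY];           /* sliding window of observed usage */
};

/* Predict next-interval usage as the average of the recent window. */
static double predict_usage(const struct pm *p)
{
    double sum = 0.0;
    for (int i = 0; i < HISTORY; ++i)
        sum += p->usage_gb[i];
    return sum / HISTORY;
}

/* Decide what an overloading VM/container on `src` should do. */
static void decide(const struct pm *src, double vm_mem_gb,
                   const struct pm *cluster, int n)
{
    /* 1) migrate if some PM is predicted to fit the whole VM */
    for (int i = 0; i < n; ++i) {
        const struct pm *p = &cluster[i];
        if (p == src) continue;
        if (p->total_mem_gb - predict_usage(p) >= vm_mem_gb) {
            printf("migrate VM to %s\n", p->name);
            return;
        }
    }
    /* 2) otherwise share remote RAM from a PM with some predicted headroom */
    for (int i = 0; i < n; ++i) {
        const struct pm *p = &cluster[i];
        if (p == src) continue;
        double free_gb = p->total_mem_gb - predict_usage(p);
        if (free_gb > 0.0) {
            printf("use %.1f GB of remote RAM from %s\n", free_gb, p->name);
            return;
        }
    }
    /* 3) last resort: local HDD swap */
    printf("fall back to local HDD swap\n");
}

int main(void)
{
    struct pm cluster[2] = {
        { "pm0", 8.0, {7,7,7,7,7,7,7,7,7,7,7,7} },
        { "pm1", 8.0, {3,3,3,3,3,3,3,3,3,3,3,3} },
    };
    decide(&cluster[0], 4.0, cluster, 2);   /* a 4 GB VM on pm0 is about to overload */
    return 0;
}
```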

Metric                                       FFT     Averages   No Prediction   Optimal
Total Overloaded PMs                         9867    11765      14716           1472
Total Time Overloaded (minutes)              4764    5564       7287            665
Overloads Lasting >10 mins                   11      40         74              0
Overloads Lasting <10 mins                   2351    3334       3456            51
Amount of 2 PMs Simultaneously Overloading   3803    5535       7216            65
Amount of 4 PMs Simultaneously Overloading   33      29         71              8

• The framework will be tested on a physical cluster with additional memory traces (e.g. Google's data traces)
• Various other prediction mechanisms (Chaos theory, Bayesian models) can be plugged in and analyzed for their efficiency in the framework

8. References

1. http://www.digitalinternals.com/unix/linux-create-ram-disk-filesystem/438/
2. http://www.microhowto.info/howto/export_a_block_device_using_nbd.html
3. Williams, D., Jamjoom, H., Liu, Y.H. and Weatherspoon, H., 2011, March. Overdriver: Handling memory overload in an oversubscribed cloud. In ACM SIGPLAN Notices (Vol. 46, No. 7, pp. 205-216). ACM.
4. Gong, Z., Gu, X. and Wilkes, J., 2010, October. Press: Predictive elastic resource scaling for cloud systems. In 2010 International Conference on Network and Service Management (pp. 9-16). IEEE.
5. http://ita.ee.lbl.gov/html/contrib/NASA-HTTP.html


Solving the Structures of ZSM-18 and SUZ-9
Gertrude Turinawe Hatanga
Manhattan College, Department of Chemistry and Biochemistry
Introduction

Methods

Two industrially synthesized zeolites, ZSM-18 and SUZ-9, are the focus of our study. A zeolite is a crystalline, porous aluminosilicate. The metal atoms (commonly silicon and aluminum) in zeolites are surrounded by four oxygen anions to form an approximate tetrahedron consisting of a metal cation at the center and oxygen anions at the four apexes [1]. ZSM-18 was first patented by Julius Ciric at Mobil Oil Corporation in 1976 [2], while SUZ-9 is more recent, having been synthesized by researchers at BP and ExxonMobil [3]. Unique properties, such as their porous character with uniform pore dimensions, make these zeolites important, as they can be used as adsorbents, catalysts and molecular sieves.

We use sophisticated crystallographic programs such as GSAS-II, ATOMS and Superflip [4] to verify the published topology of ZSM-18. The refinement of the published topology in space group P63/m does not converge, indicating there may be problems with the published topology or its space group. Running the Superflip program (5) did not solve the structure but did suggest P6/m and P11m as possible space groups.

Objectives Since the ZSM-18 structure was determined only by model-building, variations of this model have been suggested over time, because the possible ways to combine the T-atoms to form the structure are virtually limitless. Our main goal is to confirm the ZSM-18 structure using powder X-ray diffraction data. Unlike ZSM-18, the structure of SUZ-9 has not yet been suggested or determined by either model building or sophisticated ab initio crystallographic computer programs. Our goal is to determine the SUZ-9 structure using both methods suggested above.


Previous research determined that SUZ-9 has hexagonal symmetry with cell dimensions a = b = 36.14 Å and c = 7.5 Å. It was also determined that SUZ-9 is the largest member of the "12-Ring Family of Porous Materials". All members of this family are known to have the same lattice constant value for c but different values for a and b. Because the cell dimensions a and b for SUZ-9 are twice those of LTL, it is assumed that the framework topology of SUZ-9 is made up of the same building blocks as LTL (Fig. 1).

Fig 1. Structure of LTL, composed of ltl (the 12ring in center), d6r and can building units (making 4- and 8-rings), as viewed from above (down the c axis). Blue = Silicon and red = oxygen atoms.

Fig 2: Building blocks s6r, gme, ltl, and d6r/can. The c dimension of gme, ltl, and d6r/can is the same as the 7.5 Å axis length in SUZ-9.

Fig 3 : LTL framework topology as viewed from the side. Oxygen atoms have been left out from Fig. 3 for better viewing purposes.

Results on SUZ-9 The most recent project we have embarked on uses simulated annealing, a program in GSAS-II [5]. GSAS-II is an updated version of GSAS-I and has the capability to perform charge flipping and simulated annealing, which were otherwise not possible in GSAS-I. Our first attempt was to export, in XYZ Cartesian coordinates, a 12-ring from ATOMS, which was input into GSAS-II as a rigid body. The results of this simulated annealing calculation were, however, not helpful. Next we used the whole ltl cage (a 185-atom rigid body). The result we expect from this calculation is for the program to systematically align the ltl cage in the unit cell in such a way that it paves the way for other building blocks within the 12-ring family to be inserted in the unit cell, hence solving the SUZ-9 structure. The result we have obtained so far shows what appears to be an ltl cage placed at each of the four corners of the unit cell. This was anticipated,

but we hoped that the program would fill in the empty gaps between the ltl cages. We hoped that the cages would anneal (be joined) in the unit cell with other building units of the 12-Ring Family of Porous Materials, which did not happen. The next step will be to try to input other building units, like can and d6r (shown above in Fig. 2), along with ltl into the system to see if we obtain a better result. However, the simulated annealing program nearly overwhelmed the capability of our fastest computer (a 2015 MacBook Pro), so we need to find and use a better computer.

Conclusion While some progress has been made in determining the structure of SUZ-9, the research is still ongoing. A faster computer will be needed in the future to continue exploring the Simulated Annealing program in GSAS-II.

Acknowledgments

This work was funded by the Camille and Henry Dreyfus Foundation Senior Scientist Mentor Program and the Summer Research Scholars Program at Manhattan College. Thanks to Dr. Richard Kirchner, mentor, and Eric Castro and Christine Schmidt, student research colleagues for their help and assistance..

References:

1. L. Moscou, in;H. van Bekkum, E.M.Flanigen, J.C.Jansen(Eds.); Introduction to Zeolite Science and Practice; Studies in Surface Science and Catalysis 58, ELSEVIER SCIENCE PUBLISHERS B.V, Amsterdam,1991 pp 2, 5 2. Ciric, J, inventor; Mobil Oil Corporation, assignee. Synthetic zeolite ZSM-18. United States Patent US 3,950,496. 1976 April 13 3. Stephen L. Lawton and Wayne J. Rohrbaugh; The Framework Topology of ZSM-18, a Novel Zeolite Containing Rings of Three (Si,Al)-O Species. Science 16 Mar 1990 Vol. 247, Issue 4948, pp. 1319-1322 (DOI:10.1126/science.247.4948.1319) 4. Baerlocher Ch., McCusker L. B., Palatinus L. (2007): Charge flipping combined with histogram matching to solve complex crystal structures from powder diffraction data, Z. Kristallogr. 222(2), 47-53 5. Toby, B. H., and Von Dreele, R. B. (2013). "GSAS-II: The Genesis of a Modern Open-Source All-Purpose Crystallography Software Package," Journal of Applied Crystallography 46, 544549. doi:10.1107/S0021889813003531


Abstract
Chromodulin, also known as low-molecular-weight chromium-binding substance (LMWCr), is a chromium cofactor that plays an important role in glucose metabolism, but its structure has never been characterized. We performed MMFFaq molecular mechanics calculations in an attempt to characterize the structure and determine its effect on tyrosine kinase. These calculations were performed in order to better understand the enzymatic mechanism and to compare the thermodynamic properties of the candidate chromodulin structures to the experimental values; this would hopefully allow us to characterize the structure.

Introduction
Since the 1950s, chromium has been used as a nutritional supplement. Later studies showed that chromium has the ability to alleviate the symptoms of Type II diabetes, hinting at a biologically active role for chromium. In the 1980s, a chromium-binding oligopeptide known as chromodulin was isolated. Chromodulin combats insulin resistance by increasing the activity of insulin receptor tyrosine kinase, though the exact mechanism is still not known. If blood glucose levels start to get too high, the hormone insulin is secreted in order to lower blood glucose levels and initiate the conversion of glucose to glycogen. Once insulin is bound to the insulin receptor, the receptor is activated and the two β-subunits, also known as tyrosine kinase enzymes, phosphorylate specific tyrosine residues. The enzyme then phosphorylates second messenger proteins, and the resulting signal transduction initiates gene expression that lowers blood glucose levels.

temperature. The structure of chromodulin was studied by altering the structure of the chromium cluster in the three candidate peptides and the binding region of chromodulin. Tyrosine kinase was modelled at different states in the enzymatic mechanism (see Figure 3), each containing different cofactors (e.g. ethylmercury and chromodulin). The structures of both the peptides and the proteins were modelled so that the calculations reflected the structure of these molecules at physiological pH (pH ~ 7).

Discussion
The results obtained from this project provide good data for elucidating the structure of chromodulin. Although the shape of the chromium cluster was determined by previous experiments, the exact orientation of the cluster and the residues involved in binding are still unknown. To narrow down the correct structure, six different orientations of the chromium cluster were developed, with each atom in the trinuclear assembly being bound to different residues in the binding region EEEEGDD (Figure 5); the fourth chromium atom has interactions with all of the carboxyl groups of the acidic residues.

Figure 7. Tyrosine Kinase Mechanism of Action in the Presence of Chromodulin

In order to best understand the mechanism, the enzymes were studied with and without chromodulin and a possible ethylmercury cofactor from the PDB file. The structure of tyrosine kinase containing ethylmercury (TyrKHg) appears to provide the correct structure of the enzyme, as its ability to bind ATP is greatly aided by chromodulin, as opposed to TyrK (Figure 7). Although the binding of ATP by TyrKHg is nonspontaneous, it may work through energetic coupling (one reaction driving the other), possibly from insulin binding, formation of the tyrosine kinase dimer, or being pulled by the energy released from the phosphorylation reaction. However, even though the binding can be favorable through energetic coupling, the initial interaction between tyrosine kinase and ATP is nonspontaneous, so enzyme activity should be

Figure 5. Possible Orientations of the Chromium Cluster in Chromodulin

Figure 1. Mechanism of Tyrosine Kinase Activity1

Amino acid sequence        ΔG°f, no disulfide bridge (kJ/mol)
EEEEGDDGCC                 -334.47
EEEEGDDCCG                 -385.02
EEEEGDDCGC                 -390.53
pEEEEGDD                   -391.89
Table 1. ΔG°f (kJ/mol) of Candidate Amino Acid Sequences

Figure 2. Proposed Mechanism of Chromodulin Action8
As seen in Figure 1, after the insulin receptor has bound insulin, chromium is transported into the cells by a protein called transferrin (Tf). Chromodulin is then loaded with chromium; the large binding constant (Kf = 1.10 × 10^21 M^-4)2 essentially causes all chromium in the cell to be bound by chromodulin. Chromodulin binds chromium by the following reaction:
Apo-LMWCr-6 (aq) + 4 Cr3+ (aq) → Holo-LMWCr+6 (aq)    ΔG°rxn (311.75 K) ~ -125.58 kJ/mol
ΔG°f of Cr3+ (aq) at 311.75 K ~ -216 kJ/mol
After chromodulin is loaded with chromium, it binds to the insulin receptor and increases its activity. Once insulin levels drop, chromodulin dissociates from the receptor and is excreted from the body. Despite being found in a wide variety of animals (e.g. rabbits, cows, humans), the exact structure of chromodulin has never been determined. Recent efforts have been undertaken to elucidate the structure of the metal cluster and the amino acid sequence. The chromium cluster is comprised of a tetrahedral arrangement of four Cr3+ ions. Three Cr3+ ions are antiferromagnetically coupled together and interact electrostatically with a fourth Cr3+ ion.
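The binding constants quoted above follow directly from the corresponding free energies through ΔG° = -RT ln Kf. The short sketch below (an illustration added here, not part of the original poster) reproduces the quoted Kf values from the quoted ΔG°rxn values at 311.75 K:

```python
import math

R = 8.314    # J/(mol K)
T = 311.75   # K, the bovine liver temperature used for all calculations

def binding_constant(dG_rxn_kJ):
    """K_f = exp(-dG / RT), with dG supplied in kJ/mol."""
    return math.exp(-dG_rxn_kJ * 1000.0 / (R * T))

print(binding_constant(-125.58))   # apo-chromodulin: ~1.1e21 M^-4
print(binding_constant(-121.05))   # pEEEEGDD:        ~1.9e20 M^-4
```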

Figure 3. Model of Interactions Between Chromium Atoms in Chromodulin5
Cr2 forms carboxylate ligand bridges to interact with Cr3 and Cr4, while Cr3 and Cr4 are believed to be bridged by a stronger oxo ligand (e.g. hydroxide, water, oxygen anion, etc.). Unlike in most metalloproteins, the chromium is believed not to be coordinated by sulfur atoms in cysteine residues, but rather by oxygen atoms from carboxyl groups in acidic amino acid residues (Glu & Asp). The exact amino acid sequence of chromodulin differs between species, but this research project focused on the chromodulin found in bovine liver. The oligopeptide has been theorized to contain Asp/Asn, Glu/Gln, Cys, & Gly amino acid residues in a 2:4:2:2 ratio, respectively. Recent efforts in characterizing the structure of chromodulin have revealed that the Glx and Asx residues are responsible for binding the Cr3+ ions in the chromodulin cluster. Furthermore, Chen and Vincent have shown that the amino acid sequence pEEEEGDD, where pE is pyroglutamate, is able to bind chromium nearly as well as chromodulin (Kf = 1.92 × 10^20 M^-4)2; 2 Cys and 1 Gly were lost during the isolation procedure.
pEEEEGDD-6 (aq) + 4 Cr3+ (aq) → pEEEEGDD-Cr4    ΔG°rxn (311.75 K) ~ -121.05 kJ/mol
Because the N-terminus is glutamate, it is difficult to sequence chromodulin using standard methods, as sequencing causes the conversion of glutamate to pyroglutamate. Thus, chromodulin should have the sequence EEEEGDD as its starting sequence; this region also acts as the binding region. Assuming standard amino acid linkage, the sequence of chromodulin may be one of the following: EEEEGDDGCC, EEEEGDDCGC, or EEEEGDDCCG. Spectroscopic analysis using UV/Vis spectroscopy shows several distinct peaks.

Orientation model          1          2          3          4          5          6
EEEEGDDGCC-Cr4        -866.05     101.81    -436.43   -1348.39     141.59    -935.98
EEEEGDDCGC-Cr4        -634.24     -49.72    -450.53     642.43    -168.23   -1096.53
EEEEGDDCCG-Cr4        -716.67     120.01    -440.46    -224.51    -178.73    -143.8
Table 2. ΔG°f (kJ/mol) of Candidate Chromodulin Structures (w/o Disulfide Bridge)
-321.22 -299.26 -261.64 N/A

Orientation model          1          2          3          4          5          6
EEEEGDDGCC-Cr4        -897.52     104.61    -449.68   -1397.48    -210.74    -907.68
EEEEGDDCGC-Cr4        -909.42      -5.97    -400.14   -1083.54    -190.8    -1082.18
EEEEGDDCCG-Cr4        -884.16     147.38    -439.93    -221.93    -161.09    -248.49
Table 3. ΔG°f (kJ/mol) of Candidate Chromodulin Structures (w/ Disulfide Bridge)
By comparing the ΔG°f values in the above tables, it is apparent that the sequence EEEEGDDGCC with chromium cluster model 4 produces the lowest ΔG°f values, implying that these form very stable chromodulin complexes; their ΔG°f values produce ΔG°rxn values close to the accepted value determined from numerous experiments. Tables 1-3 assume that Cr3 & Cr4 are bridged by carboxylate groups, but since it is theorized that stronger bridging ligands are needed, different bridging ligands were tried to see whether the ΔG°f values move closer to the theoretical value for this amino acid sequence. The bridging ligand that produced ΔG°f values closest to the theoretical value was O2.

Figure 6. Space Filling (left) and Ball and Stick (right) Models of Theoretical Chromodulin Structure
Name of molecule                    ΔG°rxn (kJ/mol)    % error    Kf (M^-4)
EEEEGDDGCC                          -128.64            2.44%      3.59 × 10^21
EEEEGDDGCC - disulfide              -147.42            17.39%     5.03 × 10^24
apo-chromodulin (bovine liver)      -125.58            N/A        1.10 × 10^21
Table 4. ΔG°rxn and Kf for the Binding of O2 & Cr3+ by apo-chromodulin and the Candidate Structures

Figure 4. UV/Vis Spectra of Chromodulin6
The two peaks around 400 nm and 600 nm are due to d-orbital transitions, but the peak at 260 nm is more ambiguous; it may be due either to a disulfide bridge between two cysteine residues or to the chromium cluster creating a ligand-to-metal charge transfer (LMCT) complex with one of the ligands in the coordination complex.
Methods
Molecular mechanics calculations were performed on theoretical structures of chromodulin and tyrosine kinase enzymes using Spartan '14 (7) with the MMFFaq force field (3). The chromodulin structure studied was that found in bovine liver, while the tyrosine kinase enzyme was from the human insulin receptor. The structure of insulin receptor kinase was obtained from the Protein Data Bank, while the structure of chromodulin was inferred from experimental data. Every calculation included the thermodynamic properties at 311.75 K; this temperature corresponds to bovine liver temperature.

Name of molecule             ΔG°rxn (kJ/mol)    % error    Kf (M^-4)
pEEEEGDD                     -135.65            12.06%     6.71 × 10^22
pEEEEGDD (bovine liver)      -121.05            N/A        1.92 × 10^20
Table 5. ΔG°rxn and Kf for the Binding of O2 & Cr3+ by pEEEEGDD
Despite the fact that the N-terminal glutamate has been converted to pyroglutamate, it is still possible for the carbonyl oxygen on pyroglutamate to bind chromium rather effectively under aerobic conditions. The relatively small deviation from the experimental data shows that these theoretical structures have a good likelihood of being consistent with the actual structures of chromodulin and pEEEEGDD. This also agrees with experimental data showing that only the Glu and Asp residues are responsible for chromium binding. Furthermore, because the results show consistency by having a relatively small % error in orientation model 4, this may be a good approximation of the chromium cluster in chromodulin. In particular, the small % error for the sequence EEEEGDDGCC shows a great likelihood that it possesses the correct structure of chromodulin. However, because the percent error of EEEEGDDGCC with a disulfide bridge present is also relatively small, it cannot be ruled out as a possible structure of chromodulin. The exact mechanism by which tyrosine kinase functions is not well known or understood. By studying the thermodynamics of the enzyme tyrosine kinase, light can be shed on the enzymatic mechanism. Enzymes lower the activation energy of unfavorable reactions by passing through alternative and more favorable reaction pathways.
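The % error columns in Tables 4 and 5 are just the deviation of each calculated ΔG°rxn from the corresponding experimental value; a small sketch (not the authors' code) reproducing those figures:

```python
def percent_error(calculated, experimental):
    """Relative deviation of a calculated value from the experimental one."""
    return abs(calculated - experimental) / abs(experimental) * 100.0

print(percent_error(-128.64, -125.58))   # EEEEGDDGCC:              ~2.44 %
print(percent_error(-147.42, -125.58))   # EEEEGDDGCC - disulfide:  ~17.39 %
print(percent_error(-135.65, -121.05))   # pEEEEGDD:                ~12.06 %
```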

Form of enzyme       TyrK          TyrKHg        TyrKLMWCr     TyrKHgLMWCr
ΔG°f (kJ/mol)        65161         33521         57584         43922
Table 6. ΔG°f of Tyrosine Kinase Enzyme Structures

Form of enzyme       TyrK          TyrKHg        TyrKLMWCr     TyrKHgLMWCr (interim results)
ΔG°rxn (kJ/mol)      -38128.74     -16506.74     -30756.74     ~ -40000
Table 7. ΔG°rxn for Autophosphorylation of Tyrosine Kinase Enzymes

Form of enzyme       TyrK          TyrKHg        TyrKLMWCr     TyrKHgLMWCr
ΔG°rxn (kJ/mol)      -20922.27     1358.73       -19166.27     -19166.27
Table 8. ΔG°rxn for the Binding of ATP by Tyrosine Kinase Enzymes

Furthermore, because the tyrosine residues in the active site are so close to each other, each subsequent autophosphorylation step will be more endergonic; this causes the degree of autophosphorylation to be somewhat limited. Also, since chromium is transported into the cell after insulin has already been bound to the receptor, it is fair to postulate that the enzyme should be able to function fairly well without chromodulin, though it will eventually become insulin resistant if there is a chromium deficiency, due to insufficient levels of chromodulin. Based on the interim results, the binding of LMWCr to TyrKHg appears to be an endergonic process, so it may bind to the enzyme via a different pathway. Since tyrosine kinase carries out signal transduction after insulin binding, it can be inferred that TyrKHgPi3 is the predominant form of the enzyme at this point in the reaction. Due to this, it is possible that substrate phosphorylation and autophosphorylation occur simultaneously. Afterwards, TyrKHgPi3 can bind chromodulin to increase enzyme activity; it is worth noting that, due to the close proximity of the tyrosine residues, ATP binding gets progressively more endergonic over the course of the reaction. The presence of chromodulin lowers the ΔG for ATP binding at this point and allows autophosphorylation to occur more favorably. Then, assuming the mechanisms illustrated in Figure 2 and Figure 7 are accurate, once insulin levels drop chromodulin will spontaneously dissociate from the enzyme after performing one last substrate phosphorylation.
Conclusion
Based on these calculations, it can be concluded that the structures of chromodulin and pEEEEGDD created during this project may match the structures studied experimentally. Comparing the ΔG°rxn and Kf values of the sequences EEEEGDDGCC and pEEEEGDD to the accepted values shows that they may have the approximate orientation of the chromium cluster found in chromodulin. Also, since only the sequence EEEEGDDGCC was able to produce comparable results out of all three sequences, there is a strong likelihood that it is the sequence of chromodulin. By modelling the effect chromodulin has on the energy of the tyrosine kinase enzyme, its effects on the enzyme mechanism can be inferred. Since chromodulin appears to make ATP binding more favorable, this shows one way in which the cofactor can increase enzyme activity. By making tyrosine autophosphorylation a more favorable process, the reaction should be able to occur at a more rapid pace and signal transduction can occur more efficiently.
Future Work
Further calculations will be performed on holo-chromodulin and apo-chromodulin structures using density functional theory. These calculations will produce more accurate ΔG values, a better approximation of the structures in aqueous solution, and a UV/Vis spectrum of holo-chromodulin; the spectrum can then be compared to the experimental data. MMFFaq calculations will continue to be run on the tyrosine kinase enzymes in order to determine ΔG°f and ΔG°rxn values in the enzymatic mechanism.
Acknowledgements
The author gives his thanks to the Michael '58 and Aimee Rusinko Kakos Summer Research Fellowship for its continued financial support. He also thanks Dr. Joseph F. Capitani for providing guidance and advice throughout the entirety of this research project.
References
"Insulin Receptor." Integration and Cellular Signalling. Oregon State University, n.d. Web. 11 Aug. a a 2016. 2. Chen, Y., Watson, H., Gao, J., Sinha, S. H., Cassady, J., Vincent, J.. “Characterization of the Organic Component of Low-Molecular-Weight Chromium-Binding Substance and Its Binding of Chromium.” The Journal of Nutrition, 2011. 3. Halgren, Thomas. A.. “Merck molecular force field. I. Basis, form, scope, parameterization, and performance of MMFF94.” Journal of Computation Chemistry. 1996 4. Harris, Daniel. Lucy, C. A., Quantitative Chemical Analysis. 9th ed., W.H. Freeman & Company, 2016. 5. Jacquamet, L., Sun, Y., Hatfield, J., Gu, W., Cramer, S., Crowder, M., … Latour, J.. “Characterization of Chromodulin by X-ray Absorption and Electron Paramagnetic Resonance Spectroscopies and Magnetic Susceptibility Measurements.” Journal of the American Chemical Society, 2012. 6. Peterson, R. L., Banker, K. J., Garcia, T. Y., Works, C.F.. “Isolation of a novel chromium(III) binding protein from b bovine liver tissue after chromium(VI) exposure.” Journal of Inorganic Biochemistry, 2008. 7. Spartan’14 Wavefunction, Inc. Irvine, CA 8. Vincent, John. “The Biochemistry of Chromium.” The Journal of Nutrition, 2000.


Developing Automated Systems for Measuring Interference Fringes of a Michelson Interferometer

Problem

 Utilizing an interferometer to measure changes in optical path length can lead to very rapid shifts in fringes. These shifts can happen faster than an ordinary camera can detect, potentially causing individual fringes to be missed.
 To solve this problem we need to develop a device that records the light intensity over time while the interferometer is active. The device needs to be able to record data at frequencies higher than an ordinary camera.


Sensor Testing
Five sensors were tested for detecting light intensity: the Vishay TEPT5600, the Sparkfun TEMT6000, the Luna Optoelectronics NSL 5152, the Adafruit TSL 2561, and the Luna Optoelectronics NSL 4960.

Sensor                           Voltage range    Maximum frequency    Lux saturation limit
Vishay TEPT5600                  0 V – 6 V        120 Hz               < 75 lux
Sparkfun TEMT6000                0 V – 6 V        > 200 Hz             < 30 lux
Luna Optoelectronics NSL 5152    0 V – 100 V      40 Hz                < 15 lux
Adafruit TSL 2561                2.7 V – 3.6 V    33 Hz                > 200 lux (did not saturate)
Luna Optoelectronics NSL 4960    0 V – 150 V      40 Hz                < 20 lux

Objectives
 Develop code and hardware to test a Michelson interferometer
 Test limitations of the sensors and microcontroller


Testing in the Interferometer The particular set up used is a standard Michelson Interferometer, but the shift in mirror location is caused by a thermally expanding copper pipe.

The Device
The basis for the device is the Arduino Genuino Uno, a powerful but inexpensive microcontroller that operates at 16 MHz. This microcontroller can read voltage outputs from various sensors: light sensors, accelerometers, thermometers, etc. To use the Arduino to its full potential, we needed to find the most efficient sensors for measuring light intensity over time.
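The poster does not show the acquisition code; one common pattern, sketched below under the assumption that the Arduino simply prints each analogRead() value over USB serial, is to log the stream with timestamps on a host PC (hypothetical port name and baud rate; requires the pyserial package):

```python
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical serial port exposed by the Arduino
BAUD = 115200           # must match the rate set in the Arduino sketch

with serial.Serial(PORT, BAUD, timeout=1) as board, open("intensity_log.csv", "w") as log:
    log.write("time_s,raw_adc\n")
    start = time.time()
    while time.time() - start < 600:        # record for ten minutes
        line = board.readline().strip()      # one sensor reading per line
        if line:
            log.write(f"{time.time() - start:.4f},{line.decode(errors='ignore')}\n")
```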

Limiting factors considered: lux saturation, maximum frequency, and ease of use.

Michelson Interferometer

d=mλ/2

d = change in optical path length m = number of fringes  λ = wavelength of laser 

Conclusions

The detected value of 1798 local maxima would correspond to an optical path length change of 0.56 mm. This is a 180% error from the expected value. However, about 400 of the 'local maxima' are below 0.45 volts, the 'average' value of the data points, which suggests that a large portion of the 'local maxima' are not fringes.
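As a rough illustration of this analysis (not the authors' code), local maxima can be counted only when they rise above the 0.45 V background level, and the optical path length change then follows from d = mλ/2. A 632.8 nm He-Ne wavelength is assumed here, since 1798 fringes at that wavelength give roughly the 0.56 mm quoted above:

```python
WAVELENGTH = 632.8e-9   # m, assumed He-Ne laser line
THRESHOLD = 0.45        # V, readings below this are treated as background noise

def count_fringes(voltages):
    """Count local maxima that rise above the threshold."""
    peaks = 0
    for i in range(1, len(voltages) - 1):
        v = voltages[i]
        if v > THRESHOLD and v > voltages[i - 1] and v >= voltages[i + 1]:
            peaks += 1
    return peaks

def path_length_change(m):
    """d = m * lambda / 2, in metres."""
    return m * WAVELENGTH / 2.0

print(path_length_change(1798) * 1e3, "mm")   # ~0.57 mm for 1798 fringes
```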

Results


Moving forward
First and foremost, further experimentation needs to be done with the interferometer to find the limit above which 'local maxima' are considered fringes and below which they are just background noise. The Arduino is a powerful device that allows a variety of data types to be measured, provided the device is programmed for them. A variety of experiments using the Arduino are currently being developed for the classroom, including pendulum motion and spring acceleration.

The copper pipe was heated from 23.75 °C to 67.5 °C, a shift in temperature of 43.75 °C. The initial copper pipe length is 27.5 cm. For linear thermal expansion, this corresponds to a path length change of 0.1997 mm.

Special Thanks

Arduino Results

Dr. Bruce Liby for the guidance and feedback; Alex Karlis for building the interferometer.

Michelson Interferometer: Intensity as a Function of Time

Final Device


The TEPT5600 and the TEMT6000 have very high maximum frequencies at which they could operate without saturating. Both had reasonable lux saturation points; however, the TEPT5600 has a curved window on top of a flat sensor, which makes aiming the interferometer directly at the sensor difficult. As such, we chose the TEMT6000 for its ease of use and high maximum frequency.


Sean Heffernan, Dr Veronique Lankar


1798 Local Maxima 


References


E. Richard Cohen, David R. Lide, and George L. Trigg, editors. AIP Physics Desk Reference, 3rd edition. New York: Springer-Verlag New York, Inc., 2003.
Adafruit: c2005-2016 NYC (NY): Adafruit [Accessed June 11, 2016]. https://learn.adafruit.com/max31855-thermocouple-python-library
Sparkfun: r2010-2016 Niwot (CO): Sparkfun [Accessed June 11, 2016]. http://bildr.org/2011/06/temt6000_arduino/


Abstract
Human Serum Albumin (HSA) is a protein found in plasma that helps to transport drugs throughout the circulatory system. The binding of dipicolinic acid (DPA), a chemical component of bacterial spores, to HSA was investigated using fluorescence and UV/Vis spectroscopy. The experimental trials were performed in order to understand the type of forces involved in the binding of DPA and to determine the mechanism of quenching involved in the binding reaction. These were determined by calculating the thermodynamic parameters, the quenching constant, and the binding constant.
Introduction
Human Serum Albumin (HSA) is a transport protein found in human blood. HSA is a globular protein that is comprised of a single amino acid chain and contains three structurally homologous domains (I, II, & III), each of which contains two subdomains. HSA binds most drugs and is used to transport them

Figure 1. Three-Dimensional Structure of HSA (http://pubs.rsc.org/en/content/articlelanding/2009/ob/b911605b#!divAbstract)
throughout the circulatory system. When drugs are bound to HSA they become inactive and unable to carry out their intended biological function; if a drug binds strongly to HSA, the concentration of the active form of the drug will decrease. The two types of fluorophores present in HSA correspond to Trp-214 in subdomain IIA, which dominates the fluorescence of HSA, and several tyrosine residues found in different subdomains. The intensity of HSA's fluorescence is very sensitive and subject to change due to the local chemical environment, such as the solvent type and the different types of ligands in solution. These factors can induce a change in the protein's conformation and reduce the emission through static and dynamic quenching. Consequently, this allows ligand-albumin binding information to be acquired via fluorescence quenching measurements; in this case, dipicolinic acid's (DPA) interactions with HSA are being studied.

Figure 2. Molecular Structure of DPA (http://www.sigmaaldrich.com/catalog/product/aldrich/p63808?lang=en&region=US)
DPA is one of many compounds that comprise bacterial spores (endospores). Under appropriate conditions, endospores germinate into active cells and release DPA; due to this, detection of DPA is used to chemically detect the presence of endospores. By determining the binding constant and thermodynamic parameters of DPA's binding to HSA, the toxicological profile and pharmacodynamics of DPA can be determined.
Methods
The binding constant and thermodynamic parameters of DPA's binding to HSA were determined via static quenching; both compounds were purchased from Sigma-Aldrich. The interactions were studied in a Tris(hydroxymethyl)aminomethane buffer, prepared in deionized water, at pH = 7.2. The fluorescence of the solution was measured with a Photon Technology International (PTI) spectrofluorometer equipped with a thermostated cell holder; the slit widths were set to 4 nm. HSA's excitation wavelength was set at 280 nm, and the emission spectrum was read in the 290–500 nm range. The UV/Vis spectrum was recorded with an Agilent 8453 UV/visible photodiode array spectrophotometer. 200 μL of stock HSA (1.00 × 10^-5 M) and 1800 μL of Tris buffer (pH = 7.2) were mixed together in a cuvet, which was then placed in a constant-temperature cell holder for 10 minutes in order to reach the desired temperature. The HSA solution was titrated with DPA by adding 1.00 × 10^-4 M DPA in 2 μL increments until the protein was saturated at 22 μL. After each addition of DPA, the fluorescence of the solution was measured using the spectrofluorometer. Each set of titrations was performed at three different temperatures (295, 303, & 308 K). The UV-visible spectrum of each solution was also recorded at 295 K.
Results and Discussion
UV/Visible Spectroscopy

Fluorescence Quenching

Figure 2. Emission spectra of HSA in the absence and presence of DPA. λex = 280 nm; λem = 337 nm
Looking at Figure 2, it is evident from the decrease in HSA's fluorescence that DPA is able to quench HSA. Furthermore, as quenching increases, λem shifts to a shorter wavelength. This is due to DPA changing the microenvironment of Trp-214, making it more hydrophobic. By examining the UV/Vis spectra (Figure 1), the process by which DPA quenches HSA can be explained. The measured fluorescence of HSA was corrected for the inner filter effect by the equation: Fcorr = Fobs × e^(A/2). Fobs and Fcorr refer to the measured and corrected fluorescence intensities, respectively. A refers to the absorbance of DPA at λex = 280 nm, at the same concentration as in the mixture. However, this equation is only valid because the absorbance is less than 0.31.
Stern-Volmer Constant
Because the quenching mechanism is static quenching, the corrected intensity can be entered into the Stern-Volmer equation: F0/Fcorr = 1 + KSV[Q], where Ksv is the Stern-Volmer quenching constant and [Q] is the concentration of the quencher, in this case DPA.
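The two steps above (the inner filter correction and the linear Stern-Volmer fit) can be combined in a few lines; the sketch below uses made-up numbers purely to illustrate the procedure, not the measured data:

```python
import numpy as np

# Hypothetical titration: DPA concentration (M), observed HSA fluorescence,
# and DPA absorbance at 280 nm for each point.
Q     = np.array([0.0, 2e-7, 4e-7, 6e-7, 8e-7, 1e-6])
F_obs = np.array([100.0, 92.0, 85.5, 80.0, 75.2, 71.0])
A_280 = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])

F_corr = F_obs * np.exp(A_280 / 2.0)    # inner filter correction: Fcorr = Fobs * e^(A/2)

y = F_corr[0] / F_corr                  # F0 / Fcorr
Ksv, intercept = np.polyfit(Q, y, 1)    # Stern-Volmer: slope of F0/Fcorr vs [Q]
print("Ksv =", Ksv, "M^-1")
```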

Table 2. Values of Ksv, Ka, and n at 295, 303, and 308 K

As shown in Table 2, as temperature increases both Ka and Ksv decrease, and Ka and Ksv are almost equal to each other; this provides further evidence of a static quenching mechanism.
Thermodynamic Parameters
The van't Hoff equation is used to obtain the thermodynamic parameters of the binding process: Log(Ka) = -ΔH/(2.303RT) + ΔS/(2.303R). These thermodynamic parameters can provide information about the forces responsible for ligand binding (Table 3).
Table 3. Ligand Binding Forces Based on Thermodynamic Parameters

Thermodynamic parameters      Force
ΔH > 0, ΔS > 0                Hydrophobic
ΔH < 0, ΔS > 0                Electrostatic
ΔH < 0, ΔS < 0                Van der Waals or hydrogen bonding

Figure 3. Stern-Volmer plots at 295, 303, and 308 K
Table 1. Stern-Volmer Constants at 295, 303, and 308 K
Figure 5. Log (Ka) vs. 1/T at 295, 303, and 308 K
Table 4. Thermodynamic Parameters at Three Temperatures
After determining the value of the Stern-Volmer constant for this reaction, the quenching mechanism (static or dynamic quenching) was determined. In dynamic quenching, the Stern-Volmer constant is determined from the following equation: KSV = τkq, where kq refers to the quenching rate constant and τ is the lifetime of the excited state of HSA (τ = 1 × 10^-8 s). For dynamic quenching, kq ≤ kd, where kd is the diffusion rate constant in aqueous solution (kd = 10^10 M^-1 s^-1). The value of kq calculated for this project (kq ~ 10^13 M^-1 s^-1) is much greater than the value of kd. Furthermore, in dynamic quenching Ksv increases as temperature increases; in static quenching Ksv decreases as temperature increases, which is what is shown in Table 1. These two facts illustrate that the quenching is static, not dynamic.
Association Constant
The process in which HSA binds DPA can be illustrated by the following reaction: P + nD ⇌ DnP, where P is the free protein, D is the drug, n is the number of binding sites, and DnP is the binding complex. The binding constant Ka can be calculated using the following equation: Ka = [DnP]/([P][D]^n). However, since HSA is the only fluorescent species in the reaction, [P]/[P]0 = F/F0, which gives: log[(F0 - Fcorr)/Fcorr] = log Ka + n log[D]

Since ΔH < 0 and ΔG < 0, the binding of DPA by HSA is an exothermic and spontaneous reaction. Furthermore, since ΔH < 0 and ΔS > 0, the binding of DPA is due to the electrostatic forces between the negatively charged DPA and the positive center of HSA. However, due to the small ΔS values, it is possible that the binding interaction may also be due to van der Waals forces and hydrogen bonding.
Conclusion
When DPA binds to HSA, the fluorescence intensity of HSA decreases. The mechanism of quenching by the binding of DPA to HSA is believed to be static. This is further substantiated by the data showing that Ka and Ksv are almost equal to each other and that their values both decreased as the temperature increased. The values of the thermodynamic parameters indicate that the binding interaction is mainly due to electrostatic forces, without excluding van der Waals forces and hydrogen bonding. The value of n shows that HSA has only one binding site, in domain II, which is consistent with the literature value.
References
1. Mullah Muhaiminul Islam, Vikash K. Sonu, Pynsakhiat Miki Gashna, N. Shaemninway Moyon, Sivaprasad Mitra, Spectrochimica Acta Part A: Mol and Biomol Spectroscopy 152 (2016) 22-33
2. M. Maciazek-Jurczyk, M. Maliszewska, J. Pozycka, J. Pownicka-Zubik, A. Gora, A. Sulkowska, Journal of Molecular Structure 1044 (2013) 194-200

3. Hui Xu, Quanwen Lie, Yanqing Wen, Spectrochimica Acta Part A: Mol and Biomol Spectroscopy 71 (2008) 984-988 4. Jianniao Tian, Jiaqin Liu, Xuan Tian, Zhide Hu, Xingguo Chen, Journal of Molecular Structure 691 (2004) 197-202 5. Philip D. Ross, S. Subramanian, Biochemistry 1981, 20(11), pp 3096-3102

Figure 1. UV-visible spectra of (a) 1.0 × 10^-5 M HSA, λmax = 279 nm, (b) 1.0 × 10^-4 M DPA, λmax = 273 nm, and (c) a mixture of HSA and DPA
By examining Figure 1, it is evident that there is an overlap of the λmax of the HSA and DPA solutions. Due to the overlap of λmax in the HSA and DPA solutions, it is apparent that there is an inner filter effect being produced by DPA. In other words, there is a competition for photons between HSA and DPA, due to their similar λmax values, which causes an apparent decrease in the fluorescence of HSA.

Acknowledgements
The author would like to thank Dr. Enju Wang from St. John's University for the collaborative work. He also thanks Dr. Jianwei Fan for her guidance throughout the entirety of the project.

Figure 4. log[(F0 - Fcorr)/Fcorr] vs. log [Q] at 295, 303, and 308 K


The Investigation of the Binding Capability of Dipicolinic Acid to Human Serum Albumin by UV/Visible Absorption and Fluorescence Spectroscopy
Sophia Prentzas, Marisa Kroger, and Dr. Jianwei Fan
Department of Chemistry and Biochemistry, Manhattan College

Abstract

Human Serum Albumin (HSA) is the most abundant human plasma protein and has the ability to reversibly bind drugs and transport them to their target sites. In this work, HSA's ability to bind dipicolinic acid (DPA), a chemical component of endospores, was investigated with the use of UV/visible absorption and fluorescence spectroscopy. Our experiments determined the binding constant, the intermolecular forces involved in binding, the number of binding sites located on HSA, and the type of quenching mechanism that occurred between the DPA and HSA. In addition, the thermodynamic parameters of the binding were also calculated.

The maximum absorption wavelengths (λmax) of DPA and HSA are very similar, causing an overlap of the spectra which introduces an inner filter effect. Ultimately, there is a competition for photons between the HSA and the DPA, causing an apparent decrease in the fluorescence intensity of the HSA. To correct the measured fluorescence for the inner filter effect, the following equation was used:2 Fcorr = F × e^(A/2) [1]

Introduction

III. Thermodynamic Parameters
Van't Hoff Equation
The Van't Hoff equation is as follows:4 Log(Ka) = ΔS/(2.303R) - ΔH/(2.303RT) [4]

Where Ka is the association constant, ΔS is the entropy change, ΔH is the enthalpy change, R = 8.314 J/(K·mol), and T is the temperature. ΔG, the Gibbs free energy, was determined using ΔG = ΔH - TΔS.
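Given the slope and intercept of the Log(Ka) vs. 1/T fit reported further down the poster (y = 2762.7x - 4.0719), the thermodynamic parameters follow directly from equation [4]; the sketch below simply reproduces that arithmetic:

```python
R = 8.314                              # J/(K mol)
slope, intercept = 2762.7, -4.0719     # from the Log(Ka) vs 1/T linear fit

dH = -slope * 2.303 * R                # enthalpy change, J/mol    (~ -52.9 kJ/mol)
dS = intercept * 2.303 * R             # entropy change, J/(K mol) (~ -78.0 J/(K mol))
dG_295 = dH - 295.0 * dS               # Gibbs energy at 295 K, J/mol (~ -29.9 kJ/mol)

print(dH / 1000.0, dS, dG_295 / 1000.0)
```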

Where F is the emission intensity in the presence of the quencher; Fcorr is the corrected emission intensity of HSA; and A is the absorbance of the DPA at 280 nm with the same concentration as in the mixture of HSA and DPA.


Human Serum Albumin (HSA) is a single polypeptide chain of 585 amino acids. It is the most abundant protein found in human blood plasma. It contains three homologous domains (I, II, III), each containing two subdomains (Figure 1).
II. Fluorescence Spectra of HSA


Table 2. Thermodynamic Parameters
                    Values using Ksv    Values using Ka
ΔH (kJ/mol)         -46.839             -52.898
ΔS (J/(K·mol))      -58.470             -77.965
ΔG (kJ/mol)         -29.589             -29.898


Figure 6. The fluorescence intensity of HSA with increasing DPA concentration. As the curve reaches a plateau, it indicates that the HSA has been saturated by the DPA.

As the DPA was added from 4 μL to 24 μL, the intensity of the fluorescence decreased approximately 30% and the emission maximum shifted from 335 nm to 329 nm. These results indicate that there is indeed an interaction between the HSA and the DPA. When the DPA was added, it caused the microenvironment of the tryptophan to become more hydrophobic.
A. Stern-Volmer Equation: F0/Fcorr = 1 + Ksv[Q] [2]
The Stern-Volmer equation [2] was used to determine the Stern-Volmer quenching constant, Ksv. F0 is the fluorescence intensity without the DPA quencher. Fcorr was obtained using Eqn [1] to correct for the inner filter effect. [Q] is the concentration of the quencher, DPA.


Figure 3. Dipicolinic Acid (DPA) (http://www.sigmaaldrich.com/catalog/product/aldrich/p63808? lang=en&region=US )



Experimental

Figure 7. The Stern-Volmer plot at three temperatures: 295 K, 303 K, and 308 K.

Based on the Stern-Volmer equation, the quenching constant (Ksv) was calculated, showing a decrease in Ksv as the temperature increased.
B. Determination of association constant Ka and number of binding sites4: log[(F0 - Fcorr)/Fcorr] = log Ka + n log[Q] [3]
The double-log equation [3] allowed for the determination of the association constant, Ka, from the y-intercept of the graph, shown in Figure 8. The number of binding sites, n, was obtained from the slope of the graphs at the three different temperatures.
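Equation [3] is again a straight line, so Ka and n come from the intercept and slope of log[(F0 - Fcorr)/Fcorr] plotted against log[Q]; a short sketch with hypothetical values (not the measured data):

```python
import numpy as np

# Hypothetical double-log data: log10 of [DPA] and of (F0 - Fcorr)/Fcorr.
logQ = np.array([-6.7, -6.4, -6.2, -6.0, -5.9])
logY = np.array([-1.55, -1.27, -1.08, -0.89, -0.80])

n, logKa = np.polyfit(logQ, logY, 1)   # slope = n, intercept = log Ka
print("n  =", n)                        # close to 1 -> a single binding site
print("Ka =", 10 ** logKa)
```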

Results


I. Absorption Spectrum




Discussion

An important aspect of this experiment was to determine the quenching mechanism: dynamic or static quenching. Dynamic quenching is based on collisions between the quencher (DPA) and the fluorophore (HSA) during the excited state. An increase in temperature results in an increase in the collision frequency and therefore an increased Ksv value. Static quenching is based on the association between the protein and the ligand in the ground state. With an increase in temperature, there is less association and a decreased Ksv value. The mechanism was determined in two ways. The first was to obtain the relationship between the increasing temperatures and the Ksv values. As the temperature increased, the results showed a decreasing Ksv value, suggesting a static quenching mechanism. The second was to consider the dynamic quenching equation Ksv = kq·τ, where kq is the bimolecular quenching rate constant and τ is the lifetime of the excited state of HSA without DPA, 1.0 × 10^-8 s. For dynamic quenching, kq ≤ kd, where kd is the diffusion rate constant in aqueous solution; the maximum value of kd is 1 × 10^10 M^-1 s^-1. The data collected indicated that kq was 1 × 10^13 M^-1 s^-1, which is much greater than kd, showing that the quenching mechanism could not be dynamic. Therefore, both lines of experimental evidence indicate that the quenching mechanism is static. The binding constant, Ka, is the equilibrium constant of the association between HSA and DPA. The results revealed that as the temperature increased the Ka value decreased, because the degree of association between the DPA and the HSA decreased. That would have increased the free HSA concentration and its fluorescence. The data also indicated that there is only one binding site. The n values, obtained from the slope of the double-log equation, were very close to one. This suggests that the one binding site is located in Domain II, where the tryptophan is located. If the binding site were anywhere else, the fluorescence would not be affected and other methods would be needed in order to detect the interaction. The Van't Hoff equation relates the change in the equilibrium constant to the change in temperature. Since the ΔH and ΔG values were negative, the reaction between the HSA and DPA is exothermic and spontaneous. The ΔH and ΔS values are also both negative; therefore, the interaction forces between the HSA and DPA are van der Waals forces and hydrogen bonding.5
Conclusion
This study indicated that DPA does quench the emission of HSA and that it has an effect on the microenvironment of the tryptophan in Domain II of HSA. The quenching mechanism of this reaction is static quenching. As the temperature was increased, both the Ksv and Ka values decreased. The thermodynamic parameters showed that the intermolecular forces between the HSA and DPA are both van der Waals forces and hydrogen bonding. There is also only one binding site found in HSA, and it is believed to be located in Domain II because of the observed change in the fluorescence intensity of HSA.
References
1) M. Maciazek-Jurczyk, M. Maliszewska, J. Pozycka, J. Pownicka-Zubik, A. Gora, A. Sulkowska, Journal of Molecular Structure 1044 (2013) 194-200
2) Mullah Muhaiminul Islam, Vikash K. Sonu, Pynsakhiat Miki Gashna, N. Shaemninway Moyon, Sivaprasad Mitra, Spectrochimica Acta Part A: Mol and Biomol Spectroscopy 152 (2016) 22-33
3) Anne Okafor, Merhun Uddin, Dan Wang, Joseph E. Ocando, Neil Jespersen, Jianwei Fan and Enju Wang, Dept of Chemistry, St.
Johns Uni pg 13 4) T.Sanjoy Singh, Sivaprasad Mitra, Spectrochimica Acta Part A: Mol and Biomol Spectroscopy 78 (2011) 942-948 5) Philip D. Ross, S. Subramanian, Biochemistry 1981, 20(11), pp 3096-3102

Figure 8. The Double Log Equation. Fo is the emission intensity in the absence of DPA; Fcorr is the corrected emission intensity in the presence of DPA; Ka is the association constant; n is the number of binding sites on HSA.

Acknowledgements

Table 1. Ksv, Ka, and n values at 295 K, 303 K, and 308 K.
Temperature    Ksv (M^-1)              Ka                      n
295 K          (1.70 ± 0.77) × 10^5    (1.86 ± 2.36) × 10^5    0.95
303 K          (1.10 ± 0.00) × 10^5    (5.72 ± 1.67) × 10^4    0.94
308 K          (7.51 ± 0.44) × 10^4    (2.81 ± 0.05) × 10^3    0.76

•  St. John’s University, Dr. Enju Wang for providing chemicals and advice. •  Dr. Jianwei Fan for guidance throughout experiment.


Figure 4. The Absorption Spectrum. (a) 1x10-4 M DPA solution (λmax= 273nm) (b) 50/50 Mixture of HSA and DPA (c) 1x10-5 M HSA solution (λmax= 279nm)




The fluorophores present in HSA are largely due to the amino acid tryptophan (Figure 2), located in subdomain IIA, as well as tyrosine residues located in other subdomains.


Figure 5. The emission spectra of HSA with increasing DPA concentration.

The HSA and DPA used for this experiment were both purchased from Sigma-Aldrich. A phosphate buffer (H2PO4-/HPO42-) prepared in distilled water at pH 7.2 was used to mimic the pH of the human bloodstream (pH 7.35). The emission spectrum was measured using a Photon Technology International fluorometer with a 1-cm cuvet connected to a thermostat bath. The absorption spectrum was measured using an Agilent 8453 UV/Visible photodiode array spectrophotometer. A 2 mL solution containing 1 μM HSA and phosphate buffer was incubated overnight and then added to the cuvet attached to the thermostat bath. The solution was titrated with successive additions of DPA (1 × 10^-4 M) until 24 μL had been added. The emission spectrum was measured from 290-500 nm with an excitation wavelength of 280 nm. The excitation and emission bandwidths were set at 5 nm.


Figure 9. The Van’t Hoff Equation. This allows for a better understanding of the thermodynamic parameters.

FIGURE 1. Three-dimensional diagram of HSA (“Caffeine and sulfadiazine interaction differently with HSA: a combined fluorescence and molecular docking study.”)

Dipicolinic Acid (DPA) is a chemical compound that comprises a portion of the endospores (Figure 3). It is mainly known as the compound responsible for the endospore’s heat resistance. When the endospores germinate, DPA is released and can be bound to the HSA.3


HSA is well known as an important regulator of pharmacokinetic behavior. It has the ability to bind reversibly to most drugs and transport them to various target sites. Generally, when HSA binds to a drug, the drug is unable to carry out its biological function, and the concentration of the active form of the drug decreases as it binds more strongly to the HSA.1,2


FIGURE 2. Structure of Tryptophan (http:// www.sigmaaldrich.com/catalog/product/aldrich/t90204? lang=en&region=US)

Linear fit of Log(Ka) vs. 1/T: y = 2762.7x - 4.0719, R² = 1




Analysis of HDD Swap vs Remote Memory Swap for Virtual Machines and Linux Containers Manhattan College Summer Research Program, Research Student: Steven Romero Research Colleague: Emmanuel Sanchez, Research Advisor: Dr. Kashifuddin Qazi

1. Introduction
- Virtual Machines (VMs) and Linux Containers are quintessential to cloud computing
- VMs and Containers are processes, and their memory can be overcommitted
- When the system runs out of memory it resorts to Hard Disk Drive (HDD) swapping
- This slows down performance
- Modern networks are faster than HDD speeds
- Using RAM over the network could prove to be a viable solution
- A study of VM and container behavior with remote memory is fundamental for data center features such as load balancing, load consolidation, live migration and resource overcommitment

2. Objectives
- Create a remote memory scheme for VMs and containers
- No modifications to existing stock operating systems
- No added special hardware
- Run benchmarks and analyze the viability of the remote scheme over HDD swap

Acknowledgments - Manhattan College School of Science for summer grant

3. Methodology

- One system served as the host machine, on which a ramdisk was created
- The host machine makes the created ramdisk usable by putting it on a Network Block Device (NBD)
- The ramdisk storage is received by the client machine through the NBD client
- The ramdisk was then used as swap on the client machine
- Host and client specs: i5 Intel processor, 8 GB of RAM, Ubuntu 16.04, kernel version 4.4.0-34-generic, 1 Gbps switch
- VM and container specs: 8 GB RAM, Ubuntu 16.04, kernel version 4.4.0-34-generic
- Multiple iterations of two benchmarks (Memory BandWidth Tool and Sysbench) were run
- Each iteration was run with varying amounts of physical and swapped memory
- Performance was recorded for HDD swap and remote swap
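The remote-swap path described above can be scripted end to end. The sketch below is illustrative only: the sizes, paths, port, and host address are hypothetical, and the exact nbd-server/nbd-client invocation differs between versions, so the commands may need adjusting.

```python
import subprocess

def run(cmd):
    """Run a shell command and stop if it fails."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def host_side():
    """On the memory-donating host: back a file with RAM and export it over NBD."""
    run("mkdir -p /mnt/ramdisk")
    run("mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk")
    run("dd if=/dev/zero of=/mnt/ramdisk/swapfile bs=1M count=4096")
    run("nbd-server 9999 /mnt/ramdisk/swapfile")         # syntax varies by nbd version

def client_side(host_ip="192.168.1.10"):
    """On the client running the VMs/containers: attach the device and enable it as swap."""
    run("modprobe nbd")
    run(f"nbd-client {host_ip} 9999 /dev/nbd0")           # newer versions use -N <export>
    run("mkswap /dev/nbd0")
    run("swapon /dev/nbd0")
```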

References - Williams, D., Jamjoom, H., Liu, Y.H. and Weatherspoon, H., 2011, March. Overdriver: Handling memory overload in an oversubscribed cloud. In ACM SIGPLAN Notices (Vol. 46, No. 7, pp. 205-216). ACM. - https://www.jamescoyle.net/how-to/943-create-a-ram-disk-in-linux - http://www.thegeekstuff.com/2009/02/nbd-tutorial-network-block-device-jumpstart-guide/ - https://www.janoszen.com/2013/02/06/limiting-linux-processes-cgroups-explained/

4. Experimental Results

5. Conclusion
The study shows that for large memory requirements (~7 GB), remote memory swap performs about 34% better on average than HDD swap. For lower memory requirements the performance improvement is insignificant. Possible reasons for this disparity include changes in the kernel's handling of NBD as well as the overhead of creating TCP packets. Containers with remote swap show a higher performance improvement (up to 104%, average 42%) than VMs using remote swap.

Future Work - Further analysis will be performed by introducing a protocol that circumvents TCP packets


Identification of Intestinal Parasites in Domestic Dog (Canis familiaris) Fecal Samples Collected in Winston-Salem, NC
Eric Bailey1, J.M. Porter-Kelley2, D. Frederick2, T. Porter2, C. Woods2, P. Jackson-Miller2, J. Riley2, A. Kennedy2, O. Seshie2, C. Hines2, C. Johnson2, and Ghislaine Mayer1
1Department of Biology, Manhattan College, Riverdale, NY 10471; 2Department of Biological Sciences, Winston-Salem State University, Winston-Salem, NC 27110
http://www.franklincountydogs.com/adopt/available.cfm

Table 1. Primers and target genes used in this study.
Protozoan species
Giardia lamblia (target gene: β-giardin gene)
  1st round forward primer: 5'-AAGCCCGACGACCTCACCCGCAGTGC-3'
  1st round reverse primer: 5'-GAGGCCGCCCTGGATCTTCGAGACGAC-3'
  2nd round forward primer: 5'-GAACGAACGAGATCGAGGTCCG-3'
  2nd round reverse primer: 5'-CTCGACGAGCTTCGTGTT-3'
Cryptosporidium parvum (target gene: 18S rDNA gene)
  Forward primer: 5'-CCGAGTTTGATCCAAAAAGTTACGAA-3'
  Reverse primer: 5'-TAGCTCCTCATATGCCTTATTGAGTA-3'
Neospora caninum (target gene: Nc-5 gene)
  Forward primer: 5'-GTTGGGAGTATCGCCAACCG-3'
  1st round reverse primer: 5'-AACAACCCTGAACCAGACGT-3'
  2nd round reverse primer: 5'-ATGCGTTCAAAATTTCACCA-3'
Toxoplasma gondii (target gene: GRA 6 gene)
  Forward primer: 5'-GTAGCGTGCTTGTTGGCGAC-3'
  Reverse primer: 5'-ACAAGACATAGAGTGCCCC-3'
Helminth species
Necator americanus (target gene: internal transcribed spacer gene)
  Forward primer: 5'-GTTGGGAGTATCGCCAACCG-3'
  1st round reverse primer: 5'-AACAACCCTGAACCAGACGT-3'
  2nd round reverse primer: 5'-ATGCGTTCAAAATTTCACCA-3'
Ancyclostoma caninum (target gene: internal transcribed spacer gene)
  Forward primer: 5'-GTTGGGAGTATCGCCAACCG-3'
  1st round reverse primer: 5'-AACAACCCTGAACCAGACGT-3'
  2nd round reverse primer: 5'-ATGCGTTCAAAATTTCACCA-3'
Echinococcus granulosus (target gene: mitochondrial 12S rRNA gene)
  Forward primer: 5'-CATTAATGTATTTTGTAAAGTTG-3'
  Reverse primer: 5'-CACATCATCTTACAATAACACC-3'

Intestinal parasites pose a significant threat to individuals whose immune systems have been compromised by infection with viruses such as the Human Immunodeficiency Virus (HIV). Infection with a parasite such as the protozoan Cryptosporidium parvum can often be fatal in immunocompromised individuals. Several other parasites, such as Giardia lamblia, Toxoplasma gondii, and Neospora caninum, also have the ability to infect humans. We have recently reported the occurrence of protozoan intestinal parasites in dogs frequenting New York City parks. Little is known about the presence of human helminth intestinal parasites in dogs. This is of great concern since humans are in intimate contact with dogs in Western society. The goal of this study was to assess the presence in dogs of protozoan and helminth intestinal parasites that are zoonotic.
Introduction


• T. gondii, G. duodenalis, and C. parvum were not detected.
• The tapeworm E. granulosus was found at a prevalence of 3%.
• The hookworm N. americanus was detected at a prevalence of 6%.
• The hookworm A. caninum was found at a prevalence of 13%.
• The protozoan parasite N. caninum was detected at a 3% prevalence.
• Since all of these parasites are zoonotic, there is a potential for transmission to humans, which is a public health concern.
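The percentages above are simple proportions of the 32 samples; assuming they correspond to 1, 2, 4, and 1 positive samples respectively (consistent with Table 2, and with 12.5% rounded to 13%), they can be reproduced as follows:

```python
TOTAL = 32

# Assumed positive counts, inferred from Table 2 and the stated prevalences.
positives = {
    "Echinococcus granulosus": 1,
    "Necator americanus": 2,
    "Ancyclostoma caninum": 4,
    "Neospora caninum": 1,
}

for parasite, count in positives.items():
    print(f"{parasite}: {100.0 * count / TOTAL:.1f}%")
```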

It is not surprising that hookworms were detected in samples from North Carolina, since these parasites are predominant in the soil in warm climates.

The lineage of the parasites from PCR-positive samples will be confirmed by sequencing.

Spatial comparison will be performed between samples collected in hot and cooler climates.

Future Directions

• Dr. Ghislaine Mayer for being my mentor.
• Dr. Johanna Porter-Kelley and her students for providing us with the samples.
• Department of Biology, Manhattan College for their financial support.
• Jasper Summer Scholars research program for funding.
• Dean of the School of Science, Manhattan College.

References

Figure 3: Life cycle Neospora caninum.

https://www.researchgate.net/figure/6725742_fig1_Fig-1-Life-cycle-of-Neosporacaninum-From-Dubey-JP-Review-of-Neospora-caninum-and

Materials and Methods


Acknowledgements


• A total of 32 canine fecal samples were collected in shelters and neighborhoods in Winston-Salem, NC on March 16, 2015. • DNA was extracted from each fecal sample using the Qiagen Stool DNA kit (Qiagen, Valencia, CA). • The presence of parasites was detected by polymerase chain reaction (PCR). • The products of the polymerase chain reactions were detected by using a 1.5% agarose gel.

www.k-state.edu/parasitology/ 625tutorials/Tapeworm03

http://www.cdc.gov/parasites/zoonotichookworm/biology.html

Figure 2: Echinococcus granulosus life cycle. http://www.slideshare.net/


www.ksu.edu

Discussion


Conclusions


Figure 1: Life cycle of zoonotic hookworms.

workforce.calu.edu


en.wikipedia.org


Figure 5. Prevalence of protozoans and helminths in intestinal samples.

Results



Figure 4: Gel electrophoresis analysis of PCR products of A) C. parvum. Lanes 1 and 21: 1 Kb marker. Lane 2: positive control. Lanes 3-20 and 22-34: C. parvum DNA samples. Lane 37: negative control. B) G. lamblia. Lane 1: 100 bp marker. Lane 2: positive control. Lanes 3-18: G. lamblia DNA samples. Lane 20: negative control. C) T. gondii. Lanes 1 and 21: 1 Kb marker. Lane 2: positive control. Lanes 3-19 and 22-36: T. gondii DNA samples. Lane 38: negative control. D) N. caninum. Lanes 1 and 21: 1 Kb marker. Lane 2: positive control. Lanes 3-20 and 22-35: N. caninum DNA samples. Lane 37: negative control. E) E. granulosus. Lanes 1 and 21: 1 Kb marker. Lanes 2-19 and 22-35: E. granulosus DNA samples. Lane 37: negative control. Lane 11: positive sample for E. granulosus. F) A. caninum and N. americanus. Lanes 1 and 21: 1 Kb marker. Lanes 2-19 and 22-34: A. caninum and N. americanus DNA samples. Lane 36: negative control.
Table 2. Summary of PCR data (32 canine fecal samples).
Samples positive by PCR: sample 170 (Neospora caninum); sample 173 (Echinococcus granulosus); samples 106 and 134 (Necator americanus); samples 122, 142, 164, and one further sample (Ancyclostoma caninum). The remaining samples (168, 131, 143, 129, 135, 166, 174, 115, 117, 158, 110, 130, 144, 175, 128, 146, 133, 108, 137, 103, 165, 112, 154, 163, 152) were otherwise negative, and no sample tested positive for Cryptosporidium parvum, Giardia duodenalis, or Toxoplasma gondii.
References

1. Hong, S.-H., et. al (2013) Molecular characterization of Giardia duodenalis and Cryptosporidium parvum in fecal samples of individuals in Mongolia. American Journal of Tropical Medicine and Hygiene 90:43-47. 2. Rochelle, P. et al. (1997) Comparison of primers and optimization of PCR conditions for detection of Cryptosporidium parvum and Giardia lamblia in water. American Society for Microbiology 60:106-114. 3. Fazaeli, A., et al. (2000) Molecular typing of Toxoplasma gondii strains by GRA6 gene sequence analysis. Int. J. Parasitol. 30:637-642. 4. Yamage, Mat, et al. (1996)"Neospora Caninum: Specific oligonucleotide primers for the detection of brain "cyst" DNA of experimentally infected nude mice by the polymerase chain reaction (PCR)." The Journal of Parasitology 82.272. Web. 5. Razmi, G. (2009). Fecal and molecular survey of Neospora caninum in farm and household dogs in Mashhad Area, Khorasan Province, Iran. Korean J Parasitol The Korean Journal of Parasitology, 47, 417. doi:10.3347/kjp. 47.417 6. Štefanić, S., et al. (2004). Polymerase chain reaction for detection of patent infections of Echinococcus granulosus (“sheep strain”) in naturally infected dogs. Parasitology Research Parasitol. Res. 92, 347-351. 7. Tranas, J., et. al (1999, September). Serological Evidence of Human Infection with the Protozoan Neospora caninum. Clinical and Diagnostic Laboratory Immunology, 6, 765-767. Retrieved August 17, 2016.


Aniline Analogues as a New Ligand for Chromate Capture
Background
The World Health Organization (WHO) reports that the widespread pollution of water from many industrial practices, such as the tanning industry, has led to significant amounts of Cr(VI) being found in water. We investigated different compounds that can remove the toxic form of chromium, Cr(VI), and convert it to a far less toxic form.


Prototypes

• Throughout the project, we looked into several different types of aromatic compounds, including catechols and phenols.
• Catechols and phenols did not remove chromate as efficiently as aniline and its analogues, 3,4-dimethylaniline and phenylenediamine, did.
• We found that the more alkylated aniline, 3,4-dimethylaniline, did the best at removing chromate.

The Purpose of GAC

• Since the molecules we tested were mainly soluble in water, we used GAC (Granular Activated Carbon) to try to remove them from water • GAC was not able to remove all of the organic compounds. For future investigation, we would like to focus on more insoluble compounds so that we do not have to depend on GAC.

Conclusion

• High electron density is shown to be preferred for removing chromate, rather than electron withdrawing groups (EWGs) such as nitrocatechol.
• Further research into larger alkylated anilines that have higher electron density and are insoluble could be useful for finding a compound that removes chromate and comes out as a precipitate, thus efficiently purifying the water.

Acknowledgements I would like to thank Dr. Regan, Dr. Roy, and Dr. Mons for all their help throughout this project

