
Swanson School of Engineering

Undergraduate Summer Research Program Summer 2014


Welcome to the 2014 issue of the Swanson School of Engineering (SSOE) Summer Research Abstracts! Every year the SSOE invites all undergraduates to propose a research topic of interest to study for the summer and to identify a faculty member willing to serve as a mentor and sponsor for their project. In this way, students get to work on cutting-edge research with leading scientists and engineers while spending their summer at the SSOE. The students, however, were not restricted to the Swanson School of Engineering or even the University of Pittsburgh; the world was fair game. As a result, one student spent this past summer studying at Westsächsische Hochschule Zwickau, in Germany!

There are multiple programs that offer summer research opportunities to SSOE undergraduates, the largest of these being the Summer Internship Program jointly sponsored by the Swanson School and the Provost. This year, the program was able to fund over 50 students, with generous support from both the SSOE and the Office of the Provost. Additional support was provided by the Swanson School of Engineering, the Department of Bioengineering, and the following individual investigators: Jorge Abad, Anna Balazs, Elia Beniash, Kevin Chen, Markus Chmielus, X. Tracy Cui, Lance Davidson, Shawn Farrokhi, Kent Harries, Alex Jones, Marina Kameneva, Steve Little, Kacey Marra, Badie Morsi, Robert Parker, Jonathan Pearlman, Partha Roy, Ian Sigal, Prithu Sundd, Robert Turner, Rocky Tuan, Goetz Veser, David Vorp, and Jun Yang. Students also submitted poster abstracts to Science 2014 – Sustain It! in October, and thirty of the students were selected to present posters in a special undergraduate student research session at Science 2014. Other summer funding and projects were supported by the Mascaro Center for Sustainable Innovation (MCSI), the Pitt EXCEL Program, as well as individual SSOE departmental programs.

This year, students from all of the SSOE summer opportunities were invited to submit an abstract to be considered for expansion into a full manuscript for possible publication in the inaugural issue of Ingenium: Undergraduate Research in the Swanson School of Engineering. This provides undergraduates with the experience of writing manuscripts, and graduate students – who form the Editorial Board of Ingenium – with experience in peer review and editing.

We hope you enjoy this compilation of the innovative, intellectually challenging research that our undergraduates took part in during their tenure at the SSOE. In presenting this work, we also want to acknowledge and thank those faculty mentors who made available their laboratories, their staff, and their personal time to assist the students and further spark their interest in research.

David A. Vorp, Associate Dean for Research
Larry J. Shuman, Senior Associate Dean for Academic Affairs


Student and Department/Program | Mentor and Department | Title of Research
Dhruv N. Srinivasachar, BioE | Bryan N. Brown, BioE | Mechanical Characterization of Extracellular Matrix Hydrogels for Peripheral Nerve Reconstruction
Molly E. Knewtson, BioE | April J. Chambers, BioE | Body Segment Parameters in Normal Weight Versus Obese Young Females
Ryan M. Le Grand*, BioE | April J. Chambers, BioE | Effects of Visual Fields on Standing Balance*
Liza A. Bruk, BioE | Xinyan Tracy Cui, BioE | Synthesis and Characterization of Magnetic Nanoparticles for Drug Delivery to Central Nervous System
Huaxiu Li, ECE | Xinyan Tracy Cui, BioE | Investigating the Effect of Conducting Polymer Graphene Oxide Polymer Coatings on Magnesium Corrosion
Matthew E. Lefkowitz, BioE | Lance A. Davidson, BioE | Designing a Novel Assay to Characterize Calcium Flux in Epithelial Contraction of Early Developing Xenopus Laevis
Corey M. Williams*, BioE | Lance A. Davidson, BioE | The Effect of Cell Division on Tissue Spreading in Xenopus Laevis Animal Cap Tissues*
Marlee R. Hartenstein, BioE | Partha Roy, BioE | Profilin-2 has an Anti-Migratory Action in Head and Neck Cancer Cells
Jonathan S. Calvert, BioE | Gelsy Torres-Oviedo, BioE | Uphill Walking Enhances the Retention of a New Stepping Pattern Learned on a Split-Belt Treadmill


Dominic J. Pezzone, BioE | David A. Vorp, BioE | Adipose-Derived Stem Cells from Diabetic Patients Display a Pro-Thrombogenic Phenotype
Melissa R. Smith, BioE | Justin S. Weinbaum, BioE | Verification of Alexa Fluor 633 Binding to Elastic Fibers
Bradley W. Ellis, BioE | Julie Phillippi, Cardiothoracic Surgery | Differentiation of Perivascular Progenitor Cells and Their Role in Neovascularization
Catalina Escobar, CEE | Jorge D. Abad, CEE | Rivers as Political Boundaries: Peru and Its Dynamic Borders
Collin J. Ortals, CEE | Jorge D. Abad, CEE | The Birthplace of the Amazon River, The Confluence Between the Maranon and Ucayali Rivers
Erin M. Sarosi, BioE | John C. Brigham, CEE | A Large Scale Study of Human Right Ventricle Geometry and Function Relating to Pulmonary Hypertension
Brandon C. Ames, ChemE | Andrew Bunger, CEE | Role of Turbulent Flow in Generating Short Hydraulic Fractures with High Net Pressure in Slick Water Treatments
Donald P. Cunningham, CEE | Kent A. Harries, CEE | Open-Hole Tension Capacity of Pultruded GFRP having Staggered Holes
Shawn L. Platt, CEE | Kent A. Harries, CEE | Open-Hole Strength of Bamboo Laminate for Low-Impact Timber Repair


Elijah M. Barrad, CEE | Vikas Khanna, CEE | Life Cycle Energy Demand & Greenhouse Gas Emissions of Collaborative BIM Construction Project
Stephen C. Snow, ChemE | Anna C. Balazs, ChemE | Regenerating Composite Layers from Severed Nanorod-Filled Gels
Sierra Barner, ChemE | Ipsita Banerjee, ChemE | Effect of Substrate Properties on the Growth Kinetics of Encapsulated Human Embryonic Stem Cells
Patrick A. Bianconi*, BioE | Steven R. Little & Rocky S. Tuan, ChemE & Ortho Surgery | Preventing Articular Cartilage Calcification by the Controlled Release of Dorsomorphin*
Meghana A. Patil, BioE | Steven R. Little & Rocky S. Tuan, ChemE & Ortho Surgery | Three Dimensional Cell Culture Effects on Chondrogenesis of Kartogenin-Treated hMSCs
Yemin Hong, ChemE | Badie I. Morsi, ChemE | Determination of Hydrodynamic and Mass Transfer Parameters in a Pilot-Scale Slurry Bubble Column Reactor for Fischer-Tropsch Synthesis
Blaec P. Toncini, ChemE | Robert S. Parker, ChemE | Utilizing an Interactive Educational Module to Educate Middle School Students about Diabetes
Yutao Gong, ChemE | Götz Veser, ChemE | Cellular Toxicity of Nanomaterials
Jonathan D. Hughes, ChemE | Götz Veser, ChemE | Dynamic Reactor Simulations of Chemical Looping Combustion


Kimaya Padgaonkar, ChemE | Götz Veser & Ipsita Banerjee, ChemE | Towards Understanding Nanoparticle Toxicity
Alexander S. Augenstein, ECE | Kevin P. Chen, ECE | Effects of Geometry on Dispersion Characterization of Transparent and Reflective Materials using White-Light Interferometric Techniques
Donald E. Kline, CoE | Alex K. Jones, ECE | Domain-Wall Memory Buffer for Low-Energy NoCs
Daniel J. Wright, ECE | Alex K. Jones, ECE | Effective Scientific Computing on Android Based Mobile Devices
Christian G. Bottenfield, EngrSci | Guangyong Li, ECE | Simulation of a Graded Bulk Heterojunction Organic Solar Cell
Stephanie P. Cortes, ECE | Thomas E. McDermott, ECE | Case Study for Sustainable Building Modeling on a University Campus
Zachary T. Smith, ECE | Gregory Reed, ECE | Surge Generator Design for Electric Power Systems Lab
Michael H. Kuhn, CoE | Jun Yang, ECE | Exploring Opportunities with Phase Change Memory in Big Data Benchmarks
Anthony E. Analo, MEMS | Anthony J. DeArdo, MEMS | Microstructural Analysis of High Strength Steels


Raymond M. Mattioli, MEMS | Markus Chmielus, MEMS | Using Ultrasound Techniques to Measure Mechanical Properties of Metal and Polymer Samples
Meredith P. Meyer*, BioE | Markus Chmielus, MEMS | Mechanical Properties of 3D Printed Metals and Polymers*
Erica L. Stevens, MEMS | Markus Chmielus, MEMS | Pore Distribution in Inconel 718 Manufactured by Laser Engineered Net Shaping
Nick Jean-Louis, MEMS | Mark Kimber, MEMS | Construction and Analysis of a Partitioned Multifunctional Smart Insulation
Nathan Alaniz, MEMS | Nitin Sharma, MEMS | Testing a Baxter Robot's Potential Application in Physical Therapy
Henry T. Phalen, BioE | Nitin Sharma, MEMS | Investigation of Electromyography as a Muscle Fatigue Indicator During Functional Electrical Stimulation
Bhim Dahal, MEMS | Guofeng Wang, MEMS | Predicting Strength of Nanocrystalline Copper from Molecular Dynamic Simulations
Andrew J. Macgregor, BioE | Robert Turner, Neurobiology | Coordinated Reset Deep Brain Stimulation to Treat Parkinson's Disease
Saundria M. Moed*, BioE | Ian A. Sigal, Ophthalmology | The Collagen Microstructure in the Peripapillary Sclera Changes With Distance From the Lamina Cribrosa*


Yuqi Wang*, BioE | Ian A. Sigal, Ophthalmology | Measuring Effects of Intraocular and Intracranial Pressures on Scleral Canal Expansion and Anterior Lamina Cribrosa Deformations*
Michael J. Morais, BioE | Matthew A. Smith, Ophthalmology | High-Dimensional Neural Correlates of Choice and Attention in V4
Olivia F. Jackson*, BioE | Elia Beniash, Oral Biology | Self Assembled Organosilane Coatings for Resorbable Devices*
Paige E. Kendell, BioE | Shawn S. Farrokhi, Physical Therapy | Quantifying Tibiofemoral Joint Contact Forces in Patients with Knee Osteoarthritis Using OpenSim
Ana A. Taylor, BioE | Kacey G. Marra, Plastic Surgery | Analyzing Animal Model and Drug-Loaded Microspheres for Local Breast Cancer Recurrence in Autologous Fat Grafting
Lindsey J. Marra, BioE | Kacey G. Marra, Plastic Surgery | An Adipose Stem Cell Suspension in Keratin Hydrogel for Peripheral Nerve Injury Treatment
Anthony V. Cugini, BioE | Daniel J. Buysse, Psychiatry | Detecting Electrophysiologic Abnormalities in Chronic Insomnia Using Detrended Fluctuation Analysis
Alexandra J. Moore*, BioE | Prithu Sundd, Pulmonary, Allergy & Critical Care Medicine | In Vitro Endothelialized Microfluidic Assay to Study Pulmonary Vaso-occlusion in Sickle Cell Disease*
Ian P. McIntyre, BioE | Jonathan Pearlman, Rehabilitation Science & Technology | Determination of Slope and Collection of Sidewalk Location Using a Pathway Measurement Tool


Hannah J. Voorhees, BioE | Marina V. Kameneva, Surgery | Plasma Permeability of Synthetic Vascular Grafts
Roland K. Beard, EngrSci | William R. Wagner, Surgery | In Vitro Evaluation of Hemocompatibility of MPCPSi-Coated Titanium
Drake D. Pedersen, BioE | William R. Wagner, Surgery | Visualizing Real-Time Platelet Deposition onto Ti6Al4V Caused by Disturbed Flow Geometries

* Abstract has been withheld to protect intellectual property


MECHANICAL CHARACTERIZATION OF EXTRACELLULAR MATRIX HYDROGELS FOR PERIPHERAL NERVE RECONSTRUCTION
Dhruv N. Srinivasachar, Travis A. Prest and Bryan N. Brown
McGowan Institute for Regenerative Medicine, Department of Bioengineering
University of Pittsburgh, PA, USA
Email: dns23@pitt.edu

INTRODUCTION
Peripheral nerve injury is a common condition, often caused by trauma, surgery, or tumors, that results in paralysis, pain, and loss of sensation in affected areas. Damaged nerve fibers proximal to the injury site attempt to reinnervate the affected area; however, severe damage can cause degeneration [1]. Nerve autografts, the gold standard for bridging nerve gaps, suffer from availability and donor site morbidity issues [1]. Suturing and conduits only address nerve positioning and do not induce regrowth. One method proposed to induce regrowth is the use of nerve extracellular matrix (N-ECM), which provides an inductive scaffold for neurons and associated cells. N-ECM can be isolated from porcine sources and can be delivered as a form-filling, thermally responsive hydrogel. Increasing the ECM concentration in the hydrogel has been shown to raise stiffness and decrease gelation time [2, 3]. However, increasing ECM concentration decreases porosity, which is important in allowing nerve fibers to infiltrate the gel and bridge the nerve gap [4]. Johnson et al. showed that changing the ionic strength of the gel diluent affected the mechanical properties of myocardial matrix gels without significantly affecting porosity [5]. Gel porosity may also be increased via gas foaming with CO2 [6]. The aim of the current project is to characterize the mechanical properties of porcine N-ECM gels. Specifically, we aim to determine the influence of N-ECM concentration, ionic strength, crosslinking, and gas foaming on gel stiffness, structure, and gelation time.

METHODS
Porcine sciatic nerve was decellularized according to a well-established protocol of enzyme, detergent, and acid washes, producing N-ECM [2]. Hydrogels

were prepared from N-ECM by lyophilization, pepsin digestion, dilution in phosphate buffered saline (PBS), neutralization with NaOH, and heating to 37°C, as previously described [7]. Gels were modified by either changing ECM concentration (5, 8, 10, 15 and 20 mg/mL), changing the ionic strength of the PBS (0.5x, 1x and 1.5x), or gas foaming with NaHCO3 as base. The baseline gel contained 10 mg/mL of N-ECM, was diluted with 1x PBS, and was not gas foamed. Gels were mechanically tested using a dynamic rheometer to measure gelation time and complex shear modulus (shear storage and loss modulus). Scanning electron microscopy was used to examine the surface structure of the N-ECM hydrogels. Statistical analyses, including Student's t-test and ANOVA with post-hoc Tukey's test, were performed using Minitab 17 and Microsoft Excel. P-values less than 0.05 were considered statistically significant.

RESULTS
Hydrogel N-ECM concentration did not significantly change complex shear modulus or gelation time. However, the 5 and 20 mg/mL gels did not stabilize, so usable data were only obtained from the 8, 10 and 15 mg/mL gels. There was a significant decrease in storage modulus between the baseline and 0.5x PBS gels; however, loss modulus and gelation time were not significantly different. In addition, the 1.5x PBS gel was not significantly different from either the baseline or the 0.5x gel. Gas foaming resulted in a significant increase in gelation time compared to baseline; however, complex shear modulus was not significantly affected. Table 1 below shows the mean gelation time, mean shear storage modulus, and mean shear loss modulus for each condition, with standard error of the mean in parentheses. Structurally, N-ECM hydrogels appeared to have increased fiber density as ECM concentration increased between 8 and 15 mg/mL. Fiber density also appeared to increase slightly as


ionic strength increased. Gas foaming appears to decrease fiber density. However, at present these findings are qualitative and based upon visualization only. Figure 1 shows the gels as visualized by scanning electron microscopy.
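To make the statistical comparison described in the Methods concrete, the sketch below runs a one-way ANOVA with a post-hoc Tukey test on per-gel gelation times. The study used Minitab 17 and Microsoft Excel; the Python/SciPy/statsmodels code and the individual sample values here are illustrative assumptions only, not data from the study.

```python
# Minimal sketch of the ANOVA + post-hoc Tukey comparison described in the
# Methods. The per-gel gelation times below are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

gelation_time = {               # minutes, illustrative values only
    "baseline": [10.1, 12.0, 9.8],
    "0.5x PBS": [13.6, 14.9, 13.8],
    "1.5x PBS": [9.1, 10.9, 9.8],
}

# One-way ANOVA across the three ionic-strength groups
f_stat, p_val = stats.f_oneway(*gelation_time.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Post-hoc Tukey HSD identifies which pairs of groups differ (alpha = 0.05)
values = np.concatenate(list(gelation_time.values()))
groups = np.repeat(list(gelation_time.keys()),
                   [len(v) for v in gelation_time.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```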

CONCLUSIONS
Hydrogels from porcine N-ECM were successfully prepared and characterized. N-ECM gels were found to be stable with ECM concentrations of 8-15 mg/mL. Lowering ionic strength was found to decrease gel stiffness. Gas foaming increased gelation time and appeared to decrease fiber density. Future work on this project will include: determining properties of gels crosslinked using multiple methods, growing neurons and associated cells in gels to determine effects of porosity and stiffness on growth, testing gels in vivo for efficacy in nerve injury models, and optimizing an N-ECM gel formulation for porosity, gelation time, and stiffness.

REFERENCES
1. Deumens, R., et al. Prog Neurobiol, 2010. 92(3): p. 245-76.
2. Medberry, C.J., et al. Biomaterials, 2013. 34(4): p. 1033-1040.
3. Willits, R.K., et al. J Biomater Sci Polym Ed, 2004. 15(12): p. 1521-31.
4. Man, A.J., et al. Tissue Engineering Part A, 2011. 17(23-24): p. 2931-2942.
5. Johnson, T.D., et al. Nanotechnology, 2011. 22(49): p. 494015.
6. Annabi, N., et al. Tissue Eng Part B Rev, 2010. 16(4): p. 371-83.
7. Freytes, D.O., et al. Biomaterials, 2008. 29(11): p. 1630-7.

ACKNOWLEDGEMENTS
Funding for Dhruv Srinivasachar was provided by the University of Pittsburgh Department of Bioengineering.

Table 1: N-ECM Gel Mechanical Properties (mean, with standard error of the mean in parentheses)

| Gel Type | Baseline | 8 mg/mL | 15 mg/mL | 0.5x PBS | 1.5x PBS | Gas Foamed |
| Gelation Time (min) | 10.64 (1.38) | 9.55 (0.94) | 10.29 (2.01) | 14.10 (0.70) | 9.92 (1.01) | 17.37 (1.52)** |
| Shear Storage Modulus (Pa) | 80.1 (17.67) | 49.8 (6.29) | 64.1 (14.48) | 9.41 (2.49)* | 71.8 (10.47) | 61.5 (8.40) |
| Shear Loss Modulus (Pa) | 10.35 (3.03) | 5.60 (0.89) | 6.58 (1.35) | 1.54 (0.34) | 11.10 (4.03) | 8.87 (1.60) |

*p<0.05 compared to Baseline by ANOVA and Tukey's test
**p<0.05 compared to Baseline by Student's t-test

Figure 1: Scanning electron micrographs of each gel type. Going clockwise from the top left, baseline, 8 mg/mL, 15 mg/mL, gas foamed, 1.5x PBS, and 0.5x PBS gels are shown.


BODY SEGMENT PARAMETERS IN NORMAL WEIGHT VERSUS OBESE YOUNG FEMALES
Molly E. Knewtson, Zachary F. Merrill, Rakie Cham and April J. Chambers
Human Movement and Balance Laboratory, Department of Bioengineering
University of Pittsburgh, PA, USA
Email: mek122@pitt.edu, Web: http://hmbl.bioe.pitt.edu/

INTRODUCTION
Body segment parameters (BSPs) are important in biomechanical gait analysis and injury prevention. Many methods have been used to create predictive models and regression equations, such as cadaver-based studies [1], imaging [2], computerized tomography (CT) [3], gamma mass scanning [4], and biplanar radiography [5]. However, each of these methods has disadvantages, such as time-intensiveness, monetary cost, and/or delivery of high doses of radiation to the participant. Dual energy X-ray absorptiometry (DXA) is a quick, inexpensive, low-radiation full body scan that can distinguish between bone, muscle, and fat tissue, allowing it to calculate accurate densities and masses of the body's segments. It is a verified method of finding segmental mass, center of mass, and radius of gyration in the frontal plane [7]. Previous models were developed with subjects in a normal weight range [2,3], so they do not accurately predict the BSPs of obese individuals. For example, Chambers et al. found a significant difference in trunk segmental mass, center of mass, and radius of gyration between obese and non-obese elderly subjects [6]. Since 60% of workers in the United States have a body mass index (BMI) over 25, classifying them as overweight or obese, there is a need for an accurate way to predict segmental specifications for the working population. This study aims to compare body segment parameters of young obese females against those of normal weight females using DXA.

METHODS
Obesity corresponds to a BMI between 30 and 40 kg/m2, while normal weight corresponds to a BMI of 18.5 to 25 kg/m2. Twenty-three females (10 obese, 13 normal weight), aged 21 to 40, were recruited for participation in this study. Each participant underwent a full body DXA scan (Hologic QDR 1000/W) lying supine with legs internally rotated and in maximum plantar flexion. Eighty-eight body measurements were taken

including lengths, widths, and circumferences of the limbs as well as widths, depths, and circumferences of the torso and head. In analysis of the DXA scan, bony landmarks were used to define the boundaries between head, torso, thigh, shank, upper arm, and lower arm segments as shown in Figure 1 [2]. The head segment extended from the vertex to the base of the mandible. The torso was defined from the acromion to the greater trochanter. The torso was separated from the arms with a boundary through the medial acromion to the axilla, and it was separated from the thigh with a boundary just lateral to the anterior superior iliac spine and the ischial tuberosity of the pelvis. The thigh was defined from the greater trochanter to the center of the knee joint. The shank extended from the knee joint center to the lateral malleolus. The hands and feet were excluded from analysis. Each of these body segments was further divided into horizontal sub-regions 2.6 or 3.9 cm tall. Masses were calculated for each sub-region and used to determine BSPs for the segment.

Figure 1. Whole body DXA scan with segmental boundaries


Scans were analyzed for segment length as a percent of body height (SL), segment mass as a percent of body mass (SM), longitudinal distance from the proximal end to the center of mass of the segment as a percent of segment length (COM), and frontal plane radius of gyration as a percent of segment length (RG). A two-tailed t-test was run to compare the parameters for the obese subjects to those of the normal weight subjects. Statistical significance was set at 0.05.
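As a rough illustration of how the segment-level parameters defined above can be derived from the DXA sub-region masses described in the Methods, the sketch below computes segment mass, center of mass, and frontal-plane radius of gyration for one hypothetical segment. The sub-region masses, positions, and Python/NumPy implementation are assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical sketch: segment mass (SM), center of mass (COM), and radius of
# gyration (RG) from horizontal DXA sub-region masses of a single segment.
import numpy as np

# Mass (kg) of each sub-region and distance (cm) from the proximal end of the
# segment to each sub-region center; values are made up for illustration.
subregion_mass = np.array([0.9, 0.8, 0.6, 0.4, 0.3])
subregion_dist = np.array([1.3, 3.9, 6.5, 9.1, 11.7])
segment_length = 13.0        # cm
body_mass = 60.0             # kg
body_height = 165.0          # cm

segment_mass = subregion_mass.sum()
com = (subregion_mass * subregion_dist).sum() / segment_mass
rg = np.sqrt((subregion_mass * (subregion_dist - com) ** 2).sum() / segment_mass)

print(f"SL  = {100 * segment_length / body_height:.1f} % of body height")
print(f"SM  = {100 * segment_mass / body_mass:.1f} % of body mass")
print(f"COM = {100 * com / segment_length:.1f} % of segment length (from proximal end)")
print(f"RG  = {100 * rg / segment_length:.1f} % of segment length")
```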

found to be more proximal than that of normal weight subjects. This agrees with results from previous studies [6]. Both adipose and muscle tissue stores are nearer to the superior end of the shank segment, moving the center of mass more proximally for subjects with a higher BMI. Additionally, due to adipose tissue stores in the posterior upper arm, obese participants had a significantly greater SM in the upper arm segment when compared against normal weight subjects.

RESULTS As expected, no significant differences were found in SL between the two subgroups, as greater BMI does not affect relative lengths of bones. However, obesity was found to significantly affect RG, COM, and SM in various segments. Only significant results are included for brevity. The mean and standard deviation of significantly different BSPs of the 23 subjects are presented in Table 1.

CONCLUSIONS
Significant differences were found in many variables between the obese and normal weight young female populations. Since current anthropometric tables are based on BSPs of healthy-weight individuals, they do not accurately model obese females. This study underscores the need to develop obesity-specific anthropometry data sets.

Torso and lower arm RG of the obese population were found to be smaller than that of the non-obese population, indicating that subjects with a larger BMI had more concentrated mass in their torso and lower arm segments, as discussed in Chambers et al. [6]. This is an intuitive result as people with a greater BMI have more abdominal adipose tissue concentrated at the midsection. Head RG was found to be larger for subjects with a greater BMI. In accordance with current literature [6], head and shank SMs were smaller for obese than normal weight subjects. These segments do not store a large portion of the body’s adipose tissue, so they do not show much variation in mass between normal weight and obese subjects. Since the obese subjects have a larger overall body mass, the head and shank segments contain a significantly smaller percent of total body mass than they do for normal weight participants. Shank COM of obese subjects was

REFERENCES
1. Dempster, W.T. Wright-Patterson Air Force Base, 55-159, 1955.
2. de Leva, P. J Biomech 29, 1223-30, 1996.
3. Pearsall, D. Annals of Biomedical Engineering 24, 198-210, 1996.
4. Zatsiorsky et al. Biomechanics VIII-B, 1152-1159, 1983.
5. Dumas et al. IEEE Transactions on Bio-medical Engineering 52, 1756-63, 2005.
6. Chambers et al. Clinical Biomechanics 25, 131-6, 2010.
7. Durkin et al. J Biomech 35, 1575-1580, 2002.

ACKNOWLEDGEMENTS
Funding Source: NIOSH grant No. R01 OH010106. Special thanks to the Human Movement and Balance Laboratory and the Osteoporosis Prevention and Treatment Center.

Table 1: Mean and standard deviation of statistically significant (p<0.05) body segment parameters of obese and normal weight females. * denotes p<0.01.

| Parameter | Segment | Normal Weight | Obese |
| Radius of Gyration (% of segment length) | Torso* | 28.4±0.5 | 27.2±0.2 |
| Radius of Gyration (% of segment length) | Lower Arm* | 26.9±0.2 | 26.6±0.3 |
| Radius of Gyration (% of segment length) | Head* | 24.6±0.6 | 25.4±0.3 |
| Center of Mass (% of segment length) | Shank* | 41.5±0.7 | 39.9±0.7 |
| Center of Mass (% of segment length) | Head | 50.5±1.9 | 52.2±1.4 |
| Segment Mass (% of total body mass) | Upper Arm | 3.1±0.2 | 3.5±0.4 |
| Segment Mass (% of total body mass) | Shank | 4.8±0.2 | 4.3±0.6 |
| Segment Mass (% of total body mass) | Head* | 7.1±0.9 | 5.2±0.2 |


SYNTHESIS AND CHARACTERIZATION OF MAGNETIC NANOPARTICLES FOR DRUG DELIVERY TO CENTRAL NERVOUS SYSTEM
Liza Bruk, Noah Snyder, X. Tracy Cui
Neural Tissue Engineering Laboratory, Department of Bioengineering
University of Pittsburgh, PA, USA
Email: lab154@pitt.edu, Web: http://www.engineering.pitt.edu/cui/

INTRODUCTION
Approximately 50 million people in the United States are afflicted by neurological diseases, some of which are neurodegenerative. Neurodegenerative diseases involve progressive neuronal death within the nervous system due to overstimulation of neurons by excitatory neurotransmitters. An estimated $100 billion is spent in the United States on health care for Alzheimer's disease (AD) alone. The country is therefore faced with an enormous economic burden due to AD and other related diseases, such as Huntington's disease, Parkinson's disease, and amyotrophic lateral sclerosis (ALS) [1-2]. Magnetic nanoparticles (MNPs) are used as a vessel for CNS drug delivery due to their ability to cross the blood brain barrier (BBB) [3]. It is possible to target cell types via MNPs, as microglia rapidly take them up, allowing for localization of the drug delivery [4-5]. Silica-based magnetic nanoparticles are used to allow passive release of encapsulated drug, as well as controlled release via high-frequency magnetic field stimulation [6-7]. Accumulation and localization of magnetic nanoparticles in the brain can be controlled by application of a magnet over the head and monitored by magnetic resonance imaging (MRI) [8-9]. The goal of this project is to incorporate drug into MNPs and, in conjunction with MRI technology, release the drug in a localized manner. The project involves optimization of the MNP synthesis and release protocols to demonstrate uniform nanoparticle size, high drug load, and controllable release with effective doses.

METHODS
Initial studies are done with fluorescein. Other relevant drugs can be substituted in later studies. Magnetic silica nanoparticle synthesis is done via the sol-gel method modified from Hu et al. [6]. The

silica sol is prepared by mixing silicon tetraethoxide (TEOS), 2 N HCl, and ethanol in a ratio of 2.25 mL/500 μL/600 μL. This solution is aged for two hours, and 40 mL of a 0.25 M solution of Fe(NO3)3·9H2O is added, along with 2.5 mg of the drug dissolved in 5 mL of deionized water. The solution is brought to a pH of ~2.7 by the dropwise addition of 0.2 M NaOH. The MNP solution is washed three times with deionized water and concentrated twenty times. The concentrated MNPs are ultrasonicated in a water bath for 15 minutes to disperse the MNPs in solution. For scanning electron microscopy (SEM) imaging, small samples are dried at 80°C on gold film. Characterization is done via heat release studies and magnetic release studies. The heat release studies are done at 37°C (body temperature) and 80°C. Initial magnetic release studies are done at frequencies of 50 kHz and 3 kHz and an amplitude of 10 V. Studies are in the process of being transferred to the 7T MRI for further magnetic release characterization.

RESULTS
MNPs are found to be stable at room and body temperatures (Figure 1). Analysis of particles via SEM shows that MNPs are formed at approximately 150 nm, which is a suitable size for crossing the BBB. Magnetic release studies demonstrate the possibility of releasing encapsulated drug via magnetic stimulation at various frequencies. Both applied frequencies (50 kHz and 3 kHz) result in an approximately twofold increase in fluorescence intensity (Figure 2).


Figure 1. Fluorescein is released when exposed to temperatures > 80°C for at least 1 hr, as indicated by the increase in fluorescence of the MNP solutions (λex = 485 nm and λem = 535 nm).

Figure 2. Fluorescein is released when exposed to magnetic fields at an amplitude of 10 V and frequencies of 50 kHz and 3 kHz. Both frequencies, when applied for 1 hr, result in an approximately twofold increase in fluorescence intensity compared to the background control.

DISCUSSION
Initial studies with fluorescein as the loaded sample drug show that MNPs can be synthesized effectively and that the drug can then be released in a controlled manner via magnetic stimulation. The stability of the MNPs reduces unwanted passive release and increases the ability to control release. MNP solutions must be concentrated after synthesis to increase the concentration of drug released. MNPs must be dispersed in solution via water bath ultrasonication rather than mechanical sonication to avoid overheating the solution and causing unwanted release of drug. MNPs loaded with several different antioxidants have been synthesized and are undergoing the same testing as the fluorescein-loaded MNPs.

REFERENCES
[1] R.C. Brown et al. Environ Health Persp. 113, 1250-1256, 2005.
[2] J. Emerit et al. Biomed Pharmacother. 58, 39-46, 2004.
[3] J. Peng et al. J Control Release. 164, 49-57, 2012.

[4] M.R. Pickard et al. Int J Mol Sci. 11, 967-981, 2010.
[5] A. Bal-Price et al. Neuroscience. 21, 6480-6491, 2001.
[6] S. Hu et al. Langmuir. 24, 239-244, 2008.
[7] S.D. Kong et al. IEEE T Magn. 49, 349-352, 2013.
[8] B. Chertok et al. Biomaterials. 29, 487-496, 2007.
[9] C. Sun et al. Adv Drug Deliver Rev. 60, 1252-1265, 2008.

ACKNOWLEDGEMENTS
Magnetic nanoparticles are synthesized and characterized at the Neural Tissue Engineering Lab. MRI testing is done at the Radio Frequency Research Facility, with the help of Dr. Tamer Ibrahim and Yujuan Zhao. Funding for this project is provided by the Commonwealth of Pennsylvania. Undergraduate research was funded by the Swanson School of Engineering.


INVESTIGATING THE EFFECT OF CONDUCTING POLYMER GRAPHENE OXIDE POLYMER COATINGS ON MAGNESIUM CORROSION
Huaxiu Li; Graduate Mentor: Kasey Catt; Lab Mentor: Xinyan Tracy Cui
NTE Laboratory, Department of Bioengineering
University of Pittsburgh, PA, USA
Email: hul34@pitt.edu

INTRODUCTION
Magnesium (Mg) is a promising material for an array of biomedical applications, including vascular, musculoskeletal, and general surgery, due to its excellent physical and chemical properties (1). However, the issues with biodegradable Mg are its rapid degradation, non-uniform corrosion, and the associated by-products inside the human body (1). Mg corrodes in an aqueous environment via an electrochemical reaction: Mg + 2H2O -> Mg(OH)2 + H2. This reaction produces magnesium hydroxide and hydrogen gas; the passivating magnesium hydroxide layer is broken down by chloride in the body (1). There are currently ways to minimize the corrosion of Mg implants, such as magnesium-based alloys (1), anodization (2), and one or more organic coatings (3), but each has disadvantages for biomedical applications (1)(2)(3). Fortunately, there has recently been promising work with conducting polymers for controlling corrosion on non-Mg metals (3)(4). There has, however, been little study of conducting polymers for Mg corrosion protection. In our study, we use 3,4-ethylenedioxythiophene (EDOT) as the monomer to produce conducting poly(3,4-ethylenedioxythiophene) (PEDOT) doped with negatively charged graphene oxide (GO). The corrosion protection of this film on Mg is then evaluated using a Tafel test, a pH test, and an Mg ion assay.

METHODS
Mg ribbon (20 mm x 3 mm x 2 mm) was cut and cleaned with ethanol. The polymerization solution consisted of 20 mg GO mixed with 2 mL ethanol sonicated for 30 minutes; 40 uL EDOT and 70 uL deionized water were then added. Mg specimens were coated with the PEDOT-GO film on a Gamry

potentiostat/femtostat at a constant potential of 0.7 V over 400 seconds using a two-electrode setup. Coated Mg specimens were then dried in the freezer for 2 days. Tafel tests were conducted in PBS (137 mM sodium chloride, 2.7 mM potassium chloride, 10 mM phosphate buffer, pH = 7.4) on the Gamry system using a three-electrode setup. A positive potential shift and a decrease in the current of the Tafel scan indicate a decrease in the corrosion rate. pH measurements and Mg ion assays were also carried out in PBS with a standard surface area to volume ratio of 1:50 (cm2:mL). The uncoated part of each Mg specimen was mounted in epoxy to ensure only the coated area was exposed to the PBS solution for both tests. The pH of each PBS solution was measured with a pH sensor every day. For the Mg ion assay, 10 uL of extract was added to 200 uL of Magnesium Liquicolor assay (xylidyl blue) reagent and allowed to react for 10 minutes. The absorbance value was read on a spectrophotometer and the corresponding concentration was calculated from the calibration curve.

RESULTS AND DISCUSSION
Tafel Test: Tafel tests resulted in an average corrosion potential of -1.7 V for pure Mg and -1.5 V for Mg coated with PEDOT-GO. These results indicate decreased corrosion for Mg when coated with the PEDOT-GO film.

pH Test and Mg Ion Assay: The results from the pH test and Mg ion assay are shown in Table 1. The average pH of pure Mg is higher than that of Mg coated with PEDOT-GO after 3 days of immersion. Since the pH of the corrosion solution is an important


indicator of the Mg corrosion level, the PEDOT-GO coating has demonstrated a capability of preventing Mg corrosion.

In terms of the Mg ion assay, the Mg ion concentrations of pure Mg and Mg coated with PEDOT-GO are similar over the 3 days following immersion, except on Day 2. Current work has shown that the mixing step after adding the extract to the Mg reagent can introduce some variation in the absorbance reading. This could be a reason why the Mg ion assay has not displayed a trend similar to the pH test. Also, the current electrodeposition of PEDOT-GO on Mg is carried out using chronoamperometry polymerization. The PEDOT-GO film quality can therefore differ depending on the surface area of Mg immersed in the polymerization solution and the total charge passed while depositing the film. The release rate of Mg ions may be affected by the film quality of each specific sample. At the same time, the variance in film quality may also lead to variance among experimental samples in both the pH test and the Mg ion assay. Further study should therefore be conducted on improving the electrodeposition method.

CONCLUSION
Both the Tafel tests and the pH test have demonstrated a positive result for the PEDOT-GO coating in Mg corrosion prevention. The Mg ion assay has not shown data as promising as the previous two, but further optimization of the assay procedure is still needed.

REFERENCES
(1) P. Gill, N. Munroe, R. Dua, S. Ramaswamy, "Corrosion and Biocompatibility Assessment of Magnesium Alloys," Journal of Biomaterials and Nanobiotechnology, 2012, 3, 10-13.
(2) D. Xue, Y. Yun, M. Schulz, V. Shanov, "Corrosion protection of biodegradable magnesium implants using anodization," Materials Science and Engineering C, 31 (2011) 215-223.
(3) D. Tallman, G. Spinks, A. Dominis, G. Wallace, "Electroactive conducting polymers for corrosion control, Part 1. General introduction and a review of non-ferrous metals," J Solid State Electrochem, (2002) 6: 73-84. DOI 10.1007/s100080100212
(4) G.M. Spinks, A.J. Dominis, G.G. Wallace, D.E. Tallman, "Electroactive conducting polymers for corrosion control, Part 2. Ferrous metals," J Solid State Electrochem, (2002) 6: 85-100. DOI 10.1007/s100080100211

ACKNOWLEDGEMENTS
This research was supported by the NSF Engineering Research Center on Revolutionizing Metallic Biomaterials and the Swanson School of Engineering, University of Pittsburgh.

Table 1. pH and released Mg ion concentration in 1x PBS for immersed samples over 3 days (n=3).

| Time | Mg Ion Concentration (mM), Pure Mg | Mg Ion Concentration (mM), PEDOT-GO Mg | pH, Pure Mg | pH, PEDOT-GO Mg |
| Day 1 | 0.2820 | 0.2713 | 8.39 | 8.00 |
| Day 2 | 0.4342 | 0.3497 | 8.77 | 8.37 |
| Day 3 | 0.5440 | 0.5564 | 8.92 | 8.71 |
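As a rough illustration of the Mg ion assay readout described in the Methods, the sketch below fits a linear calibration curve to absorbance standards and converts sample absorbances into concentrations. The standard concentrations, absorbance values, and the Python/NumPy implementation are assumptions for illustration, not data or code from this study.

```python
# Hypothetical sketch: convert xylidyl blue absorbance readings to Mg ion
# concentrations via a linear calibration curve (all values are made up).
import numpy as np

# Calibration standards: known Mg concentrations (mM) and measured absorbance
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
std_abs = np.array([0.02, 0.15, 0.29, 0.42, 0.55])

# Least-squares fit of absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def absorbance_to_conc(absorbance):
    """Invert the calibration line to recover concentration (mM)."""
    return (absorbance - intercept) / slope

sample_abs = np.array([0.19, 0.25, 0.37])   # e.g., Day 1-3 readings
print(absorbance_to_conc(sample_abs))
```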


DESIGNING A NOVEL ASSAY TO CHARACTERIZE CALCIUM FLUX IN EPITHELIAL CONTRACTION OF EARLY DEVELOPING XENOPUS LAEVIS
Matthew Lefkowitz (1), Deepthi Vijayraghavan (1), Timothy Jackson (1), and Lance Davidson (1,2,3)
Morphogenesis and Developmental Biomechanics Lab
(1) Department of Bioengineering, (2) Department of Developmental Biology, (3) Department of Computational and Systems Biology
University of Pittsburgh, PA, USA
Email: mel88@pitt.edu, Web: http://www.engr2.pitt.edu/ldavidson/

INTRODUCTION
Morphogenesis is a complex process in which cells within the embryo self-organize to form the body plan. Dysregulation of cell movements during development can lead to severe birth defects [1]. These movements are driven, in part, by non-muscle actomyosin-mediated contractility. For instance, actomyosin-driven cell shape change has been shown to be important in neural tube closure and other processes critical to the success of organogenesis [2]. To study how coordinated tissue contractions occur in embryonic tissues, our lab has developed techniques to induce contractions in Xenopus laevis embryonic epithelia [3]. In one approach, we have shown that local electrical stimulation can induce a contractile wave in microsurgically isolated tissues (e.g., animal cap ectoderm explants). These tissue explants are composed of two cell layers, an outer epithelial and an inner mesenchymal layer. Using this model, the lab is now testing molecular contributors to contraction in these tissues; in this poster we focus on the role of calcium. Calcium in epithelial tissues, as in skeletal and cardiac muscle, plays a central role in inducing contractions in individual cells as well as in signaling to other cells to induce contraction [4, 5]. Actomyosin contractions appear to be driven by calcium transients; when calcium is released from the endoplasmic reticulum (ER), intracellular calcium directly binds to proteins in the cytoskeleton, allowing non-muscle myosin II to contract F-actin networks and contract the cell. However, calcium is also required for the

maintenance of cell-cell adhesion both within and between tissues in our explants. Thus, our goal is to determine the role of calcium in electrically-induced tissue contraction.

MATERIALS AND METHODS
In order to electrically induce transient contractions, we cultured frog embryonic tissues on gold electrodes. Gold was electro-sputtered onto the bottom of a petri dish to create an electrode, which was subsequently adsorbed with fibronectin. Animal caps of gastrula-stage Xenopus laevis embryos were microsurgically isolated and allowed to adhere onto the electrode. To pass an electrical impulse through the tissues, we used a pipette electrode and a wire connected to a constant current output unit (PSIU6, GRASS Technologies). The pipette and wire electrode were brought into contact with the gold electrode and the surrounding aqueous media, respectively. We recorded the strength and time-course of the contractile tissue response in time-lapse sequences collected using a stereoscope (Zeiss Stemi 2000C) equipped with a computer-controlled CCD camera (The Imaging Source) (see experimental setup, Figure 1A). Prior to stimulation, explants were recorded for 20 minutes; subsequently, two 15 mA stimulations of 10 ms were produced using the stimulator (S88x, Astromed Inc.), and the explants were observed for 20 minutes after each stimulation at 15-second intervals. To perturb cellular calcium gradients, two different compounds were used: EGTA, which chelates calcium in the media, and thapsigargin, an antagonist of the ATP-driven calcium



Figure 1A. Experimental setup showing stimulator connected to the constant current output unit which is in-turn connected to the gold electrode. Figure 1B. Representative trace of an experiment treated with thapsigargin. Two stimulations are initiated with no treatment. Subsequently, the thapsigargin is added to the media and the tissue is allowed to incubate. Finally, two more stimulations are initiated.

channels that pump calcium from the cytosol into the endoplasmic reticulum. Contractions were compared within samples before and after treatment. To quantify the magnitude and temporal dynamics of tissue contraction, we used a custom program, GridTracker, adapted from the ITK image library, which uses image registration to track pixel movement in image sequences. GridTracker reports the time-dependent deformation within a grid laid over the animal cap image sequence. Area changes are calculated from the deformation of the grid (see representative trace, Figure 1B).

RESULTS AND DISCUSSION
Incubation of explants in 0.25 mM EGTA resulted in separation of the epithelial layer from the underlying mesenchymal layer within a few minutes. Electrically-induced contraction triggered early layer separation. While EGTA appeared to attenuate the strength of induced contractions, it is likely that tissue separation may also account for some of the observed deformation. Thapsigargin does not induce tissue separation, and we found that 0.4 µM thapsigargin had no effect on the magnitude of contractions but instead altered the time-course of contractions, slowing both the time-to-peak-contraction and the time-to-half-recovery.

CONCLUSIONS
We have developed a simple experimental model that allows us to induce a tissue-wide contraction in a complex composite tissue and test the role of specific calcium-dependent pathways for triggering and transmitting contractions. Our quantitative image analysis tool provides sensitive tracking of tissue contraction. Our results suggest that calcium plays a role in the temporal dynamics of contractions. Future studies will be needed to understand how actomyosin dynamics are regulated by calcium transients and to understand how calcium waves propagate both within and between each layer.

ACKNOWLEDGEMENTS
We would like to thank the Swanson School of Engineering for a Summer Research Fellowship. Additional support for this work has been provided by grants from the National Institutes of Health (R01 HD044750 and R21 ES019259) and the National Science Foundation (CAREER IOS-0845775 and CMMI-1100515). DV has been supported by the Biomechanics in Regenerative Medicine Training Grant (NIH T32 EB003392). TJ has been supported by the Cardiovascular Bioengineering Training Program (NIH T32 HL076124).

contraction. Our results suggest that calcium plays a role in the temporal dynamics of contractions. Future studies will be needed to understand how actomyosin dynamics are regulated by calcium transients and to understand how calcium waves propagate both within and between each layer. ACKNOWLEDGEMENTS We would like to thank the Swanson School of Engineering for a Summer Research Fellowship. Additional support for this work has been provided by grants from from the National Institutes of Health (R01 HD044750 and R21 ES019259) and the National Science Foundation (CAREER IOS0845775 and CMMI-1100515). DV has been supported by the Biomechanics in Regenerative Medicine Training Grant (NIH T32 EB003392). TJ has been supported by the Cardiovascular Bioengineering Training Program (NIH T32 HL076124). REFERENCES [1] L. A. Davidson (2011). The physical mechanical processes that shape tissues in the early embryo. Chapter in "Cellular and Biomolecular Mechanics and Mechanobiology." (ed. by Amit Gefen) Springer Studies in Mechanobiology, Tissue Engineering and Biomaterials. (springer) [2] Davidson LA. Epithelial machines that shape the embryo. Trends in Cell Biology. 2012;22:82–87. [3] Joshi, S. D., von Dassow, M. and Davidson, L. A. 'Experimental control of excitable embryonic tissues: three stimuli induce rapid epithelial contraction', Exp Cell Res (2010) 316: 103-14. [4] Lee, H. C. "Calcium in Epithelial Cell Contraction." The Journal of Cell Biology 85.2 (1980): 325-36. [5] Berchtold, Martin W., Heinrich Brinkmeier, and Markus Müntener. "Calcium ion in skeletal muscle: its crucial role for muscle function, plasticity, and disease."Physiological reviews 80.3 (2000): 1215-1265.


PROFILIN-2 HAS AN ANTI-MIGRATORY ACTION IN HEAD AND NECK CANCER CELLS
Marlee Hartenstein, David Gau and Partha Roy, Ph.D.
Center for Biotechnology and Bioengineering, University of Pittsburgh, PA, USA
Email: mrh85@pitt.edu

INTRODUCTION
One of the most dangerous characteristics of cancer is its ability to metastasize, or migrate to other parts of the body away from the primary tumor. To better understand which proteins' gain or loss of function contributes to the metastatic phenotype, actin-binding proteins such as profilin are analyzed. A recent study from our laboratory has shown that downregulation of Profilin-1 (Pfn1, a ubiquitously expressed actin-binding protein) in human breast cancer cells (BCCs) promotes their ability to disseminate from the primary tumor. Similarly, Mouneimne and colleagues reported that depletion of Profilin-2 (Pfn2, a relatively less-studied isoform of Pfn) increases the aggressiveness of breast cancer cells. These studies suggest that Pfn isoforms have an anti-migratory action in breast cancer cells. The primary goal of this project was to examine the role of these two Pfn isoforms in the regulation of the aggressive phenotype of other types of cancer cells. This specific study was conducted on head and neck squamous cell carcinoma (HNSCC) cells.

METHODS
Cell Culture and Transfection
Cal27, HN5, and Cal33 cells were cultured in DMEM supplemented with 10% fetal bovine serum and antibiotics. Plasmid transfection of cells was performed using Lipofectamine LTX (Invitrogen) and siRNA transfection was performed using Dharmafect Transfection Reagent 2 (Fisher Scientific) according to the manufacturer's protocol.

Immunoblotting
Total cell lysate was prepared by extracting cells with modified RIPA buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1% Triton X-100, 0.25% sodium deoxycholate, 0.1% SDS, 2 mM EDTA, 50 mM NaF, 1 mM sodium pervanadate, and protease inhibitors) and run on SDS-PAGE. Immunoblotting conditions for the various antibodies were: monoclonal Pfn2 (Santa Cruz Biotechnology, 1:1000) and monoclonal α-tubulin (Sigma, 1:5000).

Cell Motility Assay
Cells were sparsely plated overnight on collagen-coated 24-well dishes and were imaged for 2 hrs at a 1-min time interval between successive image frames. For all time-lapse recordings, proper environmental conditions (37°C, pH 7.4) were maintained by placing the culture dish in a microincubator. Cell trajectories were constructed by frame-by-frame analyses of the centroid positions (x, y) of cell nuclei using ImageJ.

Cell Proliferation Assay
To assess the effect of Pfn2 on cell growth in culture, 20,000 cells were plated overnight in triplicate in a 6-well plate following transfection treatments. Cells were trypsinized and counted on days three, five, and seven to assess cell number.

Statistics
All statistical tests were performed with a Student's t-test, and a p-value less than 0.05 was considered statistically significant. Most data are represented as box-and-whisker plots, where the dot represents the mean, the middle line of the box indicates the median, the top of the box indicates the 75th percentile, the bottom of the box the 25th percentile, and the two whiskers indicate the 10th and 90th percentiles, respectively.
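As a rough illustration of how average migration speed can be derived from the nucleus-centroid trajectories described above, the sketch below converts frame-by-frame positions into a mean speed. The coordinates, frame interval, pixel scale, and Python/NumPy implementation are hypothetical placeholders, not data or code from this study.

```python
# Hypothetical sketch: average cell speed from frame-by-frame centroid positions
# exported from ImageJ (units converted with an assumed pixel size).
import numpy as np

def mean_speed(track_px, minutes_per_frame=1.0, microns_per_pixel=0.65):
    """track_px: (n_frames, 2) array of nucleus centroid (x, y) in pixels.
    Returns mean speed in microns per minute."""
    steps = np.diff(track_px, axis=0)                    # displacement per frame
    step_lengths = np.linalg.norm(steps, axis=1) * microns_per_pixel
    return step_lengths.sum() / (len(steps) * minutes_per_frame)

# Example: one cell tracked over 5 frames (made-up coordinates)
track = np.array([[10.0, 12.0], [11.2, 12.5], [12.1, 13.9],
                  [13.0, 14.2], [14.4, 15.0]])
print(f"mean speed: {mean_speed(track):.2f} um/min")
```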

RESULTS
We analyzed the expression pattern of the two Pfn isoforms in a panel of HNSCC cell lines (Cal27, Cal33, HN5, 686LN, and UmSCC1) and normal oral squamous epithelial cells (Het1A), which revealed differential expression of Pfn2 but not of Pfn1 (data not shown). These data prompted us to further examine the effect of perturbing Pfn2 expression on the motility of HNSCC cells [1, 2]. We selected three HNSCC lines: Cal27, Cal33 and HN5. Cal27 and HN5 cells have high levels of Pfn2, while the Cal33 line has a very low level of Pfn2. In terms of motility, Cal33 cells are much more migratory than either Cal27 or HN5 cells (data not shown). Pfn2 knockdown (KD) increased the average speed of both the Cal27 and HN5 lines. Figure 1 demonstrates the effects of Pfn2 KD on HN5; the Cal27 Pfn2 KD


experiment data are not shown but exhibited the same trend as the HN5 cell line. A Student's t-test showed that the differences between the control and experimental groups in the Cal27 and HN5 KD experiments were significant, with p-values of 2.09e-15 and 2.48e-9, respectively.

Figure 1. Motility in terms of relative speed of HN5 after Pfn2 KD, where C represents the control group, P represents the Pfn2 KD experimental group, and the top bar indicates significance.

Conversely, GFP-Pfn2 overexpression decreased the average speed of migration of Cal33 cells. These results can be seen in Figure 2. A Student's t-test showed the difference between the two data groups to be significant, with a p-value of 0.000399.

Figure 2. Cal33 OE motility data in terms of relative speed, comparing the control (Cal33 GFP) with the Pfn2 overexpression group (Cal33 GFP Pfn2); the top bar indicates significance.

The growth assay of HN5, comparing a control to a Pfn2 KD group, showed that there was no significant difference between the proliferation curves of the experimental and control groups.

Figure 3. Proliferation curves of HN5 taken over the course of one week. The red line represents the Pfn2 KD group ("HN5 P" in the legend), while the blue line represents the control ("HN5 C").

DISCUSSION
The KD of Pfn2 in both Cal27 and HN5 yielded an increase in the speed of the experimental group when compared to the control. This finding supports the conclusion that a low level of Pfn2 correlates with increased motility. Pfn2 OE in Cal33 also supported this finding, as the relative speed of Cal33 decreased after overexpression of Pfn2 when compared to the control. Therefore, the effects of Pfn2 regulation on the motility of HNSCC cells resemble the findings of Mouneimne and colleagues in BCCs. From the HN5 growth assay data, it cannot be determined how Pfn2 affects cell proliferation. Even though the experimental group did grow somewhat faster over the course of seven days, the difference is not great enough to conclude that Pfn2 is a growth inhibitor. Further trials will be necessary to reach a conclusion. Overall, it was determined that a high level of Pfn2 limits cell motility in HNSCC lines, as supported by both Figures 1 and 2. However, the effect of Pfn1 on these cell lines is yet to be determined.

REFERENCES
[1] Ding Z, et al. Profilin-1 downregulation has contrasting effects on early vs late steps of breast cancer metastasis. Oncogene. 2013. doi:10.1038/onc.2013.166.
[2] Mouneimne G, et al. Differential remodeling of actin cytoskeleton architecture by profilin isoforms leads to distinct effects on cell migration and invasion. Cancer Cell. 2012 Nov 13; 22(5): 615-30. doi:10.1016/j.ccr.2012.09.027.

ACKNOWLEDGEMENTS
I would like to acknowledge my PI, Dr. Partha Roy, for entrusting me with this project and my mentor, David Gau, for providing me with the skill set necessary to perform these experiments. I would also like to acknowledge Malabika Sen of UPMC for supplying the cancer cell lines for this study.


UPHILL WALKING ENHANCES THE RETENTION OF A NEW STEPPING PATTERN LEARNED ON A SPLIT-BELT TREADMILL
Jonathan S. Calvert, Carly Sombric, and Gelsy Torres-Oviedo
Human Movement Research Laboratory, Department of Bioengineering
University of Pittsburgh, PA, USA
Email: jsc53@pitt.edu, Web: http://www.engineering.pitt.edu/MARGroup/Home/

INTRODUCTION
Stroke patients often have an asymmetric walking pattern that affects their mobility and quality of life. Promising studies have shown that patients can correct their gait asymmetries after walking on a split-belt treadmill, which forces their legs to move at different speeds [1]. Thus, there is interest in prolonging the duration of these positive effects after split-belt walking. We hypothesize that the duration of the walking pattern learned on the split-belt treadmill is modulated by the forces experienced at the subjects' feet (i.e., ground reaction forces) during split-belt walking. To test this, we assessed the retention of adaptation effects after split-belt walking when healthy subjects walked uphill or downhill, conditions that naturally modulate ground reaction forces during walking.

METHODS
Twenty healthy volunteers (12 men, mean age 25.2 ± 5.4 years) participated in the study. The subjects were divided among three groups: flat, uphill, and downhill walking. The three groups walked on a split-belt treadmill (Bertec), in which one leg moved at 0.5 m/s and the other leg moved at 1.5 m/s (3:1 belt speed ratio), for 600 strides. Subjects also walked on the treadmill with the two belts moving at the same speed (1 m/s) before and after the split-belt condition. Kinematic data were recorded in all groups with a motion tracking system (Vicon) via reflective markers placed bilaterally on the trunk and legs. In the uphill and downhill groups, the treadmill was inclined to a 15% grade before, during, and after split-belt walking. We used the kinematic data to compare the walking pattern of the three groups when the two belts were moving at the same speed before and after split-belt walking. To accomplish this, we computed for each group the asymmetry in 1) step length, 2) step position, and 3) step time, which are measures

known to adapt independently during split-belt walking [2]. We evaluated the extent of adaptation effects (i.e., after-effects) on these outcome measures by comparing their average values before split-belt walking and during the initial steps (i.e., average of the first 5 steps) after split-belt walking. We also compared the decay of adaptation effects across the three groups. To this end, we computed the number of steps that subjects took before their walking pattern returned to baseline behavior after the split-belt perturbation was removed.

DATA PROCESSING
The raw kinematic data were extracted from Vicon and processed in MATLAB. Step length asymmetry, step position asymmetry, and step time asymmetry were calculated as defined in previous split-belt adaptation studies [2]. Values from pre-split-belt walking were subtracted from the post-split-belt walking values to assess how well the subjects retained the new walking pattern. Divergence from baseline walking was defined as the points at which the asymmetry was greater than two times the standard deviation of the baseline walking data. A one-way ANOVA with Tukey post hoc analysis was used to determine whether there were differences between the walking conditions (α = 0.05).

RESULTS
Figure 1 shows the step length, step position, and step time asymmetries after split-belt walking. The uphill group had step length and step position asymmetries that were significantly different from those of the other two groups. However, the step time asymmetry was not significantly different across groups. Additionally, the flat and downhill groups were not significantly different for any of the asymmetry values.
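As a rough illustration of the baseline-return computation described under Data Processing above (the study processed data in MATLAB; this Python sketch and its numbers are assumptions for illustration only):

```python
# Illustrative sketch: count how many post-adaptation steps are taken before an
# asymmetry measure returns to baseline, where "return" is defined here as
# falling within 2 standard deviations of the baseline data. Values are made up.
import numpy as np

def steps_to_baseline(baseline_asym, post_asym):
    """baseline_asym: asymmetry values during baseline (tied-belt) walking.
    post_asym: asymmetry values for successive steps after split-belt walking.
    Returns the index of the first step back within 2 SD of the baseline mean."""
    mean, sd = np.mean(baseline_asym), np.std(baseline_asym)
    within = np.abs(np.asarray(post_asym) - mean) <= 2 * sd
    return int(np.argmax(within)) if within.any() else len(post_asym)

# Hypothetical after-effect decaying exponentially back toward baseline
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.01, 150)
post = 0.15 * np.exp(-np.arange(300) / 40) + rng.normal(0.0, 0.01, 300)
print(steps_to_baseline(baseline, post))
```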


Figure 1: The average step length, step position, and step timing asymmetries immediately following the adaptation period. The uphill group is significantly different for step length and step position, but not step timing. Asterisks indicate statistical significance.

Additionally, subjects in the uphill group took longer to return to baseline values in the spatial parameters than in the temporal parameters. Specifically, subjects in the uphill group took an average of 60 steps to return to baseline values for step position asymmetry vs. 49 and 21 steps in the flat and downhill groups, respectively. Conversely, all groups took an average of 15 steps to return to step time asymmetry baseline values.

DISCUSSION
Uphill walking significantly increased the retention of the new walking pattern in the spatial domain, but not the temporal domain. Previous studies have shown that slope walking in humans may require specialized neural control strategies [3]. Therefore, the increased retention of spatial parameters after split-belt walking may be due to a different motor control strategy than what is used during flat split-belt walking.

This increase in after-effects in step position and step length, but not step time, in the uphill condition suggests that split-belt walking on an incline could enhance the correction of spatial asymmetries. An increase in the retention of spatial adaptation effects is important because stroke patients often have larger deficits in one domain than the other [4]. Therefore, it is pertinent to identify ways to adapt spatial and temporal asymmetries independently.

REFERENCES
1. Reisman et al. Brain 130.7, 1861-1872, 2007.
2. Finley et al. Neural. Submitted, 2014.
3. Lay et al. J Biomechanics 39.9, 1621-1628, 2006.
4. Malone and Bastian. Neurorehabilitation and Neural Repair 28.3, 230-240, 2014.

ACKNOWLEDGEMENTS
Funding for this project was supplied by the Swanson School of Engineering at the University of Pittsburgh.


ADIPOSE-DERIVED STEM CELLS FROM DIABETIC PATIENTS DISPLAY A PRO-THROMBOGENIC PHENOTYPE Dominic J. Pezzone, Jeffrey T. Krawiec, Justin S. Weinbaum, J. Peter Rubin, David A. Vorp Center for Vascular Remodeling and Regeneration, Department of Bioengineering University of Pittsburgh, PA, USA Email: djp65@pitt.edu, Web: http://www.engineering.pitt.edu/vorplab/ INTRODUCTION Many preclinical evaluations of tissue-engineered blood vessels (TEBVs) utilize autologous cells from healthy humans or animals. These models hold minimal relevance for clinical translation of TEBV therapy, as treatment will, by definition, be required for patients at high cardiovascular risk like diabetic individuals. Adipose-derived stem cells (ADSCs) represent an ideal cell type for clinical translation, as these can be easily and plentifully harvested from high-risk patients. Previous in vivo evaluations of diabetic donor ADSC-seeded TEBVs in a rat model displayed a markedly reduced patency rate compared to those from healthy donors (i.e., non-diabetics), and this was due to early (<1 week) thrombosis1. To probe the mechanism, this study analyzed diabetic donor ADSCs and assessed two critical components of thrombosis: platelet adhesion and fibrinolysis. We hypothesize that diabetic donor ADSCs have an increased ability to bind platelets and/or a decreased fibrinolytic activity, making them more prone to thrombosis when used in TEBVs. As diabetic patients are a key patient cohort who would require an autologous TEBV, it is critically important for the clinical translation of TEBVs to investigate why diabetic donor ADSCs display a pro-thrombogenic phenotype. METHODS Human ADSCs were obtained from patients who were classified into either healthy (non-diabetic, <45 years of age) or diabetic (<45 years of age) cohorts (n=3 donors each). Human smooth muscle cells (SMC) were purchased from ATCC. To test the ability of diabetic ADSCs to adhere platelets, monolayers were incubated with bovine platelet-rich plasma anti-coagulated with a 7:1 vol/vol citrate dextrose for 30 minutes. This was followed by PBS washes to remove any unbound platelets. Platelets bound to ADSCs were labeled using a standard immunofluorescence protocol

staining for CD41 (1:100, Kingfisher #CAPP2A) with counterstains for DAPI and F-actin (1:250, Sigma #P5282). Samples were imaged using NIS Elements software. Healthy ADSCs and SMCs were used as controls. To test the fibrinolytic activity of diabetic ADSCs, conditioned media was obtained by replenishing culture media on near-confluent flasks of cells and then collecting after two days in either normal (i.e. with serum) or serum-free conditions. Zymography, utilizing established methods2, was performed using conditioned media in fibrin-based acrylamide gels (7.5% acrylamide, approximately 400 µg/mL fibrin) to provide a platform for fibrin degradation while also being able to identify active proteins (based on molecular weight) involved in this process. Following protein separation via electrophoresis, gels were incubated in a divalent cation reaction buffer (50 mM Tris HCl pH 7.4, 1 mM CaCl2, 1 mM MgCl2) for 1, 3, 5, and 7 days to allow for enzymatic degradation of the gel. Gels were then stained with Coomassie Blue to observe degradation bands. Healthy ADSCs were used as controls. DATA PROCESSING Fluorescent images of platelets adhered to either healthy or diabetic ADSCs were quantified to determine if diabetic ADSCs displayed an increased ability to bind platelets. Platelets bound to cell bodies and cell nuclei were manually counted in each image, which was then utilized to calculate the average number of platelets per cell. This value was averaged between donors of the same group (i.e. healthy or diabetic) and compared utilizing a Student's t-test with a significant difference being defined at p<0.05. Fibrin-based zymograms of healthy and diabetic ADSCs were analyzed qualitatively to determine if diabetic ADSCs displayed a reduced fibrinolytic activity. Presence or absence of bands in zymograms was determined visually.
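As a minimal sketch of the statistical comparison described above (per-donor platelet counts averaged within group and compared with a Student's t-test at p < 0.05), the fragment below uses hypothetical placeholder values rather than data from this study.

```python
from scipy import stats

# Hypothetical per-donor averages of platelets bound per cell (n = 3 donors per group).
healthy_platelets_per_cell = [1.8, 2.1, 1.6]
diabetic_platelets_per_cell = [2.0, 2.4, 1.9]

# Two-sample Student's t-test between groups; p < 0.05 taken as significant.
t_stat, p_value = stats.ttest_ind(healthy_platelets_per_cell, diabetic_platelets_per_cell)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```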


RESULTS

Figure 1. (A) Example image showing platelets (red) bound to cell bodies (body: green, nuclei: blue). The number of bound platelets per cell was not significantly different (N.S.) between healthy and diabetic ADSCs.

Figure 2. Zymogram showing fibrin degradation bands from healthy ADSC conditioned media at 38 and 31 kDa that are not present with diabetic ADSCs.

ADSCs obtained from healthy and diabetic donors showed no difference in the number of bound platelets per cell (Figure 1), and in either case the platelet binding was lower than when human SMCs (i.e., our positive control; data not shown) were used. However, ADSCs obtained from diabetic donors displayed a reduced fibrinolytic ability compared to those obtained from healthy donors, with a 1 week incubation time point being determined as optimal. Earlier time points (i.e., 1, 3, and 5 days) showed similar results but with less prominent bands. The decreased diabetic fibrinolytic activity is particularly shown by a 31 kDa band (likely urokinase plasminogen activator) and a 38 kDa band (likely plasminogen) developing on zymograms from healthy ADSC but not from diabetic ADSC conditioned media (Figure 2). However, diabetic ADSCs do possess some fibrinolytic activity, noted by bands present at 83 and 88 kDa (likely plasminogen; data not shown) similarly to healthy ADSCs. Bands were confirmed not to be due to the presence of serum within conditioned media, which can contain fibrinolytic factors, by utilizing serum-free media. DISCUSSION The lack of statistical significance in the number of platelets bound to diabetic and healthy ADSCs, but the apparently higher fibrinolytic activity in healthy ADSCs, offers a potential mechanistic explanation for the reduced patency seen during in vivo testing with diabetic ADSC-based TEBVs1. Cellular incorporation within TEBVs is often utilized to increase patency3 and, based on this study, this benefit seems to occur through the ability of those cells to efficiently break down thrombogenic material, such as fibrin. As diabetic ADSCs are unable to successfully perform this task, further exploration into mechanisms to enhance or supplement diabetic ADSCs will be imperative for the clinical translation of TEBVs. CONCLUSIONS Human ADSCs from healthy and diabetic donors show no difference in platelet adhesion, but diabetic ADSCs have a reduced fibrinolytic ability based on zymography of their secreted factors.

REFERENCES 1. Krawiec et al., "A Cautionary Tale for Autologous Stem Cell-Based Vascular Tissue Engineering," International Society of Applied Cardiovascular Biology (ISACB) 14th Biennial Meeting, Cleveland, OH, April 2014. 2. Ahmann et al. "Fibrin degradation enhances vascular smooth muscle cell proliferation and matrix deposition in fibrin-based tissue constructs fabricated in vitro." Tissue Engineering Part A 16.10 (2010): 3261-3270. 3. Nieponice et al. "In vivo assessment of a tissue-engineered vascular graft combining a biodegradable elastomeric scaffold and muscle-derived stem cells in a rat model." Tissue Engineering Part A 16.4 (2010): 1215-1223. ACKNOWLEDGEMENTS This work was supported by the NIH (R21 #EB016138), the AHA (#12PRE12050163), and the Swanson School of Engineering at the University of Pittsburgh.


VERIFICATION OF ALEXA FLUOR 633 BINDING TO ELASTIC FIBERS Melissa R. Smith, Justin S. Weinbaum Vascular ECM Dynamics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: MRS137@pitt.edu, Web: http://www.engineering.pitt.edu/Justin_Weinbaum/ INTRODUCTION Abnormalities of elastic fibers and their components have been shown to cause a number of heritable diseases, including Marfan syndrome, Williams-Beuren syndrome, cutis laxa, and many others. Though these diseases exhibit a variety of characteristics and phenotypes, cardiovascular disease involving aortic aneurysms, dissections, stenosis or vascular tortuosity is common to most of them [1]. Elastic fibers are critical components of the extracellular matrix of many tissues, including larger arteries, skin, lungs, and ligaments. Their main function is to allow these tissues to stretch and recoil without lasting damage, though they also help to regulate growth factors such as transforming growth factor β (TGFβ), which controls vascular morphogenesis through smooth muscle cell differentiation and matrix synthesis. Elastic fibers are composed of an inner elastin core surrounded by fibrillin microfibrils, and are produced primarily during development. The protein elastin accounts for about 90% of the composition of elastic fibers, though the proteins fibrillin-1 and -2, and MAGP-1 are also important components of microfibrils [1]. Methods exist to detect these proteins in excised tissue, but currently elastic fibers cannot be detected non-destructively in vivo. An in vivo method to detect elastic fibers and any abnormalities they might exhibit would be very beneficial to the further study and possible treatment of elastic fiber diseases because it would allow doctors to closely monitor the progression of a disease. In their research on neurovascular coupling, Shen et al. demonstrated that the Alexa Fluor 633 dye can be safely used in vivo in cats, mice, and rats to bind to elastic fibers in artery walls [2]. The goal of this study is to

determine which component of elastic fibers Alexa Fluor 633 binds to, for future use in studying elastic fiber diseases. METHODS In the first experiment conducted, adult human smooth muscle cells (hSMC) were cultured in a base medium of DMEM/F12 with 10% fetal bovine serum and 1% penicillin/streptomycin on glass coverslips. Coverslips were harvested at 2, 4, 6, 9, and 11 days after confluence; some were pre-dyed with Alexa Fluor 633 (2 μM), and all were immediately fixed with methanol. Coverslips were incubated with primary antibodies for MAGP-1 (rabbit polyclonal, EMD Millipore) or elastin (BA-4 mouse monoclonal, Sigma). The cell nuclei, elastin, and microfibrils of all coverslips were then visualized with fluorescence using Hoechst 33342 (5 μg/mL), goat anti-mouse Cy3, and goat anti-rabbit FITC, respectively. In the second experiment conducted, adult hSMCs were cultured with base medium on glass coverslips. Once the coverslips reached confluence, they were split into 4 groups and treated according to the following scheme, in order to maximize the chance of elastin production: group 1 was treated with 1.5 mL base medium, group 2 was treated with 1.5 mL base medium + TGFβ1 (1 ng/mL), group 3 was treated with 0.5 mL base medium + 1 mL adipose-derived stem cell (ADSC) base media, and group 4 was treated with 0.5 mL base medium and 1 mL ADSC conditioned media. Coverslips were harvested from each group at 7, 12, and 14 days past confluence and immediately fixed with methanol. All were stained for MAGP-1 and elastin as described above.


RESULTS When the coverslips from the first experiment were imaged with a Nikon E800 microscope running NIS Elements imaging software, no immuno-staining for elastin was observed. The MAGP-1 marker did fluoresce for both sets of coverslips, and the Alexa Fluor 633 did fluoresce on the cover slips pre-dyed with it. There appears to be co-localization of the two, as exhibited in Figures 1 and 2.

Figure 1: Image of cover slip from first experiment pre-dyed with Alexa Fluor 633, harvested 9 days past confluence. Green fluorescent marker binds to MAGP-1 of microfibrils.

Figure 2: Image of cover slip from first experiment pre-dyed with Alexa Fluor 633, harvested 9 days past confluence. Alexa Fluor 633 fluoresces at a wavelength in the red spectrum.

The coverslips from the second experiment also did not show any significant fluorescence from the Cy3 markers that would indicate elastin. MAGP-1 staining did not show any significant difference between the different treatment groups over the time period analyzed. DISCUSSION From the lack of elastin staining in our images, it is reasonable to conclude that the hSMCs used did not produce any elastin, which is consistent with the use of adult cells in this study, as most elastin production occurs during development. However, there is a strong correlation between the patterns seen for MAGP-1 and the Alexa Fluor 633 dye, suggesting that the dye binds either to MAGP-1 or to another component of fibrillin microfibrils generally located near MAGP-1 in the microfibril. FUTURE WORK Further work is needed to determine to which component of elastic fibers Alexa Fluor 633 binds. Cell types should be utilized that will actually produce elastin, such as lung fibroblasts, and the experiment should be done with a comprehensive set of antibodies that bind to fibrillin-1 and -2 as well as elastin and MAGP-1 [1]. In addition, the validity of this study would benefit from a quantitative way to analyze parameters of microfibril structure and orientation using image analysis tools available in Dr. David Vorp's Vascular Bioengineering Laboratory. REFERENCES 1. A.K. Baldwin, A. Simpson, R. Steer, S.A. Cain, C.M. Kielty. Elastic fibres in health and disease. Expert Rev. Mol. Med. Vol. 25, e8, August 2013, doi:10.1017/erm.2013.9. 2. Z. Shen, Z. Lu, P.Y. Chhatbar, P. O'Herron, P. Kara. An artery-specific fluorescent dye for studying neurovascular coupling. Nat. Methods. 2012;9:273-276.

ACKNOWLEDGMENTS Funding for this research was provided by the University of Pittsburgh’s Swanson School of Engineering and the Office of the Provost.


Differentiation of Perivascular Progenitor Cells and Their Role in Neovascularization Bradley W. Ellis, Benjamin R. Green, Jennifer C. Hill, Vera S. Donnenberg, Thomas G. Gleason, and Julie A. Phillippi Thoracic Aortic Research Laboratory, McGowan Institute of Regenerative Medicine University of Pittsburgh, PA, USA Email: bwe4@pitt.edu INTRODUCTION Bicuspid aortic valve (BAV) is a congenital heart malformation that occurs in 1%-2% of the general population.1 BAV patients are at an increased risk of thoracic aortic aneurysm (TAA), which can lead to an aortic catastrophe such as aortic dissection or rupture.2,3 Surgical intervention is commonly recommended when the aneurysm reaches a maximum orthogonal diameter between 5.5 cm and 6 cm.4 Though aortic surgery is becoming increasingly safe,4 it is still imperative to find a less invasive method of repairing TAAs and preventing rupture. Despite the high prevalence of BAV, little is known about the underlying mechanisms that lead to the associated complications. It has been suggested that tissue degeneration of the aorta caused by inefficient artery repair plays an important role in the development of TAA.5 In order to more effectively treat TAA, it is pertinent to define the governing pathways of this tissue degeneration. Progenitor cells have been shown to play an important role in tissue repair throughout the human body, including vessel repair.5 In previous studies, isolated pericytes located in small blood vessels have been shown to possess progenitor cell characteristics.6,7 The vasa vasorum, microvessels located in the adventitia of larger vessels, supplies blood and nutrients to larger vessels such as the aorta, and can be considered analogous to the small blood vessels previously described. Preliminary studies by our group showed that the vasa vasorum include pericyte cells.8 We hypothesize that vasa vasorum serve as a stem cell niche, harboring cells that exhibit the potential to differentiate into functionally relevant blood vessels. In this study we explored the differentiation capabilities of primary pericytes, and examined their ability to form vascular networks with human endothelial cells in vitro. MATERIALS AND METHODS Patient Enrollment and Tissue Collection To investigate the biological mechanisms for BAV patients suffering from TAA (BAV-TAA), we actively maintain a tissue bank of prospectively collected aortic specimens from patients who are undergoing elective surgery for ascending aorta/aortic valve replacement. Patient specimens selected for this study were within 10 years of age (50-59) and within 5 mm of aortic diameter (50-55 mm). The surgeon harvested specimens during elective surgery with informed patient consent and Institutional Review Board approval. Pericyte Isolation Pericytes were isolated as previously described.8 Briefly, the aortic adventitia was dissected away from the media and intima, and enzymatically digested to obtain a cell suspension. The cells were cultivated in basal growth medium (Dulbecco's Modified Eagle media, 10% FBS, 1% penicillin/streptomycin (Invitrogen) (DMEM)) for ~2 weeks until the population reached >1x106 cells. Cells expressing the antigenic profile of pericytes (CD146+/CD90+/CD56-/CD45-/CD34-/CD31-) were sorted through the use of Fluorescence-Activated Cell Sorting (FACS). Differentiation Assays Pericytes were cultured in basal growth conditions to confluency.
Cells were cultured in the presence or absence of VEGF (50 ng/mL) to encourage endothelial or TGF-beta (2 ng/mL)/PDGF (50 ng/mL) to encourage smooth muscle (SMC)

cell lineage progression for 14 days in commercial endothelial (Cell Applications Inc. San Diego, CA) or smooth muscle (Cell Applications Inc. San Diego, CA) cell basal media, respectively. Media was replenished on days 0, 2, 4, 8, and 10. Images were captured using phase contrast microscopy on a Nikon TE-2000-E inverted microscope (Nikon Corporation, Melville, NY) at 0, 2, 4, 8, 10, and 14 days of treatment using a CoolSNAP ES2 Monochrome 1394x1040 Camera (Photometrics, Tucson, AZ) and NIS Elements Software 3.2 (Nikon). After the 14 days of treatment, RNA was isolated and quantitative Polymerase Chain Reaction (qPCR) assessed gene expression of endothelial markers (von Willebrand Factor (VWF), an adhesion protein found in endothelial cells, and CD31, a vessel growth protein) and smooth muscle markers (alpha smooth muscle actin (SMA), a contractile protein specific to SMC, and calponin, a calcium binding protein found in SMC). A two-tailed Student's t-test was then performed to determine differences in gene expression for each marker and a p<0.05 was considered significant. In addition to gene expression analysis, the cultures were analyzed for lineage-specific markers using immunocytochemistry (ICC) to detect expression of endothelial markers (VWF and CD31) and smooth muscle markers (SMA, calponin, and myosin heavy chain (MHC), a contractile protein seen in SMC). In vitro endothelial branching assay Isolated human primary aortic adventitia-derived pericytes and commercially obtained human pulmonary endothelial cells (HPAECs) (Lonza, Basel, Switzerland) were co-cultured either on (2D, 25x103 cells/cell type/well) or in (3D, 125x103 cells/cell type/well) 300 µL pre-gelled GFR-Matrigel (Corning, Tewksbury, Massachusetts) substrate in 24-well tissue culture plates. Cells were maintained at a subconfluent state prior to the experiment. Single cultures of either pericytes or HPAECs (50 x 103 cells/well for 2D, 25 x 104 cells/well for 3D) were seeded as controls. Cells were then incubated for 8 days at 37°C. Cells on Matrigel were visualized using the imaging procedure previously described. RESULTS Growth Factor Treated Pericytes Show Changes in Gene Expression and Morphology Pericytes cultured under SMC lineage-specific conditions developed a more spindle-like morphology indicative of SMCs (Figure 1C). Gene expression analysis showed an increase in calponin expression (Figure 2) and an increased trend in α-SMA expression (13.4±6.9 fold, p=0.28). The ICC showed a change in the distribution of α-SMA, and an increase in the number of calponin- and MHC-expressing cells (Figure 3). Figure 1. (A) Untreated pericytes after 14 days of treatment, (B) Endothelial differentiation treated pericytes after 14 days of treatment, (C) SMC differentiation treated pericytes after 14 days of treatment, (D) confirmed endothelial cells, (E) confirmed SMCs. All scale bars are 100 µm. Figure 2. Change in fold of calponin expression of SMC treated cells compared to untreated pericytes. Standard error bars are presented. SMC differentiated pericytes displayed an increase in calponin expression (p<0.05).
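To illustrate the gene-expression comparison described above, the sketch below uses the common 2^(-ΔΔCt) fold-change calculation followed by a two-tailed t-test; the abstract does not state the exact quantification method, so the approach and all Ct values here are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical Ct values (target gene = calponin, reference = a housekeeping gene)
# for untreated pericytes and SMC-treated pericytes; three replicates each.
ct_target_untreated = np.array([26.0, 25.8, 26.2])
ct_ref_untreated    = np.array([18.0, 18.1, 17.9])
ct_target_treated   = np.array([23.5, 23.8, 23.6])
ct_ref_treated      = np.array([18.0, 17.9, 18.1])

# Assumed 2^-ddCt fold change relative to untreated cells.
d_ct_untreated = ct_target_untreated - ct_ref_untreated
d_ct_treated = ct_target_treated - ct_ref_treated
fold_change = 2.0 ** -(d_ct_treated.mean() - d_ct_untreated.mean())

# Two-tailed Student's t-test on the delta-Ct values, with p < 0.05 taken as significant.
t_stat, p_value = stats.ttest_ind(d_ct_treated, d_ct_untreated)
print(f"fold change ~ {fold_change:.1f}, p = {p_value:.4f}")
```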


Figure 3. (A) Non-treated pericytes stained for SMA (red) and calponin (green). (B) SMC treated pericytes stained for SMA (red) and calponin (green). (C) Endothelial treated pericytes stained for SMA (red) and calponin (green). (D) Non-treated pericytes stained for MHC (red) and calponin (green). (E) SMC treated pericytes stained for MHC (red) and calponin (green). (F) Non-treated pericytes stained for VWF (green) and CD31 (red). (G) Endothelial treated pericytes stained for VWF (green) and CD31 (red). Scale bar=

Pericytes cultured under endothelial lineage-specific conditions showed no change in VWF expression. However, there was an increase in CD31 expression (p<0.01) (Figure 4). There was no observed change in cell morphology for the endothelial differentiated pericytes (Figure 1B). There was a decrease in SMA and an increase in CD31 in over 95% of the pericytes observed through ICC (Figure 3). Figure 4. Change in fold of CD31 expression of endothelial treated cells compared to untreated pericytes. Standard error bars are presented. Endothelial differentiated pericytes displayed an increase in CD31 expression (p<0.01).

Pericytes vascularize on/in Matrigel when cultured with and without HPAECs Figure 5. (A) 2D HPAECs on Matrigel after 2 days of incubation. (B) 3D Pericytes on Matrigel 3 days after incubation. (C) 2D Pericytes on Matrigel 2 days after incubation. (D) 2D 1:1 mixture of pericytes and HPAECs on Matrigel after 2 days of incubation. All scale bars are 100 µm.

For HPAECs cultured on Matrigel, cells displayed endothelial tube branching within the first 24 hours of incubation (Figure 5A). Pericytes alone and the pericyte/HPAEC co-cultures formed spontaneous spheroids within 4-6 hr of seeding and appeared to organize alongside HPAECs. HPAECs showed more extensive branching after 2 days when compared to the pericytes alone and pericyte/HPAEC co-cultures (Figures 5B and 5C). Cells cultured within Matrigel as a 3D gel showed less initial branching than the 2D culture, but began to show tube-like structures after 3 days in culture (Figure 5D). DISCUSSION AND CONCLUSIONS The results of the study demonstrate that pericytes increased expression of smooth muscle and endothelial lineage-specific markers when cultured in defined medium, and exhibited immature endothelial-like sprouting when cultured by themselves or organized with endothelial cells on Matrigel substrates. The increase in CD31 gene expression, as well as the increase in CD31 seen in ICC imaging, strongly supports the hypothesis that aortic pericyte cells demonstrate the ability to differentiate into functionally relevant blood vessel cell lineages such as endothelial cells. CD31 is universally expressed in endothelial cells and is considered the classic marker for this lineage. However, phase contrast imaging showed that pericytes cultured under endothelial lineage conditions did not morphologically resemble confirmed

endothelial cells. Furthermore, the lack of change in VWF expression could be due to diversity in vWF expression among endothelial cells8. The spindle-like morphology, the up-regulation of calponin gene expression, the trend of increased α-SMA expression, and the increase in the number of calponin and SM-MHC+ cells following SMC-defined medium support our hypothesis that adventitia-derived aortic pericyte cells are capable of differentiating into more mature cells of the SMC lineage. In future studies, an optimal time of treatment for specific cell lines could be found to more conclusively show the differentiation capabilities of pericytes. The Matrigel assays revealed that both pericytes and HPAECs have the ability to form immature vascular networks. The ability of pericytes to spontaneously form spheroids on Matrigel appears to lead to endothelial-like sprouting and organization with endothelial cells. Ongoing experiments are focused on optimizing ratios of co-cultures and examining the impact of putative differences in normal versus diseased microenvironments on endothelial branching. Future studies will also examine long-term stabilization of endothelial networks by pericytes and consider the influence of disease on these processes. In total, our data indicate that pericytes possess the potential to differentiate into more mature cells of the smooth muscle lineage, can organize with branching endothelial cells, and display endothelial-like sprouting from spontaneously-formed spheroid cultures. Ongoing studies in our group are focused on ascertaining differences in differentiation capabilities between diseased and non-diseased pericytes as well as between pericytes isolated from patients with tricuspid aortic valve and BAV. REFERENCES 1. Ward C. Clinical significance of the bicuspid aortic valve. Heart 2000; 83:81-5 2. Gleason TG. Heritable disorders predisposing to aortic dissection. Thoracic Cardiovascular Surgery. 2005; 17:274-281 3. Branchetti E, et al. Antioxidant enzymes reduce DNA damage and early activation of valvular interstitial cells in aortic valve sclerosis. Arteriosclerosis, Thrombosis, and Vascular Biology. 2013; 33:66-74 4. Danyi P, et al. Medical therapy of thoracic aortic aneurysms: are we there yet? Circulation. 2011; 124:1469-1476 5. Shen Y, et al. Stem cells in thoracic aortic aneurysms and dissections: potential contributors to aortic repair. The Annals of Thoracic Surgery. 2012; 93:1524-1533 6. Corselli M, et al. Identification of perivascular mesenchymal stromal/stem cells by flow cytometry. Cytometry. 2013; 83:714-720 7. Crisan M, et al. A perivascular origin for mesenchymal stem cells in multiple human organs. Cell Stem Cell. 2008; 3:301-313 8. Green BR, et al. Phenotypic diversity of perivascular progenitor cells from human aorta. ISACB Biennial Meeting. April 2-5, 2014; Conference Proceedings ACKNOWLEDGEMENTS Research reported in this publication was supported by the University of Pittsburgh Swanson School of Engineering (BWE), the Competitive Medical Research Fund of the UPMC Health System (JAP), and the National Heart, Lung and Blood Institute of the National Institutes of Health under Award Number R01HL109132 (TGG and JAP). The authors acknowledge the assistance of Kristin Konopka and Julie Schreiber for IRB protocols and obtaining informed patient consent. We are grateful to our surgical colleagues Drs.
Christian Bermudez, Jay Bhama, Forozan Navid, and Lawrence Wei of the Department of Cardiothoracic Surgery, University of Pittsburgh Medical Center for assistance with aortic specimen acquisition.


RIVERS AS POLITICAL BOUNDARIES: PERU AND ITS DYNAMIC BORDERS Escobar, Catalina Hydrology Laboratory, Department of Civil and Environmental Engineering, University of Pittsburgh, Pittsburgh, PA, USA Email: cae50@pitt.edu INTRODUCTION Geopolitical boundaries are fundamental in the creation of any nation. Without such delineations, different governments would remain in an inevitably chaotic state as each tried to claim ownership of natural features and resources. A blurred definition of a frontier can lead to uncertainty, tension, and ultimately, disputes between countries while each attempts to get a foothold on more land or access to natural resources. If the rigidity of national borders is so important in keeping international relations amicable, one is left to wonder what happens when a boundary follows an inherently dynamic natural feature such as a river. From a political standpoint, it is important to understand the implications of a dynamic boundary and the social repercussions that result from it. From a scientific standpoint, it is fascinating to use technology to explore the processes that define and shape a river and its unique migratory patterns. Rivers, although fundamentally dynamic, have been chosen as political boundaries since the beginning of colonization for several reasons. Such divisions were preferred namely for their defensive capabilities and military benefits, and because they were often the first features mapped out by explorers. Furthermore, rivers were indisputable boundaries that did not require boundary pillars or people to guard them. However, it is important to understand the complexities of a river as a boundary. All rivers change over time through processes such as accretion, deposition, cut-off, or avulsion, rendering a political boundary subject to dispute. Depending upon the flow, size, and surrounding land, a river will migrate differently than others. As these natural features migrate, one country loses land while another gains land leading to tension between legal rigidity and fluid dynamism. This in turn can manifest in social disruption due to cultural differences, political upheaval, or conflict risk as a result of scarce water resources. The purpose of this research is to assess the temporal and spatial variability of the political boundaries of Peru that follow rivers. METHODS Peru shares borders with Colombia, Brazil, Bolivia, Chile, and Ecuador. A large part of its northern border with Colombia follows the Putumayo River and later the Amazon River. Part of its eastern border with Brazil follows the Yavari River and later the Yaquirana River. These rivers are natural features used as political boundaries yet they differ in how each migrates. By means of a spatial and temporal analysis of satellite images using ArcGIS it was possible to characterize the planform morphodynamics and the erosion and deposition areas for the Putumayo River, the portion of the Amazon River that is part of the Peruvian boundary, the Yavari River, and the Yaquirana River. The erosion and deposition areas were related to land distribution among Peru, Colombia, and Brazil. Examination of the

Digital Elevation Model (DEM) shows how the altitude of the surrounding land affects the watersheds and thus allows for a better understanding of the dynamics of the rivers.

Figure 1: This map, created using ArcGIS, shows Peru and its surrounding neighbors. The two dynamic boundaries of Peru studied in this paper include the Peruvian frontier with Colombia and Brazil. These dynamic borders follow the Putumayo, Amazon, Yavari, and Yaquirana rivers. The direction of flow is depicted with a black arrow. The Digital Elevation Model (DEM) is set as the background.

Before beginning the analysis of the rivers, it was necessary to download all of the satellite images. The images had to be taken in the dry season in order to have accurate erosion and deposition measurements. For the Putumayo River, satellite images from 1990, 1994, 1998, 2002, 2006, 2010, and 2013 were used. For the Yavari and Yaquirana Rivers, images from 1985 and 2013 were used. The first step in extracting data from satellite images was to digitize each river. With the use of ArcGIS tools, it was then possible to calculate the migration of the river between time stages.
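A minimal sketch of the erosion and deposition calculation between two digitized time stages is shown below; it assumes the classified scenes have already been converted to binary water masks on a common grid (in the actual study this step was carried out with ArcGIS tools), and the array values and pixel size are hypothetical.

```python
import numpy as np

# Hypothetical binary water masks (1 = water, 0 = land) for two dry-season dates,
# co-registered on the same grid; in the study these come from classified satellite scenes.
water_t1 = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 1, 1]])
water_t2 = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])

pixel_area_km2 = 0.0009  # 30 m Landsat pixel, assumed

erosion = (water_t1 == 0) & (water_t2 == 1)     # land at t1 that is water at t2 (bank eroded)
deposition = (water_t1 == 1) & (water_t2 == 0)  # water at t1 that is land at t2 (bar deposited)

print(f"eroded area:    {erosion.sum() * pixel_area_km2:.4f} km^2")
print(f"deposited area: {deposition.sum() * pixel_area_km2:.4f} km^2")
```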


Figure 2: This image shows erosion and deposition of one anabranching structure of the Putumayo River from 1990 to 1994. Here blue is water, green is deposition, and red is erosion.

RESULTS Ultimately, this research combines data regarding the morphodynamics of these rivers with historical insight on border treaties in order to gain a comprehensive understanding of the political implications and social repercussions of dynamic boundaries. From simple examination of the satellite images, it is evident that the Putumayo and Amazon rivers are anabranching while the Yavari and Yaquirana rivers are meandering.

Figure 3: This is a satellite image of a part of the Putumayo River. This river is multi-thread, or anabranching.

Figure 4: This is a satellite image of part of the Yavari River. This river is meandering. Migration is evident through scroll bars and oxbows.

Through the erosion and deposition analysis, it can be concluded that the Putumayo and Amazon rivers show high migration with the periodic development of islands. Although the Yavari and Yaquirana rivers exhibit extremely slow migration, the migration is evident through the multiple oxbows and scroll bars along the river banks. DISCUSSION For further analysis, the most current boundary delineation following rivers between Peru and its surrounding neighbors was needed. Two sources were utilized: the Peruvian Navy and Google Earth. Comparing the boundary shown on Google Earth and the shapefile of Peru obtained from the Navy, two types of discrepancies were found: shifts in entire sections of the river and differing choices of which branch to use as the boundary. The former dissimilarity could result from a faulty coordinate system used in Google Earth, and the latter from a mistake in not following the border treaties or from a failure to update the border. Many border treaties specify that the boundary should follow the thalweg, or deepest channel, of a river; however, the depths of channels change as sediment is transported. Because of the dynamism of the rivers, the research was limited, as there was no way to find the depths of all the channels of the rivers for the past two decades.

Figure 5: This is a satellite image of the Putumayo River. Here the purple line is the official border delineation acquired from the Peruvian Navy. The red line is the boundary line shown on Google Earth. Each passes through a different channel depending upon which is treated as the main channel.

Further research needs to be conducted to determine which delineation is the true and most current boundary between Peru and Colombia and between Peru and Brazil. CONCLUSION Peru has several dynamic boundaries with its surrounding countries. The rivers are classified as meandering and anabranching. Based on this analysis, it seems that the Peru-Colombia border maintains a higher variability of planform changes, while the Peru-Brazil border is more stable, with important migration rates observed only in its downstream portion. This analysis is not only important for boundary delineation, but also for social, economic, and ecological divisions. REFERENCES L. H. Woolsey. "The Leticia Dispute between Colombia and Peru." The American Journal of International Law 27.2 (1933): 317-24. Web. 29 Apr. 2014. Donaldson, John W. "Where Rivers and Boundaries Meet: Building the International River Boundaries Dataset." (2009): 629-44. Web. Donaldson, John W. "Paradox of the Moving Boundary: Legal Heredity of River Accretion and Avulsion." 4.2 (2011): n. pag. Web. ACKNOWLEDGEMENTS Jorge D. Abad, PhD, Department of Civil and Environmental Engineering, University of Pittsburgh, Pittsburgh, PA. Service of Hydrography and Navigation of the Amazon, Peruvian Navy, Lima, Peru.


THE BIRTHPLACE OF THE AMAZON RIVER, THE CONFLUENCE BETWEEN THE MARANON AND UCAYALI RIVERS Ortals, C.1, Abad, J. D.1, Garcia, K.2, Jorge Paredes3, Jorge Vizcarra3
1 Department of Civil and Environmental Engineering, University of Pittsburgh, USA
2 Department of Environmental Management, National University of the Peruvian Amazon, Peru
3 Service of Hydrography and Navigation of the Amazon, Peruvian Navy, Peru

To date, there has not been sufficient research conducted on river confluences in large river systems such as the Amazon River. The world's waterways support millions of people on a daily basis. Rivers provide people with a vast number of resources, from transportation to food, clean water, energy, and more. With many water-related projects now underway, one must understand how a river works (a baseline) before attempting to fix or change it. Modifying rivers, from building dams to dredging, can be beneficial to humans but detrimental to the rivers themselves. For example, near Leticia, Colombia, there was dredging in order to make way for a larger dock and larger vessels. However, the consequences were unforeseen: this change in the river accelerated its migration, rendering the port a poor investment. With the research conducted this summer, the hope is to gain an understanding of how rivers function naturally in order to improve engineering projects. Research on river confluences to date has been limited to the very small scales of single-thread channels; an investigation of a confluence at this scale has not previously been conducted. The Amazon River is the largest river in the world with regard to water discharge. This waterway supports the economy and livelihoods of hundreds of thousands of people, yet much is still unknown about this large river. The confluence of the Amazon River is also very important with regard to biodiversity. The region located beside the confluence is Pacaya-Samiria, which occupies the Ucamara depression (Figure 1). This area is home to a large variety of flora and fauna. This mega-diverse region was created in part by the dynamic rivers: the migration of the rivers and the transportation of sediments to new areas create an ideal habitat for wildlife. To realize this project, an analysis of the planform morphodynamics of the three rivers (Maranon, Ucayali and Amazon) was conducted. This analysis consists of calculating the migration of the centerline and the deposition and erosion for each river. These

calculations will be done near the confluence, every 5 years, from 1985 to 2010. This analysis shows that the system as a whole is dynamic, but that each river comprising the system has its own dynamics. A unique characteristic of this confluence is that it is formed by a quasi-meandering river (high migration) and an anabranching river (low migration). An analysis of a DEM (Digital Elevation Model) and geology maps shows the river basin as well as scroll bars. Using these two types of data, boundaries were selected for the investigation. The confluence point is controlled by the dynamics of the Ucayali River. This controlling factor was determined by an analysis of the movement of the rivers. In order to carry this out, LandSat imagery of the confluence provided by NASA was reclassified using ArcGIS. Using conditional analyses of the temporal images, it is possible to determine the deposition and erosion of the area, and thus the controlling river.
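To illustrate the centerline migration measurement mentioned above, the sketch below computes, for each vertex of an earlier digitized centerline, the distance to the nearest vertex of a later centerline and reports the mean and maximum migration; the coordinates are hypothetical and the nearest-vertex distance is only a simple proxy for the GIS-based measurement actually used.

```python
import numpy as np

# Hypothetical digitized centerlines (x, y in meters) for two dates.
centerline_1985 = np.array([[0.0, 0.0], [100.0, 10.0], [200.0, 25.0], [300.0, 30.0]])
centerline_2010 = np.array([[0.0, 40.0], [100.0, 55.0], [200.0, 60.0], [300.0, 80.0]])

# For each 1985 vertex, distance to the nearest 2010 vertex (a reasonable proxy for
# point-to-line distance when the later centerline is densely sampled).
diffs = centerline_1985[:, None, :] - centerline_2010[None, :, :]
nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

print(f"mean centerline migration: {nearest.mean():.1f} m")
print(f"max centerline migration:  {nearest.max():.1f} m")
```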

Figure 1: LandSat image of the Amazon River Confluence in the year 1985, with the Marañón River labeled and the flow direction indicated. This figure depicts the general area of Pacaya-Samiria. This imagery is used to collect temporal and spatial data.


Figure 2: (a) Geologic map of the confluence layered with the DEM between the Maranon and Ucayali rivers. The map also shows the erosion and depositional patterns between 1985 and 2012. The brown section is alluvial deposits, or softer material; within this zone, the rivers have freedom to move. The blue region (Pururo Formation) and the dark purple (Ucamara Formation) are stronger materials, limiting the movement of the river. (b) The confluence between these rivers is dynamic as shown by the scroll bars, thus the confluence point is movable in the geological time scale.

After reviewing information relevant to the above topics, measurements of the flow structure (using an ADCP, Acoustic Doppler Current Profiler) and bathymetry (using single beam) were carried out by Dr. Abad, SHNA (Service of Hydrography and Navigation of the Amazon, Peruvian Navy), and me during July and August 2014. These data will then be processed to analyze the bed type and flow structure of the river confluence. The data will show the mixing zone (bathymetric, hydrodynamic, and sediment transport) of the two rivers at the confluence region; this physical phenomenon arises when two different flows, with varying velocities, sediment concentrations, and temperatures, combine. Understanding how the velocity and sediment concentrations change at different cross sections will also provide further insight into how the river will migrate in the future, how the modulation exerted by the Ucayali River is a dominant factor for the location of the confluence point, and how an understanding of the confluence point is important for the biodiversity of the Pacaya-Samiria region. The above data can be further used in modeling once processed. Using the data, the flow structure at the confluence will be simulated using Computational Fluid Dynamics (CFD) models for extrapolating the flow and morphodynamic conditions. This understanding of the mean and turbulent flow at the confluence will be important for the transfer of fauna between the Maranon and Ucayali rivers. In addition, the information gathered will result in an article for the journal Geomorphology describing the dynamics of the confluence point over the geological time scale, as well as the mixing (bed, flow, sediment) region between these two large rivers.


A LARGE SCALE STUDY OF HUMAN RIGHT VENTRICLE GEOMETRY AND FUNCTION RELATING TO PULMONARY HYPERTENSION Erin M. Sarosi, John C. Brigham, Marc A. Simon, and Timothy C. Wong UPMC Cardiology Department, Department of Civil Engineering University of Pittsburgh, PA, USA Email: ems188@pitt.edu INTRODUCTION Pulmonary hypertension (PH), which is marked by high pressure in the pulmonary arteries (a mean pulmonary artery pressure (mPAP) ≥ 25 mmHg at rest, as tested through a right heart catheterization (RHC)), is a severe cardiovascular disease [1]. In particular, pulmonary arterial hypertension (PAH) impairs the blood flow to the lungs and can lead to damage to the right side of the heart, particularly the right ventricle (RV). Initially, the RV attempts to compensate for this damage by increasing muscle mass, which is known as adaptive hypertrophy. With a certain amount of sustained pressure, the RV can no longer adapt and is forced into a stage of dilatation and contractile dysfunction, which can eventually lead to heart failure, the major cause of death [2]. It is thus hypothesized that the size and shape of the RV are an indicator of how severe the disease is in terms of cardiac function. By analyzing the geometric properties and hemodynamics from a large dataset of patients at various states of pathology, patterns can be determined relating to indicators of RV functional changes. The RV has been less frequently studied than the left ventricle (LV). The RV's irregular crescent shape and thin free wall make it difficult to measure characteristics such as volume and surface areas. Bellofiore et al. have been at the forefront of using both novel and established methods to measure RV function and to classify RV-based diseases, including pressure-volume loop analysis and RV wall stress analysis [1]. Other studies, such as Roeleveld et al., have utilized solely two-dimensional (2D) geometry of the RV to make conclusions about the heart's physiologic state [3]. Roeleveld's report in particular focused on the apparent flattening of the interventricular septum, which is characteristic of RV pressure overload. The focus of the present study is a shape analysis on both 2D and three-

dimensional (3D) geometry to find improved methods of classifying and diagnosing patients with PH. By using cardiac magnetic resonance imaging (CMRI), which is the gold standard for measuring RV volumes, 3D models were constructed to analyze the structural components of the RV [1]. METHODS A retrospective review of an institutional review board approved PH research protocol identified 25 subjects (10 female, mean age 56 years) who had undergone clinically indicated cardiac MRI and RHC within 1 day of each other and with diagnostic image quality. Cardiac MRI was performed on a 1.5 tesla system (Espree, Siemens; Erlangen, Germany), and included a short axis stack of steady state free precession gated cine images (6 mm slice thickness, 4 mm skip). The right ventricular endocardial borders at end systole (ES) and end diastole (ED) were traced on a dedicated workstation by a trained investigator, and subsequently transferred to publicly available image processing software (ITK Snap). Finally, a 3D image processing software (Simpleware) was used to create a 3D, volumetric surface mesh utilizing the end-diastolic and end-systolic traces, respectively, as shown in Figure 1a. RHC was performed according to routine clinical protocol.

Figure 1a. A 3D RV mesh in ES created in Simpleware. Figure 1b. Tracing the RV in ITK Snap from short axis images and creating a mask to be imported into Simpleware. The three green points have been manually selected to calculate curvature of the interventricular septum.


The curvature of the interventricular septum was defined using 3 manually selected points (anterior and posterior RV insertion points and mid septum) on the basal-most short-axis end-diastolic slice containing the papillary muscles, as shown in Figure 1b. Curvature was then calculated as described by Roeleveld et al. [3]. A circle was fit through these three points, and the curvature was found by computing the reciprocal of the radius of this circle. Curvature was defined positive if the septum bowed towards the RV. Volumes were normalized to the patients' body surface area (BSA) [4]. RESULTS Of the 25 patients, 16 had an mPAP ≥ 25 mmHg and thus were considered to have PH. Curvature of the interventricular septum was inversely related to mPAP (Figure 2a), indicating a flatter septum, or less bowing into the RV, with worse PH.
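The septal curvature computation described in the Methods above (a circle fit through the anterior insertion, mid-septum, and posterior insertion points, with curvature taken as the reciprocal of the circle's radius) can be sketched as follows; the point coordinates are hypothetical, and the sign convention (positive when the septum bows toward the RV) would be assigned from the anatomy and is not encoded here.

```python
import math

def septal_curvature(p1, p2, p3):
    """Curvature (1/radius) of the circle through three 2D points, using the
    circumradius formula R = a*b*c / (4 * triangle area)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the triangle area from the cross product of edge vectors.
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    area = abs(cross) / 2.0
    if area == 0.0:
        return 0.0  # collinear points: a perfectly flat septum has zero curvature
    radius = (a * b * c) / (4.0 * area)
    return 1.0 / radius

# Hypothetical points (mm): anterior insertion, mid-septum, posterior insertion.
print(f"curvature = {septal_curvature((0.0, 0.0), (25.0, 6.0), (50.0, 0.0)):.4f} 1/mm")
```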

Figure 2a. Plot of the septal curvature versus mPAP at both ED (blue) and ES (red) time stages.

RV volumes increased with increasing mPAP (Figure 2b). PH patients had lower RV ejection fraction, larger volumes, and lower curvature compared to non-PH patients (Table 1).

Figure 2b. Plot of normalized volume versus mPAP at both ED (blue) and ES (red) time stages.

Table 1. Average statistics for PH vs. non-PH patients
                                         Non-PH               PH
mPAP (mmHg)                              17.6 ± 4.9           40.5 ± 10.5
Ejection Fraction (%)                    58.1 ± 7.5           42.2 ± 12.3
Curvature (1/radius), ED                 0.01330 ± 0.00192    0.01240 ± 0.00418
Curvature (1/radius), ES                 0.01419 ± 0.00343    0.01216 ± 0.00330
Normalized Volume (mm³ × 10³/BSA), ED    78.0 ± 18.6          102.4 ± 24.2
Normalized Volume (mm³ × 10³/BSA), ES    32.6 ± 9.4           60.9 ± 24.8

DISCUSSION We found quantifiable flattening of the septum as mPAP increased, which is concurrent with the literature [3]. On average, the septum of patients with PH was 6.7% less curved than patients without PH during ED and 14.3% less curved during ES. However, there is definite heterogeneity of the data which is most likely due to variations in measurement. The trend between septal flattening and higher mPAP pressures is promising, and we hope that the signal will be clarified with the addition of more subject data. RV dilation was also seen with PH, as has been well documented. We have validated methods to quantify RV volumes and the geometry of the septum. Future plans involve adding more patients to the database to improve the significance of the tests. PV loops will be constructed to correlate geometric measurements to the overall efficiency and mechanics of the cardiac system. Regarding the 3D shape analysis, the RV meshes will be parameterized to fit a sphere. Then, similar to Fourier decomposition of sine waves, the newly mapped RVs will be decomposed into 3D modes. The goal of this project is to classify the shapes based on what percentage of each mode makes up their RV shape. Then the patients will be clustered based on patterns that fit each person's disease condition and personal traits. REFERENCES 1. Bellofiore et al. Annals of Biomedical Engineering 41.7, 1384-398, 2013. 2. Noordegraaf et al. European Respiratory Review 20.122, 243-53, 2011. 3. Roeleveld et al. Radiology 234.3, 710-17, 2005. 4. Maceira et al. European Heart Journal 27.23, 2879-888, 2006.


Role of Turbulent Flow in Generating Short Hydraulic Fractures with High Net Pressure in Slick Water Treatments Brandon C. Ames University of Pittsburgh; Department of Chemical and Petroleum Engineering 1249 Benedum Hall 3700 O’Hara Street Pittsburgh, PA 15261, USA bca13@pitt.edu INTRODUCTION

This paper provides an argument for considering turbulent flow as the primary propagation regime for hydraulic fracturing using slickwater in shale reservoirs. It shows that the tendency of models that assume laminar fluid flow to over-predict fracture length and under-predict net pressure can be corrected by instead recognizing that the flow regime is turbulent for high rate, water-driven hydraulic fractures.

APPLICATION The application is for hydraulic fracturing using high rate pumping of low viscosity fluids. In these cases it can be shown that the Reynolds number is sufficiently high to imply turbulent flow. The results include both a clarification of conditions under which turbulent flow should be considered and approximate solutions for predicting fracture length, width and net pressure.
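As a rough illustration of the flow-regime check implied above, the sketch below estimates a slot-flow Reynolds number for a blade-like fracture; the injection rate, fluid properties, fracture height, and transition threshold are illustrative assumptions rather than values taken from this work.

```python
# Illustrative estimate of the Reynolds number for flow inside a blade-like
# (PKN-type) fracture, treating the fracture as a narrow slot of height h.
# All numerical values are assumptions chosen for illustration only.

rho = 1000.0   # fluid density, kg/m^3 (slickwater ~ water)
mu = 2.0e-3    # dynamic viscosity, Pa*s (assumed slickwater value)
Q = 0.1        # injection rate into one fracture wing, m^3/s (assumed)
h = 30.0       # fracture height, m (assumed)

# For a slot of gap w and height h >> w: velocity v = Q/(w*h) and the hydraulic
# diameter is approximately 2w, so Re = rho*v*(2w)/mu = 2*rho*Q/(mu*h) and the
# (unknown) fracture width cancels out.
Re = 2.0 * rho * Q / (mu * h)

print(f"Reynolds number ~ {Re:.0f}")
# With these assumed values Re is on the order of a few thousand, above the commonly
# cited laminar-turbulent transition range (~2000-3000) for channel flow, which is
# the sense in which high-rate, low-viscosity treatments imply turbulent flow.
```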

Figure 1. Predicted fracture length (m) versus time (s) for the laminar model, the turbulent model, and the laminar model with adjusted leakoff.

RESULTS Using values similar to slickwater treatments in shale reservoirs, and assuming a contained fracture with blade-like (PKN) geometry, the laminar flow model overestimates fracture length by a factor of 2 and underestimates the width and net pressure by a factor of 2, compared to the turbulent flow model. Modelers that are attempting to history match length and pressure field data are often forced to increase the value of the Carter leakoff coefficient to reduce the predicted length to match what is observed. Figure 1 shows how adjusting the leakoff coefficient by a factor of 3 changes the predicted fracture length. However, as shown in Figures 2 and 3, increasing the leakoff coefficient causes the model to predict a lower net pressure and smaller fracture width, thereby further under-estimating the pressure compared to field data.

Figure 2. Predicted net pressure (Pa) versus time (s) for the laminar model, the turbulent model, and the laminar model with adjusted leakoff.


Figure 3. Predicted fracture width (m) versus time (s) for the laminar model, the turbulent model, and the laminar model with adjusted leakoff.

By incorporating turbulent flow into the design, length, width and net pressure can be modeled without increasing the leakoff coefficient. A simplified list of the governing equations that were used to solve the length, width and pressure estimators for the different models can be seen in Table 1.

Table 1: Governing Equations and Estimators. (The table lists the governing equations and the resulting length, width, and pressure estimators for the laminar, turbulent, and laminar adjusted-leakoff models; the equations themselves are not reproduced here.)

SIGNIFICANCE The predicted fracture lengths obtained from laminar flow-based models often don't match the field data from microseismic mapping unless the modeler chooses, for example, much higher values for the leakoff coefficient than are expected based on other fracture diagnostics. Consequently, this increase results in a model that generates a significantly lower pressure than what is measured. On the other hand, consistent with many observations, the turbulent flow model predicts fractures that are shorter, wider, and associated with a greater net pressure than the laminar model predictions. These results provide a physical explanation for inconsistencies that are often observed by industry. By recognizing this solution, more extensive models can be developed that would improve the efficiency of well treatments for the oil and gas industry operating with low viscosity fluids in shale formations. Acknowledgements This fellowship was supported by the Swanson School of Engineering and the Office of the Provost. This support, along with the mentorship of Dr. Andrew Bunger, is gratefully acknowledged.


OPEN-HOLE TENSION CAPACITY OF PULTRUDED GFRP HAVING STAGGERED HOLES Donald P. Cunningham, Kent A. Harries Department of Civil & Environmental Engineering, University of Pittsburgh, PA, USA Email: dpc31@pitt.edu INTRODUCTION The use of pultruded glass fiber reinforced polymer (GFRP) materials for structural applications is relatively new in the construction industry. As in any structural system, simple yet efficient connection methods are critical to maintaining the integrity of a structure. For isotropic materials, such as steel, bolted connections are often preferred for their efficiency and practicality in construction. However, due to the anisotropic and brittle nature of GFRP materials, stresses induced by bolts are redistributed differently, affecting assumed limit states. This work focuses on the open-hole tension capacity of pultruded GFRP. Limit states design standards are being developed (including ASCE 2010) that address tension capacity and, specifically, the calculation of net cross section area of a member having holes in a manner identical to that used in steel construction (AISC 2010). It is the contention of the authors that adopting the same net section calculations used for steel – a ductile isotropic material – is inappropriate for GFRP – a relatively brittle and often highly anisotropic material. This is particularly the case when connections having staggered holes are used. METHODS All tests were conducted on coupons having a nominal width of 4 in. and a length of 16 in.; all tests were conducted with an 8 in. gage length between grips. Tests were conducted using a 120 kip capacity universal test machine having 4 in. wide hydraulically operated wedge grips. These were used to hold the specimens without tabs and no grip-related failures were observed in the entire test program. All holes were cut using a 33/64 in. diameter brad bit (specifically intended for drilling fibrous materials). In all, 20 specimen series, each having 5 repetitions, were tested in longitudinal tension. Four additional series were tested in transverse tension. The series parameters included the following: a) plate thickness (t): ¼ and ½ in.; b) total number of holes (N): 0, 1, 2 and 3; c) hole gage (g): 1.0 and 2.0 in. (2 and 4 bolt diameters); and d) hole stagger spacing (s): 0, 1.0 and 1.5 in. (0, 2 and 3 dia.). All hole patterns were centered in the width and along the length of the specimen. For cases having 2 or 3 bolts, s = 0 represents the case in which all bolts are in a single line across the 4 in. specimen width. OPEN-HOLE TENSION STRENGTH TESTS A comparison of the ultimate capacity of the specimens having no holes to the specimens with a single line of holes through the center of the plate (s = 0) makes it clear that there is an open hole strength reduction factor (k) associated with the presence of the holes as described by Eq. 1 (Lopez-Anido, 2009). Tn = kFTAn  Eq. 1 Where FT is the material tensile strength and An is the net cross section area through the line of holes. The average value of k

observed was 0.77, with a range of 0.72 to 0.82. These observed values are comparable to those summarized by Lopez-Anido (2009) and consistent with the recommended value for design, k = 0.7 (ASCE 2010). Previous studies considered only plates having a single hole. The present data demonstrate that there is no apparent compounding effect of multiple holes despite the relatively small gage distances provided. EFFECT OF STAGGER Results were compared to the presently used method of determining design strengths for staggered hole patterns, for which the quantity s²/4g is added to the net width of the section for each adjacent hole deducted. The effective net area of material (usually steel) resisting tension is then found by multiplying the net length of the failure path by the thickness (t) of the member (AISC 2010): An = Ag – tNh + tΣ(s²/4g)  Eq. 2 Where Ag is the gross section area of the member. Based upon the application of Eq. 2, specifically the s²/4g term, a staggered-hole arrangement is predicted to be stronger than a non-staggered arrangement having the same number of holes. As summarized in Table 1, this was not observed in the present study. In Table 1, the 'observed effect of stagger' is the ratio of observed failure stress (Fu) for the staggered case calculated using Eq. 3 with respect to the non-staggered case having the same details except s = 0 (i.e., Fu/Fu,s=0). The 'prescribed effect of stagger' is the ratio of the net section area calculated using Eq. 2 to that calculated using Eq. 3, effectively isolating the s²/4g term (i.e., An/An,ns). An,ns = Ag – tNh  Eq. 3 For the staggered 2-hole arrangements tested, the ultimate strength of the specimens tested did not exceed the strengths of comparable non-staggered-hole arrangements. For the 3-hole arrangements, a small increase was observed (6 and 9% for t = 0.25 and 0.5 in., respectively), although not nearly approaching the 45% increase predicted by Eq. 2. In most cases tested, the staggered-hole specimen failure followed the classic 'zig-zag' pattern, indicating that the stagger spacing values tested indeed resulted in a staggered connection. STRAIN CONCENTRATION AND DISTRIBUTION Digital image correlation (DIC) was used to obtain longitudinal strain fields from representative specimens (Figure 1). Strain 'snapshots' were taken at applied load values of approximately 0.30Pu, 0.60Pu and 0.90Pu to establish the progression of the strain patterns and establish the presence and growth of the stress concentrations resulting from the holes. As can be seen in the plots for all specimens, the longitudinal strain at the edges of each hole was many times


fact that specimens were tested without tabs did not appear to effect test results since failures occurred well within the gage length in all cases.

Table 1: Specimen capacity and effect of stagger. effect of stagger t g s Fu N observed prescribed in. in. in. ksi (COV) Fu/Fu s = 0 An/(An,ns) 2 1.0 0 44.4 (0.087) 0.25 2 2.0 0 48.4 (0.040) 3 1.0 0 49.0 (0.061) P g 2 1.0 1.0 45.6 (0.075) 1.03 1.08 2 1.0 1.5 45.6 (0.072) 1.03 1.19 s 2 2.0 1.0 47.8 (0.023) 0.99 1.04 3 1.0 1.0 46.7 (0.069) 0.96 1.20 P 3 1.0 1.5 52.0 (0.066) 1.06 1.45 2 1.0 0 39.7 (0.049) 2 2.0 0 42.4 (0.065) 3 1.0 0 43.8 (0.048) 2 1.0 1.0 40.2 (0.020) 1.02 1.08 0.50 2 1.0 1.5 41.3 (0.052) 1.04 1.19 2 2.0 1.0 41.6 (0.028) 0.98 1.04 3 1.0 1.0 42.5 (0.067) 0.97 1.20 3 1.0 1.5 47.8 (0.029) 1.09 1.45 the strain at points a distance away from the hole. The ratio of the average net section strain calculated using Eq. 4 to the recorded strain at the edge of the hole is approximately 0.45 corresponding to a stress intensity factor of 2.22. εavg = αPu/An,nsE Eq. 4 Where αPu is the proportion of the ultimate load at which the strain is calculated and E is the tensile modulus. The no-hole specimen demonstrates very uniform strain distributions across the specimen both at midheight and near the grips at the top and bottom of the image. While this should be expected, it is confirmation that the test machine grips were sufficiently rigid to result in uniform introduction of load across the nonstandard 4 in. wide specimens. Additionally, the a)
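As a worked illustration of Eqs. 1–3, the short Python sketch below (illustrative only, not the authors' analysis code; the tensile strength FT used in the example is an assumed value) computes the non-staggered and staggered net section areas and the predicted open-hole capacity:

# Illustrative check of Eqs. 1-3 (hypothetical material strength; dimensions in inches).
def net_area_nonstaggered(Ag, t, N, dh):
    """Eq. 3: An,ns = Ag - t*N*dh, with dh the hole diameter."""
    return Ag - t * N * dh

def net_area_staggered(Ag, t, N, dh, stagger_pairs):
    """Eq. 2: Eq. 3 plus t*s^2/(4g) for each staggered pair of adjacent holes."""
    An = Ag - t * N * dh
    for s, g in stagger_pairs:
        An += t * s**2 / (4.0 * g)
    return An

def open_hole_capacity(k, FT, An):
    """Eq. 1: Tn = k * FT * An."""
    return k * FT * An

# Example: 4 in. wide, 0.25 in. thick coupon, two 33/64 in. holes, g = 1.0 in., s = 1.0 in.
Ag = 4.0 * 0.25
An_ns = net_area_nonstaggered(Ag, 0.25, 2, 33 / 64)
An_st = net_area_staggered(Ag, 0.25, 2, 33 / 64, [(1.0, 1.0)])
print(An_st / An_ns)                          # ~1.08, the 'prescribed effect of stagger' in Table 1
print(open_hole_capacity(0.77, 48.0, An_ns))  # ~27 kips, using an assumed FT = 48 ksi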

Figure 1: DIC longitudinal strain (eyy) fields for representative specimens, panels (a)–(e); specimens shown include g = 25.4 mm, with and without stagger (s = 38.1 mm).

Figure 2. Measured strains across horizontal section at hole location in 6.4 mm thick specimens (applied load levels between 0.72Pu and 0.97Pu).

CONCLUSIONS
The test program conducted validated a number of hypotheses regarding the open-hole tension behavior of pultruded GFRP materials:
1. Despite the significant variation in quality of material, there is a consistent strength reduction associated with a circular drilled hole. The value of this reduction was found to be approximately 0.77 in this study, comparable to results of similar tests summarized by Lopez-Anido (2009). Furthermore, the reduction is not compounded by the number of holes along a gage line (up to 3).
2. Although there was no apparent effect on observed ultimate capacities, from the DIC data it is evident that there was some interaction of adjacent hole stress-raisers when the gage was 2db (1.0 in.). Present requirements for a minimum gage of 4db for non-staggered connections would appear to be supported by these observations.
3. Due to the brittle anisotropic nature of pultruded GFRP material, Eq. 2 is not appropriate for the calculation of the net section area of a staggered hole arrangement. The s²/4g term must be dropped, resulting in the use of Eq. 3 for staggered or non-staggered net section area calculation.

REFERENCES
1. ASCE. Pre-Standard for LRFD of Pultruded FRP Structures. 2010.
2. AISC. Specification for Structural Steel Buildings. ANSI/AISC 360-10. 2010.
3. Lopez-Anido, R. (2009) "Open-Hole Tensile Strength for Pultruded Plates", report to ASCE Fiber Composites and Plastics Committee, University of Maine.


OPEN-HOLE STRENGTH OF BAMBOO LAMINATE FOR LOW-IMPACT TIMBER REPAIR Shawn L. Platt and Kent A. Harries Department of Civil & Environmental Engineering, University of Pittsburgh, PA, USA Email: slp71@pitt.edu
Introduction
With an aging infrastructure comes a greater need for repairs and an even greater need for appropriate materials, means and methods for those repairs. In many areas, fiber reinforced polymer (FRP) composites are at the forefront of repair technology (e.g., ACI 2008). Bamboo, being 'rediscovered' due largely to its sustainable credentials, has been around for centuries and could be considered nature's original FRP. Bamboo is composed of vascular bundles consisting of longitudinal fibers bound together with lignin matrices. The fibers are the source of bamboo's superior mechanical properties (including tensile capacity and toughness) but also make designing with bamboo unlike designing with most conventional materials. There have been many investigations into the properties of full-culm bamboo (e.g., Janssen 1981, Sharma 2010 and Richard 2012). But the use of full-culm bamboo in construction is limited and its use as a potential repair material impractical, despite its favorable mechanical properties. Taking advantage of superior mechanical properties, bamboo has been incorporated into applications as diverse as flooring and glue-laminated members or "glubam" (Xiao et al. 2008); reinforcement for concrete and masonry (Ghavami 2005); and reinforcing fibers for mortars and polymers (Li et al. 2011). Nonetheless, the FRP-like aspect of bamboo materials (superior, although highly anisotropic mechanical properties) has not been leveraged in many cases; this has led to our interest in the repair field. The focus of this study is the application of manufactured bamboo strips for structural repair used in a manner similar to modern FRP methods (ACI 2008). The application envisioned is the repair of timber structures for which bamboo, it is proposed, offers an aesthetically similar or virtually invisible alternative. The comparable stiffness of bamboo and timber results in a more natural interface, mitigating induced stress raisers often associated with repair methods. The tensile strength of bamboo is generally superior to that of most species of timber, thereby not only repairing but potentially strengthening the original structure without compromising aesthetics or the historic fabric of the structure. With an emphasis on repair of historic or architecturally sensitive structures, bolted external repairs, rather than adhesively bonded, are preferred (USDoI 1995).
Objective
Taking a limit states approach, it is necessary to address all manners by which a structure or element may fail and design for these. The focus of the current work is on bolted connections for bamboo strip repairs of timber members. The limit states of the connection include bolt shear; bearing/splitting of bamboo; shear-out of bamboo; and net section failure of bamboo. While all will eventually be addressed, the focus of the present work is the open-hole capacity of bamboo in tension, thereby defining the net section capacity of the member.

Test Arrangement Specimens 3.5 in. wide and 16 in. long were cut from the 8 in. wide strips having a nominal thickness (t) of 0.25 in. The material is comprised of a single layer of approximately 0.75 in. wide by 0.25 in thick bamboo strips laminated together to form the 8 x 0.25 in. laminate strip. The 0.25 in. dimension is the through-culm-wall dimension of the source bamboo poles. Holes with a diameter (h) of 0.5 in. and 1.0 in. were drilled varying the transverse gage (g) and longitudinal spacing (s) dimensions. Strain gauges were installed at a location of 2.0 in. from the hole furthest from the center of the specimen. This was done to investigate the strain redistribution in the specimen. A mechanical clip gauge was placed, centered vertically, on the edge of the specimens during testing. Tests were conducted using a 120 kip capacity universal test machine. There were 23 specimen series each with sample sizes (n) of 3 to 5. Each series contained from 0 to 3 holes (N) with spacing and gages ranging from 0 to 2 inches. A view of a typical specimen having 3 holes at a gage of 1 in. prior to testing is shown in Figure 1.

Figure 1 Open-hole tension specimen prior to testing and notation (P = applied load, g = transverse gage, s = longitudinal stagger spacing).

Test Results
Control specimens having no holes were tested both in the longitudinal (L) and transverse (T) directions; results are shown in Table 1. Little difference was observed between the natural and caramelized bamboo products with the exception of the transverse tension strength. This is an indication that the caramelization process (which is done for aesthetic reasons) adversely affects the lignin matrix but not the bamboo fibers. As can be seen, the degree of anisotropy (i.e., L/T) is significant.

Table 1 Bamboo strip material properties.
                              Natural                      Caramelized
                              L             T              L             T
Max. stress, Fu, ksi (COV)    13.77 (0.11)  0.98 (0.11)    13.42 (1.12)  0.59 (0.15)
FuL/FuT                       14.0                         22.8
Modulus, E, ksi (COV)         1410 (0.08)   133 (0.20)     1243 (0.16)   156 (0.28)
EL/ET                         11.2                         9.0

Specimens tested having only a single row of bolts (i.e., s = 0) demonstrated some reduction in tensile strength (T) beyond the calculated effect of net section area (An = Ag – Nht), as described by the factor k in Eq. 1. This is an indication of the stress-concentrating effect of the holes.
Tn = kFuLAn   Eq. 1


As shown in Table 2, for the natural bamboo material, the observed strength reduction was marginal for 0.5 in. diameter holes and k ≈ 0.8 for 1.0 in. holes. The value of k ≈ 0.9 for 0.5 in. holes in the caramelized material. These values of k are greater than comparable values observed in GFRP materials (Cunningham et al. 2014), as should be expected due to the greater degree of anisotropy in the bamboo.

Table 2 Longitudinal strength of bamboo strip having single row of holes.
Material      N   h (in.)   g (in.)   Max. stress, Fu, ksi (COV)   Observed strength reduction, Fu/Fu,N=0
Natural       0   -         -         13.77 (0.11)                 -
Natural       1   0.5       -         12.42 (0.10)                 0.90
Natural       2   0.5       1.0       14.46 (0.05)                 1.05
Natural       2   0.5       2.0       13.06 (0.05)                 0.95
Natural       3   0.5       1.0       14.22 (0.07)                 1.03
Natural       1   1.0       -         10.71 (0.17)                 0.78
Natural       2   1.0       2.0       11.45 (0.10)                 0.83
Caramelized   0   -         -         13.42 (0.12)                 -
Caramelized   1   0.5       -         11.71 (0.13)                 0.87
Caramelized   2   0.5       1.0       12.34 (0.06)                 0.92
Caramelized   2   0.5       2.0       11.28 (0.17)                 0.84
Caramelized   3   0.5       1.0       12.20 (0.22)                 0.91

For connections requiring multiple bolts, a staggered connection is a common detail because it will generally a) be more compact; b) better engage adjacent bolts, since the stagger reduces the shadowing effect along the direction of the applied load; and c) result in an effectively larger net section in isotropic materials. Table 3 shows results from cases in which staggered bolt lines were tested. Only 0.5 in. diameter holes were considered and, due to limited material availability, only caramelized materials were tested. For the case of a staggered connection, the stress is calculated based on a plane net section accounting for all holes across the section (i.e., Ag – Nht) regardless of stagger spacing (s).

Table 3 Specimen capacity and observed effect of stagger.
N   g (in.)   s (in.)   c-to-c (in.)   Max. stress, Fu, ksi (COV)   Observed effect of stagger, Fu/Fu,s=0
2   1.0       -         -              12.34 (0.06)                 -
2   1.0       1.0       1.4            11.71 (0.04)                 0.95
2   1.0       2.0       2.2            13.57 (0.10)                 1.10
2   2.0       -         -              11.28 (0.17)                 -
2   2.0       1.0       2.2            12.62 (0.12)                 1.12
2   2.0       2.0       2.8            12.36 (0.05)                 1.10
3   1.0       -         -              12.20 (0.22)                 -
3   0.5       1.0       1.1            10.24 (0.06)                 0.84¹
3   1.0       1.0       1.4            11.86 (0.13)                 0.97
3   0.5       2.0       2.1            12.24 (0.07)                 1.00¹
3   1.0       2.0       2.2            14.63 (0.05)                 1.20
¹ normalized by specimens having g = 1.0

In isotropic materials, the effect of staggering bolts is to increase the net section tensile capacity. While the results of this pilot study are not conclusive, a stagger is observed to increase the capacity marginally when adequate spacing between the holes is provided. Providing a center-to-center distance (c-to-c, see Table 3) of more than 2.0 in. (4 hole diameters) resulted in an increase in net section strength. Below 2.0 in., interaction between stress concentrations developed at the holes is believed to occur, resulting in a reduction in net section capacity. Further study is required to verify and quantify this effect.
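The center-to-center distances listed in Table 3 appear consistent with the diagonal distance between adjacent staggered holes, c-to-c = sqrt(g² + s²); the short Python check below (illustrative only) reproduces the tabulated values and flags spacings below the 4-diameter (2.0 in.) threshold discussed above:

import math

def center_to_center(g, s):
    # Diagonal distance between adjacent staggered holes with gage g and stagger s (inches).
    return math.sqrt(g**2 + s**2)

# (g, s) pairs from Table 3; holes are 0.5 in. diameter, so 4 diameters = 2.0 in.
for g, s in [(1.0, 1.0), (1.0, 2.0), (2.0, 1.0), (2.0, 2.0), (0.5, 1.0), (0.5, 2.0)]:
    c2c = center_to_center(g, s)
    status = ">= 4 hole diameters" if c2c >= 2.0 else "< 4 hole diameters"
    print(f"g = {g} in., s = {s} in.: c-to-c = {c2c:.1f} in. ({status})")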

Conclusions
This experimental program investigated the open-hole tension capacity of a manufactured bamboo strip product and any potential benefit from staggering of holes. The material displayed consistent, repeatable material properties. Net section reduction factors accounting for the stress-raising effect of the holes were identified. The impact of staggering the holes was observed to depend on the spacing between holes. Additional study is necessary to quantify these effects, since introducing a hole is detrimental (Table 2) while staggering the holes may counteract this effect (Table 3). Continued study using digital image correlation is planned and should help to address the apparent interaction observed. The apparently detrimental effects on transverse properties resulting from the caramelization process were identified. For this reason, future work will only use the natural material. This study will continue to investigate all limit states associated with bolted connections in an effort to develop a practical external retrofit system suitable for timber structures.
Acknowledgements
Testing equipment, materials, and support were provided by Dr. Kent A. Harries and the Watkin-Haggart Structural Engineering Laboratory (WHSEL). Funding provided by Dr. Kent A. Harries, the Swanson School of Engineering, and the Office of the Provost.
References
ACI 440.2R (2008) Guide for the Design and Construction of Externally Bonded FRP Systems for Strengthening of Concrete Structures, American Concrete Institute, Farmington Hills, MI, USA.
Cunningham, D., Harries, K.A. and Bell, A.J. (2014) Open-Hole Tension Capacity of Pultruded GFRP Having Staggered Hole Arrangement, submitted to Composite Structures.
Ghavami, K. (2005) Bamboo as reinforcement in structural concrete elements. Cement and Concrete Composites, 27, 637-649.
Janssen, J. (1981) Bamboo in Building Structures. Thesis, Eindhoven Univ., Eindhoven, The Netherlands.
Li, F., Liu, Y.F., Gou, M., Zhang, R. and Du, J. (2011) Research on Strengthening Mechanism of Bamboo Fiber Concrete under Splitting Tensile Load, Advanced Materials Research, 374-377, 1455.
Richard, M. J. (2013) Assessing the Performance of Bamboo Structural Components. PhD Dissertation, University of Pittsburgh.
Sharma, B. (2010) Seismic Performance of Bamboo Structures. PhD Dissertation, University of Pittsburgh.
United States Department of the Interior (1995) The Secretary of the Interior's Standards for the Treatment of Historic Properties with Guidelines for Preserving, Rehabilitating, Restoring and Reconstructing Historic Buildings, National Park Service, Washington, DC. 188 pp.
Xiao, Y., Shan, B., Chen, G., Zhou, Q., and She, L.Y. (2008) Development of a new type of Glulam – Glubam. Modern Bamboo Structures, Xiao, Y., Inoue, M., and Paudel, S.K., eds., London, UK, 41-47.


LIFE CYCLE ENERGY DEMAND & GREENHOUSE GAS EMISSIONS OF COLLABORATIVE BIM CONSTRUCTION PROJECT Elijah Barrad Department of Civil & Environmental Engineering University of Pittsburgh, PA, USA Email: emb109@pitt.edu, Web: http://www.engineering.pitt.edu/civil INTRODUCTION The environmental burden of the construction industry has been a subject of engineering research and development on a global scale for many years. In the United States, buildings account for 36 percent of total energy use, 65 percent of electricity consumption, and nearly a third of both greenhouse gas emissions and raw materials use1. While a large majority of any building’s life cycle environmental impact may be attributed to its energy consumption over the use phase, it is critical that designers and builders also understand the effects of construction activities and building operations on the environment. The goal of this project is to present a case study of a new construction project that utilized Building Information Modeling (BIM) for design, construction, operations & maintenance. The building’s life cycle environmental impact is assessed in terms of energy demand and greenhouse gas emissions. Results are compared to energy demand and emissions figures from existing construction LCA studies, in order to assess the viability of BIM projects from a sustainability perspective. Case Study Cardinal Wuerl North Catholic High School (CWNCHS) is a 171,000 sq. ft. new construction project located on a 71-acre lot in Cranberry Township (15 miles North of Pittsburgh), PA. The project goal is for the building to obtain LEED Silver certification when construction is complete in January 2015. System Boundaries This study in particular operated under the assumption that the use of BIM would not have a substantial effect on raw materials requirements for a construction project. Therefore, upstream processes such as resource extraction,

processing, and manufacturing are not taken into account when compared with non-BIM project impacts, as the difference would be negligible. The system boundary begins with transportation of materials from supplier to construction site, and includes on-site construction equipment operation, architectural and engineering services, and material waste disposal. The building operation phase is also included in the system boundary to determine the difference in building energy performance between project types.
METHODS
The research framework was a BIM-enabled life cycle inventory and impact assessment. The structural, mechanical, and architectural design models from the CWNCHS project were analyzed in Revit, and a material quantity takeoff was performed. This information, along with project-specific mileage data, was used to estimate the total transportation loads (ton-miles) of steel, concrete, wood, masonry, brick, drywall and insulation to the job site. Construction activities were quantified in terms of diesel fuel combustion and electricity consumption, which were converted from construction costs using factors of $4.073 per gallon of diesel fuel2 and $0.0748 per kWh of on-site industrial electricity usage3. The impact of architectural, engineering, and surveying work was assessed using an economic input-output model, where the input is the total contract amount in U.S. Dollars for this service sector. SimaPro was used to gather emissions and energy inputs and outputs for each activity. The TRACI2 and Cumulative Energy Demand (CED) methods were utilized for these analyses. The building's environmental impacts during its use phase were estimated using Revit in conjunction with Autodesk Green Building Studio (GBS), a cloud-based service capable of simulating a building's energy performance based on its component specifications. The architectural Revit model from the CWNCHS project was uploaded to GBS, where an energy simulation was performed and interpreted. Results are plotted and compared using Microsoft Excel.
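A minimal Python sketch of the cost-to-quantity conversion and normalization described above (the dollar amounts below are hypothetical inputs, not project data; the unit prices are the cited EIA figures):

# Illustrative cost-to-quantity conversion and normalization (hypothetical cost inputs).
DIESEL_PRICE_USD_PER_GAL = 4.073
ELEC_PRICE_USD_PER_KWH = 0.0748

def diesel_gallons(equipment_fuel_cost_usd):
    return equipment_fuel_cost_usd / DIESEL_PRICE_USD_PER_GAL

def electricity_kwh(electricity_cost_usd):
    return electricity_cost_usd / ELEC_PRICE_USD_PER_KWH

def normalized_impact(impact, floor_area_m2, service_life_years=None):
    # Normalization used for Figures 1 and 2: per unit area, and additionally per year for the use phase.
    value = impact / floor_area_m2
    return value / service_life_years if service_life_years else value

# Hypothetical example: $45,600 spent on diesel and $128,000 on site electricity.
print(diesel_gallons(45_600))     # ~1.12e4 gal
print(electricity_kwh(128_000))   # ~1.71e6 kWh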


RESULTS
Table 1 summarizes energy demand and carbon emission figures from SimaPro and Green Building Studio. Figures 1 and 2 compare this study's results to the CED and GWP figures from an existing LCA study on commercial buildings. The results are normalized for comparison by dividing emissions and energy demand by the area of new construction and, for the use phase, by the useful life of the building.

DISCUSSION
This research aimed to demonstrate a reduction in environmental impact resulting from the implementation of a collaborative BIM framework for construction projects. When compared with a case study on commercial buildings in the US, the CWNCHS construction project achieved a higher energy performance per unit area than the control. However, the results from the life cycle impact assessment of the construction phase specifically were found to be significantly higher than the emissions and energy demand from the ASCE study4. While these results do not show the expected reduction in overall carbon emissions and energy demand during the construction phase, they do validate the potential for collaborative BIM to achieve higher levels of sustainability throughout the building's most demanding life cycle phase: occupancy. It is recommended that in a future BIM-LCA study, the system boundary be expanded to include the materials and end-of-life phases of the building, to further understand the impact of upstream and downstream activities.

Figure 1: Construction impacts per sq. meter, CED (MJ e) and GWP (kg CO2 e), for CWNCHS and the ASCE comparison study.

Figure 2: Operational impacts per sq. meter per year, CED (MJ e) and GWP (kg CO2 e), for CWNCHS and the ASCE comparison study.

REFERENCES 1. USEPA. EPA Green Buildings. 2013. 2. USEIA. Gasoline and Diesel Fuel Update. 2014. 3. USEIA. Average Retail Price of Electricity to Ultimate Customers by End-User Sector. 2014. 4. Guggemos et al. Life-Cycle Assessment of Office Buildings in Europe and the United States. 2006.

ACKNOWLEDGEMENTS This project was conducted in cooperation with the Roman Catholic Diocese of Pittsburgh, Astorino, and Mascaro Construction Company. Special thanks to Dr. Vikas Khanna and Mark Dietrick for their professional consultation. Funding was provided by the Swanson School of Engineering and the Office of the Provost.

Table 1: Life Cycle Inventory
Activity                               Input      Unit        CED (MJ e)   GWP (kg CO2 e)   LCI database
Architecture, engineering, surveying   4.90E+06   USD         1.65E+07     1.23E+06         USA Input Output Database 98
Material transport                     1.26E+07   ton-miles   1.29E+07     1.39E+06         Franklin USA 98
Construction equipment operation       1.12E+04   gal         1.77E+06     1.17E+05         Franklin USA 98
Electricity                            1.71E+06   kWh         1.54E+07     1.28E+06         Ecoinvent System Processes
Wasted materials                       7.79E+03   kg          1.30E+05     7.79E+03         Ecoinvent System Processes
Building operation phase               -          -           5.20E+06     3.12E+05         Autodesk GBS


REGENERATING COMPOSITE LAYERS FROM SEVERED NANOROD-FILLED GELS Stephen C. Snow, Xin Yong, Olga Kuksenok, and Anna C. Balazs Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA
INTRODUCTION
The development of self-regenerating materials has been a focus in materials science for several decades, owing to the impact that such a breakthrough might have. The utility of self-regeneration would extend the lifetime of materials ranging from mobile device screens to aircraft. The primary objective of this project is to take our existing model of a self-regenerating material and implement a method of strengthening the interface between the two gels [1]. This proves to be a challenge because the interactions between the separate moieties are repulsive, which creates a natural gap at the interface as the nanorods diffuse upward into the outer solvent. Although it is possible to keep the nanorods anchored at the interface by tuning monomer concentrations, this is undesirable as it is important for the nanorods to disperse throughout the two layers so that the characteristics of the uncut gel are truly replicated, adding strength to the composite. To solve this issue, we introduce a new cross-linker to the system, which bonds with both the original gel and the regrown gel. This addition allows us to replicate a carpentry trick for joining two wood pieces. This approach utilizes two different fasteners at a joint, nail and glue, to form temporary and long-term bonds, respectively. In our system, the nanorods initially act as nails to hold the two gels together. Next, our newly introduced particle will form inter-gel cross-links at the interface while the nanorods move out, fulfilling our main objective of creating a strong interface.
METHODS
Our system is modeled using a dissipative particle dynamics (DPD) approach, a coarse-grained particle-based method that allows us to reach larger length and time scales than a full-atom molecular dynamics approach [2]. The system contains a chemically crosslinked gel network arranged in a diamond-lattice structure, which has polymer chains modeled by the coarse-grained bead-spring model with bond and angle potentials. We use amphiphilic nanorods, where the rod portion likes the outer solvent and the end-grafted chains like the inner solvent. The polymer gel is regrown through an atom transfer radical polymerization (ATRP) living copolymerization process of monomer and crosslinker [3].

Inter-gel cross-linking, shown in Figure 1, occurs when polymerization is nearly complete.

Figure 1: The formation of an inter-gel cross-link. The radical from an active end is transferred to the additional cross-linker, where it may then form another bond.
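For readers unfamiliar with DPD, the sketch below shows the standard pairwise forces (conservative, dissipative, and random) of the Groot and Warren formulation [2]; it is a minimal Python illustration with generic parameter values, not the simulation code used in this study:

import numpy as np

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kBT=1.0, rc=1.0, dt=0.01, rng=np.random):
    """Force on bead i due to bead j; r_ij and v_ij are relative position and velocity vectors."""
    r = np.linalg.norm(r_ij)
    if r >= rc or r == 0.0:
        return np.zeros(3)
    e = r_ij / r                # unit vector from j to i
    w = 1.0 - r / rc            # weight function, zero at the cutoff rc
    sigma = np.sqrt(2.0 * gamma * kBT)            # fluctuation-dissipation relation
    f_cons = a * w * e                            # soft conservative repulsion
    f_diss = -gamma * w**2 * np.dot(e, v_ij) * e  # friction acting on the relative motion
    f_rand = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # thermal noise
    return f_cons + f_diss + f_rand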

RESULTS AND DISCUSSION After extending the height of the simulation box, we looked at the relation between initial monomer concentration and average position of the nanorods (Figure 2). We see that for higher amounts of monomer, the nanorods, initially anchored at the interface, move upward into the blue gel. This is a result of the regenerated gel, which is connected to the nanorods by the initiator sites, pushing down on the green gel during growth, causing the nanorods to rise in return. This leads to dispersion, but it also results in a gap between the gels as the regenerated layer moves away from the interface.

Figure 2: Plot of average position of rod center of mass vs. time for different initial monomer concentrations.




Figure 3: (a) Visual representation of additional cross-linker’s effect. Included are closeup views of interface, with reacted additional cross-linker in red. (b) Quantitative effects of additional cross-linker at each z-layer near the interface. Reacted cross-linker is defined as cross-linker with at least 3 bonds formed. (c) Gel density at each z-layer near the interface for different amounts of additional cross-linker.

After adding the additional cross-linker to the system, we saw a clear change in the interface, as seen in Figure 3a. Where once there was a gap formed at the interface, the additional cross-linker holds the two gels together neatly. The 5% prefix to the additional cross-linker indicates that the additional cross-linker constitutes 5% of the green gel by volume. By counting the amount of reacted cross-linkers at various layers (Figure 3b), we confirm that adding the additional cross-linker results in notable changes at the interface. With no additional cross-linking, there is a dip at the interface, but adding the additional cross-linker makes it so that the interface is no longer the weak point in the composite.

We then varied the amount of additional cross-linker to compare effects. The gel density graph (Figure 3c) shows that for high amounts of additional cross-linker, 10% or higher, there is an increase in density below the interface. This is not desired, as it creates a more heterogeneous composite. Alternatively, with 5% additional cross-linker, the density remains roughly uniform throughout the interface. An explanation for the gel density results was found in Figure 4, which compares the total and inter-gel cross-linking for the different amounts of additional cross-linker. With 10% or greater additional cross-linker, there are massive spikes in cross-linking near the interface, with only a small portion of those being inter-gel cross-links. This indicates that with those amounts, the majority of reacted cross-linkers form intra-gel links, causing clumping to occur below the interface.

Figure 4: Density profile for different amounts of additional cross-linker. Blue line corresponds to total cross-linking. Red line corresponds to inter-gel cross-linking.

CONCLUSIONS
We found that by using a fraction of 5% additional cross-linker, a strong interface can be achieved without sacrificing our goal of maintaining a roughly homogeneous composite. Also importantly, we were able to identify issues with using an excessive amount of this cross-linker. Many of these reacted cross-linkers do not contribute to the strength of the interface, instead just causing the green gel to become more heterogeneous.

REFERENCES 1. Yong, X.; Kuksenok, O.; Matyjaszewski, K.; Balazs, A. C. Nano Letters 2013 13 (12), 6269-6274 2. Groot, R. D.; Warren, P. B. J. Chem. Phys. 1997, 107, 4423– 4435 3. Gao, H. F.; Polanowski, P.; Matyjaszewski, K. Macromolecules 2009, 42, 5925– 5932 ACKNOWLEDGEMENTS The authors gratefully acknowledge the NSF (grant EEC-1359308), Swanson School of Engineering, and Office of the Provost for financial support of this research.


Effect of Substrate Properties on the Growth Kinetics of Encapsulated Human Embryonic Stem Cells Sierra Barner1, Thomas Richardson1, Prashant N. Kumta1,2 and Ipsita Banerjee1,2 Department of Chemical Engineering1, Department of Bioengineering2 Swanson School of Engineering, University of Pittsburgh, PA
INTRODUCTION
Insulin injection is the predominant treatment for type 1 diabetes; however, transplantation of insulin-producing islets could provide a permanent treatment for diabetes. Yet the lack of sufficient donor pancreata has generated a critical need for a renewable source of insulin-producing cells. Human embryonic stem cells (hESC) have been distinguished by their unique ability to differentiate to any cell type in the body. While hESC differentiation is largely dominated by chemical modulation of the cell microenvironment, recent reports have indicated the importance of the physical microenvironment in affecting cell fate. Our research focuses on pancreatic differentiation of hESCs [1]. A previous project in our lab showed that calcium alginate encapsulation of hESCs enhanced pancreatic differentiation compared to differentiation on tissue culture plastic. In this work we investigate the effect of the physical properties of the capsule on the growth of hESCs during pancreatic differentiation.
METHODS
Undifferentiated H1 hESCs were maintained on matrigel-coated tissue culture plates for 5-7 days in mTeSR1 at 37°C and 5% CO2 before passaging. The experiments were performed with p55-p70 hESCs. Alginate encapsulation was achieved by suspension of the cells in an alginate solution comprised of filtered low viscosity alginate with gelatin. This solution was then added dropwise to a cation solution of 100 mM CaCl2, 10 mM BaCl2, or 20 mM BaCl2. Alginate capsules were incubated for 6-8 minutes in the solution. The capsules were washed with PBS and suspended in appropriate medium with Y-27632 for 4 days prior to differentiation. The stage-wise induction protocol for pancreatic differentiation of hESCs was adopted from our previous study [2]. The DE stage was induced using ActivinA with Wnt3A for 4 days. Afterwards, the PP stage was induced with KAAD-cyclopamine for 2 days and retinoic acid for 2 days. All differentiation media were made using DMEM/F12, supplemented with 0.2% BSA and 1x B27. Cell viability was assessed at the last day of each differentiation stage using the LIVE/DEAD assay. The encapsulated cells were incubated with 2 µM ethidium homodimer-1 and 1 µM calcein-AM in DMEM/F12 for 25 minutes and imaged using fluorescence microscopy.

Individual colony area and aspect ratio were determined by image processing: the stock LIVE image was thresholded and binarized, and the binary image was then used to form an outline of the individual cell colonies. Cell proliferation was measured using the AlamarBlue assay. The encapsulated cells were treated with 10% AlamarBlue in media, incubated for 4 hours, and the fluorescent intensity was measured. Cell cytotoxicity was measured using the LDH assay, in which a lysis solution is added to the cells, incubated for 4 hours, and the absorbance is then measured. Cells were decapsulated with EDTA and washed with PBS. mRNA was isolated using the NucleoSpin RNA II kit, and cDNA was obtained using the ImProm-II Reverse Transcription System. PDX1 gene expression was measured with q-PCR using an MX3005P system.
RESULTS
hESCs were encapsulated in alginate capsules and the biochemical and physical cues were changed by varying the crosslinker type and concentration. After encapsulation, we quantified the dynamics of cell growth in the capsules. Gel stiffness of encapsulated hESC at concentrations of 100 mM CaCl2, 10 mM BaCl2, and 20 mM BaCl2 was measured by atomic force microscopy. There was no significant difference between the stiffness of the 10 mM BaCl2 gel and the 100 mM CaCl2 gel. This stiffness measurement shows a relatively soft gel. The 20 mM BaCl2 gel was significantly stiffer than the 10 mM BaCl2 gel. We analyzed the cell proliferation of encapsulated hESC using the AlamarBlue assay during the stage-wise differentiation protocol. Gel conditions of similar stiffness showed increased proliferation through differentiation. The stiffer 20 mM BaCl2 conditions showed suppressed proliferation throughout differentiation to the PP stage. Cell cytotoxicity was measured using the LDH assay, which measured the amount of cell death per mL of alginate. There was minimal cell death throughout differentiation for all conditions.
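The colony area and aspect-ratio measurement described above could be implemented, for example, with a scikit-image workflow; the sketch below is an assumed illustration (the abstract does not specify the software or threshold method used):

# Illustrative colony area / aspect-ratio measurement (assumed scikit-image pipeline).
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def colony_metrics(live_image, pixel_size_um=1.0):
    """Return (area_um2, aspect_ratio) for each colony in a LIVE-channel image."""
    binary = live_image > threshold_otsu(live_image)   # threshold and binarize
    results = []
    for region in regionprops(label(binary)):
        if region.major_axis_length == 0:
            continue
        area = region.area * pixel_size_um**2
        # Aspect ratio = minimum diameter / maximum diameter (1.0 = perfectly circular).
        aspect = region.minor_axis_length / region.major_axis_length
        results.append((area, aspect))
    return results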


The viability was measured using the LIVE/DEAD assay, in which cells that fluoresced green were live and cells that fluoresced red were apoptotic. On day 1, there was little to no difference in viability between the varying alginate capsules. On day 14, the colonies in the 100 mM CaCl2 condition were quite enlarged and asymmetrical. The 10 mM BaCl2 and 20 mM BaCl2 colonies were slightly enlarged but kept a generally spherical and symmetrical shape. The area and aspect ratio of the individual cell colonies were determined from the colony outlines obtained by image processing.

Figure 1: Average colony area (µm²) versus time (days) for the various alginate gel compositions (100 mM CaCl2, 10 mM BaCl2, and 20 mM BaCl2).

Figure 1 shows that the average cell colony area for the 100 mM CaCl2 condition increased throughout differentiation to the pancreatic progenitor stage. The softer 10 mM BaCl2 condition shows an increased colony area relative to the stiffer 20 mM BaCl2 condition, which had a smaller average colony area. The aspect ratio is calculated by taking the minimum diameter of a colony divided by the maximum diameter, where 1 is perfectly spherical and the colony elongates as the ratio approaches 0. On day 1 of differentiation, all of the conditions showed colonies that were spherical, with aspect ratios ranging from approximately 0.85-1.00. Colonies in the CaCl2 gel had a broad distribution of aspect ratios, while both barium gels had colonies that were more spherical. Differentiation status of the encapsulated cells was analyzed by measuring the gene expression of PDX1 by qRT-PCR. PDX1 is a transcription factor necessary for β-cell maturation and is a hallmark indicator of pancreatic differentiation. The PDX1 gene expression of the 100 mM CaCl2 and 10 mM BaCl2 capsules was highly upregulated. The PDX1 gene expression for the stiffer, 20 mM BaCl2 capsule was suppressed. Thus, alginate capsules made with higher BaCl2 concentration inhibit differentiation compared to capsules made with CaCl2 and low BaCl2 concentrations.

DISCUSSION
Our results demonstrated that on day one of the procedure, there is little to no difference in viability and proliferation between conditions. This is likely because it is the beginning of the procedure and the cells have yet to develop or differentiate. By day 14, there was a noticeable increase in proliferation for the conditions with similar stiffness, and suppressed proliferation in the stiffer 20 mM BaCl2 gel. This is because the softer gels are less restrictive and allow more rapid growth. The distribution of individual colony average area and aspect ratio showed that the CaCl2 gel had a large increase in size throughout differentiation due to the gel being less rigid in comparison to the two BaCl2 gels. The 10 mM BaCl2 condition showed an increase in size relative to the stiffer 20 mM BaCl2 condition, which had a smaller colony area. This is due to the rigid nature of the BaCl2 gels. The two BaCl2 compositions both had very spherical colonies, whereas the CaCl2 gel had colonies that started to form a more asymmetrical shape. This is because the CaCl2 gel is softer than the BaCl2 gels, and as the colonies grow they are not forced into such tight spherical shapes. Our results also demonstrated that as the gel stiffness increased, cell growth was restricted and gene expression decreased. As alginate gel stiffness increased, pancreatic differentiation of hESC was suppressed. The alginate capsule made with 100 mM CaCl2 was the softest of the gels we measured, and consequently showed the highest differentiation. The alginate capsule made using 20 mM BaCl2 resulted in significantly lower PDX1 gene expression. When we decreased the barium concentration to 10 mM, we observed a gel stiffness very similar to the capsule made using CaCl2. Using the lower BaCl2 concentration rescued the PDX1 gene expression, and thus we observed very similar levels of gene expression as with the capsule crosslinked with calcium. This is of particular interest because gel stiffness suppresses differentiation so strongly even in the presence of growth factors. The gel stiffness is lowest in the CaCl2 alginate gel and increases with the higher concentrations of BaCl2. Across the three alginate gel compositions, gel stiffness and PDX1 gene expression are inversely related.
REFERENCES
1,2. Richardson T, Kumta PN, Banerjee I. "Alginate Encapsulation of Human Embryonic Stem Cells to Enhance Differentiation to Pancreatic Islet-like Cells." (2014) Tissue Engineering Part A

ACKNOWLEDGEMENTS I would like to thank the Banerjee lab, REU – systems medicine program (NSF grant number EEC-1156899 ), and the SSOE – Summer Research Internship Program.


THREE DIMENSIONAL CELL CULTURE EFFECTS ON CHONDROGENESIS OF KARTOGENIN-TREATED hMSCs Meghana Patil1, Riccardo Gottardi2,3, Veronica Ulici2, Steven R. Little1,3, and Rocky S. Tuan1,2 Department of Bioengineering1, Center for Cellular and Molecular Engineering, Department of Orthopaedic Surgery2, and Department of Chemical and Petroleum Engineering3 University of Pittsburgh, PA, USA Email: map205@pitt.edu, Web: littlelab.pitt.edu
INTRODUCTION
Osteoarthritis (OA) is the most prevalent joint disease in the United States, affecting nearly 27 million people and costing over 42 billion dollars in 2009. OA primarily impacts the aging population, whose hyaline cartilage gradually deteriorates over a lifetime of normal wear and tear. Patients suffer from pain and stiffness in their joints upon movement, greatly limiting their mobility and decreasing their quality of life. Current treatments for OA, such as corticosteroid/hyaluronan injections and nonsteroidal anti-inflammatory drugs, are mostly palliative and do not target the source of the disease. The final option of total joint replacement is highly invasive and has a limited duration of effectiveness, which is particularly challenging and risky for elderly patients. As the incidence of OA increases, there has been a push in regenerative medicine to regenerate hyaline cartilage through the differentiation of adult human mesenchymal stem cells (hMSCs). Previous work has shown that the small molecule kartogenin promotes chondrogenesis in hMSCs [1]. Furthermore, as shown by Zhang et al., a high density, three dimensional (3D) cell culture environment is important to effectively promote chondrogenesis [2]. The present study evaluates the extent to which a 3D environment in micromass culture and gelatin scaffold enhances chondrogenesis of hMSCs treated with the chondrogenic small molecule kartogenin [1]. Results from this study should help to optimize the delivery of hMSCs and kartogenin to the joint space for cartilage regeneration as a treatment for OA.
METHODS
hMSCs were isolated with IRB approval from the bone marrow of an 81-year-old female donor (2D monolayer) and a 70-year-old male donor (micromass and gelatin) undergoing total joint replacement,

expanded and used at passage 3. Three methods of cell culture were compared: (i) two dimensional (2D) monolayer culture, (ii) 3D micromass culture, and (iii) 3D gelatin scaffold constructs. For (i), cells were plated at a density of 10,000 cells/cm2 in 6-well plates. For (ii), micromasses were created by incubating 10 µl droplets of cells at 20,000,000 cells/ml. For (iii), 400,000 cells were embedded into 40 µl gelatin constructs through UV crosslinking. All cells were cultured in serum-free DMEM (supplemented with 50 µg/ml ascorbate, 40 µg/ml proline, 200 units/ml penicillin, 200 µg/ml streptomycin, 0.5 µg/ml Fungizone Antimycotic, and 10 µg/ml ITS) to which either 1 µM kartogenin (dissolved in DMSO), 10 ng/ml TGF-β1 (positive control), or 0.0952 µl/ml DMSO (negative control) was added. Samples were analyzed for gene expression using quantitative real-time polymerase chain reaction (RT-PCR). Gene expression for collagen type II (Col2a1), aggrecan and glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was measured, using 18S rRNA for normalization. Relative gene expression for each experimental condition is reported relative to the DMSO negative control group. Samples were also stained with Alcian blue to qualitatively determine the presence of sulfated glycosaminoglycans (GAG) in the samples.
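The abstract does not state how the relative expression values were computed; a common approach for this kind of 18S-normalized, control-referenced comparison is the 2^(-ΔΔCt) method, sketched below with hypothetical Ct values (an assumed illustration, not the authors' analysis):

# Assumed 2^(-ddCt) calculation (illustrative only).
def relative_expression(ct_gene, ct_18s, ct_gene_dmso, ct_18s_dmso):
    d_ct_sample = ct_gene - ct_18s            # normalize the target gene to 18S rRNA
    d_ct_control = ct_gene_dmso - ct_18s_dmso
    dd_ct = d_ct_sample - d_ct_control        # compare to the DMSO negative control
    return 2.0 ** (-dd_ct)                    # fold change relative to DMSO

# Hypothetical Ct values: a 3-cycle shift corresponds to ~8-fold upregulation.
print(relative_expression(22.0, 10.0, 25.0, 10.0))  # 8.0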


RESULTS
RT-PCR analysis of kartogenin-treated hMSCs (Fig. 1a) shows a greater upregulation of collagen type II gene expression in both 3D culture models compared to 2D monolayer culture. Collagen type II is particularly upregulated in the gelatin scaffold, showing approximately twice as much expression as in micromass culture. The same trend is shown when cells are treated with TGF-β1 as a positive control. Aggrecan expression (Fig. 1b) in kartogenin-treated samples does not follow the same trend, as it is highest in 2D monolayer, followed by gelatin scaffold and micromass. In TGF-β1 samples, aggrecan expression is approximately the same in the micromass and gelatin scaffold models and is higher than in the 2D monolayer culture.

Figure 1. Relative gene expression of collagen II (a) and aggrecan (b) in various cell culture conditions treated with kartogenin or TGF-β1, relative to the same conditions cultured with DMSO (n=1).

Alcian blue staining (Fig. 2) showed that kartogenin-treated 2D monolayer samples stain very lightly, similar to the negative control. Positive control TGF-β1 samples stain more strongly. Within micromass samples, kartogenin-treated samples stain slightly more strongly than the negative control. TGF-β1 micromass cultures formed a pellet which stains very dark blue. Gel scaffold samples are currently in the process of being sectioned and stained.

Figure 2. Alcian blue staining of monolayer and micromass cell culture samples.

DISCUSSION
Preliminary results show that there is an increased level of chondrogenesis in hMSCs when treated with kartogenin in a 3D environment as compared to a 2D

environment. A similar trend is observed when cells are treated with TGF-β1. There is higher expression of collagen type II in both 3D models, characteristic of developing hyaline cartilage. Although RT-PCR results show decreased aggrecan gene expression in kartogenin-treated micromass samples, Alcian blue staining shows slightly increased GAG levels in the kartogenin-treated samples as compared to the negative control. Additional staining of gelatin scaffolds with Alcian blue will be helpful to solidify this trend. From the three cell culture models tested, the 3D gelatin scaffold seems to be most favorable for chondrogenesis. The model shows the highest levels of collagen type II expression for both kartogenin and TGF-β1 treated groups, as well as increased aggrecan expression compared to the negative control. This finding could be applied to tissue engineering with the aim of regenerating cartilage as a potential treatment for OA by using kartogenin to differentiate hMSCs in a 3D environment.
REFERENCES
1. Johnson, K. Science, 336 (6082):717-21, 2012.
2. Zhang, L. Biotechnol. Lett., 32, 1339-46, 2010.
ACKNOWLEDGEMENTS
Support for this project was given by Swanson School of Engineering, University of Pittsburgh, Commonwealth of Pennsylvania; NIH, Ri.MED Foundation; Berner International Corporation Award.


DETERMINATION OF HYDRODYNAMIC AND MASS TRANSFER PARAMETERS IN A PILOT-SCALE SLURRY BUBBLE COLUMN REACTOR FOR FISCHER-TROPSCH SYNTHESIS Yemin Hong, Omar Basha, Laurent Sehabiague, & Badie Morsi Reactor and Process Engineering Laboratory, Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: yeh10@pitt.edu, Web: http://www.pitt.edu/~rapel/
Introduction
In 2013, the US crude oil production and consumption were 10 million barrels per day (bbl/day) and 18.9 million bbl/day, respectively, with net imports totaling 8.9 million bbl/day. The majority of the oil is consumed in the transportation sector in order to meet our lifestyle demands. The imported oil comes mainly from Canada, Mexico, Venezuela, Nigeria and the Middle East (Saudi Arabia, Iraq, Libya, Kuwait, etc.). Unfortunately, the Middle East is in ongoing turmoil, Nigeria is suffering from a lingering civil war, and the US has significant ideological and political differences with the Venezuelan administration. Therefore, relying on those countries for our oil imports could dramatically threaten our lifestyle and jeopardize our national security. Fortunately, the US has tremendous resources of coal and biomass, which could last for over 400 years at the current consumption level [1], and the newly discovered Marcellus and Utica shale gas plays offer huge new fossil energy sources, which could be used to satisfy our needs, improve our lifestyle and ensure our national security. The only proven technology which could convert coal and/or biomass as well as natural gas into environmentally friendly, zero-sulfur, zero-nitrogen, and virtually aromatics-free transportation fuels is the Fischer-Tropsch (F-T) synthesis. This process was developed in Germany during the 1920s by Franz Fischer and Hans Tropsch and was used to produce synthetic hydrocarbons. In this process, the syngas (H2 + CO) reacts in the presence of a catalyst (iron or cobalt) in a unit operation (reactor) to produce a mixture of liquid hydrocarbons [2, 3]. Upon upgrading, high value synthetic fuels, including naphtha, gasoline, diesel, jet fuel, and lube oil, could be produced. Currently, the F-T process is carried out commercially in multi-tubular fixed bed reactors (FBRs) or slurry bubble column reactors (SBCRs) by Sasol and Shell in South Africa, Qatar and Malaysia.

Research Objective The goal of this research is to investigate the hydrodynamics and mass transfer parameters in a pilot-scale SBCR operating under typical F-T conditions. The volumetric mass transfer coefficient (kLa), gas holdup (εG) and Sauter mean bubble diameter (d32) were measured for nitrogen (N2) and helium (He) as surrogate components for CO and H2, respectively. The experiments were performed using NICE (National Institute of Clean-And-Low Carbon Energy) molten reactor wax containing iron catalyst up to a concentration of 15 vol.%. The reactor was operated under high pressure (P) and temperature (T) in the churn-turbulent flow regime [4]. Experimental Set up and Procedure The SBCR consists mainly of: reactor, gas sparger, damper, filter, demister, compressor, Coriolis mass flow-meter, gas supply vessel, and gas cylinder. The reactor is provided with two Jerguson sight-windows in order to enable monitoring the gas bubbles size/behavior under a given condition. The desired gas (N2, He or mixture) is loaded into the reactor; and at the desired P and T, the compressor is used to circulate the gas through the slurry. All the T-P-time data were recorded through a data acquisition system and the values were visualized on a personal computer using LabView Software. The Transient Physical Gas Absorption (TPGA) technique was used to obtain kLa, the manometric method was used to obtain εG, and the Dynamic gas Disengagement (DGD) technique was employed to calculate d32. Calculations The gas holdup, defined as the volume fraction occupied by the gas in the slurry, was determined under the following assumptions: (1) the reactor is operating under steady-state conditions; (2) the liquid/slurry and the gas phases are well mixed in the given differential pressure cell legs; and (3) the impact of the frictional losses on the pressure drop is


negligible [5]. The Sauter mean bubble diameter, defined as the total volume of gas bubbles divided by their total surface, was obtained using the following assumptions: (1) the rate of disengagement of each gas bubble is constant under a given set of experimental conditions; (2) once the gas flow through the reactor is stopped, there is no breakup or coalescence of gas bubbles and the size remains constant as each bubble disengages; and (3) the liquid internal circulation has no effect on the rise velocity of the gas bubbles [5]. The volumetric mass transfer coefficient, defined as the gas-liquid interfacial area (a) times the mass transfer coefficient in the liquid side (kL), was calculated using the transient portion of the P-time curve just before the system reaches thermodynamic equilibrium [5].
Experimental Results and Discussion
Figure 1 shows that the gas (N2) holdup increases with increasing gas velocity (0.15-0.25 m/s) and temperature (422-442 K) at constant pressure (200 psia) and catalyst concentration (15 vol.%). This behavior can be explained by the increase of the gas momentum due to the increase of the gas velocity, and is in agreement with previous findings in the literature. The increase of gas holdup with temperature could be related to the decrease of liquid surface tension, which resulted in the formation of a large number of small gas bubbles, increasing the gas holdup. This effect can be seen in Figure 2, which shows that the Sauter mean bubble diameter decreases with increasing system temperature.
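As a simple numerical illustration of the quantities defined above (not the authors' data-reduction code; all inputs below are hypothetical), the sketch computes a Sauter mean diameter from a bubble-size sample and a gas holdup from an assumed manometric relation:

import numpy as np

def sauter_mean_diameter(diameters):
    """d32 = sum(d^3)/sum(d^2); equals 6*(total volume)/(total surface) for spherical bubbles."""
    d = np.asarray(diameters, dtype=float)
    return (d**3).sum() / (d**2).sum()

def gas_holdup(dp_aerated, rho_slurry, dz, g=9.81):
    """Assumed manometric form: eps_G = 1 - dP/(rho_slurry*g*dz), with dP measured over height dz."""
    return 1.0 - dp_aerated / (rho_slurry * g * dz)

print(sauter_mean_diameter([0.002, 0.003, 0.005, 0.004]))      # ~4.1e-3 m
print(gas_holdup(dp_aerated=5.0e3, rho_slurry=700.0, dz=1.0))  # ~0.27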

Figure 1: Temperature effect on gas holdup.
Other results, which are not shown here, indicated that the increase of the He volume fraction in the gas

mixture led to the increase of the Sauter mean bubble diameter and the decrease of the gas holdup. Again, the addition of He decreased the gas momentum and consequently the gas holdup as mentioned.

Figure 2: Temperature effect on bubble size.
Conclusions
The gas holdup of N2 and He in the molten NICE reactor wax appeared to increase with increasing superficial gas velocity and system pressure and temperature. The behavior of the volumetric mass transfer coefficients for the gases used in NICE wax followed that of the gas holdup. The Sauter mean bubble diameter decreased with temperature and increased with the mole fraction of He in the gas mixture.
References
[1] British Petroleum, "BP statistical review of world energy," ed, 2014.
[2] M. E. Dry, "Advances in Fischer-Tropsch chemistry," Industrial & Engineering Chemistry Process Design and Development, vol. 15, pp. 282-286, 1976.
[3] M. E. Dry, "The Fischer-Tropsch process: 1950-2000," Catalysis Today, vol. 71, pp. 227-241, 2002.
[4] A. de Klerk, Fischer-Tropsch Refining. Weinheim: Wiley-VCH Verlag & Co. KGaA, 2012.
[5] L. Sehabiague and B. I. Morsi, "Hydrodynamic and Mass Transfer Characteristics in a Large-Scale Slurry Bubble Column Reactor for Gas Mixtures in Actual Fischer–Tropsch Cuts," International Journal of Chemical Reactor Engineering, vol. 11, pp. 1-20, 2013.

Acknowledgement
The opportunity and financial support given by Dr. Morsi, the SSOE, and the Office of the Provost are greatly acknowledged.


UTILIZING AN INTERACTIVE EDUCATIONAL MODULE TO EDUCATE MIDDLE SCHOOL STUDENTS ABOUT DIABETES Blaec Toncini [1], Li Ang Zhang [1], Robert Parker [1], Cheryl Bodnar [1] [2] [1] Department of Chemical and Petroleum Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA. [2] Engineering Education Research Center, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA. Email: bpt21@pitt.edu
INTRODUCTION
Currently 25.6 million individuals in the United States have diabetes. Between 1988 and 2008, the number of individuals with diabetes increased by 128%, a trend that could lead to 1 in 3 Americans being diabetic by 2050. Of the current diagnoses, 90-95% are for Type II diabetes, for which the number one risk factor is obesity.1 This risk factor may be minimized through maintenance of a healthy diet. Utilizing an interactive educational module based upon a diabetes computer model, we propose to show middle school students the impact foods can have on their blood sugar levels. To accomplish this goal, we chose to employ a comprehensive educational program, as such programs have been shown to be successful in the past. For instance, students between 15 and 19 years old who participated in comprehensive sex education had a 50% lower risk of teen pregnancy when compared to students who participated in abstinence-only programs.2 The proposed educational module will be accompanied by a lesson plan focusing on the risks associated with diabetes, how to read nutrition labels, and the importance of a healthy diet. This intervention will provide the necessary background for students to better understand their health and how they can take an active role to avoid diabetes.
EDUCATIONAL TOOL DEVELOPMENT
We assembled a focus group of general science, health and physical education teachers from local elementary and middle schools. Utilizing a series of open-ended questions, we learned what an effective lesson on diabetes would encompass. The top responses from teachers included lesson plans, lists of materials for the lessons, accurate, up-to-date information regarding diabetes, and an interactive component to engage the students.

LEARNING OBJECTIVES This educational module has three learning objectives: 1. Students will be able to describe diabetes, what types exist, and the risk factors for each type. 2. Students will be capable of looking at a nutrition label and stating whether the product contains a significant amount of sugar. 3. Students will be able to describe the impact that food has on blood sugar. LESSON PLANS To meet these learning objectives we developed three lesson plans. The first lesson plan allows for students to learn about types of diabetes, mechanics of the disease, and the associated risk factors. They also learn about blood sugar and its impact on the body. Finally, students are shown the progression of diabetes, including how it’s diagnosed and potential forms of treatment. As part of the second lesson, students are taught about carbohydrates, fats, proteins, calories, and sugars. Students also are shown how to read nutrition labels. As part of the lesson plan, they bring in nutrition labels from home to serve as part of the demonstration. The teacher helps students visualize what different amounts of sugar look like by tying it back to the amount listed on the nutrition label. The final outcome of this lesson plan is the creation of healthy and unhealthy meal plans. During the final lesson students use the interactive computer program developed as part of this project by entering in their week of meal plans to see the impact of food on their character's blood sugar.


INTERACTIVE EDUCATIONAL MODULE
The interactive program built for these lessons utilizes a mathematical model to translate nutritional inputs into blood sugar outputs.3 At the onset of the program, students are tasked with building a character that they will provide meals to. They enter the age and gender of their character, data which is used to provide customized results for an average person of the specified age and gender. After building their characters, students are able to begin "feeding" their character. This is done via the screen displayed in Figure 1. First, a student selects the day for which he or she would like to enter the character's meals. Next, the student selects the meal and food that the character will be eating. They have options for breakfast, lunch, dinner, an afternoon snack and an evening snack. Once the student has selected the day, food item, and meal during which the character is eating, he or she clicks "Add Food." Immediately, the student's food and meal time choices are confirmed in the "Recorded Diet," and the nutrition facts for that day are updated. After entry of the meal plan is complete, the student can observe the entered diet's impact on a diabetic and non-diabetic's blood sugar by clicking the corresponding "Go!" button.

Figure 1. Meal entry screen within educational computer module.
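A minimal sketch of the meal-logging bookkeeping implied by the "Add Food" and "Recorded Diet" steps above (hypothetical food names, fields, and nutrition values; the actual module and its underlying glucose model [3] are not reproduced here):

# Illustrative meal log and daily nutrition totals (hypothetical data).
from collections import defaultdict

FOOD_FACTS = {  # grams per serving (hypothetical entries)
    "apple": {"carbs": 25, "fat": 0, "protein": 0},
    "cheeseburger": {"carbs": 33, "fat": 20, "protein": 17},
}

class RecordedDiet:
    def __init__(self):
        self.meals = defaultdict(list)   # (day, meal) -> list of foods

    def add_food(self, day, meal, food):
        """Mimics the 'Add Food' step: record the choice for the selected day and meal."""
        self.meals[(day, meal)].append(food)

    def daily_totals(self, day):
        """Sum the nutrition facts for everything recorded on the given day."""
        totals = defaultdict(float)
        for (d, _meal), foods in self.meals.items():
            if d == day:
                for food in foods:
                    for nutrient, grams in FOOD_FACTS[food].items():
                        totals[nutrient] += grams
        return dict(totals)

diet = RecordedDiet()
diet.add_food("Monday", "breakfast", "apple")
diet.add_food("Monday", "lunch", "cheeseburger")
print(diet.daily_totals("Monday"))   # carbs 58, fat 20, protein 17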

Figure 2. An example of blood sugar results associated with an unhealthy diet for a diabetic.
REFERENCES 1. Fast Facts: Data and Statistics about Diabetes. (2013). American Diabetes Association. Retrieved July 10, 2014 from http://proffesional.diabetes.org/admin/UserFiles/0%20%20Sean/FastFacts%20March%202013.pdf 2. Kohler, P.K., Manhart, L.E., Lafferty, W.E. (2008). Abstinence-Only and Comprehensive Sex Education and the Initiation of Sexual Activity and Teen Pregnancy. Journal of Adolescent Health, 42(4), 344-351. Retrieved from www.scopus.com 3. Roy, A., Parker, R.S. (2006). Dynamic Modeling of Free Fatty Acid, Glucose, and Insulin: An “Extended Minimal Model”. Diabetes Technology and Therapeutics, 8(6), 617-626. Retrieved from www.scopus.com

Figure 1. Meal entry screen within educational computer module.

ACKNOWLEDGEMENTS Student Module Development Team: Daniel Cardona, Sarah Francisco, Stephanie Verk, Chris Dundas, Lauren Musgrave, Sammie Weiss, Robert Gregg, Sierra Barner National Science Foundation, REU Program, Grant EEC-1156899 Swanson School of Engineering University of Pittsburgh Office of the Provost


CELLULAR TOXICITY OF NANOMATERIALS Yutao Gong, Sharlee Mahoney, Thomas Richardson, Ipsita Banerjee and Götz Veser Swanson School of Engineering, Department of Chemical & Petroleum Engineering University of Pittsburgh, PA, USA Email: yug17@pitt.edu INTRODUCTION Nanomaterials are now widely used in fields such as electronics, catalysis, and medical treatment. At the same time, their widespread use, together with the lack of standard industrial tests for toxicity, has raised concern among scientists about potential health and environmental effects [1]. Due to their small size, nanomaterials can enter the human body through inhalation, ingestion, and skin penetration. Their minute size also makes it easier for them to cross cell barriers and react with intracellular structures and macromolecules [1]. The aim of this project is to investigate the toxicity of three different types of Ni nanoparticles (hollow Ni in SiO2, non-hollow Ni in SiO2, and Ni on SiO2) and of bare SiO2 nanoparticles, using 3t3 fibroblast cells (mouse skin cells) as a model. In addition, the Ni nanoparticles were tested with regard to dissolution and settling in order to identify correlations of these properties with cellular toxicity.

METHODS After 24-hour proliferation, 3t3 cells were exposed to the toxins for another 24 hours and were then counted via MTS and LDH assays (MTS detects live cells and LDH dead cells). 3t3 cells were used because they are inexpensive yet robust and sensitive to the toxins [2]. The concentrations of Ni nanoparticles, SiO2, and NiCl2 (as reference) were 5, 10, 50, 100, 200, and 300 mg/L. The settling rates of the three Ni nanoparticles were measured and compared using UV-Vis spectroscopy. This spectroscopy can be used to determine the composition of a solution, using a particular wavelength as a “fingerprint” for each substance, and to measure the quantity or concentration of that substance in solution based on the intensity of light at that wavelength. In addition, ICP-MS was used to measure the amount of Ni ions in the medium. The three Ni nanoparticles were added to five tubes corresponding to five time points (0 h, 4 h, 24 h, 48 h, and 120 h) and then centrifuged with filters to remove the nanoparticles, yielding a measurement of the dissolved Ni ions in solution. RESULTS As seen in Figure 1, each absorbance was normalized by dividing by the initial absorbance, so that the settling rates could be compared relative to the same starting point. Ni-SiO2 initially settles slightly faster than the other two Ni nanoparticles, but all three assume virtually the same settling rate after 2 hours.

Figure 1. Comparison of the settling rates of three Ni nanoparticles within 360 min. The data were recorded every 5 min for the first hour and then at 10 min, 15 min, 20 min, 30 min, and 1 h intervals over the following several hours.
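A minimal sketch of the normalization used for the settling comparison: each UV-Vis absorbance series is divided by its value at t = 0 so that all three samples start from the same point. The time grid and absorbance readings below are placeholders, not measured data.

```python
# Sketch of the settling-rate comparison: normalize each absorbance time
# series by its initial value so all samples start at 1.0.
# The time points and absorbance numbers below are placeholders.

def normalize(absorbances):
    """Divide every reading by the first (t = 0) reading."""
    a0 = absorbances[0]
    return [a / a0 for a in absorbances]

times_min = [0, 5, 10, 15, 30, 60, 120, 360]
raw = {
    "hNi@SiO2":  [0.82, 0.80, 0.78, 0.77, 0.74, 0.70, 0.64, 0.51],
    "nhNi@SiO2": [0.95, 0.93, 0.91, 0.89, 0.86, 0.81, 0.74, 0.59],
    "Ni-SiO2":   [0.77, 0.73, 0.70, 0.68, 0.65, 0.61, 0.56, 0.45],
}

normalized = {name: normalize(series) for name, series in raw.items()}
for name, series in normalized.items():
    print(name, [round(x, 2) for x in series])
```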

Figure 2 shows that all three Ni nanoparticles dissolved as Ni ions in the 3t3 medium to some extent. Ni-SiO2 had the lowest dissolution, only around 6 mg Ni/L out of 200 mg Ni/L, with no major change in the dissolution rate over the duration of the experiment. In contrast, nhNi@SiO2 had the highest dissolution, reaching 32 mg Ni/L out of 200 mg Ni/L, and dissolved rapidly over the first 20 hours.

Figure 2. Comparison of dissolution of three Ni nanoparticles (200mg/L) in 3t3 medium after 0h, 4h, 24h, 48h and 120h.

Finally, Figure 3 summarizes the results of the toxicity assays for two of the three nanomaterials and for NiCl2 as a reference. The percentage of surviving cells for the two Ni nanoparticles was lower than that for NiCl2 except at the highest Ni concentration of 300 mg/L. Among the nanomaterials, hNi@SiO2 resulted in lower survival than nhNi@SiO2.

Figure 3. Percentage of live cells for two Ni nanoparticles and NiCl2 at different concentrations of toxins.

DISCUSSION Since live cells adhere to the bottom of the plate to proliferate, they are exposed to more of the nanoparticles if more, or faster, nanoparticle settling occurs. Because nanoparticles tend to aggregate into larger particles, which accelerates settling [3], it is necessary to investigate the extent of aggregation of the different nanoparticles. However, the similar settling rates of the three Ni nanoparticles suggest that differences in aggregation between them are likely negligible and that the cells were hence exposed to equal doses of toxins, simplifying the analysis of the cellular toxicity assays. More cells survived in the medium with NiCl2 than in the media with Ni nanoparticles, meaning that, at the same toxin concentrations, the nanomaterials show enhanced toxicity. However, the dissolution experiments show that only a fraction of the Ni in the nanomaterials dissolved in the 3t3 medium as Ni2+ ions; thus ionic Ni2+ cannot account entirely for the observed toxicity. Furthermore, although Ni-SiO2 showed the lowest dissolution, the cell assays showed that it was the most toxic of the three Ni nanomaterials. A possible explanation is that macromolecules (such as proteins) can cap the Ni particles sitting on the outer SiO2 surface, whereas they cannot easily pass through the pores of the porous silica and therefore cannot prevent Ni dissolution from the hollow and non-hollow Ni@SiO2 materials. Additionally, nanoparticle uptake could differ between these materials and impact cell viability. We are currently planning further experiments to investigate these hypotheses.

REFERENCES 1. S. Sharifi, S. Behzadi, S. Laurent, M. L. Forrest, P. Stroeve, and M. Mahmoudi. "Toxicity of nanomaterials." Chem Soc Rev 41 (2012): 2323-2343. 2. NIH 3T3 (Mouse embryonic fibroblast cell line) Whole Cell Lysate. http://www.abcam.com/nih-3t3-mouseembryonic-fibroblast-cell-line-whole-cell-lysateab7179.html 3. Rouse, H. Elementary Mechanics of Fluids. 1946.

ACKNOWLEDGEMENT I would like to acknowledge Dr. Veser, the Swanson School of Engineering, and the Office of the Provost for funding this research project, and Dr. Veser and Dr. Banerjee for mentoring me through this research experience.


Dynamic Reactor Simulations of Chemical Looping Combustion Jonathan Hughes1,2, Götz Veser1,2 1 Department of Chemical & Petroleum Engineering, University of Pittsburgh 2 Mascaro Center for Sustainable Innovation, University of Pittsburgh Email: jdh101@pitt.edu Introduction Fossil fuels, including natural gas, remain a critical energy source, but concerns over the environmental impact of rising atmospheric carbon dioxide create a societal demand for reduction of carbon emissions. To facilitate carbon capture and sequestration, Chemical Looping Combustion (CLC) uses a metallic oxygen carrier as an intermediate between the two reactants, which gives inherent separation of combustion products. This can be achieved by a spatial separation, as in fluidized‐bed operation, or by a temporal separation using a periodically‐operated fixed‐bed reactor. However, this transient‐state operation adds an additional layer of complexity to an already intricate network of kinetic pathways and reactor dynamics. The present work develops a computational model to aid in understanding the complexities of a periodically‐operated fixed‐bed CLC reactor. Model The reactor model is set up as a system of partial differential equations in time and across the length of the reactor. A pseudo‐homogeneous plug‐flow reactor model (neglecting radial differences) was assumed since the convective (axial) flow is dominant over diffusion. The model considers nickel oxide on an alumina support as an oxygen carrier, based on previous kinetic models by Iliuta et. al. (2010)1 and Zhou & Bollas (2013)2. The kinetic network of reactions is adapted from the model developed by Iliuta et. al. This model considers 11 reactions, including four combustion reactions and seven catalytic reactions between gas‐phase species. Although Arrhenius equations for the

kinetic parameters are available, the current work focuses on isothermal operation and uses experimental data by Bhavsar & Veser (2013)3 to validate the model results. Gas-Solid Combustion Reactions Methane combustion with nickel oxide is assumed to take place step-wise, with CH4 first reacting to form hydrogen and either carbon monoxide or carbon dioxide, and then H2 and CO further reacting with nickel oxide to form H2O and CO2. Reaction rates are written as a function of the Arrhenius rate constant k0·exp(-Ea/RT), of reactant concentrations cCH4, cNiO, etc., and of the availability of nickel oxide on the surface of the oxygen carrier, ST(1-XNiO), where ST is the total surface area of the oxygen carrier bed and XNiO is the fractional conversion of nickel oxide to nickel. Methane combustion is assumed to be catalyzed by nickel in order to reflect the methane breakthrough curve observed throughout the literature; thus, the nickel concentration is included as a term in the methane combustion reactions. Catalytic Reactions Nickel in its pure metallic form is an excellent catalyst often used in methane reforming and cracking under conditions similar to those of CLC. The present model hence accounts for 7 possible Ni-catalyzed gas-phase reactions. The reaction rates are written according to Langmuir-Hinshelwood-Hougen-Watson (LHHW) kinetics, which consider adsorption and desorption of species on the catalyst active sites. The reaction rate expressions used in this model, which may be found in Iliuta et al. (2010)1, are not included in this abstract.

Table 1. Combustion Reactions
Name | Formula | Reaction Rate [kmol/kg bed*s]
r1 One-site CH4 Combustion | CH4 + NiO → CO + 2H2 + Ni | k0*exp(-Ea/RT)*ST(1-XNiO)*cNi*cNiO*cCH4
r2 Two-site CH4 Combustion | CH4 + 2NiO → CO2 + 2H2 + 2Ni | k0*exp(-Ea/RT)*ST(1-XNiO)*cNi*cNiO*cCH4
r3 CO Oxidation | CO + NiO → CO2 + Ni | k0*exp(-Ea/RT)*ST(1-XNiO)*cNiO*cCO
r4 H2 Oxidation | H2 + NiO → H2O + Ni | k0*exp(-Ea/RT)*ST(1-XNiO)*cNiO*cH2
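As an illustration of the rate forms in Table 1, the sketch below evaluates r1 through r4 for a given gas composition and carrier conversion. The parameter values follow Table 2 (with Ea assumed to be in J/mol), while the temperature, conversion, and concentrations are arbitrary example inputs rather than simulation results.

```python
# Illustrative evaluation of the combustion rate expressions in Table 1.
# Parameter values follow Table 2 (Ea assumed to be in J/mol); the
# temperature, conversion, and concentrations below are example inputs.

import math

R = 8.314      # J/(mol*K)
S_T = 51100.0  # oxygen-carrier surface area, m^2/kg

def arrhenius(k0, Ea, T):
    return k0 * math.exp(-Ea / (R * T))

def combustion_rates(T, X_NiO, c):
    """Return r1..r4 [kmol/(kg bed*s)] for temperature T [K], NiO conversion
    X_NiO, and a dict c of species concentrations."""
    avail = S_T * (1.0 - X_NiO)          # available NiO surface
    r1 = arrhenius(4.66,     77416.0, T) * avail * c["Ni"] * c["NiO"] * c["CH4"]
    r2 = arrhenius(1.31e-4,  26413.0, T) * avail * c["Ni"] * c["NiO"] * c["CH4"]
    r3 = arrhenius(1.098e-4, 26505.0, T) * avail * c["NiO"] * c["CO"]
    r4 = arrhenius(4.18e-3,  23666.0, T) * avail * c["NiO"] * c["H2"]
    return r1, r2, r3, r4

example_c = {"CH4": 0.02, "CO": 0.001, "H2": 0.002, "NiO": 0.5, "Ni": 0.05}
print(combustion_rates(T=1073.0, X_NiO=0.3, c=example_c))
```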



Table 2. Combustion Reaction Rate Parameters
Description (Parameter) | r1 | r2 | r3 | r4
Pre-exponential factor k0 [kmol/kg*s] | 4.66 | 1.31x10-4 | 1.098x10-4 | 4.18x10-3
Activation Energy Ea [J/mol] | 77416 | 26413 | 26505 | 23666
Carrier Surface Area ST [m2/kg] | 51100 | 51100 | 51100 | 51100

The model was programmed using MATLAB, and the partial differential equations were solved using the method of lines: the spatial derivatives were first approximated with a finite difference method, and the resulting system was then integrated over time using the ode15s solver. Results Figure 1 shows the fuel and CO2 concentration profiles along the length of the reactor at 3 time points, demonstrating the flow and conversion of fuel along the reactor as well as over time.

Fig. 1. Simulated reactor profile showing conversion of fuel along the reactor at 3 time points.
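The solution strategy just described (method of lines with finite-difference spatial derivatives and a stiff time integrator) can be sketched as follows for a single convected, reacting species. SciPy's BDF integrator stands in for MATLAB's ode15s; the grid, velocity, rate constant, and inlet concentration are made-up values, and the real model couples all 11 reactions.

```python
# Toy method-of-lines sketch: convection of a fuel species with a simple
# first-order consumption term, integrated with a stiff solver (BDF, the
# SciPy analogue of MATLAB's ode15s). Grid size, velocity, and rate
# constant are arbitrary; the real model couples 11 reactions.

import numpy as np
from scipy.integrate import solve_ivp

n, length = 100, 0.1          # grid points, reactor length [m]
dz = length / n
u, k = 0.05, 5.0              # superficial velocity [m/s], rate constant [1/s]
c_in = 1.0                    # inlet fuel concentration [a.u.]

def rhs(t, c):
    # upwind finite difference for the convective term
    c_upwind = np.concatenate(([c_in], c[:-1]))
    return -u * (c - c_upwind) / dz - k * c

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(n), method="BDF",
                t_eval=np.linspace(0.0, 10.0, 50))
print(sol.y[-1, -1])  # exit concentration at the final time
```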

The experimental data and simulation results are compared in Figure 2. The simulation shows an initial build-up of methane, which is unable to react due to the lack of catalyzing Ni sites. As NiO is slowly converted, the reaction rate reaches a critical point where Ni becomes present in sufficient quantities for combustion to occur rapidly, converting the accumulated methane and producing a sharp peak of complete combustion products (CO2 and H2O). As the nickel oxide is depleted, partial oxidation products (CO and H2) are formed because there is insufficient oxygen for complete oxidation.

Fig. 2. Experimental (dashed)3 and simulated (solid) exit gas composition.

The methane curve notably departs from the experimental data, particularly at the beginning and end of combustion, and the hydrogen curve is sluggish. However, the CO2, H2O, and CO curves are generally in agreement with the experimental data. Conclusions The kinetic model used in this work shows good qualitative and quantitative agreement with the empirical data. Future refinements will address the methane breakthrough curve and extension to non-isothermal operation. Furthermore, a sensitivity study will focus on identifying the critical operating parameters and thus on guiding future experimental investigations. Acknowledgements: Mascaro Center for Sustainable Innovation, Frank & Daphna Lederman. References 1. Iliuta et al. AIChE J. 56, 1063-79, 2010. 2. Zhou, Z.; Han, L.; Bollas, G.M. Chemical Engineering J. 233, 331-48, 2013. 3. Bhavsar, S.; Veser, G. Energy & Fuels 27, 2073-84, 2013.


Towards Understanding Nanoparticle Toxicity Kimaya Padgaonkar1,2, Sharlee Mahoney1, Thomas Richardson1, Ipsita Banerjee1,2,3, Götz Veser1,4 Department of Chemical Engineering1 & Department of Bioengineering2, Swanson School of Engineering; McGowan Institute for Regenerative Medicine3, Mascaro Center for Sustainable Innovation4 Email: kip15@pitt.edu

Introduction Nanotechnology is a fast-growing, powerful field. Material properties at the nano-scale can differ greatly from those of the corresponding macroscopic materials because of the small particle size. Functional nanomaterials have already found application in many popular consumer products such as apparel, cosmetics, and electronics. Yet, along with these novel capabilities, they have the potential to exhibit different properties in other areas, including toxicity1. There is increasing evidence that nanomaterials can exhibit elevated toxicity and, with that, the potential to harm people and the environment. The field requires a more thorough understanding of the effect of nanotechnology on toxicity1-2. This project hypothesizes that synthesizing and analyzing metal nanoparticles with a nontoxic support will create less toxic nanomaterials. The goal is to identify a nanoparticle configuration that is safest by design through comparison of different configurations. For example, industries use metals like nickel, which can be toxic, as catalysts in energy technology. By analyzing the effects of different combinations of toxic nanoparticles with a nontoxic support, we can better ensure that when industries and products utilize nickel and other toxic metals, people and the environment will not be harmed. Methods Our nanomaterials combine nickel and silica. These were chosen because nano-scale nickel (Ni) is known to be unstable and toxic, while amorphous silica (SiO2) is nontoxic and, due to its versatility, is used in a wide range of applications such as biomedical imaging, catalysis, and drug delivery. We therefore hypothesize that embedding nickel in porous silica will mitigate the toxicity of nickel nanomaterials. Embedding the metal in silica could modify transport and interaction of the embedded nanoparticles with their environment while still providing full access to the metal nanoparticle surface. Three novel nickel-silica combinations with varying structures (Figure 1) are synthesized in our lab for these experiments. They are non-hollow nickel embedded within silica (non-hollow), nickel encapsulated in silica (hollow), and nickel

deposited on silica. A reverse microemulsion process is used to produce the nanoparticles. Slight modifications to the procedure give us different physical structures while keeping the same basic building blocks for each nanomaterial. We also use NiCl2 salt, which represents the nickel ions that dissolve from the nanoparticles, in order to test the toxicity of nickel by itself. NiCl2 and SiO2 serve as references against which to measure the results for our nickel-silica combinations.

Figure 1: TEM images of the nanoparticles. In order to test the effects of our nanoparticles in an in vitro model, we employ cell cultures. 3T3 mouse-skin-derived fibroblasts are well suited and well established for in vitro studies because they are inexpensive, robust, and proliferate quickly. Human embryonic stem cells (hESCs), on the other hand, are better suited for comparison to humans; they are unspecialized and proliferate indefinitely. However, because hESCs are a novel in vitro model for toxicity evaluation, our studies have focused on the more established 3T3 fibroblasts before continuing on to in-depth studies with hESCs. Our nanomaterial cell experiments begin with plating 3T3 cells and allowing them to propagate for 24 hours. The nanoparticles are sterilized under UV light for 1 hour and added to a stock medium at the highest desired concentration. From there, the nanoparticle solution is sonicated and then diluted to the necessary concentrations according to the experimental plan. The nanoparticles are then added to the cells, which are exposed to the nanoparticles for 24 hours. Many controls are run alongside the nanoparticles, including cells without nanoparticles,


cells exposed to just the nontoxic silica, and cells exposed to NiCl2. The experiments are then subjected to analysis via several assays and observations, including analysis of cell viability and cell death. Results and Discussion First, it was critical to complete studies on the effects of our nickel salt and silica nanoparticle controls. Post-experiment, we analyzed the results qualitatively and quantitatively. Qualitative analysis used microscopy and showed visual differences between nanomaterials (Figure 2). However, quantitative analysis was necessary to compare the concentrations and nanomaterials more thoroughly.

Figure 2: Confocal images. A. Control, B. SiO2, C. NiCl2, D. Ni-on-SiO2, E. hollow Ni@SiO2, F. non-hollow Ni@SiO2. Assays were used on the cells post-experiment in order to determine the cytotoxicity associated with the concentrations and types of nanoparticles. An MTS assay allowed us to determine cell viability, while an LDH assay allowed determination of percent cell death. As expected, our porous silica is non-toxic, while our nickel salt exhibited toxicity over the concentrations tested (Figure 3A).

Figure 3: MTS Viability Curves for A. NiCl2 & SiO2 B. Ni-on-SiO2, h-Ni@SiO2 & nh-Ni@SiO2
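A minimal sketch of how percent viability is commonly derived from MTS readings: the blank-corrected absorbance of treated wells relative to untreated control wells. The well absorbances, replicate counts, and plate layout below are hypothetical and do not reproduce the actual assay data.

```python
# Sketch of an MTS viability calculation: viability (%) is the
# blank-corrected absorbance of treated wells relative to untreated
# control wells. The absorbance values below are placeholders.

def percent_viability(treated, control, blank):
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (mean(treated) - mean(blank)) / (mean(control) - mean(blank))

blank   = [0.08, 0.09, 0.08]            # medium only
control = [1.10, 1.05, 1.12]            # cells, no nanoparticles
doses_mg_per_L = [5, 10, 50, 100, 200, 300]
treated_wells = [                       # hypothetical triplicates per dose
    [1.05, 1.02, 1.08], [0.98, 1.01, 0.97], [0.85, 0.82, 0.88],
    [0.70, 0.74, 0.69], [0.52, 0.55, 0.50], [0.35, 0.38, 0.33],
]

for dose, wells in zip(doses_mg_per_L, treated_wells):
    print(dose, "mg/L:", round(percent_viability(wells, control, blank), 1), "% viable")
```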

We were able to perform 3T3 cell experiments with the novel complex engineered nanomaterials: non-hollow Ni@SiO2, hollow Ni@SiO2, and Ni-on-SiO2. Interestingly, all of the nanomaterials showed toxic effects on the cells (Figure 3B). The hollow Ni@SiO2 and Ni-on-SiO2 nanoparticles showed elevated toxicity compared to NiCl2, while the non-hollow Ni@SiO2 showed toxicity similar to NiCl2. Conclusion We were able to verify that 3T3 cells are a viable model for nanotoxicity studies. We confirmed that porous silica does not have a toxic effect on 3T3 cells. In our comparison of nickel-silica nanoparticles to nickel ions, we found that the nanoparticles tend to produce higher death and lower survival than the NiCl2 salt control. Based on these results, the nickel-silica nanoparticles intrinsically possess a nano-specific toxicity in addition to the ion-dissolution-related toxicity caused by nickel ions. Future Work Future plans include studying nanoparticle uptake into the cell, testing our nanoparticles on the hESC platform, and continuing analysis of the nanoparticle toxicity mechanism. Another project within our lab uses in vivo zebrafish models; comparing the effects of the nanomaterials at the cellular level with those in a multicellular organism will give us more insight into the effects of the nanoparticles in vitro and in vivo. We are also extending our studies to a physicochemical assessment of the nanomaterials. Combining the results of our toxicity studies with those of our physicochemical studies will allow for structure-toxicity analysis and correlations, which will hopefully lead to identification of the least toxic and most optimal nanomaterial configuration. References [1] Nel, A.; Xia, T.; Mädler, L.; Li, N., Toxic potential of materials at the nanolevel. Science 2006, 311 (5761), 622-627. [2] Oberdörster, G., Nanotoxicology: An Emerging Discipline Evolving from Studies of Ultrafine Particles (vol 113, pg 823, 2005). Environmental Health Perspectives 2010, 118 (9), A380-A380. Acknowledgements: Thanks to the Veser and Banerjee labs and their members for assistance and to the Mascaro Center for Sustainable Innovation for support.


Effects of Geometry on Dispersion Characterization of Transparent and Reflective Materials using White-Light Interferometric Techniques Alexander S. Augenstein(1), Kevin P. Chen(1) and Christopher Taudt(2) 1. Department of Electrical Engineering, University of Pittsburgh, PA, USA 2. Department of Applied Physics, Westsächsische Hochschule Zwickau, Saxony, Germany Email: asa55@pitt.edu

INTRODUCTION Classical optical properties of a transparent material sample are completely characterized by the material's dispersion curve. Simpler static quantities used to compare the optical properties of various materials, such as the nominal refractive index and Abbe number, are popular because they are relatively easy to measure compared to a complete dispersion curve. Delbarre et al. investigated methods of deriving a material's refractive index as a function of wavelength from a set of direct discrete measurements of the material's Group Velocity Dispersion (GVD) [1]. Baselt et al. determined that a material's GVD curve is directly measurable by way of time-frequency domain white light interferometric techniques [2]. Using coefficients obtained from a nonlinear damped least squares fitting algorithm, a third-order Sellmeier polynomial allows for the calculation of an approximate analytic dispersion curve valid over a finite wavelength domain. METHODS The first set of experiments placed a Schott N-BK7 crown glass wedge, and the second a chirped mirror pair (Layertec GmbH Z0109046, Z0109049), both with known dispersion characteristics, in the test arm of a Michelson interferometer. Light from a halogen lamp used as the white light source was coupled into a 50 micrometer fiber and collimated through a planoconvex achromatic lens with a focal length of 10 mm. The beam was split between the reference and test arms by way of a 50/50 beamsplitter. The light in the reference arm passed only through free space and was reflected back into the beamsplitter by a mirror whose distance from the beamsplitter

was adjusted by a ThorLabs TDC001 servo controller connected to a ThorLabs Z812B servomotor with a step resolution of 100 nanometers. In the test arm, either the sample wedge (Qioptiq G334483000) or the chirped mirror pair acted as the dispersive element. The incoming optical beam passed through the test arm and was reflected back into the beamsplitter for recombination with the reference beam. The recombined beams were then coupled into a 400 um output fiber through a planoconvex achromatic lens with focal length 30 mm, placed at a distance of 35 mm from the fiber input in order to maximize the power coupled into the fiber without saturating the spectrometer's detector. A total of six data sets were collected for the N-BK7 wedge: three taken along the face of the material to vary the sample thickness and three more at the nominal width for various incident angles of the beam into the material. Four additional data sets were collected for the chirped mirror experiment at select incident angles between 10 and 45 degrees. DATA PROCESSING The reference arm position corresponding to the white-light interference point for a sample-free test arm was recorded prior to data collection. Inserting the sample increased the optical path length for each wavelength component and therefore produced a different stationary point (known as the equalization wavelength), which was recorded using the Avasoft 8.0 software package for each position of the reference arm as the reference mirror was stepped away from the reference position in 200 nanometer increments.


This information was recorded in MATLAB and fit to the third-order Sellmeier equation in [1] using the Levenberg-Marquardt fitting algorithm provided in the Optimization Toolbox. With the A, B, and C coefficients determined, the material's dispersion curve could readily be calculated by substitution into the equation provided by Delbarre for refractive index as a function of wavelength. RESULTS In the case of the wedge sample experiment, direct measurements of the GVD were calculated back to the material's refractive index within an accuracy on the order of 10-3, as shown in Figure 1, compared to the expected dispersion equation for such a crown glass material given in [3]. The discrete RMS error was between 0.16% and 0.22% in all measurements. This demonstrated that time-domain interferometry can be used to derive the material dispersion curve for isotropic transparent samples independent of the beam path through the sample and the sample geometry. Chirped mirrors are classified as anisotropic media and therefore yielded different dispersion curves as a function of incident beam angle. The results of these tests are summarized in Figure 1 and indicate that the sensitivity of dispersion with respect to wavelength decreases with increasing incident angle.

Figure 1: Time delay as a function of wavelength for incident angles of 10, 20, 30, and 45 degrees corresponding to the purple, green, red, and blue curves, respectively.
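The Sellmeier fitting step described under DATA PROCESSING can be sketched as follows, with SciPy's Levenberg-Marquardt curve_fit standing in for MATLAB's Optimization Toolbox. The "measured" indices here are synthetic values generated from assumed BK7-like coefficients plus noise, and the standard three-term Sellmeier form is used, which may differ in detail from the exact equation in [1].

```python
# Sketch of the Sellmeier fitting step: fit the coefficients of a standard
# three-term Sellmeier equation to (wavelength, refractive index) data with
# a Levenberg-Marquardt least-squares routine. The "measured" data below are
# synthetic (BK7-like coefficients plus noise), not the experimental values.

import numpy as np
from scipy.optimize import curve_fit

def sellmeier(lam_um, B1, B2, B3, C1, C2, C3):
    """n(lambda) from the standard three-term Sellmeier equation."""
    l2 = lam_um ** 2
    n2 = 1 + B1 * l2 / (l2 - C1) + B2 * l2 / (l2 - C2) + B3 * l2 / (l2 - C3)
    return np.sqrt(n2)

true_coeffs = [1.0396, 0.2318, 1.0105, 0.0060, 0.0200, 103.56]   # BK7-like, for demo
lam = np.linspace(0.45, 0.85, 25)                                # wavelengths [um]
rng = np.random.default_rng(1)
n_meas = sellmeier(lam, *true_coeffs) + rng.normal(0.0, 1e-4, lam.size)

p0 = [1.0, 0.2, 1.0, 0.01, 0.02, 100.0]                          # initial guess
fit_coeffs, _ = curve_fit(sellmeier, lam, n_meas, p0=p0, method="lm", maxfev=50000)
print(np.round(fit_coeffs, 4))                                   # recovered B1..B3, C1..C3
```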

DISCUSSION Finding that the dispersion curve for a glass wedge could be reconstructed within an RMS error of 2.2 x 10-3 agrees with the accuracy of the fit procedure in [1], which states that such a procedure will yield an approximate dispersion curve with root mean square error on the order of 10-3. With the procedure tested and proven on a glass sample, the more complex case of anisotropic dispersive media was tested using chirped mirrors. These mirrors were designed with the intention of exhibiting negative dispersion but were only characterized by the manufacturer at small angles (less than 19 degrees). The data shown in Figure 1 demonstrate that the mirrors indeed exhibited negative dispersion, and that increasing the incident angle decreased the negatively dispersive properties of the mirror pair. This is a logical result, as chirped mirrors rely on reflection based on periodic media theory, and changing the incident angle into such a planar periodic medium monotonically increases the spatial period of the medium as seen by the beam [4]. REFERENCES 1. Delbarre et al. Appl. Phys. B 70, 45-51, 2000. 2. Baselt et al. Appl. Optics 25, 32-37, 2011. 3. CRC Handbook of Chemistry and Physics, Section 10, Refract. Index and Transmittance of Representative Glasses, ed. 95, 2014-2015. 4. Saleh, Bahaa E. A., and Teich, Marvin C. Fundamentals of Photonics. 2nd ed. Wiley-Interscience, 2007. Print. ACKNOWLEDGEMENTS Many thanks to my supervisor and colleagues at the Zwickau University of Applied Sciences for their continued financial and invaluable intellectual support, as well as to my professor Kevin P. Chen for making this international research experience possible.


DOMAIN-WALL MEMORY BUFFER FOR LOW-ENERGY NoCs Donald E. Kline Jr., Rami Melhem, and Alex K. Jones University of Pittsburgh, PA, USA Email: dek61@pitt.edu INTRODUCTION Non-volatile memory (NVM) solutions such as phase-change memory (PCM), spin-transfer torque magnetic memory (STT-MRAM), and spintronic racetrack memory represent incredible potential for energy reduction in virtually all computer memory applications. However, replacing traditional memory with NVMs in order to save energy poses a significant performance challenge due to longer access times, particularly for writes [4]. Racetrack memories increase density over STT-RAM by storing multiple bits, separated by domain walls, on a single nanowire; these bits can be shifted to share a single access point. Thanks to DWM-TAPESTRI [1], racetrack writes can now be performed using shifting, reducing the write latency to 0.5 ns [1]. Utilizing this recent development, I propose several methods of using racetrack memory to implement virtual channel buffers for a network-on-chip (NoC) in a 4 GHz multicore system with a 1 GHz network clock. The best racetrack scheme demonstrates a 4.3% speedup over a baseline racetrack FIFO memory control scheme with a 53% savings in energy. Compared to traditional SRAM, it exhibits an 8.2% latency degradation but a 58.6% energy reduction. RACETRACK IMPLEMENTATION Traditional SRAM virtual queue implementations have a read pointer to the next location to read out from the virtual queue and a write pointer, which holds the next location to write into. These pointers increment after their respective operations are performed. The baseline racetrack implementation uses this idea and uses shifting to align the read and write positions with the access point (read/write head). However, this technique does not leverage the natural shifting capability of the racetrack. The other proposed racetrack schemes handle data flow differently. Because of a 0.5 ns write time and a 0.5 ns shift time [1], a cycle can be used to write in a flit of data and then shift it one position to the right. Thus, a single write head is positioned at one end of

the racetrack, and one or more read heads are distributed along the nanowire, allowing independent access to data. Thus, the buffer acts more like a shift register than a traditional FIFO. In addition, the racetrack control schemes developed make certain that the data stays contiguous, which guarantees that the racetracks can always be used to capacity. IMPLEMENTATION OVERVIEW In order to test different schemes, MIT's multicore simulator HORNET [2] was used to simulate the NoC and compute both the average flit latency across the network and the energy consumption of the virtual channels. The default wormhole switching mesh network from HORNET was augmented to control the racetrack and to delay reads and writes if a racetrack was not in a legal position for the required access. In addition, peripheral circuitry power calculations for SRAM and the different racetrack control schemes were analyzed using NVSim. Furthermore, Sniper [3] was used to generate traces of the PARSEC benchmark suite, and these traces were run in the modified HORNET simulator. The tests were performed on a simulated 64-core network using one-queue-per-flow o1-turn routing. In addition, each ingress port connecting a core to its neighbor has eight virtual channels, with queues eight flits long. Each of the racetrack implementations contains four equally spaced read ports with a single write port and a single-flit SRAM buffer to store the head flit. The exception is the paired virtual queue scheme, which has two racetracks, each four flits long with two read heads and one write head, and no SRAM buffer. Also, the no-buffer configuration is virtually equivalent to the rt1peek configuration, except it does not have an additional SRAM buffer.
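A highly simplified sketch of the read-alignment cost in the shift-register-style buffer just described: one write head at position 0 and fixed read heads along the nanowire, with reads requiring shifts to bring the target flit under the nearest head. The head positions and the cost accounting are illustrative assumptions; the sketch does not model timing, the SRAM head buffer, or the contiguity logic of the actual control schemes.

```python
# Back-of-the-envelope model of read alignment cost in a shift-register
# style racetrack buffer: a write head at position 0 and read heads at
# fixed positions. The model only counts how many shift operations are
# needed to bring a flit stored at a given position under the nearest
# read head; it does not model the data movement or contiguity logic.

READ_HEADS = (1, 3, 5, 7)   # assumed head positions along an 8-flit nanowire

def shifts_to_read(position, heads=READ_HEADS):
    """Shift count to align the flit at 'position' with the closest read head."""
    return min(abs(position - h) for h in heads)

# Worst-case and average alignment cost over all 8 storage positions
costs = [shifts_to_read(p) for p in range(8)]
print("per-position shift cost:", costs)
print("worst:", max(costs), "average:", sum(costs) / len(costs))
```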


the speed of the baseline SRAM. On average, the SRAM-Half configuration had a 1.7% increase in packet latency. The baseline and rt1peek racetrack schemes had 12.8% and 10% increases in packet latency, respectively. With the SRAM buffer removed, the rt1 scheme had a 73.3% increase in latency, while the paired virtual queue scheme had an 8.2% increase. This is enumerated in Figure 1.

Figure 1: Average per-flit latency normalized to SRAM latency of traces generated from a Sniper simulated core operating at 4 GHz with a 1 GHz network.

As can be seen above, the addition of an SRAM buffer to most racetrack control schemes can drastically improve performance (rt1peek latency improves 57.6%). However, the cost of this SRAM buffer can be clearly seen in the power results (see Figure 2). On average, SRAM-Half had an 11% power reduction, the naïve scheme had an 11.9% power reduction, rt1peek had a 14.1% power reduction, and both rt1peek without an SRAM buffer and the paired virtual queue scheme had a 58.6% power reduction over SRAM. Given their 8 to 10 percent performance loss relative to SRAM-Half in exchange for only a small power savings, neither the naïve scheme with the buffer nor rt1peek with the buffer is a viable scheme. Without the buffer, these two schemes both lose over 73% in performance, which makes them difficult to justify.

Figure 2: Total energy consumption by virtual queues, normalized to SRAM virtual queue energy consumption, for traces generated from a Sniper simulated core operating at 4 GHz with a 1 GHz network.

DISCUSSION While the inherent composition of racetrack memory can result in a significant energy reduction relative to traditional SRAM, a direct replacement of SRAM with racetrack memory results in significantly reduced performance (over a 74% increase in message latency for the baseline implementation without an SRAM buffer, and 12.8% on average with an additional SRAM buffer). That is why, in order for racetrack memory to practically replace SRAM in virtual channel buffers without a drastic drop in performance, more advanced schemes are necessary. One such strategy that demonstrated its viability through this experiment was the paired virtual queue scheme, which on average across the PARSEC benchmarks had a 4.3% performance improvement over the baseline scheme with an SRAM buffer, as well as a 57.6% power improvement over the SRAM scheme. While HORNET is a highly accurate cycle-level simulator, the traces provided to it by Sniper do not reflect the added delays of the network as they develop. In other words, a more reliable measurement of latency and energy could be generated if a simulator with feedback was used, such as one currently in development by another member of Dr. Jones' research lab. We are currently connecting the simulators to generate a full system simulation. Also, a possible avenue for expansion would involve generating configurations which require a reduced logical overhead. REFERENCES 1. Venkatesan et al. "DWM-TAPESTRI - An energy efficient all-spin cache using domain wall shift based writes," DATE 2013, pp. 18-22. 2. Pengju Ren, et al. "HORNET: A Cycle-Level Multicore Simulator," IEEE TCAD 2012, vol.31, no.6, pp.890,903 3. Carlson et al. "Sniper: Exploring the level of abstraction for scalable and accurate parallel multi-core simulation," SC, 2011 pp.1,12, 12-18 Nov. 2011. 4. Stuart S. P. Parkin, et al. “Magnetic Domain-Wall Racetrack Memory” Science, April 2008: 320 (5873), 190-194. 5. Xiangyu Dong et al. "NVSim: A Circuit-Level Performance, Energy, and Area Model for Emerging Nonvolatile Memory," TCAD 2012 vol.31, no.7, pp.994,1007

ACKNOWLEDGEMENTS

I would like to thank Dr. Jones for his guidance and funding for the project, Fan Chen for the calculation of the peripheral circuitry numbers, Haifeng Xu for generating the Sniper traces and helping determine the appropriate energy parameters, and Michael Moeng for his advice on using Hornet.


EFFECTIVE SCIENTIFIC COMPUTING ON ANDROID BASED MOBILE DEVICES Daniel J Wright Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: djw67@pitt.edu INTRODUCTION The capability of embedded processing available in palmtop computing continues to increase. While such systems are not yet capable of large scale scientific computing, palmtop computers can now be used for on-board computing in a variety of applications such as smart data-loggers, processing hubs for wireless sensor networks, etc. However, exploiting this processing capability requires deep knowledge of embedded programming environments and often details of the underlying computing architecture. To be useful in these domains, it is valuable to provide a comfortable development environment such as MATLAB. This project provides a MATLAB-like interface to develop software for palmtop computing, along with a subset of useful scientific and data processing functions that are highly tuned for deployment on palmtop computers running the Android operating system, while maintaining uniform performance and functionality across diverse platforms. This has been investigated through the use of different APIs on the Android operating system that natively use any available hardware on the device, offloading work from the Dalvik virtual machine (VM). METHODS The study was conducted on two separate ARMv7 Android devices running Android 4.4.2 KitKat. The SoCs used are the Qualcomm Snapdragon 800 and Nvidia Tegra 3. ARM based devices were used due to their ubiquity in the mobile device market, as well as their well-known compatibility with Android development tools. For comparing performance, implementations of functions were written in Java, C, C++, and the RenderScript API. To keep results uniform, KitKat's experimental Android Runtime was disabled, allowing both devices to run on the VM to ensure consistency. Because of Android's VM nature, C and C++ functions were invoked from Java, which was done utilizing the Java Native Interface (JNI).

The source code for both the C and C++ implementations was compiled with GCC 4.9 with the arm-linux-armeabi toolchain to convert the binaries to the Android ABI. Both the compiler and toolchain come standard with the latest version of the NDK (Native Development Kit), Revision 10. The C++ library chosen was STLport to support the C++ RenderScript API and C++ exceptions. All but one of the compiler flags were used for linking; the additional flag was for supporting ARMv7 NEON instructions, which have been shown to improve floating-point operations [1]. On x86, NDK Revision 10 can emulate roughly 47% of the NEON instructions [2]. RenderScript is a framework created by Google and is an analogue to OpenCL. Kernels, which specify an operation to perform on each element, are written and then parallelized across any available CPU, GPU, or DSP cores on the device. The details of how Android does this can be found in Google's RenderScript documentation. RenderScript has already shown strong performance, with the added bonus of low power usage [3]. To allow the user to invoke implemented functions arbitrarily at runtime, each function was added to a hash table and called via Java's Reflection API in a simple prototype console user interface. In order to use the Reflection API, all JNI and RS (RenderScript) code was wrapped in a Java layer, which is the top level component of the project. DATA PROCESSING The performance test is a simple element-wise scalar exponentiation of single-precision floating-point values in square matrices of dimensions 125, 500, 1250, and 2500, generated by Java's pseudorandom number generator. The data returned from each function implementation was compared to MATLAB output for accuracy. This test was chosen due to its prevalence in many


algorithms, meaning that results from this test will correlate to other algorithms. For an accurate reading of the time to complete each run, the functions were profiled using the Dalvik Debug Monitor Service. The working threads were paused for 3 seconds between each function call to isolate the performance effects from garbage collection, which has been shown to greatly impact Java performance [4]. RESULTS The results of the test were similar for size 125. As shown in Table 1, both devices performed similarly in each category for smaller sizes. For size 500, the Snapdragon's Java RenderScript implementation scaled 7.62 times faster than single-threaded Java, while the Tegra 3's only scaled 4 times. Both devices did as well as, and sometimes better than, the quad-threaded Java method. Overall, the runtime over all 4 tests shows that the Snapdragon ran all of the tests in under a fifth of the time using RS, compared to single-threaded Java. The Tegra 3 scaled such that it ran in a third of the time. Figure 1 shows the scaled performance of RS and Java.

It is worth noting that the two devices' RenderScript performance diverged. This deviation is presumed to be due to arbitrary latency in making calls to various pieces of hardware. Both devices saw an increased deviation when calling the JNI, since there is added overhead to system calls, which will only be scheduled to execute when the operating system is ready. Unfortunately, the NDK RenderScript API was shown to be very detrimental to performance in the test, due to the overhead of creating RenderScript objects and invoking the JNI. DISCUSSION This study shows a glimpse into the capability of palmtop computing on the Android operating system. The open source nature of Android allows developers to understand the operating system at the lowest levels. Android's Linux nature also allows many developers to be familiar with the system, and the NDK toolchain allows for previously written libraries to be compiled and linked with Android compatible binaries. REFERENCES 1. Joško Rokov, “ARM Architecture and Multimedia Applications,” University of Zagreb 2. Google, http://developer.android.com/tools/sdk/ndk/index.html

3. Roelof Kemp, Nicholas Palmer, Thilo Kielmann, Henri Bal “Using RenderScript and RCUDA for Compute Intensive tasks on Mobile Devices: a Case Study,” VU University 4. Xi Qian, Guangyu Zhu, Xiao-Feng Li “Comparison and Analysis of Three Programming Models in Google Android,” Intel

Figure 1: Runtime of the two devices (in milliseconds).

ACKNOWLEDGEMENTS Funding was provided through the Swanson School of Engineering and Office of the Provost. I thank Dr. Alex Jones for this opportunity and Marco Marson for his assistance in the project.

Table 1: Performance times (ms) for the given matrix sizes; in each cell the first value is the Snapdragon result and the second is the Tegra 3 result.
Invoked from | 125 | 500 | 1250 | 2500
Java Single | 11.8±1.87 / 12.5±2.12 | 206.6±19.59 / 225.9±4.31 | 999.6±26.84 / 1036.6±20.72 | 5104.3±69.46 / 4101.1±16.47
Java Quad | 9.8±1.87 / 11.3±1.64 | 102.7±26.04 / 150.9±18.88 | 299.5±28.82 / 304.5±20.42 | 1134.8±28.21 / 1355.1±323.39
Java RenderScript | 6.4±1.65 / 6.4±0.97 | 27.1±7.96 / 56.6±5.13 | 319.6±31.50 / 437.4±54.56 | 857.2±32.60 / 1234.9±231.14
NDK | 7.3±0.48 / 8.6±0.52 | 88.3±2.31 / 117.9±3.14 | 565.6±22.17 / 715.2±13.89 | 2394.5±202.61 / 2836.1±65.72
RenderScript NDK | 9.6±0.52 / 11.1±0.88 | 94.6±2.88 / 86.6±3.24 | 780.1±67.40 / 557.0±50.79 | 1156.6±21.44 / 1417.7±30.54
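For reference, the correctness check described under DATA PROCESSING (element-wise exponentiation of a pseudorandom single-precision square matrix, compared against a trusted implementation) can be sketched on the host as follows. NumPy stands in for the MATLAB reference used in the study; the exponent, seed, and tolerance are illustrative choices.

```python
# Host-side sketch of the benchmark kernel: element-wise scalar
# exponentiation of a single-precision square matrix. NumPy stands in for
# the MATLAB reference used in the study; exponent and seed are arbitrary.

import numpy as np

def elementwise_pow(matrix, exponent=2.0):
    return np.power(matrix, exponent, dtype=np.float32)

rng = np.random.default_rng(42)
for n in (125, 500, 1250, 2500):
    m = rng.random((n, n), dtype=np.float32)
    out = elementwise_pow(m)
    # a device implementation would be compared element-by-element against
    # a reference like 'out' within a small float32 tolerance
    assert np.allclose(out, m * m, rtol=1e-5)
    print(n, out.shape, out.dtype)
```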


SIMULATION OF A GRADED BULK HETEROJUNCTION ORGANIC SOLAR CELL Christian Bottenfield, Fanan Wei, Guangyong Li Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: cgb17@pitt.edu, gul6@pitt.edu INTRODUCTION Recently, University of Michigan researchers developed a novel method of fabricating highly interpenetrating polymer networks in thin film bulk heterojunction (BHJ) organic solar cells (OSCs). This fabrication method, succinctly called the ESSENCIAL approach, creates a graded blend of photovoltaic polymers that has improved device efficiencies to 4.71%, outperforming other solar cells fabricated by conventional methods [1]. Although the ESSENCIAL method has produced better results, it is not fully understood why the resulting solar cell morphology facilitates enhanced efficiencies. Identifying through simulation why this fabrication method's resulting morphology is superior will guide further research into developing solar cells exhibiting higher efficiencies. Previous work has utilized a multi-faceted approach to achieve accurate device simulation through Monte Carlo, optical, and drift-diffusion model simulations [2-4]. Our simulation follows this strategy with significant modification to achieve the modeling of graded bulk heterojunction OSCs as fabricated by the ESSENCIAL method. This work identifies key improvements of the ESSENCIAL fabrication method over conventional methods. SIMULATION The simulation is composed of three components: (1) the optical simulation, (2) the Monte Carlo simulation, and (3) the electrical simulation. Together these components create a multi-scale simulation, working from principles on the nano-morphological scale to the electrical device scale. The key to accurately simulating this graded bulk heterojunction structure is in capturing the variation of material properties throughout the device. Previous simulations, assuming no variation of material parameters, used single material parameters for the bulk of the active layer. The proposed simulation introduces multiple nano-scale

layers to account for variation in the active layer and to investigate the enhanced processes leading to higher efficiencies in the ESSENCIAL morphology than typical bulk heterojunction morphologies. The device architecture used in our simulation is shown in Figure 1.

Figure 1: Device architecture and simplification of the active layer into several nano-scale regions of varying compositions.

The optical simulation determines the absorption rate in the active layer, which ultimately contributes to information about the generation rate of carriers. Thus, it is directly related to the current generated by the solar cell. The simulation must take into account reflection and refraction, interference, absorption, and the intensity of the incident light. Absorption in the device is calculated by optical transfer matrix theory rather than the well-known Beer-Lambert law, because the former accounts for optical interference. The simulated color map of the absorption in the solar cell is shown in Figure 2. The Monte Carlo simulation is unique due to its ability to generate realistic morphologies that are only partly accounted for by the optical simulation and entirely ignored by the electrical simulation. The Monte Carlo simulation serves to calculate two important values: (1) the carrier mobilities and (2) the exciton dissociation efficiencies (EDEs). The electron and hole mobilities were calculated as functions of the electric field, which was determined from the built-in voltage created by the difference in work functions of the electrodes [5].


Figure 2: Absorption as a function of wavelength and depth.

The electrical simulation uses the drift-diffusion model, involving the coupled continuity equations and the Poisson equation. To solve these three equations simultaneously, numerical iteration is necessary, which requires the equations to be normalized, discretized, and linearized. RESULTS With the full drift-diffusion simulation developed, the J-V curves, efficiency, open-circuit voltage, fill factor, and short-circuit current density may be calculated. The resulting data for the ESSENCIAL and normal BHJ morphologies are shown in Table 1 and compared with the experimental data from the work of J. Guo. Although the simulation shows some deviation from the experimental data, both the simulated and experimental data show increased short-circuit current, fill factor, and efficiency for the ESSENCIAL morphology relative to its normal BHJ counterpart. DISCUSSION The true insights into the superiority of the ESSENCIAL approach are found in the Monte Carlo simulation. Perhaps this is not surprising, since it generates the true morphology for the larger multi-scale simulation. From the Monte Carlo simulation we observed decreased recombination resulting from the increased morphological order of the ESSENCIAL approach. Recombination directly affects the collection efficiencies; thus the increased short-circuit current is accounted for. Also, the Monte Carlo results reported increased hole mobility; the higher hole mobility further contributes to the increased short-circuit current.
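A minimal sketch of how the figures of merit in Table 1 (Jsc, Voc, FF, PCE) can be read off a J-V curve. The voltage grid and the diode-style current expression below are placeholders for the drift-diffusion output, and an incident power of 100 mW/cm^2 (AM1.5G) is assumed.

```python
# Extracting Jsc, Voc, FF, and PCE from a J-V curve. The curve below is a
# toy diode-style placeholder for the drift-diffusion output; the incident
# power is assumed to be 100 mW/cm^2 (AM1.5G).

import numpy as np

V = np.linspace(0.0, 0.7, 701)                    # applied voltage [V]
J = -13.0 + 1e-6 * (np.exp(V / 0.026) - 1.0)      # current density [mA/cm^2], toy model

def jv_metrics(V, J, p_in=100.0):
    jsc = -np.interp(0.0, V, J)                   # short-circuit current density
    voc = np.interp(0.0, J, V)                    # open-circuit voltage (J crosses 0)
    power = -J * V                                # delivered power density [mW/cm^2]
    p_max = power.max()
    ff = p_max / (jsc * voc)                      # fill factor
    pce = 100.0 * p_max / p_in                    # power conversion efficiency [%]
    return jsc, voc, ff, pce

print(jv_metrics(V, J))
```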

Overall, this work, while creating a multi-scale simulation that captures vertical variation of active layer composition, has identified the sources of the ESSENCIAL morphology's improvements over typical BHJ morphologies as increased hole mobility and decreased recombination. Understanding the improvements of the ESSENCIAL fabrication method leads to a deeper understanding of morphology's important role in solar cell performance. This fundamental research in device physics is the backbone of driving solar cell technologies to be more efficient, economical, and viable, requirements that solar cells must satisfy before they can seriously contend with dominant nonrenewable energy sources. REFERENCES 1. Park et al. Adv. Energy Mater., 3, 9, 1135-1142, 2013. 2. Wei et al. IEEE J. Photovoltaics, 3, 1, 300-309, 2013. 3. Liu et al. Dissertation, University of Pittsburgh, 2011. 4. Li et al. IEEE J. Photovoltaics, 2, 3, 320-340, 2012. 5. Häusermann et al. J. Appl. Phys., 106, 104507, 2009. ACKNOWLEDGEMENTS This project was funded by the Swanson School of Engineering and the Office of the Provost of the University of Pittsburgh and would not have been possible without their support during the 2014 summer. Significant contributions to this research project were also made by Jay Guo (Department of Electrical Engineering and Computer Science, University of Michigan, MI, USA, Email: guo@umich.edu) and Hui Joon Park (Division of Energy Systems Research, Ajou University, South Korea, Email: huijoon@umich.edu).

Table 1: Comparison of simulation results to experimental results of the non-ESSENCIAL and ESSENCIAL morphologies.

Type | Jsc (mA/cm^2) | Voc (V) | FF (/100) | PCE (%)
non-ESSENCIAL (Experiment) | 9.38±0.44 | 0.59±0.00 | 58.96±1.20 | 3.27±0.17
non-ESSENCIAL (Simulation) | 12.160 | 0.500 | 56.78 | 3.480
ESSENCIAL (Experiment) | 13.83±0.52 | 0.51±0.01 | 66.98±5.05 | 4.71±0.36
ESSENCIAL (Simulation) | 13.27 | 0.52 | 67.13 | 4.66


Case Study for Sustainable Building Modeling on a University Campus S.P. Cortes, T.E. McDermott Mascaro Center for Sustainable Innovation, Electrical and Computer Engineering Department University of Pittsburgh, PA, USA Email: spc34@pitt.edu INTRODUCTION From affecting the environment to affecting the health of people all around the world, buildings hold a significant role in communities on a global scale. Considering that approximately 5 billion square feet are built each year [1] and that Americans spend most of their time indoors, constructing more efficient buildings and understanding building impacts on the environment and on people’s health is increasingly vital. The primary aim of this study is to understand the importance of buildings, modeling performance and efficiency, and consider alternative energy conservation measures (ECMs) towards design and renovation. Particularly, the significance of verifying simulation results with actual data and the impact of integrating ECMs into the model of the Mascaro Center for Sustainable Innovation (MCSI) will offer a case study that can be generalized to other buildings. LEED (Leadership in Energy and Environmental Design) is a certification program that increases awareness and promotes greener buildings. One way to find an optimal, green design is through building modeling. EnergyPlus is a prominent whole building simulation program created by the Department of Energy (DOE). EnergyPlus allows for the optimization of building designs towards less energy and water consumption [2]. MCSI is a 3-story extension of the 12-story Benedum Engineering Hall, and has achieved LEED Gold certification. A particular ECM that can offer a greener solution for MCSI is a Building Integrated Photovoltaic (BIPV) Façade as it can serve as glazing for buildings while producing usable energy. METHODS The study consisted of an EnergyPlus model that had been built in a previous study by DeBlois [3]. The model had been calibrated for both the hourly and monthly methods as detailed in another study [5]. The results were compared to metered consumption data [6] and determined to meet Guidelines from the American Society of Heating, Refrigeration, and Air Conditioning Engineers [3]. More data has been made available from the electric meter since the verification of the model detailed by DeBlois. Thus, prior to any other investigation into the MCSI model,

the outputs for lighting and electric equipment loads from the EnergyPlus simulation were compared to measurements obtained from the sensor meters. This allowed for increased confidence that the model continues to be representative of the actual building performance. MCSI's scorecard for LEED was then examined [4]. One particular category under which MCSI could have improved (only having scored 4 points out of 14 possible) was Energy & Atmosphere. In particular, on-site renewable energy could be credited one point. Ultimately, this would not have made a significant difference in the final certification level since 10 more points, under the LEED v2.0 Core & Shell system, would have been needed for MCSI to be awarded LEED Platinum. Nonetheless, it demonstrates how various alternative measures can be taken to improve building sustainability through the examination of LEED standards that can be applied to the building in question. Two separate BIPV façades, ECMs that would provide onsite renewable energy to MCSI, were individually integrated into the EnergyPlus model. Spectral (i.e. glazing) and electrical properties from the datasheets for two commercial products [7, 8] were used to simulate each façade. For this preliminary investigation, a simple glazing system was used in the EnergyPlus model to completely replace the prior exterior window constructions. DATA PROCESSING The metered electric energy for the lights and electric equipment was averaged for each hour across each month for each year from Feb. 2012 to Jun. 2014. The resulting data were graphed and the respective EnergyPlus output was overlaid on the same plot (Fig. 1). Integrating the BIPV façades into all of the exterior windows of the model (i.e. the windows in the first through third wings and second floor tower of Benedum Hall) resulted in the generation of electricity that could be used to help serve the building energy needs. Additionally, due to the glazing property of the BIPV windows, the need for cooling in MCSI was expected to decrease, further reducing the electricity consumption of MCSI.
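A minimal sketch of the averaging behind the meter-versus-model comparison: hourly lights-and-equipment readings are grouped by year, month, and hour of day and averaged, then set against the corresponding EnergyPlus output. The synthetic data frame, column name, and 5% offset below are placeholders for the real meter readings and simulation output.

```python
# Sketch of the meter-vs-model comparison: average the hourly lights +
# equipment load for each hour of the day within each month, then line it
# up with the EnergyPlus output. The data here are synthetic placeholders.

import numpy as np
import pandas as pd

idx = pd.date_range("2012-02-01", "2012-03-31 23:00", freq="h")
rng = np.random.default_rng(0)
meter = pd.DataFrame({"timestamp": idx,
                      "lights_equipment_kWh": 60 + 30 * rng.random(len(idx))})
# stand-in for the simulation output (here just the metered data shifted 5%)
model = meter.assign(lights_equipment_kWh=meter["lights_equipment_kWh"] * 1.05)

def hourly_profile_by_month(df, col="lights_equipment_kWh"):
    """Average the load for each hour of the day within each year and month."""
    ts = df["timestamp"]
    keys = [ts.dt.year.rename("year"), ts.dt.month.rename("month"),
            ts.dt.hour.rename("hour")]
    return df.groupby(keys)[col].mean()

comparison = pd.DataFrame({"metered_kWh": hourly_profile_by_month(meter),
                           "simulated_kWh": hourly_profile_by_month(model)})
comparison["pct_diff"] = 100 * (comparison["simulated_kWh"]
                                - comparison["metered_kWh"]) / comparison["metered_kWh"]
print(comparison.head())
```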


RESULTS The meter data for 2012 and early 2013 were very similar to the results from the simulation, because the meter data during that time were used to make the schedules for EnergyPlus. Thus, the significance of evaluating more recent data not included in the original verification (end of 2013 and first half of 2014) is to ensure that the input into the model continues to represent the actual building schedules. From Fig. 1, it is evident that this is the case.


Figure 1: The lights and electric equipment loads from the electric meter for 2012, 2013, and first half of 2014 were compared to EnergyPlus outputs.

The cooling results for both BIPV façades were as expected (Table 1); additionally, the heating load was greater for both BIPV scenarios. It is suspected that neither of the products simulated is as effective at thermal insulation as the existing windows. However, the thermal insulation was not further investigated in this study and remains a subject for future research. The heating load increase was especially large for the case with Suntech See Thru solar panels, which resulted in an overall increase in annual energy use for the Suntech See Thru panels. The increased heating load is offset in the model with the PVGU windows, resulting in an annual reduction of energy.

Table 1: Annual Energy Results for Integration of BIPVs
(all values in kWh) | Original | Suntech See Thru | General PVGU
Cooling | 337696.25 | 329952.76 | 334633.86
Heating | 981895.03 | 1235390.51 | 1029675.46
Photovoltaic Energy Generated | 0 | 30858.59 | 66243.85
Net Site Energy Use | 2511688.78 | 2711662.95 | 2481275.98
Energy Reduction from Original | 0 | -199974.17 | +30412.8

programs are used to determine ECMs that will result in optimal building performance. Inaccurate or incomplete monitoring can hinder successful validation of a model. Through the study of two types of BIPV windows, the importance of having several alternative ECMs and of evaluating the overall picture was highlighted. Heating is a significant component in the building energy use. An increase in the annual heating load was observed when either BIPV window was integrated into the building model. In an effort to further explore the effect of building heating, the total heat gain, total heat loss, and transmitted solar radiation of the windows should be analyzed. Furthermore, the impact of different window units – due to their unique spectral properties – on heating should be investigated before making a final decision on ECMs. Another consideration is that simple systems were used to model the glazing and PV properties of the BIPV window. Using more accurate and complex models could yield a deeper understanding of the reductions in cooling and gains in heating. Economics play a major role in implementing ECMs. Thus, since costs of the solar cells were not considered in this study, future studies should consider lifecycle cost analysis. REFERENCES

1. “Solution: The Building Sector.”Architecture 2030. [Online].

2. 3. 4. 5. 6.

7. 8.

2511688.78

2711662.95

2481275.98

0

-199974.17

+30412.8

DISCUSSION This study has demonstrated the importance of understanding building impact, sustainable measures that can be applied to buildings, and how building simulation

Available: http://www.architecture2030.org/the_solution/buildings_solut ion_how/. “EnergyPlus.” U.S. Department of Energy. [Online]. Available: http://apps1.eere.energy.gov/buildings/energyplus/ J. Deblois. “Building Energy Modeling for Green Architecture and Intelligent Dashboard Applications.” PhD dissertation, Dept. Mech. Eng., Univ. Pittsburgh, 2013. “University of Pittsburgh Benedum Hall, LEED BD+C: Core and Shell (v2.0).” U.S. Green Building Council. Scorecard. Raftery, P., M. Keane, and A. Costa, Energy and Buildings, 2011. 43(12): 3666-3679. Collinge, W.O., et al., “Measuring Whole-Building Performance with Dynamic LCA: A Case Study of a Green University Building in International Symposium of Life Cycle Assessment and Construction,” 2012: Nantes, France. “Suntech See Thru BIPV Solar Modules.” Arcman Solar Power. [Online]. Available: http://www,arcmansolar.com/catalog/7-5-13.aspx. “Technology Overview.” Pythagoras Solar. [Online]. Available: http://www.pythagoras-solar.com/technologysolutions/technology-overview-energy-efficient-windows/.

ACKNOWLEDGEMENTS The contribution of W. O. Collinge is gratefully acknowledged for the immense amount of time he spent and for the help he provided throughout this research. This study was supported by the Mascaro Center for Sustainable Innovation and the Bevier Foundation.


SURGE GENERATOR DESIGN FOR ELECTRIC POWER SYSTEMS LAB
Zachary T. Smith1, Michael R. Doucette1, James D. Freeman1, Ansel Barchowsky2, Brandon Grainger, PhD, Gregory F. Reed, PhD, and Daniel J. Carnovale
Department of Electrical and Computer Engineering / Electric Power Initiative
University of Pittsburgh, PA, USA
Email: zachary.t.g.smith@gmail.com / bmg10@pitt.edu
(1 - Undergraduate Researchers; 2 - Graduate Student Aid)

INTRODUCTION
This project's purpose is to create a surge generator that will deliver a surge comparable to a lightning strike. The surge generator will be used for demonstrative purposes in The University of Pittsburgh's Electric Power Systems Lab (EPSL). The present cost of a surge generator with the desired capacity can range anywhere from $30,000 to $100,000. The objective of this work is to produce a similar surge generator with a budget of around $5,000. Design and construction of the surge generator has been performed by two undergraduate research groups, and funding was provided by Eaton Corporation. The generator's design includes a boost converter, which charges a high power capacitor to 6kV. Once the capacitor is fully charged, the user is able to send a signal to a high voltage relay to allow the capacitor to discharge. This discharge is coupled to a load (typically a 60W light bulb) and behaves like a lightning strike across the load. Capacitors and inductors are used to decouple the load from the 120VAC power supply, which prevents the surge from propagating to the source. The first task was to design and simulate the surge generator circuit. The authors verified that the design met IEC (International Electrotechnical Commission) and IEEE (Institute of Electrical and Electronics Engineers) standards. A bill of materials of suitable components was compiled and a prototype assembled. Experimental results of the design are provided and analyzed in the forthcoming sections.

METHODS
The surge generator system is required to create a 6,000V ± 300V surge with a 1.2µs ± 0.36µs x 50µs ± 10µs open circuit voltage and an 8µs ± 1.6µs x 20µs ±

4µs short circuit current per IEEE C62.41-1991 and IEC 61000-4-5 standards [1]. The design concept chosen was a Combination Wave Generator (CWG) and a decoupling network as defined by IEC 61000-4-5. IEEE offered an example of a voltage surge immunity test simulation performed by Powell and Hesterman [2]. The circuit components and values used in the IEEE example were used as a benchmark for this surge generator design. The IEEE example also included a decoupling network, which proved to be a suitable fit for this surge generator design. The surge generator circuit performance was simulated in MATLAB/Simulink.

PROTOTYPE DEVELOPMENT
The current and voltage measurements used to rate various components for the prototype (Figure 1) of the surge generator were based upon the results gathered from the simulation analysis.
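For reference, the 1.2/50 µs open-circuit waveform targeted by these standards can be approximated by a double-exponential function. The sketch below is illustrative only: the actual design was simulated in MATLAB/Simulink, and the time constants here are assumed textbook values rather than values derived from the prototype's components.

```python
import numpy as np

V_PEAK = 6000.0                     # target open-circuit peak, volts
TAU_1, TAU_2 = 68.2e-6, 0.405e-6    # assumed decay / rise time constants, seconds

def surge(t):
    """Double-exponential surge voltage at times t (seconds), normalized to V_PEAK."""
    raw = np.exp(-t / TAU_1) - np.exp(-t / TAU_2)
    return V_PEAK * raw / raw.max()

t = np.linspace(0.0, 100e-6, 200001)
v = surge(t)

def crossing(level):
    """First time the rising edge reaches `level` volts."""
    return t[np.argmax(v >= level)]

t30, t90 = crossing(0.3 * V_PEAK), crossing(0.9 * V_PEAK)
# Time to half-value: last time the waveform is still at or above 50% of peak.
t_half = t[np.where(v >= 0.5 * V_PEAK)[0][-1]]

front_time = 1.67 * (t90 - t30)     # front time from the 30%-90% rise-time definition
print(f"front time   ~ {front_time*1e6:.2f} us (target 1.2 +/- 0.36 us)")
print(f"time to half ~ {t_half*1e6:.1f} us (target 50 +/- 10 us)")
```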

Figure 1: Hardware implementation of surge generator.

All voltage and current measurements for the surge generator were measured at the Power Systems Experience Center with an Eaton Power Xpert Meter, capable of measuring high current / voltage transients at a 6 MHz sampling rate.


RESULTS
The open circuit voltage and short circuit current are the two critical waveforms of the surge. Table 1 below shows the behavior of the surge generator prototype with comparisons to the ideal parameters. Definitions of front time and time to half are provided in [2]. Note that at present, no short circuit tests have been done on the prototype. Figure 2 is the open-circuit, full-voltage surge waveform measured across the load terminals in the prototype. Figure 3 is zoomed to show the surge decay before the zero crossing, and Figure 4 is a voltage measurement with the prototype loaded with a 60W light bulb. In Figure 4, the light bulb bursts at the initial peak. After the rupture, an electrical arc is established across the air gap, introducing a nonlinear resistance into the current path that affects the signal.

Table 1: Open circuit voltage waveform test results.

                      Measured      Min Ideal Time   Max Ideal Time
Time to 30%           5.98E-06 s
Time to 90%           7.05E-06 s
2nd 50% Time          5.25E-05 s
Front Time            1.07E-06 s    0.84E-06 s       1.56E-06 s
Time to Half          5.19E-05 s    4.00E-05 s       6.00E-05 s
Maximum Voltage       3287.76 V
Voltage Undershoot    423.18 V (12.87% of peak)
Sample Rate           6.00E-06

Figure 2: Open circuit surge waveform, measured across load terminals. (X-Axis: Time (µs); Y-Axis: Voltage (V)).

Figure 3: Open circuit surge waveform, during period before zero crossing. (X-Axis: Time (µs); Y-Axis: Voltage (V)).

Figure 4: Surge waveform when loaded with 60W incandescent light bulb. (X-Axis: Time; Y-Axis: Voltage (V)).

DISCUSSION
Component selection was a challenge for the undergraduate design team. A lesson learned is that lead times for high voltage product purchases are longer than for commonly used hardware and need to be accounted for in the project schedule. The primary energy storage capacitor, the high voltage relay, and the blocking diodes were required to be large enough to withstand a continuous 6kV. The other auxiliary components needed to be robust enough to withstand the surge. Most common electrical components are not tested with such a high pulse. The total cost of the prototype surge generator design without a hardware enclosure is $5,247, significantly lower than off-the-shelf surge generators. The high voltage relay did not operate as expected; therefore, a spark gap was used to pass the high voltage surge. The spark gap uses an air gap to pass the surge when the voltage reaches a critical value, which explains why the peak voltage was 3288V instead of 4500V. Fine tuning the length of the air gap will yield a 4500V surge. The short circuit current tests remain to be performed and measured on the prototype. Once the full voltage and current tests abide by the electrical standards mentioned, the prototype will be neatly packaged and brought to the EPSL to be used as a future demonstration piece.

CONCLUSIONS
The paper presented an initial prototype of a high voltage surge generator design benchmarked against leading standards in the industry and well suited for a university laboratory to meet research needs.

REFERENCES
[1] International Standard. IEC 61000-4-5. [Available Online]: http://www.sankie.com/uploadimg/contents/20100722110236817.pdf.
[2] Powell, D. E., Hesterman, B. "Introduction to Voltage Surge Immunity Testing," IEEE Power Electronics Society, September 18, 2007. [Available Online]: http://www.denverpels.org/Downloads/Denver_PELS_20070918_Hesterman_Voltage_Surge_Immunity.pdf.

ACKNOWLEDGEMENTS
This work was supported by funding and equipment donations from Eaton Corporation.


Exploring Opportunities with Phase Change Memory in Big Data Benchmarks
Michael Kuhn, Jun Yang
Department of Electrical & Computer Engineering
University of Pittsburgh, PA
{mhk25, juy9}@pitt.edu

INTRODUCTION
Traditional DRAM technology faces several problems, such as high power consumption, that cannot be addressed within the technology itself. Phase Change Memory (PCM) is being considered as a suitable replacement that could address the challenges DRAM faces. While Phase Change Memory has many advantages over DRAM, there are still obstacles being overcome, including PCM endurance, longer access latencies, and higher dynamic power. To date, Phase Change Memory has been put through rigorous testing and studied using small data benchmarks from suites such as SPEC 2006. No in-depth testing of Phase Change Memory with big data benchmarks has been performed.

BACKGROUND
Many emerging big data applications used in the real world need to be able to process high volumes of data. For example, Facebook keeps a large portion of its non-image data in main memory. For Phase Change Memory to be a suitable replacement for traditional DRAM, or even partially integrated into main memory with DRAM, it needs to be tested against big data benchmarks for analysis. MARSS, which stands for MicroARchitectural and System Simulator, is a tool for cycle-accurate full system simulation of the x86-64 architecture, with the ability to model multicore implementations. MARSS is based on PTLsim, an older full system simulator, and QEMU, a full system emulation environment. The incorporation of the full system emulation environment with a full system simulator improves the speed of the simulations and also provides a simpler graphics interface for users to work with. This makes the MARSS simulator a perfect tool to perform all big data benchmark testing. Figure 1 shows the emulation and simulation path for running the MARSS simulator. Cloudsuite, a benchmark suite for emerging scale-out applications, covers a broad range of application categories commonly found in today's datacenters. This includes data analytics, data serving, media streaming, large-scale and computation-intensive tasks, web search, and web serving. These categories are a reasonably accurate reflection of the real-world applications that would run in main memory, so these are the big data benchmarks that will be tested with the MARSS simulator.

Figure 1: Emulation and Simulation Path

EXPERIMENT
Before the search for a full system simulator to test Phase Change Memory began, a deeper level of understanding was needed to meet the demands. Studying previous work on Phase Change Memory and the problems it faces was necessary. Graphs were created, using compiled code to streamline the creation, to better understand how Phase Change Memory was working on smaller data sets and to understand the need to test Phase Change Memory against larger data sets. These graphs were created for a better understanding of the material and also for use in a future paper. Finding a full system simulator was a challenge, because it needed to support the functionality that the Cloudsuite benchmarks required and yet still needed to be simple to use, as time was a limiting factor. Three full system simulators were considered in all, and MARSS was chosen because it was the simplest of the three to use. MARSS, being the least known of the three, had little support and troubleshooting documentation available should problems arise.


This lack of support prompted a week-long trial of the other two full system simulators, but they proved more complicated than they were worth; MARSS was brought back and, after a week of troubleshooting a problem, was ultimately kept as the simulator used. After the MARSS simulator's troubleshooting was complete and the simulator was working properly, three disk images that came with the download were compiled to further advance the tests to be done with the simulator. Two of these disk images were benchmark suites and the other was a Linux operating system. The benchmark suites provided were Splash2 and Parsec2.1. Experimenting with the graphics interface of the QEMU side allowed the testing and proper emulation of each of the benchmark suites. Once a simple emulation of the benchmark suites was completed, a full system simulation was performed for each benchmark suite to test the accuracy of the simulator and make sure everything was in proper working order for the big data benchmarks. Performing these full system simulations involved creating multiple checkpoints for each of the benchmarks within each of the two benchmark suites. Once the checkpoints were created, a configuration file was written to tell the simulation which checkpoints to run from each of the benchmark suites. Because this takes a long time to run and time was limited, every checkpoint was run to verify working order of the simulator. While the full system simulations were being completed over the course of a couple of days, several of the Cloudsuite benchmarks were being evaluated and compiled so that a full system simulation could be run. After initial complications in compiling a few of the benchmarks, and with a lack of time to troubleshoot the issues, the Graph Analytics benchmark was chosen to be completed first, because it needed the least setup and compilation of the programs involved. The Graph Analytics benchmark uses only two software programs: GraphLab and TunkRank. TunkRank is implemented on GraphLab to use a Twitter dataset with 41M vertices. The dataset size for this benchmark is 25GB.

RESULTS
As time became a factor towards the end and simulations had to be done more quickly, unfortunately only a small amount of testing was

completed. No full system simulation has yet been completed using a big data benchmark from the Cloudsuite program. In addition, none of the full system simulation tests could be run using Phase Change Memory as the main memory; rather, all simulations were completed using simulated DRAM technology. Phase Change Memory is within the grasp of the work that is being done: beyond the initial evaluation of the full system simulator, some code will have to be written in order to incorporate a simulated Phase Change Memory into the system. The graphs used to understand Phase Change Memory with smaller data benchmarks will be featured in another publication to be released in the future. Additional testing of Phase Change Memory with the big data benchmarks will hopefully be completed in the future if additional funds become available.

REFERENCES
1. Ping Zhou, Bo Zhao, Jun Yang and Youtao Zhang, "A Durable and Energy Efficient Main Memory Using Phase Change Memory Technology," International Symposium on Computer Architecture, 2009.
2. Lei Jiang, Youtao Zhang and Jun Yang, "Mitigating Write Disturbance in Super Dense Phase Change Memories," IEEE/IFIP International Conference on Dependable Systems and Networks, 2014.
3. Avadh Patel, Furat Afram, Shunfei Chen and Kanad Ghose, "MARSSx86: A Full System Simulator for x86 CPUs," Design Automation Conference, 2011.
4. Michael Ferdman et al., "Clearing the Clouds: A Study of Emerging Scale-out Workloads on Modern Hardware," 17th International Conference on Architectural Support for Programming Languages and Operating Systems, 2012.

ACKNOWLEDGEMENTS
A huge thanks goes out to the many people who made this possible and helped in the understanding of the material, including Rujia Wang, Lei Jiang and Sandy Weisberg. Funding for this research was provided by Dr. Jun Yang, the Swanson School of Engineering and the Office of the Provost.


MICROSTRUCTURAL ANALYSIS OF HIGH STRENGTH STEELS
Anthony Analo
Advisor: Professor A.J. DeArdo
Department of Mechanical Engineering and Materials Science
University of Pittsburgh, PA 15261, USA
Email: aea26@pitt.edu

INTRODUCTION
To determine the characteristics that control the physical properties of steels used in the natural gas and automotive industries, our investigation begins at the microscopic level. These defining characteristics can be identified through the use of metallographic techniques. This paper will focus on the following goals: firstly, to investigate the microstructure and evaluate the quality of steel pipes used in the natural gas industry for drilling applications; secondly, to investigate the properties of high strength, dual-phase steels used for automotive applications, specifically the effect of plastic deformation on the microstructure and the effects of the stored energy of cold work on the initial formation of austenite.

METHODS
Standard pipe used for fracking is made by a seamless drawing method where the metal is extruded to the desired length. This can lead to a variance in wall thickness due to the inability to hold the necessary tooling fixed while forming the inner surface of the extrusion [1]. The pipe is evaluated using industry standard, non-destructive hardness testing, where resistance to plastic deformation is measured. Several indentations are made on a small area of the pipe's cross section, and if the values vary by more than 5 units on the Rockwell hardness scale the pipe fails. We subjected our pipe samples to Vickers hardness testing (VHN) as well as more material-specific Rockwell hardness testing (HRC). The indentations for each test were made near the center of the pipe's cross section, 3 mm apart from each other, around the entire circumference of the pipe. To reveal the microstructure, the samples were subjected to controlled nital chemical etches with experimentally determined rates of etching to ensure accuracy. The phases present in the microstructure were identified using optical microscopy.

To investigate the behavior of the microstructure of dual-phase steels after plastic deformation, we subjected our samples to a hole expansion test in which a conical expansion tool was forced into a pre-punched hole until a crack occurred. We then performed controlled nital chemical etches on the samples, measured the volume fraction of martensite using optical images and conducted Vickers hardness testing near the crack. To investigate the first formation of austenite in dual-phase steels, we subjected samples of the hot band as well as cold rolled steels to various heat treating processes and used optical and electron microscopy to evaluate the phases present.

RESULTS
Vickers and Rockwell hardness testing for the pipe samples returned values that varied greatly and would be considered faulty by industry standards (Table 1). The optical images revealed heavy banding and segregation of separate phases in the microstructure of the pipe samples (Fig. 1).

Figure 1: Nital-etched pipe sample showing banding

Sample 1 of the hole expansion samples had an average martensite volume fraction of 16.04% and an average hardness value of 310HV100/10, while Sample 2 had an average martensite volume fraction of 23.2% and an average hardness value of 336HV100/10. The optical images revealed crack propagation through the hard martensite phase (Fig. 2).
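A minimal sketch of how a martensite volume fraction like those above can be estimated from an etched optical image is given below; it assumes simple intensity thresholding on a grayscale micrograph, with a hypothetical filename and threshold, and is not the measurement procedure actually used by the lab.

```python
import numpy as np
from PIL import Image

def martensite_fraction(path, threshold=180):
    """Estimate the martensite area fraction of an etched optical micrograph.

    Assumes the martensite phase appears bright after etching; the threshold is
    a placeholder and would normally be chosen from the image histogram.
    """
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
    martensite_pixels = gray > threshold
    # By stereology, the area fraction approximates the volume fraction.
    return martensite_pixels.mean()

if __name__ == "__main__":
    frac = martensite_fraction("sample1_field01.png")   # hypothetical image file
    print(f"estimated martensite volume fraction: {frac:.1%}")
```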


Figure 2: LePera-etched hole expansion sample showing crack propagation through the hard phase

Experimental data showed that for the same heat treatments, cold rolled samples formed more austenite than the hot band samples (Fig. 3).

Figure 3: Volume fraction of martensite in cold rolled and hot band samples for varying heat treatments

Table 1: Consecutive Hardness Values for Vickers and Rockwell Hardness Testing

Hardness Scale      1      2      3      4      5      6
VHN (100g, 10s)     388    461    618    308    291    298
HRC (150kg, 2s)     14.4   11.8   15.5   21.1   16.8   18.5

DISCUSSION
Although our hardness testing of the pipe samples differed from the industry standard of testing a small area of the cross section, we feel that testing the entire circumference of the pipe's cross section was more comprehensive and yielded valid results. The sizes of the indents made during macroscopic hardness testing were as large as 500 microns and did not allow for precise measurement of individual phases in the microstructure. However, optical images revealed banding in the microstructure, which increased directionality of stresses and susceptibility to mechanical defects.

During the hole punch test the samples are curved, but even after being flattened the initial bending is still present in the microstructure. In order to see how the crack behaves without bending, it would be useful to use a biaxial expansion test. However, our test of plastic deformation on dual-phase steels confirmed the expected trend of crack propagation along the hard phase. The experimental data confirm that cold rolled samples formed more martensite than hot band samples when subjected to the same heat treatment processes. It is known that austenite forms at grain boundaries, and it is commonly assumed that all grains behave the same, but when the role of the stored energy of cold work is considered this assumption seems incorrect [2]. A comparison of the position of stored energy in images of cold rolled samples prior to heat treating with the position of newly formed martensite in these samples following heat treatment is expected to reveal a strong correlation.

REFERENCES
1. Metallic Materials-Method of Hole Expanding Test, ISO/TS 16630, 2003.
2. U.R. Lenel, "Reaustenitisation of Some Alloy Steels," Darwin College Cambridge, October 1980.
3. A. Mustapha, E. A. Charles, and D. Hardie, "The Effect of Microstructure on Stress-Strain Behavior and Susceptibility to Cracking of Pipeline Steels," Journal of Metallurgy, vol. 2012, Article ID 638290, 7 pages, 2012.

ACKNOWLEDGEMENTS
Mr. Yu Gong, Basic Metals Processing Research Institute (BAMPRI); Ms. Victoria Wang, BAMPRI; Dr. Minjian Hua, BAMPRI; Mr. Bing Ma, BAMPRI; Swanson School of Engineering and the Office of the Provost


USING ULTRASOUND TECHNIQUES TO MEASURE MECHANICAL PROPERTIES OF METAL AND POLYMER SAMPLES
Raymond M. Mattioli
Dr. Markus Chmielus's Research Group
Department of Mechanical Engineering and Materials Science
University of Pittsburgh
rmm114@pitt.edu

INTRODUCTION
Additive manufacturing (AM), more commonly referred to as 3D printing, is a process that has experienced a lot of growth and development over the course of the past several years. One drawback to these newly developed AM printing processes is the lack of knowledge about the microstructures and properties of the printed materials. One advantage of some AM processes, especially the LENS (Laser Engineered Net Shaping) printer used here, is that materials can be combined to create new alloys. Since not all of these alloys have published mechanical properties, these properties need to be determined experimentally after they are printed. One of the most important mechanical properties of a material is its elastic modulus. Elastic modulus, or Young's modulus, is defined simply as the ratio of stress to corresponding strain of a material under tension or compression. Therefore, Young's modulus is a measure of the ability of a material to withstand changes in length when under lengthwise tension or compression [1]. The objective of this part of the summer research was to use an ultrasonic machine to determine mechanical properties of materials printed with a LENS printer and to compare them to similar conventionally produced samples. Also studied in this project was the initial process of 3D printing samples with LENS in general.

METHODS
Our lab has two instruments that can be combined to measure the elastic modulus of materials: a 2247A oscilloscope and a Panametrics 5052 ultrasonic pulser/receiver. After coupling a transducer probe to the sample, sound waves are produced within the probe by the piezoelectric effect. A CRT readout display (Fig. 1) allows the waveforms for each sample to be analyzed and the modulus to be calculated [2]. The machine, once calibrated, could be used to

make measurements on real samples. To check the accuracy of the machine, a measured waveform would have to be compared with published graphs. A 1” cube of aluminum 6061T6 was milled for this purpose. To couple the transducer to the samples and increase the signal strength of the waves, two couplants were used (glycerin for longitudinal waves and a viscous resin for shear waves). Following the aluminum cube, four plastic samples were printed of the following thicknesses: 1”, 0.5”, 0.25”, and 0.125”. These samples were printed using the Stratasys Objet 260 Connex machine and fabricated from Stratasys VeroWhitePlus rigid plastic material. The dimensions of each sample were measured using calipers. The oscilloscope and pulser settings were set as instructed in the manuals. These settings were used for every sample tested, as they are the most general settings for testing. In addition to the aforementioned samples, two four inch by four inch plates (one of Inconel-716 and one of Ti-64) were tested.

Figure 1: Readout of a trial with the Aluminum cube

DATA PROCESSING The data needed in these experiments is the time it takes for the waves (both longitudinal and transverse) to travel through the width of the


material and be echoed back to the transducer. With the times measured, the velocities, followed by Poisson's ratio and the elastic modulus, can be calculated. The transducers used for both forms of testing are equipped with both wave transmitters and receivers. The ultrasonic pulse emitted travels through the material, gets reflected off the back face of the material, and the transducer "listens" to the pulse that keeps attenuating with time. The signals are sent to the oscilloscope where they are registered as peaks in the voltage. The time between signals is measured using the oscilloscope's SEC time-difference function. The oscilloscope can store one previous trial's results. The trial values are transferred to an Excel spreadsheet to calculate the material properties.

RESULTS
For the aluminum cube test, the best returned result was 71.9 GPa, with a literature value of 68.9 GPa and an experimental error of 4.35%. For the Ti64 tests, the average measured elastic modulus was 117.8 GPa and the literature value is 113.8 GPa (see Table 1 for more information). For the Inconel plate tests, the average measured elastic modulus was 201.8 GPa with a literature elastic modulus of 204.9 GPa. The experimental moduli for the polymer samples of decreasing thickness, measured in GPa, are 19.45, 1.345, 6.991, and 4.258, while the literature value is 2.5 GPa (see Table 2 for more information).

Table 1: Results of 5 trials on the Ti64 plate (YM stands for Young's modulus, measured in GPa)
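The property calculation described under DATA PROCESSING can be sketched as follows; the transit times, thickness, and density in the example are illustrative handbook-style values for aluminum, not measurements from these trials, and round-trip (pulse-echo) times are assumed.

```python
import numpy as np

def elastic_constants(thickness_m, t_long_s, t_shear_s, density_kg_m3):
    """Poisson's ratio and Young's modulus from round-trip ultrasonic transit times."""
    v_l = 2.0 * thickness_m / t_long_s      # longitudinal wave velocity (m/s)
    v_t = 2.0 * thickness_m / t_shear_s     # shear (transverse) wave velocity (m/s)
    nu = (v_l**2 - 2.0 * v_t**2) / (2.0 * (v_l**2 - v_t**2))   # Poisson's ratio
    E = 2.0 * density_kg_m3 * v_t**2 * (1.0 + nu)              # Young's modulus (Pa)
    return nu, E

# Example with assumed aluminum-like values (1 inch cube, illustrative only):
nu, E = elastic_constants(0.0254, 8.04e-6, 16.23e-6, 2700.0)
print(f"Poisson's ratio ~ {nu:.3f}, Young's modulus ~ {E/1e9:.1f} GPa")
```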

DISCUSSION
After analyzing the data, it was found that the measurements of the thin Inconel-716 and Ti-64 plates resulted in Young's moduli deviating less than 5% from the literature values. On the other hand, the polymer samples all had an average deviation from the literature value of above 5%. The elastic modulus results from the ultrasound-oscilloscope experiments showed data

that was inconsistent with the literature values, which might have several reasons. Several of these tests were performed outside of the suggested thickness range for testing the elastic modulus with ultrasound, which is between 5 and 12.5 mm. Additionally, the resolution is limited on samples that are too thin, due to small changes in pulse transit time across short sound paths, while thicker samples provide more space for the waves to be deflected and slowed down. It was also found that end-face parallelism of the samples yields more accurate results; otherwise the waves reflect off an uneven surface and do not get read back into the transducer properly. While the results for the polymer samples are still useful, the elastic modulus measurements of metal samples within a certain thickness range make ultrasound measurement a viable nondestructive materials test method for 3D printed samples. Further experimentation with the 3D printed metal samples is required to improve the confidence in this method and to reduce systematic errors that reduce the accuracy of the elastic modulus results.

Table 2: Polymer samples of different thicknesses (Thickness is measured in cm and Young's modulus is measured in GPa)

REFERENCES
1. Encyclopædia Britannica (2014). Online. <http://www.britannica.com/EBchecked/topic/654186/Youngs-modulus>.
2. Parmley, Lisa. (2012). "How Does an Ultrasound Machine Work?" <http://www.ultrasoundtechniciancenter.org/ultrasound-knowledge>.

ACKNOWLEDGMENTS
This project would not have been possible without the supervision and guidance of Dr. Markus Chmielus and the support provided by my lab-mates Meredith Meyer, Jakub Toman, Erica Stevens, and Ravi Tej. Funding for this internship is coming from the SSoE and the Office of the Provost of the University of Pittsburgh and is greatly appreciated.


PORE DISTRIBUTION IN INCONEL 718 MANUFACTURED BY LASER ENGINEERED NET SHAPING Erica Stevens†, Jakub Toman, and Dr. Markus Chmielus Mechanical Engineering and Materials Science Department University of Pittsburgh, PA, USA † Email: els119@pitt.edu

INTRODUCTION
Additive manufacturing (AM) is a relatively new manufacturing technique, and one that still requires much research and development. It is known that AM generally produces less waste and requires fewer production steps than traditional manufacturing, but the macro- and microstructural effects and defects in parts made by AM are still being explored. Inconel 718 is an iron-nickel-based superalloy. It is used frequently in aerospace applications, and is commonly produced by AM due to complicated part geometries that cannot be produced using traditional manufacturing.

METHODS
The presented research is focused on the AM method of laser engineered net shaping (LENS). LENS builds individual layers by using a laser as the energy source to selectively melt powders introduced by nozzles of a powder feed system during the process [1]. The OptoMec LENS 450 system was used to produce four prisms (labelled A, B, C, D) made with Inconel 718 (IN718) powder (~100 μm) and deposited with different deposition parameters onto an IN718 plate which had been traditionally manufactured. The prisms were then closely examined to determine what structural and mechanical effects the AM processing technique and parameters had on the material properties. Each layer of each prism was built in two steps: the contour and the hatching. First, the laser traced the outline of the cross-sectional square, producing the contour. Then, it would scan back and forth through the middle of the square at a specified angle that varied by layer in order to fill in the shape, which created the hatching. This method caused the contour to be partially scanned over twice (once when the contour was created, and again when the hatching was built). The layer, contour, and hatching are labeled in Fig. 1.

Figure 1: LENS-fabricated IN718 as-printed face (left) and etched cross-section (right), highlighting build layer (blue), contour (green), and hatching (white)

In order to examine the samples separately, the substrate was cut around each prism, leaving four small pieces each topped with one of the prisms. These samples were imaged on a Keyence digital optical microscope in the as-printed condition, then hot compression mounted in Phenolic resin. Each was ground using 400-grit SiC grinding paper until the sample was flat. SiC papers of 600-, 800-, and 1200-grit were then used to improve the finish. Final polishing was accomplished using a 0.5 μm Al2O3 suspension and a 0.05 μm Al2O3 suspension. Each polished sample was examined optically in bright field and in dark field, and sample D was examined in the JEOL JSM6510 scanning electron microscope (SEM), including energy-dispersive X-ray spectroscopy (EDS). All polished samples were also tested for hardness using a Leco Vickers microhardness tester. The load used was 1000 gf (grams force), with a dwell time of 5 seconds.

RESULTS
After polishing, the surfaces (layers and cross sections) were examined with optical microscopy, scanning electron microscopy (SEM) with energy dispersive X-ray spectrometry (EDS), and microhardness measurements. A combination of these results was used to gain an understanding of the effects of LENS fabrication on the microstructure and consequently on the mechanical properties of IN718. Fig. 2 shows one of the IN718 samples from the top view, as-printed, after polishing, and after re-polishing.



Figure 2: LENS-fabricated IN718 as-printed (right), after polishing (center), and after re-polishing (left)

It can be seen that the scan direction can be distinguished in the polished sample by way of small dots that appear bright in the dark field image. These dots become more frequent and less patterned in the re-polished sample, as well as around the contour of the center image. The dots can also be seen in the SEM image in Fig. 3 as black spots.

Figure 3: Compositional backscatter electron images of LENS-fabricated IN718 of the same area, with the image on the right displaying where EDS data was acquired

EDS was done on the spots and the data from several of the spectra are given in Table 1 and compared to the expected nominal composition of IN718. As compared to the expected compositions of 0 wt.-% O and 0.5 wt.-% Al, the spots showed between 9 and 19 wt.-% O and between 1 and 5 wt.-% Al.

Table 1: Select spectra from EDS data acquired using the area shown in Fig. 3, compared with the nominal composition of IN718 [2]. All data are in wt.-%. C has been omitted since it is a common contaminant.

            O       Al      Ti      Cr      Fe      Ni      Nb
Spec. 2     18.32   4.04    1.22    15.25   13.47   38.80   8.90
Spec. 3     9.24    1.12    1.50    16.06   14.31   46.04   11.74
Spec. 8     14.20   4.26    0.93    17.52   14.83   43.12   5.14
Nominal     0       0.5     0.9     19      18.5    52.5    5.1

Hardness testing was done on the face of the polished prism from the hatching into the contour. There are two clear regions of indents shown in Fig. 4, the first with higher hardness (234 HV – 255 HV) and the second with lower hardness (208 – 225 HV). The first region of indents was in the hatching, and the second region progressed into the contour.


Figure 4: Microhardness for regions in the hatching (first 13 indents) and the contour (last 7 indents)

DISCUSSION
The comparison between the polished and re-polished samples shows that the scan direction can no longer be seen after some amount of material is removed. This is evidence that the distribution of dots changes throughout the layers of the sample. A combination of the images and the EDS data leads to the conclusion that the small dots are holes, or pores. EDS showed that the pores contained high amounts of Al and O, which are not abundant in IN718. Therefore, these elements must have been introduced through the final polishing step, which was done using Al2O3, and become trapped in the pores. As they are defects, the pores affect the mechanical properties of the material. It was observed that the contour had an increased number of pores, and that the hardness of the contour was less than that of the hatching, where there were fewer pores. Based on these results, we will now focus on the influence of printing and processing parameters on the defects we found.

REFERENCES
1. Frazier, W. E. Metal Additive Manufacturing: A Review. Journal of Materials Engineering and Performance 23, 1917–1928 (2014).
2. Donachie, M. J. Superalloys: a technical guide. (ASM International, 2002).

ACKNOWLEDGEMENTS This research was funded jointly by Dr. Markus Chmielus, the Swanson School of Engineering, and the Office of the Provost.


CONSTRUCTION AND ANALYSIS OF A PARTITIONED MULTIFUNCTIONAL SMART INSULATION
Nick Jean-Louis, Mike Greene
Department of Mechanical Engineering
University of Pittsburgh, PA, USA
Email: nij16@pitt.edu

INTRODUCTION
Shipping is the primary mode of transport for items of any size or quantity. In most cases the only concerns involved with shipping are the cost, the delivery time and the safety of the package. For some products, however, it is imperative that the temperature of the payload remain within a tight window during shipping. For example, frozen products are shipped over long distances, and without proper temperature maintenance the items will easily perish or become unusable. Many of the current methods available to prevent such an outcome depend on the use of a supplementary aid for the container, such as thermal blankets, non-reusable chilling units or an extensive amount of gel packs or dry ice. One commercial product that meets this need is the iBox [1], which can maintain payload temperatures between 2 and 8 degrees centigrade through the use of vacuum insulation, phase change material and thermal regulation devices. It is possible to create a container that may require less phase change material with the use of partitioned multifunctional smart insulation. Conceptual analysis of the partitioned multifunctional smart insulation wall was recently published in Applied Energy by Kimber et al. [2]. The multifunctional aspect of the insulation is its ability to regulate its R-value between highly insulative and highly conductive states. The wall is made with an outermost and an innermost surface, similar to a normal wall, but within the wall there are N-1 thin polymer membranes, where N is the number of partitions. For instances when larger R-values are required, the membranes serve as enclosures that keep the air within the wall stagnant, which prevents natural convection. The stagnant air acts as a resistor to thermal energy as it passes through each individual air layer. For conduction, when lower R-values are required, the wall is simply compressed in a manner such that all the air is removed and the thin

membranes form a single layer. This results in an absence of the resistances to convection and radiation, leaving only the material resistances to conduction.

METHODS
The onset of this research consisted of the creation of multiple smart insulation prototypes, in conjunction with a Matlab script which displays the transient response of the system. The current prototype was constructed from two insulation panels serving as the outer walls of the smart insulation wall. The partitions are made of thin aluminum sheets which are bound together by thin plastic film in order to create the layers of stagnant air. The goal for the final prototype is to create a wall in which the air can enter and escape when the wall is in the expanded and collapsed state, respectively. Upon completion of the prototype, testing will occur in which the smart wall is placed inside a thermal chamber as a regulator of the heat transfer between a cold source and the payload. The transient response of the system depends on the thickness of the air layers; thus, a method for changing the thickness with respect to the temperature of the payload is being optimized to maintain the cold temperature of the payload.

RESULTS
Based on the Matlab simulation, the payload would be able to maintain temperatures between 5 and 9 degrees Celsius for 3 days, yet the most optimal method for controlling the smart insulation as a function of temperature is still being determined. The design is still in progress, but the current prototypes verify that the concept of smart insulation can be achieved.
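As a rough illustration of the concept (the actual transient analysis was done in the Matlab script mentioned above), the sketch below treats the expanded wall as N stagnant air layers conducting in series and estimates the steady heat leak and the amount of phase change material needed to absorb it for three days. All numbers are assumed placeholders, not prototype measurements, and convection and radiation resistances are neglected.

```python
# Assumed, illustrative parameters (not prototype values).
k_air = 0.026                   # W/(m*K), conductivity of still air
N, layer_thickness = 5, 0.01    # number of stagnant air layers and per-layer thickness (m)
area = 0.5                      # m^2, effective one-dimensional wall area
R_wall = N * layer_thickness / (k_air * area)   # series conduction resistance, K/W

dT = 25.0 - 7.0                 # ambient minus target payload temperature, K
q_leak = dT / R_wall            # steady heat leak through the expanded wall, W

hold_time = 3 * 24 * 3600.0     # three days, in seconds
latent_heat = 334e3             # J/kg, assumed ice-like phase change enthalpy
pcm_mass = q_leak * hold_time / latent_heat
print(f"R = {R_wall:.2f} K/W, leak = {q_leak:.1f} W, PCM needed ~ {pcm_mass:.1f} kg")
```

Collapsing the wall removes the stagnant air layers, so R_wall drops sharply and the same calculation would show a much larger heat flow, which is the "conductive" state described above.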


ACKNOWLEDGEMENTS
I would like to thank Dr. Kimber and Dr. Clark for giving me the opportunity to work in their labs. I would also like to thank Tyler, Greg, Paul, Nick, and Naji for all the help that they have provided with the Matlab script and the prototypes. Special thanks to the Swanson School of Engineering for providing me with the opportunity to conduct research at the University of Pittsburgh.

REFERENCES
[1] "iBox: The thermal regulated, autonomous and reusable insulated shipping container." ColdChainIQ. Web. 20 March 2014.
[2] Kimber M, Schaefer L, Clark W, Conceptual analysis and design of a partitioned multifunctional smart insulation. Applied Energy 2014; 114:310-319.


Testing a Baxter Robot's Potential Application in Physical Therapy
Nathan Alaniz, Nalaniz@smu.edu
Advisor: Dr. Nitin Sharma, Neuromuscular Control and Robotics Laboratory

INTRODUCTION
Every year in the U.S. more than 795,000 people are the victims of a stroke [1]. A stroke can cause long lasting damage to the neurological system, and recovery requires hands-on physical therapy. A large part of rehabilitation treatment involves constant movement repetitions, and as such robots are being considered as a way to provide this therapy. The usage of robots in physical therapy has many benefits, including reducing the cost of healthcare and allowing physical therapists to treat more patients at one time. Not only are robots able to perform precise and consistent tasks, but they can evaluate the patient's performance in real time. A clinical trial using the Mirror Image Movement Enabler (MIME) robot found that therapy sessions with the MIME robot were at least as effective as hands-on therapy with a therapist [2]. The Baxter robot (Figure 1) is an industrial assembly robot designed to be user-friendly, affordable, and safe for use in close proximity to humans. It is also capable of being operated through haptic interaction. Baxter is run through a UNIX terminal and uses an open source system called the Robot Operating System (ROS), which is designed to make programming robots easier [3]. Baxter has two arms, each with seven degrees of freedom, allowing for a full range of motion and making it a good candidate for usage in upper, and potentially lower, limb rehabilitation. The objective of this project was to evaluate whether Baxter could be programmed for use in physical therapy. To do this, it was necessary to 1) run a closed-loop feedback controller (PID) on Baxter's limbs to test physical interactions, 2) create a way to physically interact with Baxter, and 3) develop a way to program Baxter using Matlab/Simulink to simplify the user interface.

Figure 1: Left: The Baxter Robot; Right: Custom Handle

Methods
My first task was to learn how to program Baxter. Using the documentation provided for Baxter and its peripheral systems, I was able to quickly learn the basics of programming Baxter in Python and using its built-in functions [4]. My second task was to test a simulation code in Gazebo, which is a 3D-simulation environment designed for the simulation of Baxter. In Gazebo, the Baxter simulation can be programmed and operates almost exactly like the real Baxter robot. This allows for safe experimentation, as potentially dangerous errors in the code can be found and resolved without risk of damaging Baxter or harming nearby persons.

First Objective: In order to test whether Baxter was capable of manipulating a person's arm, I wrote a program that would make a Baxter limb attempt to match the mirrored position of Baxter's opposite limb. This process mimicked the bilateral mode used in the MIME clinical trial [2]. The arms were controlled by a simple PID algorithm. The opposite limb was held in place by Baxter's gravity compensation mode and could be moved freely by any external force. It was found that Baxter's arms were strong enough to move the PID-controlled limb to match the other, even with the full weight of a limp human arm grasping the robotic limb.

Second Objective: To make physical interactions with Baxter easy, I designed and 3D printed two handles that mount onto Baxter's end effectors (Figure 1). These handles allow for an effective interaction with Baxter. We were therefore able to test Baxter's ability to move a participant's limp arm to a variety of positions. While making these


handles, I also designed a simple connector plate that will attach firmly to Baxter's end effectors and can be modified to allow a variety of equipment to be attached to them, making the development of new end effectors for Baxter much simpler.

Third Objective: To communicate with Baxter through ROS, one must use either the UNIX command prompt or a compatible programming language such as Python. However, these programming languages are basic, and therefore complicated functions must be pre-programmed. Matlab, on the other hand, comes with several built-in tools that make writing complex functions significantly easier. Baxter has not been developed to run Matlab scripts; however, we explored several ways to get around this issue. Initially, we attempted to use a built-in Matlab function to run Python code from a command window and use the output values in Simulink, a block coding environment connected to Matlab. In later testing with Simulink, it was found that the time it took to execute this code was roughly half a second. This is far too long to be used in any system that would control Baxter's limbs in real time. Matlab does not have any ROS functionality by default. However, there are add-ons that allow Matlab to create and communicate with ROS nodes. ROS nodes are used to exchange data between systems that use ROS. The addition of this module allowed Matlab to communicate information with Python code, reducing the processing time from half a second to approximately one millisecond. Successful communication with Matlab also allowed us to communicate the information to Simulink. In order to test the effectiveness of controlling Baxter through Matlab Simulink, I wrote a program that would use Simulink's PID function to make one of Baxter's simulated joints follow a sine wave. During testing, the desired and actual joint values were recorded and graphed (Figure 2).
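The control loop used in that test can be illustrated with a short Python sketch; this is not the Simulink implementation, and the gains and first-order joint model are assumed values, not the tuned parameters used with Baxter.

```python
import math

# Assumed PID gains and a crude joint model (not the tuned Simulink values).
KP, KI, KD = 8.0, 1.5, 0.4
DT = 0.01                       # 100 Hz control loop
integral, prev_error = 0.0, 0.0
position, velocity = 0.0, 0.0   # joint state (rad, rad/s)

for step in range(1000):        # 10 seconds of simulated tracking
    t = step * DT
    desired = 0.5 * math.sin(2.0 * math.pi * 0.2 * t)   # 0.2 Hz sine command (rad)

    error = desired - position
    integral += error * DT
    derivative = (error - prev_error) / DT
    prev_error = error
    command = KP * error + KI * integral + KD * derivative   # torque-like effort

    # Toy joint dynamics: inertia with viscous damping, integrated explicitly.
    accel = (command - 0.8 * velocity) / 0.5
    velocity += accel * DT
    position += velocity * DT

print(f"final tracking error: {abs(desired - position):.4f} rad")
```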

Figure 2: Simulation Results

Discussion
This work showed that it is possible to reliably control the Baxter simulation using Simulink functions. Due to time constraints, the PID testing was only done on the simulated Baxter. Thus, further testing is required to ensure that the Simulink-to-Baxter communication occurs in real time when run on the real Baxter robot.

Conclusion
Baxter's low cost, full range of motion, dexterity, and safety features make it a very good candidate for usage in physical rehabilitation. Baxter's programming is being actively improved, and the community is constantly finding ways to add to Baxter's functionality and developing solutions to new problems that arise. Running code through Matlab Simulink to control Baxter makes developing code for Baxter even more convenient, as it allows for the usage and creation of complicated functions through a user-friendly interface. Although running Baxter using Matlab Simulink can be a cumbersome process, it can likely be simplified in future projects.

REFERENCES
[1]. Centers for Disease Control and Prevention. 2014. http://www.cdc.gov/stroke/facts.htm
[2]. Lum, P.S.; Burgar, Charles G.; Van der Loos, M.; Shor, P.C.; Majmundar, M.; Yap, R., "The MIME robotic system for upper-limb neuro-rehabilitation: results from a clinical trial in subacute stroke," Rehabilitation Robotics, 2005. ICORR 2005. 9th International Conference on, pp. 511-514, 28 June-1 July 2005.
[3]. www.ros.org
[4]. Official Baxter Wiki. http://sdk.rethinkrobotics.com/wiki/Main_Page


INVESTIGATION OF ELECTROMYOGRAPHY AS A MUSCLE FATIGUE INDICATOR DURING FUNCTIONAL ELECTRICAL STIMULATION Henry Phalen, htp2@pitt.edu Advisor: Dr. Nitin Sharma, Neuromuscular Control and Robotics Laboratory INTRODUCTION In 2013, the United States was home to some 273,000 individuals with a spinal cord injury (SCI) [1]. An SCI may result in paraplegia, making voluntary gait difficult if not impossible. Functional electrical stimulation (FES) has been developed to restore mobility to individuals with an SCI. FES is the application of a low-level current across a muscle via surface electrodes to elicit a muscle contraction [2]. However, the application of FES presents some challenges. Muscles fatigue much more rapidly during FES than during voluntary contractions [3]. As fatigue occurs, the muscle force output decreases. This results in an FES performance that degrades over time. The need for FES systems to be capable of detecting and compensating for fatigue motivates the need for fatigue measurement. The objective of this project was to determine the effectiveness of using electromyography (EMG) for fatigue indication in FES systems.

THE ELECTROMYOGRAPHY SIGNAL
EMG is the measurement of electrical signals produced when muscles contract. EMG can be used for fatigue identification [4]. It is an attractive solution as it reflects muscle contractile activity and can be collected noninvasively by electrodes on the skin [3]. However, surface EMG is susceptible to noise and stimulation artifacts. Surface EMG electrodes are designed to pick up faint electrical signals due to muscle activity. But, during FES, these electrodes also pick up stimulation artifacts several orders of magnitude greater than the EMG signal. Additionally, small electromagnetic disturbances such as stimulator current draining or device powering can create noise in the same frequency range as the EMG signal.

The original experimental protocol included the use of FES to induce muscle fatigue. However, powering the stimulator unit and an electric motor resulted in a signal-to-noise ratio that was too small for the isolation of a clean EMG signal from the FES artifact. It has been shown that FES artifacts can be removed, either through external methods such as shorting the EMG electrodes during stimulation pulses [3] or through filtering of the signal [5]. Because design of the external isolation techniques needed to reduce noise in the EMG signal during FES application was beyond the scope and timeframe of this project, experimentation was done without FES. This allowed for an investigation into the practicality of using EMG for fatigue identification without having to make the major investments needed for external noise isolation techniques.

EXPERIMENTATION
The project objective was revised to investigate EMG as a fatigue indicator during voluntary muscle contractions. Three experiments were performed, beginning with a preliminary test on the biceps brachii to show the relationship between EMG amplitude and muscle torque. In the absence of fatigue, the subject opposed a torque supplied by known weights in isometric conditions. This was followed by two fatigue experiments on the rectus femoris in a modified leg-extension machine. These leg-extension experiments were performed to analyze the EMG data for evidence of fatigue. EMG signals were collected at 1000 Hz using a Delsys® Bagnoli-8 recording system during all tests.

During the leg-extension experiments, the subject opposed a constant torque created by a Leeson® brushed DC motor to induce fatigue. Tests continued until the subject felt that they had reached a fatigued state. Evidence of fatigue included the subject reporting a "burning" feeling in the muscle and uncontrollable trembling of the leg. During the first fatigue test, the subject was instructed to extend their knee as far as possible and oppose an electric motor to maintain their position. This test served as a close approximation of an isometric condition. A decrease in extension angle evidenced decreasing muscle force due to fatigue. As the extension angle decreases, the muscle needs to exert less force due to decreased torque from gravity and the force-length relationship of knee extensor muscle dynamics. The second protocol was non-isometric, with the subject repeating a periodic extension pattern in opposition to the motor to induce fatigue.

RESULTS
The biceps brachii test was performed to verify the well-supported relationship between muscle force output and EMG amplitude [6]. The relationship between the torque and the mean absolute value (MAV) of the EMG amplitude over the contraction interval is displayed in Figure 1 and proved similar to the relationship found in a comparable study on the biceps brachii [7].
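The two EMG features referred to in this study, the amplitude (MAV) and the spectral content (mean frequency, used in the JASA analysis discussed below), can be computed as in the following sketch; the signal here is synthetic noise standing in for a recorded contraction interval, not the Delsys data.

```python
import numpy as np
from scipy.signal import welch

FS = 1000.0                     # Hz, matching the 1000 Hz collection rate

# Synthetic stand-in for one contraction interval of raw EMG (zero-mean noise burst).
rng = np.random.default_rng(0)
emg = rng.normal(scale=0.1, size=int(5 * FS))   # 5 s record, arbitrary mV scale

# Amplitude feature: mean absolute value over the interval.
mav = np.mean(np.abs(emg))

# Spectral feature: mean (centroid) frequency of the power spectral density.
freqs, psd = welch(emg, fs=FS, nperseg=512)
mean_freq = np.sum(freqs * psd) / np.sum(psd)

print(f"MAV = {mav:.4f} mV, mean frequency = {mean_freq:.1f} Hz")
```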


(mV)

this relationship was found to be inappropriate for use in fatigue identification. JASA, an accepted alternative method, failed to identify the fatigue present in the rectus femoris. There may be other factors involved that led to results that indicate no change in EMG frequency and amplitude despite subjects reporting fatigue. Perhaps changes in the rectus femoris are subtle compared to other muscles or the fatigue has to be much more significant to be detectable. Regardless, the effectiveness of the EMG signal for fatigue analysis in FES systems remains inconclusive.

FIGURE 1: Relationship between EMG amplitude and isometric muscle torque generation in the biceps brachii test.

Because fatigue results in decreased muscle torque, it was theorized that this relationship could be used to indicate fatigue through changes in EMG amplitude during the leg-extension tests. However, upon further investigation it was discovered that EMG amplitude alone is a misleading identifier of fatigue and thus a method called Joint Analysis of EMG Spectrum and Amplitude (JASA) was employed [4]. When a muscle fatigues, the elicited EMG signal decreases in frequency and increases amplitude [4]. When JASA was performed on the leg-extension data, a plot of the power spectral density (PSD) of an EMG signal from a fatigued muscle should have been shifted up and left from the same plot of a non-fatigued muscle. The results, however, proved to be inconclusive for multiple subjects as the analysis of EMG PSDs did not support the observed presence of fatigue. Figure 2 shows the PSDs at the beginning (rested state) and end (fatigued state) of both tests. The EMG signal appeared to be composed of relatively the same frequency content throughout testing regardless of muscle fatigue state. Even the mean frequency, a common benchmark of EMG spectral fatigue indication [8], did not significantly vary between the beginning and end of testing. Comparison of PSD from Non-Isometric Test Magnitude (dB)

Magnitude (dB)

Comparison of PSD from Isometric Test

Frequency (Hz)

CONCLUSION EMG can be an attractive solution for fatigue identification, as it is non-invasive and easy to add to an existing FES system. However, obstacles still remain to its implementation. For example, adequate EMG signal quality and its analysis are compromised by FES artifacts. Even when FES artifacts are successfully isolated from the EMG signal, this study showed that current EMG analysis methods may not be appropriate for fatigue identification. In the future, research into different methods of quantifying fatigue from EMG are needed, especially methods to detect small changes over a range of fatigue states. As the EMG signal is difficult to process, especially with FES integration, and did not show significant change with fatigue in this study, it seems practical to explore other methods of fatigue indication for restorative FES devices. Alternative methods could include the direct measurement of muscle torque output or approximation of fatigue through dynamic models. REFERENCES [1] National Spinal Cord Injury Statistical Center. 2013. https://www.nscisc.uab.edu/ [2] Sheffler et al. Muscle & Nerve. 35.5, 562-590, 2007. [3] Chesler & Durfee. J. Electromyogr. Kinesiol. 7.1, 2737, 1997 [4] Cifrek et al. Clinical Biomechanics. 24, 327–340, 2009 [5] O’Keeffe et al. J. Neurosci. Meth. 109.2, 137-145, 2001. [6] Zhou & Rhymer J. Neurophysiol. 92. 2878-2886, 2004. [7] Potvin & Brown. J. Electromyogr. Kinesiol.14, 389399, 2004 [8] Alty & Georgakis. EUSIPCO. 19, 1387-1390, 2011.


FIGURE 2: Comparison of power spectral density of EMG signals at the beginning and end of the knee-extension tests (non-isometric and isometric).

DISCUSSION Although the biceps brachii testing confirmed a relationship between EMG amplitude and force output (i.e. EMG can indicate the amount of torque generated),

ACKNOWLEDGEMENTS Thanks to Dr. Nitin Sharma, Nicholas Kirsch, and Naji Alibeji for their endless support, as well as Dr. Ervin Sejdić for assistance in EMG data analysis and Dr. April Chambers and Ms. Jenna Trout for facility access and assistance in the University of Pittsburgh Human Movement and Balance Laboratory.


Predicting Strength of Nanocrystalline Copper from Molecular Dynamics Simulations By: Bhim Dahal Mentor: Dr. Guofeng Wang Swanson School of Engineering Summer 2014
Abstract The strength of polycrystalline materials depends on their grain size. Generally, ultra-fine crystalline (UFC) metals and alloys are those with an average grain diameter in the range of 100-1000 nm [1], and those with grain diameters <100 nm are considered nanocrystalline (NC) materials [1, 2]. The hardness and yield strength of NC materials increase with decreasing grain size; this is called the Hall-Petch effect [1-5]. In this three-month project I employed classical molecular dynamics (MD) simulations to verify the Hall-Petch phenomenon. I investigated the yield strength of NC FCC copper, and my results show that the yield strength and Young's modulus of NC materials depend on the grain size.
Background NC materials have hardness 2-4 times greater than their annealed coarse-grained counterparts [2]. Properties such as hardness and ductility increase with decreasing grain size [1-5]. The Hall-Petch effect is caused by the grain boundaries (GBs) impeding the motion of dislocations as the grains become smaller [1-5]. Dislocations are key to the deformation mechanism: they are linear defects present in almost all materials, and their presence allows materials to deform easily (a detailed treatment of dislocation mechanics is beyond the scope of this two-page paper; see reference 8 for dislocation theory). However, there is a limit to this effect. In fact, the Hall-Petch effect is valid only until the grain size is reduced to about 15 nm [1, 3-4]; this is typically the characteristic length (CL). Further refinement of the grains below this size decreases the strength of the material, which is known as the reverse Hall-Petch behavior [1-5]. The material becomes increasingly amorphous-like as the grain size is reduced below its CL: no dislocations are created at these small grain sizes, and those that previously existed are removed during deformation [4].
Methodology There are several ways of producing NC materials, both experimentally and with MD simulations; for experimental routes to NC materials, see references 2 and 3. I generated 3-D atomic models of NC Cu with periodic boundary conditions using the AtomEye software [7]. To produce samples with a random grain distribution, I used the Voronoi tessellation method [7]. After the models were built, I used LAMMPS [6] to explore the mechanical properties of NC Cu. The interaction between the Cu atoms was described by the modified embedded atom method (MEAM) potential; in LAMMPS, the MEAM style computes the pairwise interactions for a variety of materials [6]. In the MEAM, the total energy E of a system of atoms is given by [6]:

E = \sum_{I} \left[ F_{I}(\bar{\rho}_{I}) + \tfrac{1}{2} \sum_{J \neq I} \phi_{IJ}(r_{IJ}) \right]

where F is the embedding energy, which is a function of the atomic electron density \bar{\rho}, and \phi is a pair potential interaction. The pair interaction is summed over all neighbors J of atom I within the cutoff distance. Several other possible potentials are listed in reference 6. I performed the simulations under constant pressure, temperature, number of atoms, and strain rate. The temperature was controlled by the Nosé-Hoover thermostat [6]. For each sample, I

equilibrated the model by annealing it at 300 K for 50 ps, allowing the model to relax before deforming it. After relaxation, I applied a constant strain rate of 10^9/s to deform the sample in the z-direction while the x and y boundaries were held fixed. Each sample was deformed under a thermal load of 300 K with a time step of 0.001 ps, and I calculated the stress in the z-direction. The total strain was 15% for each sample. I studied the following three models:

Model   Number of atoms   Avg grain diameter (nm)   Number of grains
1       9,928             3.4                       3
2       101,898           5.4                       8
3       2,837,191         22.0                      3

Table 1: Data obtained from the Atom Eye

Results The results suggest that the yield strength and Young's modulus of the material are grain-size-dependent characteristics, consistent with the Hall-Petch and reverse Hall-Petch effects. In Fig 1, Plot 'a', the linear part is the elastic regime, which determines the material's elastic (Young's) modulus. I found the Young's modulus of the Cu to be between 81.0 and 93.0 GPa, increasing with increasing grain diameter. From the curves it can be observed that for grain sizes of 3.4 and 5.4 nm the curves depart from linearity at approximately the same point, where I calculated the Young's modulus to be ~81.0 GPa; for the grain size of 22.0 nm I found ~93.0 GPa.

Figure 1: Plot 'a', stress-strain curves at 300 K and a strain rate of 10^9/s in the z-direction. Plot 'b', yield strength vs. grain diameter at 300 K.

However, the strength of a material is determined by its yield strength. The yield strength is the point where the curve deviates from linearity on a stress-strain curve and plastic deformation begins. Before the curve deviates from linearity, the material stretches elastically, meaning it returns to its original shape if the stress is removed. From the curves, I determined the yield strength using the 0.2% offset method, i.e., the intersection of the stress-strain curve with a line offset by 0.2% strain and parallel to the Young's modulus slope. Plot 'a' shows that the stress increases as the strain increases, and Plot 'b' shows that the yield strength increases with increasing grain size, as expected from the reverse Hall-Petch effect. The research group of ref. 5 found that at 300 K the yield strength of Cu started to decrease with decreasing grain size after passing a CL of ~10.0-15.0 nm. They reported, at 300 K, yield strengths between ~1.7-2.5 GPa for grain diameters ranging from 5.0 to 50.0 nm, whereas my results showed yield strengths between


~1.7-2.0 GPa for grain diameters ranging from 3.4 to 22.0 nm. They also reported that at 300 K the transition from the reverse Hall-Petch to the Hall-Petch regime was not observed over grain sizes from 5.0 to 50.0 nm; at 600 K, however, the yield strength showed a transition to the Hall-Petch regime for grain sizes larger than 30 nm. Guided by this result [5], and because of time restrictions, I decided to test only three samples at 300 K. This research group [5] employed the embedded atom method (EAM) potential, whereas I used the MEAM potential in my simulations.
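A minimal post-processing sketch of the two quantities reported above, assuming the MD output has already been reduced to arrays of engineering strain and stress in GPa; this is an illustration under those assumptions, not the analysis script used in this work.

```python
import numpy as np

def youngs_modulus(strain, stress, elastic_limit=0.01):
    """Fit the linear elastic regime (strain <= elastic_limit) and return E in GPa."""
    mask = strain <= elastic_limit
    E, _ = np.polyfit(strain[mask], stress[mask], 1)
    return E

def offset_yield(strain, stress, E, offset=0.002):
    """Return the 0.2% offset yield strength (GPa)."""
    offset_line = E * (strain - offset)
    # first point where the stress-strain curve drops below the offset line
    idx = np.argmax(stress < offset_line)
    return stress[idx]

# usage with hypothetical output files:
# strain = np.loadtxt("strain.txt"); stress = np.loadtxt("stress_z.txt")
# E = youngs_modulus(strain, stress)
# sigma_y = offset_yield(strain, stress, E)
```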

they do form, they will be removed during the deformation process. The main deformation mechanism in the first and second models was a large number of independent small slip events in the GBs; from time to time a few partial dislocations may have nucleated through the GBs, but their contribution to the total deformation was insignificant. A similar observation is noted in ref. 5. From Table 2, we can see that as the grain diameter is reduced, a larger fraction of the atoms reside in the GBs, which makes GB sliding, and thus softening of the material, easier.

Model   % of atoms in grains   % of atoms in GBs
1       62.0                   38.0
2       74.3                   25.7
3       93.6                   6.4

Table 2: Data obtained from the Atom Eye when models were built

In conclusion, I performed classical MD simulations to obtain the properties of NC Cu at three different grain diameters under a thermal load of 300 K. For samples below the characteristic length, I observed the reverse Hall-Petch effect: grains in that range are too small for dislocations to pile up inside them, so most of the deformation in that regime is an intergranular, grain-boundary-mediated process driven by the large fraction of the volume in the GBs, where many independent slip events take place. This results in the weakening of NC Cu as the grain diameter decreases.
Acknowledgements I. University of Pittsburgh, Swanson School of Engineering II. Dr. Guofeng Wang III. Yinkai Lei IV. Corinne Gray
Figure 2: Snapshots of computer-simulated NC Cu before and after deformation. 'A' is the sample of NC Cu before deformation with an average grain diameter of 3.4 nm; 'B' and 'C' are snapshots after 4% and 15% strain, respectively, in the z-direction (right in the plane of the paper). 'D' is the sample of NC Cu with an average grain diameter of 5.4 nm before deformation; 'E' and 'F' are after 4% and 15% strain, respectively, in the z-direction (right in the plane of the paper). Finally, 'G' is the sample of NC Cu with an average grain diameter of 22.0 nm before deformation; 'H' and 'I' are after 4% and 15% strain, respectively, in the z-direction (up in the plane of the paper). In the first and second columns of Fig 2, atoms in the grains are colored cyan, atoms in the GBs gold, and atoms in stacking faults red. In the third column, atoms in the grains are colored blue, atoms in the GBs cyan, and atoms in stacking faults gold. (The color coding had to be adjusted to show the GBs and defects for the different models.)

In the first and second columns of Fig 2, the atoms in cyan have a coordination number (CN) of 12, whereas the atoms in red and gold possess a different CN, with a centrosymmetry (CS) value larger than zero. In the third column of Fig 2, the atoms in blue have a CN of 12 and the other colored atoms possess a different CN. The CS value is an important parameter in determining whether an atom is part of a perfect lattice, a dislocation, a stacking fault, or a surface defect [5]. The CS parameter is determined by the following equation [7]:

c_i \equiv \frac{\sum_{\text{pairs}} \left| \mathbf{D}_k + \mathbf{d}_j \right|^2}{2 \sum_{j=1}^{m} \left| \mathbf{d}_j \right|^2}

where m is the maximum number of neighbors (the CN), and D_k and d_j are vectors from the central atom to a particular pair of nearly opposite nearest neighbors; the numerator sums over the m/2 such pairs. The c_i value will be < 0.01 for a perfect crystal. Stacking faults are left behind by partial dislocations. In the 22.0 nm grain simulation, stacking faults are observed at 4% strain, indicated by the red arrow in Fig 2 H; in the 5.4 nm grain simulation, stacking faults are observed at 15% strain, indicated by the red arrow in Fig 2 F; and in the 3.4 nm grain simulation, stacking faults are not observed. However, a partial dislocation appears to nucleate in a grain at 15% strain, shown by the red arrow in Fig 2 C, at the end of deformation. As expected from the reverse Hall-Petch effect, at smaller grain sizes no dislocations will exist, and even if

References:
1. Kumar, K. S., H. Van Swygenhoven, and S. Suresh. "Mechanical Behavior of Nanocrystalline Metals and Alloys." Acta Materialia 51 (2003): 5743-5774.
2. Sanders, P. G., J. A. Eastman, and J. R. Weertman. "Elastic and Tensile Behavior of Nanocrystalline Copper and Palladium." Acta Materialia 45.10 (1997): 4019-4025.
3. Schiotz, Jakob, Francisco D. Di Tolla, and Karsten W. Jacobsen. "Softening of Nanocrystalline Metals at Very Small Grain Sizes." Nature, 5 Feb. 1998.
4. Van Swygenhoven, H., M. Spaczer, and A. Caro. "Microscopic Description of Plasticity in Computer Generated Metallic Nanophase Samples: A Comparison between Cu and Ni." Acta Materialia 47.10 (1999): 3117-3126.
5. Choi, Yongsoo, Youngho Park, and Sangil Hyun. "Mechanical Properties of Nanocrystalline Copper under Thermal Load." Physics Letters A 376.5 (2012): 758-762.
6. lammps.sandia.gov
7. li.mit.edu
8. Weertman, Johannes, and Julia R. Weertman. Elementary Dislocation Theory. New York: Oxford UP, 1992.
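As an illustration of the centrosymmetry criterion described above, the following sketch computes a normalized CS value for one atom given its neighbor vectors; the neighbor search and the pairing heuristic are simplified assumptions for the example, not the AtomEye implementation.

```python
import numpy as np

def centrosymmetry(neighbor_vectors):
    """neighbor_vectors: (m, 3) array of vectors from the central atom to its m neighbors."""
    d = np.asarray(neighbor_vectors, dtype=float)
    m = len(d)
    used, pair_sum = set(), 0.0
    # greedily pair each neighbor with the remaining neighbor most nearly opposite it
    for i in range(m):
        if i in used:
            continue
        j = min((k for k in range(m) if k != i and k not in used),
                key=lambda k: np.linalg.norm(d[i] + d[k]))
        used.update((i, j))
        pair_sum += np.linalg.norm(d[i] + d[j]) ** 2
    return pair_sum / (2.0 * np.sum(np.linalg.norm(d, axis=1) ** 2))

# For a perfect FCC environment the opposite-neighbor vectors cancel, so the value
# approaches zero (< 0.01); defects, stacking faults, and surfaces give larger values.
```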


COORDINATED RESET DEEP BRAIN STIMULATION TO TREAT PARKINSON'S DISEASE Andrew Macgregor and Robert S. Turner, PhD Center for the Neural Basis of Cognition University of Pittsburgh, PA, USA Email: ajm194@pitt.edu

INTRODUCTION Parkinson's Disease (PD) affects millions worldwide, being the second most common neurodegenerative disorder after Alzheimer's Disease [1]. In addition to progressive worsening of motor control and gait due to muscle tremor and rigidity, later stages of PD are associated with dementia, sleep disorders, and depression [1-3].

Deep Brain Stimulation (DBS) can be administered as a complement to pharmaceutical treatment in patients with refractory symptoms. In DBS, a patient receives electrical stimulation to a structure in the brain known as the Basal Ganglia (BG), which is part of the brain network controlling skeletal muscle movement. Although the exact mechanism by which PD arises is unclear, PD has been associated with abnormally synchronous network activity within the BG [1, 4, 5]. Electrical stimulation of the BG is observed to disrupt this pathologically synchronous activity, reducing motor impairments associated with PD.

In current clinical practice, the BG may be stimulated constantly by electrodes which deliver alternating current at a high (130-180 Hz) frequency and a voltage which must be increased chronically to maintain therapeutic effects [4]. Within 30 seconds after halting stimulation, the therapeutic effect is lost, which necessitates constant stimulation and limits the battery life on a device which requires invasive surgery to replace [4, 5].

Research efforts are currently underway to develop stimulation parameters with increased efficiency and efficacy [5-8]. Studies by Tass et al. demonstrate increased therapeutic aftereffects using a "coordinated reset" (CR) stimulation paradigm in which high frequency, low voltage stimulation is delivered through electrodes selected randomly. By randomly dividing the desynchronization spatially and temporally within the BG, therapeutic aftereffects are observed to persist for days [7, 8]. There is no commercially available stimulator which is capable of delivering CR DBS, so in the interest of exploring the parameter space (number of electrodes, timing, etc.) this project set out to design a signal router for the purpose of simultaneously delivering CR DBS and recording local field potentials.

METHOD
Device Overview
A novel design for a signal routing device was conceived in order to enable later delivery of CR DBS. The signal router switches connection time and location between electrodes rapidly according to the CR paradigm as well as manually. All control of the signal router is conducted by an Arduino DUE in communication with a GUI client which is run on a local computer.

Software
The software to control the signal router is based on the GUINO library released by Mads Hobye under a Creative Commons Attribution-ShareAlike license. GUINO uses the EasyTransfer library to effect serial communication between an Arduino microcontroller and the client software which runs a GUI using the OpenFrameworks library on the user's computer. For the purposes of this project, the GUINO library was substantially modified to run on the Arduino DUE, which


uses 32 bit architecture. Notably, because the Arduino Due does not use Electrically Erasable Programmable Read Only Memory (EEPROM), the functionality for storing and modifying variables in flash memory was enabled through the DueFlashStorage library released by Sebastian Nilsson. The required functionality to edit stimulation parameters without recompiling was added to the GUINO library in the form of text box input. The Arduino sketch was written in tandem with changes to GUINO and related libraries.
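A conceptual sketch of coordinated-reset scheduling is shown below; it is not the Arduino firmware described above. Each CR cycle delivers one burst per contact with the contact order re-randomized every cycle, and the channel count, cycle period, and burst parameters are illustrative assumptions only.

```python
import random

def cr_schedule(n_contacts=4, n_cycles=10, cycle_period_ms=200.0):
    """Return a list of (time_ms, contact) stimulation events."""
    events = []
    slot = cycle_period_ms / n_contacts          # time slot per contact within a cycle
    for cycle in range(n_cycles):
        order = random.sample(range(n_contacts), n_contacts)   # shuffled contact order
        for k, contact in enumerate(order):
            events.append((cycle * cycle_period_ms + k * slot, contact))
    return events

# Example: print the first CR cycle's event times and contacts.
for t, c in cr_schedule()[:4]:
    print(f"t = {t:6.1f} ms  ->  burst on contact {c}")
```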

Figure 1: Screenshot of the GUI as it appears onscreen. All twelve channels may be controlled manually or used to deliver CR DBS.

RESULTS The client software produces a GUI that accepts user input to the Arduino DUE via a serial connection. Stimulation using a CR methodology may be delivered, and stimulation parameters may be altered by text inputs within the GUI. Preliminary validation of the switching circuit under DC conditions confirmed that the ESD protection diodes conduct at voltages exceeding the switch supply voltages. An insertion loss of approximately -0.25 dB was observed across the circuit. DISCUSSION There is currently no commercially available neurostimulator capable of stimulation in random or specified patterns. This paper presents the first step towards producing a signal router which can potentially be used in a laboratory or clinical setting. This device will have the added benefit of reducing the time and risk associated with manipulating physical connections when a patient or animal is undergoing stimulation or recording of brain local field potentials. REFERENCES

Hardware The signal routing hardware comprises twelve channels which switch connectivity to the electrodes between four states: connection to the stimulator cathode, connection to the stimulator anode, connection to the recording preamp, and no connection. Electrical isolation from mains power is effected by means of powering the switches with regulated battery power (Cincon EC8AW) and by placing in-line optical isolation on the serial connection between the Arduino and the client computer (Keterex USB-150). Electrostatic discharge (ESD) protection is achieved by means of bidirectional diode pairs (ON NUP4102XV6) between the switches' input/output pins as well as resettable fuses (TE LVR005NS) between all external connectors and the switches (Vishay DG412).

[1] Tolleson et al. 2013, Discovery medicine, 15(80), pp. 61-66. [2] Opara et al. 2012, Journal of medicine and life, 5(4), pp. 375-381. [3] Ossig et al. 2013, Journal of neural transmission, 120(4), pp. 523-529. [4] Shen, 2014, Nature, 507(7492), pp. 290-292. [5] Zwagerman et al. Neurosurgery, 72(4). [6] Brocker et al. 2013, Exp Neurol, 239, pp. 60-67. [7] Tass et al. 2012, Annals of neurology, 72(5), pp. 816-820. [8] Adamchic et al. 2014, Movement disorders : official journal of the Movement Disorder Society.

ACKNOWLEDGMENTS Funding and mentorship were provided by Dr. Robert Turner. The initial project motivation was conceived by Dr. Turner and Dr. R. Mark Richardson.


HIGH-DIMENSIONAL NEURAL CORRELATES OF CHOICE AND ATTENTION IN V4 Michael J. Morais, Adam C. Snyder, and Matthew A. Smith Visual Neuroscience Laboratory, Department of Ophthalmology University of Pittsburgh, PA, USA Email: mjm211@pitt.edu, Web: http://www.smithlab.net/ INTRODUCTION Complex cognitive tasks, such as making decisions about ambiguous stimuli or allocating visuospatial attention, are necessarily accomplished by the concurrent activity of large populations of neurons. These processes have historically been studied mostly by observing only the first-order firing rate statistics of single neurons [1], but researchers have recently begun to use high-dimensional analyses to elucidate the coding strategies employed dynamically in the brain during these processes [2]. For this study, our aim was to assay how well these approaches can reveal the mechanisms of spatial attention in an earlier stage of sensory processing. METHODS We recorded populations of neurons using microelectrode arrays (Utah array, Blackrock Microsystems) in V4 of macaque monkeys performing a spatial selective attention task (Figure 1). Briefly, animals fixated, after which a visual cue directed attention into or opposite the recorded neurons' receptive fields (RFs). Two oriented drifting grating stimuli were presented, one in and one opposite the RFs. The monkey was trained to look towards the stimulus that changed speed, if one changed. The target was validly cued on 80% of trials, to ensure proper attentional deployment. On some trials, no target appeared, during which the monkey needed to maintain fixation. Correct behavior was reinforced with liquid reward. DATA PROCESSING Data were collected from one monkey over 31 days, yielding 3624 neurons concatenated and used as a "superset" of data. Demixed principal components analysis (dPCA) was employed to orthogonalize, in a low-dimensional space, the trial-

averaged population data according to particular task variables [3]. Using variables associated with the animal's choice along with temporal, cue, and target characteristics, we calculated the covariance associated with each individual variable, called the marginal covariance. The marginal covariance matrices were used to inform an objective function that, when maximized via simple gradient ascent methods, creates an optimal low-dimensional projection matrix. This subspace optimally describes the variability within the marginal covariance matrices while penalizing dimensions that describe more than one task variable. In this way, the high-dimensional population data are demixed into an interpretable low-dimensional space informed by the task design. Trials of data were defined not in terms of ensembles of neural activity but rather as trajectories through this low-dimensional space. A nonparametric bootstrapping procedure was used to assay the uncertainty of these trajectories. We repeatedly resampled, with replacement, the trials associated with each unique trial outcome, such that the same population of neurons would be informed by different subsets of the data. With the same neural population intact, each resampled

Figure 1. Schematic of attention task. The animal begins by fixating a centrally presented dot. A cue is then presented, which directs attention to one of the two stimulus locations. After a preparatory interval, two drifting grating stimuli are presented. The animal is tasked with looking towards the stimulus (if any) that changes drift rate during the trial, and is rewarded if successful.


iteration of data could be projected into the same low-dimensional neural subspace. A pair of trajectories was declared significantly different (p < 0.01) at a particular time point if the bootstrapped confidence intervals (95% CI) were nonoverlapping within a sufficiently long sequence of time points. The length of this sequence was computed according to the autocorrelation of the difference of the two trajectories to control the family-wise error rate. RESULTS We tested differing pairs of trajectories relative to two critical events in the trial process: the cue presentation and the target onset. For example, aligning trials to the cue onset where the monkey was given identical cues and targets but made opposite choices, we observed that cue information was encoded with a significantly reduced magnitude. This drop could reflect ineffective cue encoding, weak attentional deployment, or low arousal, any of which would yield reduced confidence in the decision. Significantly reduced choice activity during this period could also implicate the role of a pre-established bias (Figure 2A). Similarly, aligning trials to the target onset where the monkey made identical choices to identical targets but was given opposite cues, we observed significantly increased cue and target

information. This increase could reflect increased alertness, attention directed opposite the cue, or biases, any of which would increase the monkey's perception of the invalidly-cued target (Figure 2B). A more comprehensive assessment of how different trial conditions gave rise to divergent trajectories will follow in future investigation. DISCUSSION We showed that differing trial outcomes were associated with differing trajectories through the low-dimensional subspace. In this way, trial outcomes were defined not in terms of individual neurons' activities but as states of a larger population. This indicates that spiking correlates of attention can be investigated in their native high-dimensional spaces in order to more fully characterize how networks of neurons dynamically shift their resources to meet cognitive demands in real time. REFERENCES 1. Moran & Desimone, Science 229(4715), 782-784, 1985. 2. Mante et al. Nature 503, 78-84, 2013. 3. Brendel et al. NIPS 24, 2011.

ACKNOWLEDGEMENTS MJM was supported by summer research grants from the University of Pittsburgh Dept of Bioengineering and the Fight for Sight Foundation.

Figure 2. Neural trajectories for two illustrative trial comparisons. Three-dimensional trajectories are shown as three one-dimensional trajectories, for simplicity. Shaded regions illustrate 95% CI from bootstrapping; the line thickness is often thicker than the error bars. A. When aligned to the cue onset, trials that receive the same cue and target, but yield opposite choices, encode the cue differently (top). Reduced choice information (bottom) could reflect a pre-established bias. B. When aligned to the target onset, trials with the same target and choice, but opposite cues, process the target differently (top, middle). For invalid cues (blue), elevated cue and target activity indicates increased detection ability of the correct, but invalidly cued target.
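A minimal sketch of the trial-resampling procedure described in DATA PROCESSING, assuming `trials` is an (n_trials x n_timepoints) array of single-trial activity already projected onto one demixed component; it is an illustration under those assumptions, not the study's analysis code.

```python
import numpy as np

def bootstrap_trajectory_ci(trials, n_boot=1000, ci=95, rng=None):
    """Return (mean_trajectory, lower, upper) percentile confidence bands."""
    rng = np.random.default_rng(rng)
    n_trials = trials.shape[0]
    boot_means = np.empty((n_boot, trials.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n_trials, size=n_trials)   # resample trials with replacement
        boot_means[b] = trials[idx].mean(axis=0)
    lo, hi = np.percentile(boot_means, [(100 - ci) / 2, 100 - (100 - ci) / 2], axis=0)
    return trials.mean(axis=0), lo, hi

# Two conditions would be called different at a time point when their bands do not
# overlap for a sufficiently long run of consecutive time points, with the run length
# set from the autocorrelation of the trajectory difference, as described above.
```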


Quantifying Tibiofemoral Joint Contact Forces in Patients with Knee Osteoarthritis Using OpenSim Paige Kendell1,2, Shawn Farrokhi1,3, PT, PhD, DPT Human Movement Research Laboratory1, Department of Bioengineering2, Department of Physical Therapy3 University of Pittsburgh, PA, USA Email: pek29@pitt.edu INTRODUCTION Knee osteoarthritis (OA) is a disease that commonly affects older adults and is characterized by pain and functional limitations during daily activities. Walking exercise is frequently prescribed as an effective therapy to keep joints mobile, maintain or lose weight and boost overall health for patients with knee OA. However, prolonged walking exercise results in symptomatic knee pain and may increase the risk of disease progression in those affected by OA. To date, the overall quantitative effects of prolonged therapeutic walking exercise on symptoms and disease progression for patients with knee OA remain unknown.

METHODS Patients with knee OA were included in this study if they met the American College of Rheumatology criteria for knee OA. Patients' lower extremities were outfitted with clusters of reflective markers, and virtual markers were calibrated at different joint centers for data collection (The MotionMonitor). Each patient was attached to a harness as a safety precaution and walked continuously for 45 minutes on a split-belt instrumented treadmill (Bertec Corp., Columbus, OH). Ground reaction force data were collected at 1000 Hz. Marker trajectory data were collected using a 14-camera system (Vicon Motion Systems Ltd.) at 100 Hz. A 3D musculoskeletal model was created in OpenSim based on the Lower Limb Model 2010 [1]. Due to deprecation, all Schutte muscles were replaced with Thelen muscles. Virtual markers and a marker representing the centroid of each cluster used during data collection replaced the markers found in the 2010 model. All lumbar degrees of freedom were locked, as data on bony landmarks above the superior iliac spine were not collected.

Figure 1: Default OpenSim model.

The purpose of this study was to quantify tibiofemoral joint contact forces (JCF) in patients with knee OA during a continuous 45-minute bout of walking exercise. To achieve this goal, a methodology for quantifying tibiofemoral JCFs was developed using OpenSim. OpenSim is a 3D musculoskeletal modeling software capable of predicting JCFs using patient-specific models (Figure 1). We hypothesized that prolonged walking would lead to increased tibiofemoral JCFs throughout a bout of continuous walking and that there would be a difference in the tibiofemoral JCFs in the painful and non-painful limbs.

Subject-specific models were created for two patients. Tibiofemoral JCFs were predicted at the beginning of a 45-minute bout of walking (baseline) and again at the end of the walking exercise trial. Each model was anthropometrically scaled based on the patient's height and weight, and experimental markers were positioned according to their respective locations during data collection. After scaling, joint angles were calculated using an inverse kinematics tool, which derived the angles by minimizing the weighted squared errors between the predicted and actual marker locations.
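The weighted least-squares criterion minimized at each frame of the inverse kinematics step can be illustrated as in the sketch below; the marker weights and the model's forward-kinematics function are placeholders, not OpenSim API calls.

```python
import numpy as np

def ik_cost(q, experimental_markers, model_markers_fn, weights):
    """Weighted sum of squared distances between model and experimental markers.

    q                    : candidate joint angles for one frame
    experimental_markers : (n_markers, 3) measured marker positions
    model_markers_fn     : function mapping q -> (n_markers, 3) model marker positions
    weights              : per-marker weights
    """
    predicted = model_markers_fn(q)
    sq_err = np.sum((predicted - experimental_markers) ** 2, axis=1)
    return np.sum(weights * sq_err)

# In practice this cost would be minimized per frame with a nonlinear optimizer,
# e.g. scipy.optimize.minimize(ik_cost, q0, args=(markers, fk, w)).
```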


Next, the torso center of mass and segment masses within the model were adjusted using a residual reduction algorithm. The changes made by the algorithm allowed the model to be more dynamically consistent and minimize any residual forces that may exist due to marker trajectory error and modeling effects. After completion of the residual reduction algorithm, all muscle wrapping surfaces were deactivated to decrease computational time. Computed muscle control was then applied to the model to compute the muscle activations driving the kinematics. Finally, a joint reaction analysis predicted the tibiofemoral JCFs for the painful and non-painful limbs at baseline and at the end of the trial. Tibiofemoral JCFs resulting from the stance phase of gait were averaged and reported as a percentage of body weight. RESULTS For both patients, the tibiofemoral JCF during the early stance phase of gait was greater in the painful limb compared to the non-painful limb at baseline as seen in Figures 2 & 3. In addition, both patients demonstrated an increase in peak tibiofemoral JCF during the early stance phase of gait after 45 minutes of walking for both the painful limb (Figure 2) and the non-painful limb (Figure 3). The peak tibiofemoral JCF increased by an average of 15% in the painful limb from baseline to 45 minutes. Likewise, the peak tibiofemoral JCF increased by an average of 11% in the non-painful limb from baseline to 45 minutes.

Figure 3: Average weight-normalized tibiofemoral joint contact forces during stance phase of gait at baseline (black) and 45 minutes (gray) for the patients’ non-painful limb.

CONCLUSION The model created in OpenSim predicted physiologically reasonable tibiofemoral JCFs, which supported our hypotheses. The tibiofemoral JCFs were greater in the painful limb at baseline, and the peak tibiofemoral JCFs were greater after walking for 45 minutes compared to baseline. Overall, the results predicted by the OpenSim model for this preliminary study are consistent with previously published literature [2]. Currently, a formal model validation study is underway which compares electromyography recordings collected during the trial to OpenSim-estimated muscle activations. Walking exercise data have recently been collected for more patients with knee OA. Tibiofemoral JCFs will be predicted for these patients in order to determine a threshold of walking exercise that is ideal for this patient population. REFERENCES 1. Arnold et al. Ann Biomed Eng 38, 269-279, 2010. 2. Steele et al. Gait Posture 35, 556-560, 2012.

Figure 2: Average weight-normalized tibiofemoral joint contact forces during stance phase of gait at baseline (black) and 45 minutes (gray) for the patients’ painful limb.

ACKNOWLEDGEMENTS The author would like to acknowledge the University of Pittsburgh’s Department of Physical Therapy and Swanson School of Engineering for funding, Shawn Farrokhi, PT, PhD, DPT for his mentorship, Arash Mahboobin, PhD and Liying Zheng, PhD for OpenSim guidance and William Anderton, BS for his assistance with OpenSim.


ANALYZING ANIMAL MODEL AND DRUG-LOADED MICROSPHERES FOR LOCAL BREAST CANCER RECURRENCE IN AUTOLOGOUS FAT GRAFTING Ana Taylor1,3, Jacqueline Bliley1, Wakako Tsuji1, Kacey G Marra1,2,3, Albert D Donnenberg4, Vera S Donnenberg4, J Peter Rubin1,2,3

1 Department of Plastic Surgery, University of Pittsburgh, 200 Lothrop Street, Pittsburgh, PA 15213
2 McGowan Institute for Regenerative Medicine, University of Pittsburgh, 450 Technology Drive, Pittsburgh, PA 15219
3 Department of Bioengineering, University of Pittsburgh, 300 Technology Drive, Pittsburgh, PA 15219
4 University of Pittsburgh Cancer Center, 5117 Center Avenue, Pittsburgh, PA 15213

INTRODUCTION Autologous fat grafting has recently been re-evaluated for breast augmentation and breast reconstruction after surgery (1, 2). Fat grafting is an innovative method for breast reconstruction in cancer patients; it is less invasive, more natural-looking, and can cause fewer immunological problems. It has fewer complications, such as ruptures, malposition, and capsular contractures, compared with silicone breast implants. In addition to the preference for fat grafting for reconstructive purposes, there is interest in using adipose-derived stem cells for the regenerative process. In 2001, adipose-derived stem cells (ASCs) were identified as mesenchymal stem cells residing in fat tissue. Because of their capacity for multi-lineage differentiation and their wound-healing effects, ASC-enriched fat grafting was developed, and it is reported that retention of ASC-enriched fat grafts is better than retention without ASCs (3, 4). There is, however, major concern that fat grafting with adipose-derived stem cells may promote cancer cell progression. The aim of this initial study was to develop an animal model for breast cancer recurrence in fat grafting and to analyze doses of cancer cells in fat tissue to determine what effect the cells' environment has on cell progression. Microspheres loaded with chemotherapeutic drugs have been under ongoing study as a potential solution to the concern over the stem cells' effects on cancer cell progression. Microspheres are small spherical particles with diameters in the micron range, hence the name. They have a history of use for drug delivery, since they can be less invasive and help spread treatment evenly over a specific area. Anti-cancer drugs (Doxorubicin and Paclitaxel) were encapsulated into microspheres and

tested with multiple assays to determine effects on cell progression. METHODS For the fat grafting study, an exact dose of MDA-MB-231 or BT-474 breast cancer cells was mixed with human fat tissue and injected subcutaneously into 8- to 10-week-old immune-deficient female mice; the same doses were also injected with Matrigel as a positive control. After 6 or 8 weeks, the mice were euthanized and samples were taken to be weighed and analyzed histologically with H&E, human-specific pan-cytokeratin, and Ki67. Analysis was focused on the Ki67 staining because it reflects cell proliferation; Ki67 is a protein that marks proliferating cells and is therefore helpful for identifying growth of the cancer cell populations in each injection. NIS Elements software was then used to determine the percentage of proliferating cancer cells in each sample from the mice. For the microsphere portion of the study, 125 milliliters of an aqueous solution containing a chemotherapeutic drug was encapsulated in either a poly(dl-lactide-co-glycolide) solution for single-walled microspheres or in a poly(l-lactide) solution then dispersed in poly(dl-lactide-co-glycolide) for double-walled microspheres. Empty microspheres with no drug encapsulated were used as a control. The solutions were spun in polyvinyl alcohol, centrifuged to isolate the microspheres, and freeze-dried to eliminate any remaining water. Then, each batch of microspheres containing Paclitaxel or Doxorubicin was analyzed with multiple assays to assess cell viability, proliferation, differentiation, or biological activity. DATA PROCESSING The NIS Elements imaging software was used to quantify the effects the cells' environment has on cell progression. The software can


differentiate the pigmentation of the stained slides of samples from the mouse injections. In Ki67 staining, proliferating areas appear brown while the rest of the tissue appears blue/purple. The brown areas were highlighted and compared to the total imaged area to obtain a percentage of proliferation. A CyQUANT assay and a bioactivity assay were conducted on the microspheres added to cancer cell lines to assess proliferation and biological activity, respectively. These assays are based on DNA content and can report activity in terms of cell numbers, since the amount of DNA per cell remains constant for any cell line or cell type. RESULTS Table 1 below shows a selection of the proliferation percentages from the samples. Although the study is still ongoing, the results do show differences in proliferation depending on the cells' environment. Figure 1 displays a sample of the assay results, specifically the bioactivity assay. The assessment of microspheres showed that although certain drugs and specific doses work better on certain cancer cells, the microspheres do effectively reduce these cancer cells and therefore help prevent cell progression.
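The brown-versus-total area calculation can be illustrated with a simple color threshold, as in the sketch below; it assumes an RGB image of a Ki67-stained section loaded as a NumPy array, and the thresholds are rough placeholders rather than the NIS Elements settings that were actually used.

```python
import numpy as np

def percent_proliferation(rgb_image):
    """Estimate % of tissue area that is Ki67-positive (brown) vs. counterstained (blue/purple)."""
    r = rgb_image[..., 0].astype(float)
    g = rgb_image[..., 1].astype(float)
    b = rgb_image[..., 2].astype(float)
    tissue = (r + g + b) < 700                   # exclude near-white background pixels
    brown = tissue & (r > b + 20) & (r > 100)    # crude brown (DAB) threshold
    if tissue.sum() == 0:
        return 0.0
    return 100.0 * brown.sum() / tissue.sum()

# usage with a hypothetical image file:
# from imageio import imread
# print(percent_proliferation(imread("ki67_section.png")))
```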

Figure 1: By comparing the DNA concentration to the amount of Doxorubicin in the microspheres (either empty or loaded), we see that this method of drug delivery can reduce the number of cancer cells. DISCUSSION The NIS Elements program demonstrates that we can effectively count cancer cells using the software and Ki67 staining technique to help conclude the effects of MDA-MB-231 or BT-474 breast cancer cells and their environment on cancer cell progression. This can be used in further, more specific analysis of the ASC’s effects in fat grafting for cancer patients. There seems to be a specific, preferred drug and dose amount in the microspheres that works for each different cancer line. The assays give us information of their effects on cell progression that may help with determining their role in fat grafting for the future. REFERENCES 1. Coleman SR, Saboeiro AP. Plastic and reconstructive surgery. 2007 Mar;119(3):775-85; discussion 86-7. 2. Delay E, Garson S, Tousson G, Sinna R. Aesthet Surg J. 2009 Sep-Oct;29(5):360-76. 3. Yoshimura K, Asano Y, Aoi N, Kurita M, Oshima Y, Sato K, et al. The breast journal. 2010 Mar-Apr;16(2):169-75. 4. Yoshimura K, Sato K, Aoi N, Kurita M, Hirohi T, Harii K. Aesthetic Plast Surg. 2008 Jan;32(1):48-55; discussion 6-7.

ACKNOWLEDGEMENTS This work was supported by the University of Pittsburgh’s departments of Plastic Surgery and Bioengineering, the McGowan Institute for Regenerative Medicine, and the Hillman Cancer Center. Special Thanks to the Center for Biological Imaging for their service and training with the software and to Jacqueline Bliley and Wakako Tsuji for their guidance and mentorship.

Table 1: Average cell proliferation percentage for cancer cells in fat, media, and/or Matrigel.

Average % Proliferation   MDA+Matrigel+Lipo   BT+Media           BT+Matrigel        MDA+Media   MDA+Matrigel
100k dose                 0                   11.4614±10.0885    10.9567±11.7749    19.8201     17.268±11.9439
10k dose                  5.12047±7.80708     2.00738±4.48863    4.59679±8.60257    0           7.14578±11.6797


AN ADIPOSE STEM CELL SUSPENSION IN KERATIN HYDROGEL FOR PERIPHERAL NERVE INJURY TREATMENT Lindsey Marra1, Danielle Minteer1, and Kacey Marra1-3 1) Dept of Bioengineering 2) Dept of Plastic Surgery 3) McGowan Institute for Regenerative Medicine INTRODUCTION Engineering a biomaterial to provide biochemical and mechanical support for regenerating peripheral nervous tissue will raise the standard of treatment for peripheral nerve injuries [1]. An adipose-derived stem cell suspension in keratin hydrogel that can be injected into the lumen of a nerve conduit may be the biomaterial that can provide support to the regenerating peripheral nerve [2,3]. Keratin must be able to form a gel, maintain its gel state in physiological conditions, and support adipose stem cells (ASCs) and neurons in a shared environment. MATERIALS AND METHODS Keratin gelation: Keratin was extracted from human hair and lyophilized. Lyophilized keratin was dissolved in phosphate buffered saline. Concentrations of 20%, 22% and 25% keratin were compared for time to gelation and ability to maintain structure when submerged in PBS. Fluorescent imaging experiment: Keratin gel and collagen gel were plated on glass-bottomed dishes and ASCs were seeded on top of the gels. The media was replaced every 2 or 3 days for 7 days. On day 7 the ASCs were labeled with calcein AM and ethD-1, then imaged using fluorescence microscopy, and cell viability was assessed. Cytoviability experiment: ASCs were seeded on top of keratin gel or seeded directly in the well plate as a positive control. The cells were allowed to proliferate for 4 days without a media change. On day 4 the ASCs were labeled with calcein AM and ethD-1. Fluorescence was quantified in a Tecan Infinite M200 Pro plate reader. The percentage of live and dead cells was calculated for both experimental groups (Equation 1), having already subtracted the noise from keratin autofluorescence that was determined with a negative control. A two-tailed, independent t-test was used to determine statistical difference between experimental groups. ASC & PC12 co-culture experiment: Different ratios of PC12 cells to ASCs were plated on a 24-well plate. Ratios of 1:3, 2:2, and 3:1 were compared based on morphology after 7 days of co-culture. The ratios of

the cells were matched by the same ratio of the respective cell culture media in each well. These same groups were repeated with the cells plated on top of keratin gel. PC12 cells are a rat nerve cell line and serve as a model for human neurons. Bright-field microscopy was used to image the experimental groups to gain insight into cell morphology and confluency. RESULTS A 25% lyophilized keratin solution in PBS was studied due to its optimal gelation properties at physiological conditions. The 25% solution formed a gel within 30 minutes and maintained its form when submerged in PBS. Fluorescence microscopy indicated that ASCs are viable within keratin gel over 7 days (Figures 1A and 1B). The majority of the fluorescence in the keratin experimental group was green, indicating live staining with calcein AM. Fluorescence was quantified after a four-day experiment of cells grown on keratin gel (Figure 1C). No significant difference was observed in the percentage of live cells on keratin compared to live cells grown directly on the well plate (p = 0.1528, α = 0.005). There was a significant increase in the percentage of dead cells on keratin compared to the percentage of dead cells grown directly on the well plate (p = 0.0002158, α = 0.001). In the co-culture experiment, cells reached 100% confluency and maintained their individual morphology, as shown in Figure 2. DISCUSSION Once the keratin gelation procedure was optimized, it was used in all of the experiments to determine how the substance affects the cells plated within it. ASCs were shown to survive in the gel for at least 7 days. It would be ideal to culture them longer, but the ASC media starts to dissolve the gel, and refreshing the media becomes impossible without disturbing the cells plated in it. The quantification of cell viability showed positive results: the keratin was not toxic to the ASCs, since there was no difference in the number of live cells on keratin compared to no keratin. There were significantly more dead cells in the keratin compared to no keratin, and this may be because dead cells were not rinsed immediately from the keratin when they were


plated. The co-culture experiment shows that the use of ASCs to increase nerve regeneration is promising since the cells were able to cohabitate successfully. Further experimentation using an MTT assay for cell metabolism would determine whether cells are more active in co-culture than if grown separately.
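The live/dead quantification and group comparison can be sketched as below, assuming raw plate-reader fluorescence values with keratin-only wells as the autofluorescence blank; the variable names and replicate numbers are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

def percent_of_max(signal, blank, max_signal):
    """Background-subtracted fluorescence expressed as % of the maximum control signal."""
    return 100.0 * (np.asarray(signal) - blank) / (max_signal - blank)

# hypothetical replicate wells (arbitrary fluorescence units)
keratin_live = percent_of_max([820, 790, 845], blank=60, max_signal=900)
control_live = percent_of_max([860, 880, 855], blank=10, max_signal=900)

# two-tailed, independent-samples t-test between experimental groups
t_stat, p_value = stats.ttest_ind(keratin_live, control_live)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```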

the standard of treatment for peripheral nerve injury. Updated results, including a degradation study and a cell proliferation study on keratin gel, will be presented. PC12 cell metabolism during co-culture with ASCs will also be quantified in further work.

CONCLUSIONS These experiments demonstrate that ASCs are viable within our previously developed keratin hydrogel, and that the optimized concentration of the gel is not toxic to the ASCs. The co-culture experiment further showed evidence that the combination of ASCs with nervous tissue could be beneficial to the neurons. Both the qualitative and quantitative results from these experiments yield evidence that ASCs suspended in a keratin hydrogel may be a practical biomaterial for use in a nerve guide that will raise

ACKNOWLEDGEMENTS Department of Bioengineering and the Adipose Stem Cell Center summer grant 2014.


REFERENCES
1. Lin, Y., Marra, K. Biomedical Materials 2012, Volume 7.
2. Rouse, J., Van Dyke, M. Materials 2010, Volume 3, Pages 999-1014.
3. Marra, K. Plastic Reconstructive Surg. 2012, Volume 1, Pages 67-78.


Figure 1. Fluorescence microscopy of ASCs on keratin (A) and collagen (B) indicates the results of the live/dead assay. Green fluorescence represents live cells and red fluorescence dead cells. (C) compares the % live cells and % dead cells, expressed as % of cells relative to maximum fluorescence, between experimental groups.

Figure 2. ASC and PC12 co-culture shows that the cells can reach 100% confluency in a shared environment seeded in collagen (A) or keratin (B).



Detecting Electrophysiologic Abnormalities in Chronic Insomnia Using Detrended Fluctuation Analysis Anthony Cugini1,2, David Cashmere2, Jean Miewald2, Daniel J. Buysse, MD2 University of Pittsburgh Bioengineering Department1, Department of Psychiatry2, University of Pittsburgh, PA, USA Email: avc10@pitt.edu, Web: http://sleep.pitt.edu INTRODUCTION Chronic insomnia is a disorder characterized by difficulty initiating or maintaining sleep along with effects on waking functions such as mood and cognition [1]. Chronic insomnia affects nearly 10% of the adult population [2]. Current theoretical models of insomnia focus on homeostatic sleep-wake control mechanisms, hyper-arousal, and rapid eye movement (REM) instability; however, none of these theories have yielded consistent findings [2,3,4]. Many studies of insomnia use spectral analysis to identify insomnia-specific characteristics in the sleep EEG, but recent data suggest that non-stationary biologic signals such as the EEG and ECG are characterized by long-range (fractal) correlations and therefore require additional analysis techniques to account for this behavior [5,6,7]. Fractal analysis methods have been developed to calculate the long-range correlation factor called the fractal dimension [6]. The fractal dimension represents the relative self-similarity or 'complexity' of a given signal [6,7]. Previous biomedical applications have examined the fractal dimension of inter-spike intervals of neuron firing, inter-stride intervals of human walking, inter-breath intervals of human respiration, and inter-beat intervals of the human heart, and such measures have been able to differentiate between pathologic conditions [8-12]. However, fractal analysis has not been widely applied to the study of insomnia. For the purposes of this study, we chose the Detrended Fluctuation Analysis (DFA) method of calculating the fractal dimension because of its recent success in cardiovascular research [5]. Here we compare the fractal dimension of two different sleep states across the night, Non-Rapid Eye Movement (NREM) and Rapid Eye Movement (REM) sleep, in individuals with insomnia versus healthy individuals. The focus of this analysis is to gain more insight into the physiologic characteristics of insomnia in hopes of identifying a better explanatory model of the disorder. We hypothesize that insomnia can be characterized by a lower fractal dimension, signifying a more 'complex' EEG, compared to control participants' EEG.
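A compact sketch of first-order DFA is given below; it assumes a 1-D EEG amplitude series (e.g., sampled at 256 Hz) and illustrates the method only, rather than reproducing the analysis pipeline of ref. 14.

```python
import numpy as np

def dfa_alpha(signal, scales=None):
    """Return the DFA scaling exponent (slope of log F(n) vs log n)."""
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated (profile) series
    if scales is None:
        scales = np.unique(np.logspace(np.log10(16), np.log10(len(x) // 4), 20).astype(int))
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)           # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluct.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

# An exponent near 0.5 indicates uncorrelated noise, while larger values indicate
# stronger long-range correlation in the signal.
```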

METHODS The study consisted of 40 adults (31 F, 9 M; mean age 66 ± 4.9 years) who were screened to be free of any comorbid psychological disorders. Primary insomnia was diagnosed according to the DSM-IV guidelines. Participants were screened to ensure no substance, alcohol, or caffeine use prior to the study. Participants were prepared for the overnight polysomnographic sleep study using specific EEG landmarks (10-20 system) for measuring wake-sleep activity. Six Grass® gold disc electrodes (10 mm diameter) were adhered to the scalp using Collodion™ glue over the frontal, central, and occipital regions. Electrodes were referenced to linked mastoids (4-pole Bessel band-pass filter 0.3-100 Hz, sampling rate 256 Hz). An Electrode Impedance Meter (Grass®) verified the impedance of each electrode to be below 10 kΩ. Participants' brain-wave activity was continuously recorded over the course of the night. In this study only EEG data from the central electrode on the right hemisphere (C4) was used for analysis; the other electrode data are part of a larger study and will not be examined in this paper. DATA PROCESSING Raw EEG data were taken from the C4 electrode and scored as NREM, REM, or Wake by a registered (RPSGT) sleep technician using the criteria described by Rechtschaffen and Kales [13]. Spectral analysis and DFA were then performed on the data. Spectral analysis was carried out using a Fast Fourier Transform (FFT). DFA was accomplished using the method described by Ihlen [14]. A mixed model analysis was done to determine statistical differences between the groups. Student's t-tests were also performed on the data. For this study, p-values less than 0.05 were considered significant.


REM sleep as well as the first period of REM sleep [Fig. 1]. Table 1 shows the results of the spectral analysis; PSD values during NREM periods showed no significant differences in any frequency bin. Spectral analysis on REM periods is part of a larger study and has not been included in this paper.

ACKNOWLEDGEMENTS I would like to thank the Swanson School of Engineering for providing funding for my research this summer.


Figure 1: A plot of the mean (n=20) fractal dimension in healthy and insomnia participants as it changes across different sleep states. The sleep states are arranged in chronological order as they occur in the night. Standard error bars are plotted. Significant differences (p<.05) are denoted by '*'.

DISCUSSION Determining the fractal dimension of EEG signals has many implications for bioengineering, neurological, and psychiatric research [6,10]. Here we compared the fractal dimensions of healthy individuals and individuals with insomnia across the night. Our findings show that across the beginning of the night, individuals with insomnia have lower fractal dimensions, suggesting more 'complex' brain activity. Additionally, the spectral analysis showed no significant difference in any frequency bin across the two groups during NREM. Together, these findings suggest that insomnia is characterized by more 'complex' brain activity. Furthermore, the resultant increase in 'complexity' is not attributable to differences in brainwave frequency.


REFERENCES
1. Buysse, Daniel J. "Chronic Insomnia." Am. J. Psychiatry. 2008; Vol 165: 678-686.
2. Riemann, D., Spiegelhalder, K., Nissen, C., Hirscher, V., Baglioni, C., Feige, B. "REM Sleep Instability – A New Pathway for Insomnia?" Pharmacopsychiatry. 2012; Vol 45: 1-10.
3. McCauley, P., Kalachev, L., Mollicone, D., Banks, S., Dinges, D., Van Dongen, H. "Dynamic Circadian Modulation in a Biomathematical Model for the Effects of Sleep and Sleep Loss on Waking Neurobehavioral Performance." Sleep. 2013; Vol 36: 1987-1997.
4. Buysse, D., Germain, A., Hall, M., Monk, T., Nofzinger, E. "A Neurobiological Model of Insomnia." Drug Discovery Today: Disease Models. 2011; Vol 8: 129-137.
5. Cervena, K., Espa, F., Pergamvros, L., Perrig, S., Merica, H., Ibanez, V. "Spectral Analysis of the Sleep Onset Period in Primary Insomnia." Clinical Neurophysiology. 2014; Vol 125: 979-987.
6. Lopes, R., Betrouni, N. "Fractal and Multifractal Analysis: A Review." Medical Image Analysis. 2009; Vol 13: 634-649.
7. Goldberger, A., Amaral, L., Hausdorff, J., Ivanov, P., Peng, C., Stanley, H. "Fractal Dynamics in Physiology: Alterations with Disease and Aging." PNAS. 2002; Vol 99: 2466-2472.
8. Ivanov, P.C., Amaral, L.A.N., Goldberger, A., Havlin, S., Rosenblum, G., Struzik, Z., Stanley, H. "Multifractality in Human Heartbeat Dynamics." Nature. 1999; Vol 399: 461-465.
9. Peng, C.K., Havlin, S., Stanley, E., Goldberger, L. "Quantification of Scaling Exponents and Crossover Phenomena in Non-Stationary Time Series." Chaos. 1995; Vol 5: 82-89.
10. Zheng, Y., Gao, B., Sanchez, J., Principe, C., Okun, S. "Multiplicative Multifractal Modeling and Discrimination of Human Neuronal Activity." Phys. Lett. A. 2005; Vol 344: 253-264.
11. Hausdorff, M. "Gait Dynamics, Fractals and Falls: Finding Meaning in the Stride-to-Stride Fluctuations of Human Walking." Human Movement Science. 2007; Vol 26: 555-589.
12. Wang, G., Huang, H., Xie, H., Wang, Z., Hu, X. "Multifractal Analysis of Ventricular Fibrillation and Ventricular Tachycardia." Med. Eng. Phys. 2007; Vol 29: 375-379.
13. Rechtschaffen, A., Kales, A. "A Manual of Standardized Terminology, Techniques and Scoring System for Sleep Stages in Human Subjects." EEG and Clinical Neurophysiology. 1969; Vol 26: 644.
14. Ihlen, E. "Introduction to Multifractal Detrended Fluctuation Analysis in Matlab." Frontiers in Physiology. 2012; Vol 3: 1-18.

Table 1: Average Power Spectral Density values in NREM across the Night (V²/Hz) with Standard Error

Patient Type  Frequency           NREM1         NREM2         NREM3         NREM4
Control       Delta (0.4-5 Hz)    24.9±0.585    24.3±0.666    19.4±0.852    12.8±0.31
Control       Theta (4-8 Hz)      2.70±0.058    2.62±0.063    2.41±0.059    1.90±0.036
Control       Alpha (8-12 Hz)     1.52±0.042    1.40±0.04     1.49±0.043    1.20±0.034
Control       Beta (16-20 Hz)     0.158±0.004   0.14±0.003    0.158±0.004   0.153±0.004
Insomnia      Delta (0.4-5 Hz)    27±1.14       21.9±0.84     14.3±0.48     10.6±0.32
Insomnia      Theta (4-8 Hz)      2.91±0.086    2.51±0.073    2.48±0.083    2.04±0.06
Insomnia      Alpha (8-12 Hz)     1.56±0.071    1.77±0.1      1.79±0.09     1.61±0.1
Insomnia      Beta (16-20 Hz)     0.24±0.012    0.16±0.008    0.19±0.007    0.18±0.008


DETERMINATION OF SLOPE AND COLLECTION OF SIDEWALK LOCATION USING A PATHWAY MEASUREMENT TOOL Ian P McIntyre; Tianyang Chen; Eric Sinagra, MS; Jonathan Duvall, MS; Dr. Jonathan Pearlman, PhD Human Engineering Research Laboratories, Department of Rehabilitation Science and Technology, University of Pittsburgh, PA, USA Email: ipm4@pitt.edu, Web: http://herl.pitt.edu INTRODUCTION The need for an objective measuring device, a pathway measurement tool (PathMeT), has been demonstrated by Sinagra et al. for measuring surface roughness, an accessibility aspect that will soon be regulated by the United States Access Board. A device such as PathMeT can help stakeholders who are invested in the design, construction, and maintenance of public pathways conform to the Americans with Disabilities Act Guidelines (ADAAG) for accessible routes. To further the value of PathMeT, the device should be able to accurately assess surface cross- and running-slope during travel over pedestrian walkways and curb cuts in a manner that is consistent with its surface roughness measuring operation [1]. This research assessed the design of a sensor and filter that can provide an accurate representation of surface running slope during motion.

maintained. Only pitch, the angle corresponding to running slope, is considered when testing both inclination sensors. The X3M Inclinometer outputs pitch angles calculated by its on-board processor through the sensing of accelerations from a MEMS accelerometer. The angles are reported as angles relative to gravity, and a calibration for angle offsets is performed prior to each run. The Razor 9DOF IMU outputs bytes from raw acceleration, angular velocity, and magnetic field orientation from the three integrated sensors. The raw Razor 9DOF IMU readings are fused using a Madgwick algorithm in the post-processing environment. Both data sets are analyzed using a C# post-processing script, and pitch angles (in degrees) are matched with corresponding time points (in seconds) after data smoothing with a moving average filter. PathMeT was tested on flat, smooth ground with a running slope of 0 degrees, and each sensor was mounted on the horizontal cross-bar of the device.
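A simplified orientation sketch is given below: pitch from the accelerometer fused with the gyroscope through a complementary filter. This stands in for the Madgwick fusion and the C# post-processing used in the study; the axis conventions, filter constant, and 125 Hz rate are assumptions for the example.

```python
import math

def pitch_from_accel(ax, ay, az):
    """Pitch angle (degrees) implied by the gravity vector alone."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def complementary_pitch(accel, gyro_pitch_rate, fs=125.0, k=0.98):
    """Fuse accelerometer pitch with integrated gyro rate (deg/s), sample by sample."""
    dt, pitch, out = 1.0 / fs, 0.0, []
    for (ax, ay, az), rate in zip(accel, gyro_pitch_rate):
        pitch = k * (pitch + rate * dt) + (1.0 - k) * pitch_from_accel(ax, ay, az)
        out.append(pitch)
    return out

# The gyro term keeps the estimate responsive while the device is pushed, and the
# accelerometer term slowly corrects drift, reducing the transient slope errors
# seen with an accelerometer-only inclinometer.
```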

The utility of PathMeT can be increased if the absolute location of a rough surface or inaccessible slope is also recorded. The device can geotag questionable sidewalks while profiling so that property owners may be made aware of issues. In addition, the system can passively generate a map of sidewalks [1]. This research further tests the use of global positioning systems to convey the location of sidewalks and the display of the data in an available geographical information system (GIS).

To assess the potential for sidewalk mapping, a Venus GPS Logger evaluation board was connected to PathMeT via a serial connection. Latitude and longitude data points were collected at 20 Hz. PathMeT was pushed across the sidewalks surrounding Bakery Square in Pittsburgh, PA. The location data were formatted for display in Google Earth, a popular and free GIS program, after data processing.
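The conversion of logged fixes into a Google Earth path can be sketched as below; the file name and coordinate values are placeholders. KML expects "longitude,latitude[,altitude]" triplets inside a LineString element.

```python
def write_kml(points, path="sidewalk.kml", name="PathMeT sidewalk trace"):
    """points: iterable of (latitude, longitude) pairs in decimal degrees."""
    coords = " ".join(f"{lon:.6f},{lat:.6f},0" for lat, lon in points)
    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f'<Placemark><name>{name}</name>'
        f'<LineString><tessellate>1</tessellate><coordinates>{coords}</coordinates></LineString>'
        '</Placemark></Document></kml>'
    )
    with open(path, "w") as f:
        f.write(kml)

# write_kml([(40.4569, -79.9182), (40.4571, -79.9180)])  # two example fixes
```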

METHODS Two sensors were considered for measuring surface inclination: a US Digital X3M Multi-Axis Absolute MEMS Inclinometer, and a 9 degree-of-freedom (9DOF) Razor inertial measuring unit (IMU) consisting of an ADXL345 triple axis accelerometer, an ITG-3200 MEMS gyroscope, and a HMC5883L magnetometer. The inclination sensor sampling rates were set to 125 Hz so that the high sampling rate of the laser and encoder are
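The pitch-and-smoothing step of the slope-sensing pipeline can be sketched briefly. The following is a Python illustration, not the C# post-processing script actually used; the axis convention, window length, and array names are assumptions for illustration only.

    import numpy as np

    def accel_pitch_deg(ax, ay, az):
        # Pitch relative to gravity from a (quasi-)static accelerometer reading.
        # The sign and axis assignment depend on how the sensor is mounted on
        # the cross-bar; the convention used here is an assumption.
        return np.degrees(np.arctan2(-ax, np.sqrt(ay**2 + az**2)))

    def moving_average(x, window=25):
        # Moving-average filter used to smooth the 125 Hz pitch samples before
        # matching them to time points (the window length is assumed).
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="same")

    # Example with hypothetical 125 Hz acceleration arrays (in g):
    # t = np.arange(len(ax)) / 125.0
    # pitch = moving_average(accel_pitch_deg(ax, ay, az))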

RESULTS The eight successive trials for the X3M Inclinometer each show significant errors in the computed slope. The effects of linear acceleration are prevalent at the beginning and end of each trial, as PathMeT begins and concludes data collection. As PathMeT accelerates, the sensor incorrectly reports a positive slope; as PathMeT decelerates, the sensor incorrectly reports a negative slope. Although the damping of the sensor is minimal, its response lags. During the middle of a trial, however, the X3M Inclinometer reports accurate inclination readings. The average RMS error over the eight X3M Inclinometer trials was 1.607. The eight successive trials for the 9DOF Razor IMU provide readings that are more responsive and more accurate, although influenced by noise. Linear accelerations and the movement of PathMeT do not affect the orientation calculations, and the sensor responds quickly to slight deviations in slope. The average RMS error over the eight 9DOF Razor IMU trials was 0.6958. The Venus GPS Logger provided the GPS location of PathMeT at 20 Hz. The data were formatted into a KML file that was imported into Google Earth, and a map of the sidewalk was produced (see Figure 1). Position errors, in some cases over 50 m from the sidewalk, were noted near pathways surrounding tall, wide buildings.
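The KML export step can be sketched as follows. This is a minimal Python illustration of writing a GPS trace as a KML LineString for Google Earth; the coordinate values and file name are placeholders, and this is not the actual processing script used.

    # Write (latitude, longitude) fixes from the GPS log as a KML LineString.
    # KML expects "lon,lat,alt" triples; the points below are placeholders.
    points = [(40.4565, -79.9190), (40.4566, -79.9188)]

    coords = " ".join(f"{lon},{lat},0" for lat, lon in points)
    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
        '<Placemark><name>Sidewalk trace</name>\n'
        f'<LineString><coordinates>{coords}</coordinates></LineString>\n'
        '</Placemark></Document></kml>\n'
    )

    with open("sidewalk_trace.kml", "w") as f:
        f.write(kml)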

Figure 1. Errors in GPS location were noted near larger buildings, such as the building in the north-west of the picture. At these points, the GPS position was reported nearly 50 m off from the actual point.

DISCUSSION The X3M Inclinometer does not suffice as an effective PathMeT sensor for evaluating surface slope while moving. To use this sensor properly, PathMeT would have to be held still on a slope to obtain an accurate measurement. Alternatively, the linear accelerations of PathMeT determined from the wheel-mounted quadrature encoder could be subtracted from the accelerometer readings to produce an accurate orientation.

The 9DOF Razor IMU shows potential to serve as a valid slope sensor. The fusion of information from the three sensors with the Madgwick algorithm can produce accurate determinations of running slope. The sensor could be improved by properly calibrating the three sensors prior to each surface trial, and the Madgwick filter could be better tuned for them. Further testing will be conducted on sloped surfaces and public pathways to assess sensor accuracy in the presence of vibrations. The GPS system allows sidewalk locations to be determined, and the visual representation shows a clear distinction between the street and the sidewalk. Sidewalk mapping and GPS tracking can be improved through dead reckoning, a process that will incorporate the wheel-mounted quadrature encoder and directional heading readings from the magnetometer. Further work will be performed to deduce the location of rough surfaces and inaccessible sidewalk slopes. REFERENCES 1. Sinagra, E. et al. (2013) Development and Characterization of a Pathway Measurement Tool (PathMeT). Transportation Research Board 93rd Annual Meeting. Available from: http://trid.trb.org/view.aspx?id=1288078

ACKNOWLEDGEMENTS This project was funded by the Interlocking Concrete Pavement Institute (ICPI, grant PATHMET), the Brick Industry Association (BIA, grant PATHMET), and the United States Access Board (grants H133E070024, H133N110011). Ian would like to thank Dr. Pearlman and the Swanson School of Engineering for their support. The authors would like to thank the Department of Veterans Affairs for the use of its facilities in conducting this research. The contents of this paper do not represent the views of the Department of Veterans Affairs or the United States Government. DISCLOSURE The authors of this paper are inventors of the PathMeT technology and equity holders in Pathway Accessibility Solutions, Inc. PathMeT has been licensed from the University of Pittsburgh to Pathway Accessibility Solutions, Inc. for commercial use.


PLASMA PERMEABILITY OF SYNTHETIC VASCULAR GRAFTS Hannah Voorhees, Michael Griffin, Salim Olia, and Marina Kameneva McGowan Institute for Regenerative Medicine University of Pittsburgh, Pittsburgh, PA Email: hjv2@pitt.edu INTRODUCTION Synthetic grafts made of polyester (PET) and polytetrafluoroethylene (PTFE) derivatives are commonly used for vascular reconstruction, AV access, and ventricular assist device (VAD) outflow tracts. While PET is predominantly used for large-diameter grafts, patency rates decrease dramatically with smaller diameters, making PTFE more suitable [1]. However, clinical studies have shown that expanded PTFE (ePTFE) grafts can undergo plasma weeping before adequate protein deposition occurs. Although complications due to plasma weeping are infrequent, weeping can lead to decreased plasma protein levels, perigraft seromas, and additional surgeries. Significant differences in graft permeability have been observed between graft types in vivo, as well as when grafts have been treated with alcohols, oils, or antibiotics [2,3]. Because grafts are sometimes re-sterilized after assembly for use in VADs, this experiment compared two ePTFE grafts to assess differences in plasma permeability ex vivo after re-sterilization. METHODS An untreated Bard® 5 mm ePTFE graft was cut down to approximately 14 cm; nearly 1 cm of the flex-beading strain relief unraveled when the graft was cut. The graft was tied into a custom acrylic chamber with 2-0 Sofsilk® Braided Silk (Syneture, Mansfield, MA). A second Bard® 5 mm ePTFE graft was re-sterilized using ethylene oxide and trimmed to approximately 14 cm. The acrylic chamber suspends the graft within a PBS bath while allowing intraluminal blood flow. The circulating blood loop consisted of a reservoir, a centrifugal pump, a pinch valve to induce pulsatility, and a throttle to adjust afterload. Temperature, flow rate, and pressure drop across the graft were measured using a thermistor, a clamp-on ultrasonic flow probe, and two pressure transducers, respectively (Figure 1). Next-day venipuncture citrated ovine blood (Lampire Inc., Pipersville, PA) was obtained, filtered, and adjusted to a hematocrit of 30±1% through the addition or removal of autologous plasma. An Evans blue solution (12.8 mg/ml), which binds to albumin, was mixed with the blood (1:10) to label the plasma proteins. Finally, the broad-spectrum antibiotic gentamicin (Gentamax® 100 Injection, Nature Vet®) was added (1.0 ml/L) to the prepared blood.

Figure 1. Schematic of the pulsatile mock loop, showing the graft submerged in the PBS bath while blood is circulated through it.

The flow loop was rinsed with saline for 10 minutes and drained before being filled with the prepared blood. Pump speed and resistance were adjusted to reach a mean flow of 1.5±0.1 L/min with 1.0±0.2 L/min of pulsatility and an outlet pressure of 70±10 mmHg. The acrylic box was filled with 500 ml of PBS, submerging the graft. Samples were taken from the PBS bath at baseline and at 2, 4, 8, 24, 48, and 72 hours. Light absorbance and osmolarity of the samples were measured using a spectrometer (Genesis 5, Thermo Spectronic®, Rochester, NY) at 620 nm and a micro-osmometer (Precision Systems, Natick, MA), respectively. A standard curve created from the blood supernatant was used to determine the relationship between albumin concentration and absorbance.

RESULTS Plasma weeping in the untreated graft became visibly evident after 24 hours, predominantly on the outlet side where the flex beading had unraveled, while the re-sterilized graft showed extravasation after 48 hours. The albumin concentration in the PBS bath of the untreated graft increased from 0 to 0.2 g/L, while that of the re-sterilized graft increased from 0 to 4×10⁻³ g/L (Figure 2). The PBS bath osmolarity of the untreated and re-sterilized grafts decreased slightly from 287 to 285 mOsm and from 291 to 286 mOsm, respectively.

Figure 2. Concentration of albumin in the PBS bath versus time over 72 hours for the untreated and re-sterilized Bard® 5 mm grafts.
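The conversion from measured absorbance to albumin concentration via the standard curve can be illustrated with a short sketch. This is a Python example with assumed calibration values and variable names, not the measured data or the analysis actually performed.

    import numpy as np

    # Hypothetical calibration points from dilutions of the blood supernatant:
    # known albumin concentration (g/L) versus absorbance at 620 nm (assumed).
    cal_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
    cal_abs = np.array([0.00, 0.11, 0.22, 0.45, 0.90])

    # Linear standard curve: absorbance = slope * concentration + intercept.
    slope, intercept = np.polyfit(cal_conc, cal_abs, 1)

    def absorbance_to_concentration(a620):
        # Invert the standard curve for PBS bath samples taken over 72 hours.
        return (np.asarray(a620) - intercept) / slope

    # Example: absorbance_to_concentration([0.01, 0.05, 0.44])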

CONCLUSIONS Plasma weeping occurred in both grafts within 72 hours, consistent with clinical findings [4]. For the untreated graft, weeping occurred predominantly on the side where the flex beading had detached, indicating that the localized permeability may be due to the same mechanical manipulation or wear that led to the unraveled strain relief, rather than to the lack of re-sterilization. Weeping also occurred along the length of the graft where the flex beading was seated, as shown in Figure 3.

Figure 3. a) The untreated Bard® 5 mm ePTFE graft at baseline, and b) the same graft after 72 hours, following draining of the PBS bath.

The re-sterilized graft, which did not have the disturbed flex beading, underwent significantly less plasma weeping than the non-sterilized graft; however, these experiments must be repeated before drawing conclusions about the relationship between sterilization and leakage. This method proved adequate for testing the plasma permeability of synthetic grafts. The study will be continued with additional tests on the Bard® 5 mm ePTFE, Vascutek® 6 mm Gel-Sealed Dual Layer SEALPTFE™, and Vascutek® 6 mm Gelweave™ gelatin-impregnated Dacron® grafts to determine the most appropriate graft for small-diameter applications as well as the potential effects of graft sterilization on permeability. REFERENCES 1. Kannan RY, et al. J Biomed Mater Res B Appl Biomater. 2005 Jul;74:570-581. 2. Doble M, et al. Biomedical Materials. 2008. 3. Gale DW, et al. Regional Anesthesia & Pain Medicine. 1994 Nov. 4. Demircin M, et al. The Turkish Journal of Pediatrics. 2008;46:275-278.

ACKNOWLEDGEMENTS I gratefully acknowledge the Swanson School of Engineering for providing funding and J. Andrew Holmes and Thorin Tobiassen of the Swanson Center for Product Innovation.


In Vitro Evaluation of Hemocompatibility of MPCMPSi-Coated Titanium Alloy Roland Beard, Alyson Schlieper, Venkat Shankarraman, Sang-Ho Ye and William R. Wagner McGowan Institute for Regenerative Medicine, Department of Bioengineering University of Pittsburgh, PA, USA Email: rkb15@pitt.edu, Web: http://www.mcgowan.pitt.edu/ INTRODUCTION Blood-contacting medical devices such as artificial lungs, vascular stents, heart valves, and ventricular assist devices are currently employed to preserve and prolong lives. However, these devices pose risks to patients because implanted medical devices can be thrombogenic. Patients using these devices often require chronic anticoagulation or anti-platelet therapy to reduce the risk of blood clotting. Material coatings are a viable option for minimizing the thrombogenicity of blood-contacting medical devices [1]. To develop reliable, hemocompatible coatings for medical devices, in vitro hemocompatibility assessments of the coated samples are necessary to verify a coating's effectiveness. Currently, hemocompatibility is predominantly quantified by measuring thrombotic deposition or one specific coagulation cascade output, such as thrombin generation. To achieve a more holistic view of the thrombogenicity of a material, we employed thromboelastography (TEG) along with flow cytometry. Thromboelastography provides a comprehensive view of the kinetics of whole blood coagulation by measuring clot initiation, strength, stability, and breakdown [2]. Flow cytometric analysis along with TEG was used to evaluate the hemocompatibility of a titanium alloy, a material used ubiquitously in medical device design. METHODS The study evaluated three materials: titanium alloy (Ti6Al4V), titanium alloy coated with a siloxane-functionalized phosphorylcholine polymer (MPCMPSi), and alumina (Al2O3). Alumina was used as a positive control for the experiments. Ti6Al4V was compared with alumina in preliminary experiments, and titanium was then contrasted with both its MPCMPSi-coated counterpart and alumina.

Material Preparation MPCMPSi was synthesized via a typical reversible addition-fragmentation chain transfer (RAFT) technique with a determined monomer ratio of 2-methacryloyloxyethyl phosphorylcholine (MPC) and methacryloxypropyltrimethoxysilane (MPSi) (MPC:MPSi = 10:2). Both the alumina and titanium samples were polished and cut to 1 x 2.5 cm. The titanium samples were coated with MPCMPSi using a simple silanization technique after the surfaces were passivated with 35% nitric acid for 1 h [1]. Alumina was used as a control throughout the experiments. Before in vitro testing, all materials were sterilized by alternating washes in pure acetone and ethanol; following cleaning, samples were stored in 70% ethanol. In Vitro Testing Whole ovine blood was collected by jugular venipuncture using an 18 gauge, 1 1/2″ needle directly into a syringe after discarding the first 3 ml. The blood was drawn into a 60 ml syringe containing 6 ml of 0.1 M sodium citrate. Sterilized samples were fixed in Vacutainer® tubes (Becton-Dickinson, no additives), and each tube was then filled with 5.2 ml of citrated whole ovine blood. The submerged samples were incubated at 37 °C and rocked for 45 min. After rocking, 5 µl of ovine blood was withdrawn from each sample for flow cytometric analysis, which was performed using a previously published method [3]. Then, 340 µl of blood was withdrawn from each sample tube and loaded into a TEG® hemostasis analyzer (Haemoscope Corp., IL) after the addition of 20 µl of 0.2 M CaCl2. Each sample surface was examined using scanning electron microscopy (SEM).


RESULTS The average R-time and K-time were obtained from the TEG measurements (n=4). R-time indicates the time for the first clot to form, while K-time indicates the time for the clot to reach a specified size of 20 mm [2]. TEG data were normalized to non-material-contacting blood. Fig. 1 shows prolonged R and K times for Ti-MPCMPSi compared with the other materials. Fig. 2 shows the resulting surface thrombotic deposition after ovine blood contact for 2 h. The surface of the Ti-MPCMPSi had minimal platelet deposition, while the uncoated titanium control, Ti, showed high platelet deposition and aggregation. The alumina surface showed moderate platelet deposition.

DISCUSSION The data from the TEG measurements, coupled with the SEM images, suggest that the MPCMPSi-coated titanium possesses greater hemocompatibility than both its unmodified counterpart and alumina. However, the statistical analysis (ANOVA) of the R-time and K-time did not distinguish the materials as statistically different at p < 0.05. Because of possible type II (β) error, additional experiments may be required to verify these trends. Quantification of platelet activation using flow cytometry also showed large variations. Further studies are needed to correlate the TEG and flow cytometric data.
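The normalization and ANOVA steps described above can be illustrated with a short sketch. This Python example uses hypothetical R-time values; the numbers, grouping, and the choice of scipy's one-way ANOVA are assumptions for illustration, not the authors' analysis.

    import numpy as np
    from scipy import stats

    # Hypothetical R-times (min) for n = 4 runs per material and the paired
    # non-material-contacting control from each blood draw (values assumed).
    control = np.array([6.1, 5.8, 6.4, 6.0])
    r_time = {
        "alumina": np.array([5.9, 5.6, 6.2, 5.8]),
        "Ti": np.array([6.0, 5.9, 6.3, 6.1]),
        "Ti-MPCMPSi": np.array([7.2, 6.9, 7.5, 7.1]),
    }

    # Normalize each run to its paired control, as described for the TEG data.
    normalized = {name: values / control for name, values in r_time.items()}

    # One-way ANOVA across the three materials on the normalized R-times.
    f_stat, p_value = stats.f_oneway(*normalized.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")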

REFERENCES [1] Ye SH, Johnson CA Jr, Woolley JR, Murata H, Gamble LJ, Ishihara K, Wagner WR. Colloids Surf, B 2010; 79:357-64. [2] Shankarraman V, Davis-Gorman G, Copeland JG, Caplan MR, McDonagh PF. J Biomed Mater Res Part B Appl Biomater 2012;100B:230–238. [3] Johnson CA Jr., Shankarraman V, Wearden PD, Kocyildirim E, Maul TM, Marks JD, et al. ASAIO J 2011;57:516-2. ACKNOWLEDGEMENTS Funding was provided by the Swanson School of Engineering and the Office of the Provost.

Fig. 1. Normalized R-time and K-time averages obtained from a TEG analysis for four experiments with standard deviation bars.

Fig. 2. Scanning electron micrographs of the alumina, titanium, and Ti-MPCMPSi surfaces following submersion in citrated whole ovine blood for 2 h at 37°C (scale bar = 10 µm).


VISUALIZING REAL-TIME PLATELET DEPOSITION ONTO Ti6Al4V CAUSED BY DISTURBED FLOW GEOMETRIES Drake D. Pedersen, Megan A. Jamiolkowski, Marina V. Kameneva, James F. Antaki, William R. Wagner McGowan Institute for Regenerative Medicine University of Pittsburgh, PA, USA Carnegie Mellon University, PA, USA Email: ddp17@pitt.edu, Web: http://www.mcgowan.pitt.edu/ INTRODUCTION Heart disease is the leading cause of death in the United States, killing approximately 600,000 Americans per year [1]. To treat heart disease, first-generation pulsatile flow ventricular assist devices (VADs) were introduced as a bridge-to-transplant therapy. The development of continuous flow VADs has decreased the size and power requirements of these devices and extended their mechanical life; for example, the Heartmate II (Thoratec Corp.) is FDA approved for use as destination therapy. The majority of VADs are composed of assembled components that inevitably contain micro-scale crevices and steps along the flow path. These regions can produce areas of recirculating flow and are often a nidus for thrombus formation. Although several in vitro studies have been performed to visualize fluid flow within such features, few have examined real-time platelet deposition onto these regions with clinically relevant metallic surfaces [2]. Using hemoglobin-depleted red blood cells (RBC ghosts), long-working-distance optics, and a custom-designed parallel plate chamber, we sought to perform such an analysis. METHODS The custom-designed parallel plate flow chamber was modified from a similar design developed in a previous study [3]. The flow path was molded from polydimethylsiloxane (PDMS) and contained a 100x100 µm2 crevice in one side along its length. The assembled parallel plate flow chamber is shown in Figure 1. A whole blood analog was perfused through the flow path across a titanium alloy (Ti6Al4V) surface for 10 min at a wall shear rate of 1000 s-1. The whole blood analog was composed of RBC ghosts, created using a previously established protocol, and fluorescently labeled platelet-rich plasma (PRP) at a 25% hematocrit [3].

Ti6Al4V was chosen to emulate the blood-wetted surfaces of VADs.
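For context, the flow rate needed to reach a given wall shear rate in a parallel plate chamber can be estimated with the standard parallel-plate relation, wall shear rate = 6Q/(w·h²). The Python sketch below assumes a channel width, which is not stated in the abstract, so the resulting number is illustrative only.

    # Estimate the volumetric flow rate needed for a target wall shear rate in
    # a parallel plate chamber: shear_rate = 6 * Q / (width * height**2).
    shear_rate = 1000.0   # 1/s, wall shear rate used in the perfusions
    height = 70e-6        # m, chamber height set by the shim stock
    width = 5e-3          # m, channel width (assumed; not stated in the abstract)

    Q = shear_rate * width * height ** 2 / 6.0   # m^3/s
    print(f"Required flow: {Q * 1e9:.2f} uL/s")  # 1 m^3 = 1e9 uL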

Figure 1. Top view of the custom designed parallel plate flow chamber. The flow path containing the 100x100 μm2 crevice is molded into the transparent PDMS and is clamped onto a sample of Ti6Al4V. The blue shimstocks ensured a chamber height of 70 μm.

Platelet adhesion was visualized in real time under an inverted epifluorescent microscope with a 40x super long working distance objective. Images were acquired using a CCD camera. The experimental setup is shown in Figure 2. Images were analyzed using a custom designed Matlab program at 2.5, 5, and 10 min to generate probability maps of platelet deposition at any position in the crevice.
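The probability maps can be thought of as per-pixel deposition frequencies across trials. The following Python sketch illustrates that idea; it is not the custom Matlab program used in the study, and the threshold and array shapes are assumptions.

    import numpy as np

    def deposition_probability(image_stack, threshold):
        # image_stack: (n_trials, H, W) fluorescence images aligned to the
        # crevice at one time point; threshold: intensity above which a pixel
        # is counted as platelet-covered (both are assumptions in this sketch).
        covered = image_stack > threshold
        return covered.mean(axis=0)  # fraction of trials with deposition per pixel

    # Example with placeholder data for six trials:
    # images = np.random.rand(6, 512, 512)
    # prob_map = deposition_probability(images, threshold=0.8)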


Figure 2. Experimental setup showing how images were captured using a CCD camera. The syringe pump perfused blood through the parallel plate flow chamber as the inverted epifluorescent microscope and CCD camera were used to visualize and capture platelet deposition on the Ti6Al4V surface.


RESULTS Probability maps were generated from the acquired fluorescent images of platelet deposition using a custom-designed Matlab program (Figure 3, N=6). Sample images from one of the trials, taken at 2.5, 5, and 10 min of perfusion, are shown in Figure 3. We observed that, with blood moving in the downward direction, some platelets colliding with the bottom wall would either adhere or be redirected around the crevice. It also appeared that some platelets not adhering to the bottom wall would instead adhere to the top wall, eventually aggregating and, in some cases, forming thrombi that spanned the width of the crevice. We observed only a 0-17% chance of platelet deposition in the corner regions, compared with 67-100% on the bottom wall and 50-67% on the top wall.

Figure 3. Sample images of platelet deposition at 2.5, 5, and 10 min (top) and corresponding probability maps (bottom). Probability maps show much higher likelihood of platelets adhering at top/bottom walls over the 10 min perfusion, contrary to current expectations based on the literature. Scale bar = 50 μm.

DISCUSSION The results suggest that this is a viable method for evaluating the effect of common geometric irregularities found in cardiovascular devices on platelet deposition. Mathematical and computer models attempting to predict platelet deposition and thrombogenesis have been developed, some of them even incorporating a step into the model [4-7]. These models, however, are only as accurate as the physiological knowledge and experimental data available. The lack of experimental data characterizing platelet deposition onto clinically relevant surfaces in disturbed flow geometries limits the efficacy of predictive mathematical and computer models. Additionally, previous studies using fluorescent beads have suggested that deposition is more likely to occur in the corners of sudden expansions [2]. The results of this study, however, suggest that platelets adhere first to the front corners of the crevice and that thrombi grow toward the center and back of the crevice. The experimental data collected in this study could be used to aid the development of predictive models and reduce the time and money required to produce VAD designs that are safer and more efficient. PDMS was chosen as the chamber material because it can be molded into microscopic geometries; however, molding the chamber from PDMS may have been a limitation of our findings. Throughout the perfusions, some platelets appeared to attach to the walls of the chamber instead of the titanium alloy surface. To improve the clinical relevance of this study, the chamber could be cut into the titanium surface instead of molded into PDMS. Additionally, since the ideal geometry contains no crevice at all, the smallest possible crevices should be pursued. Further experiments using crevices of varying sizes would help confirm the efficacy of this method while determining the effect of crevice size on thrombus formation. REFERENCES [1] Murphy et al. Deaths: Final Data for 2010, 2010. [2] Zhao et al. Ann. Biomed. Eng. 36, 1130-4, 2008. [3] Jamiolkowski et al. J Biomed Mater Res Part A 00A, 000–000, 2014. [4] Hund et al. Phys Med Biol. 54, 6415–6435, 2009. [5] Goodman et al. Ann. Biomed. Eng. 33, 780-97, 2005. [6] Burgreen et al. ASAIO J. 45, 328-33, 1999. [7] Tamagawa et al. Artif Organs. 33, 604-10, 2009. ACKNOWLEDGEMENTS The assistance of Amanda Sivek, Salim Olia, Rudolf Hellmuth, Andrea Martin, and Fang Yang in sample processing and data collection is acknowledged. Funding was received from contract grant numbers T32-HL076124, T32-EB001026, HHSN268200448192C, R01 HL089456-01, and the McGowan Institute for Regenerative Medicine. The provision of funds by the Swanson School of Engineering and the Office of the Provost for the Swanson School of Engineering summer research fellowship is also acknowledged.

