Swanson School of Engineering
Undergraduate Summer Research Program Summer 2015
Welcome to the 2015 Issue of the Swanson School of Engineering (SSOE) Summer Research Abstracts! Every year the SSOE invites all undergraduates to propose a research topic of interest to study for the summer and to identify a faculty member willing to serve as mentor and sponsor for their project. In this way, students get to work on cutting-edge research with leading scientists and engineers while spending their summer at SSOE. The students, however, were not restricted to the Swanson School of Engineering or even the University of Pittsburgh; the world was fair game. As a result, 16 students spent their internship at the National University of Singapore, 2 students spent time in India, 1 student went to Panama, and stateside we had a student at Texas A&M University.

There are multiple programs that offer summer research opportunities to SSOE undergraduates, the largest of these being the Summer Internship Program jointly sponsored by the Swanson School and the Provost. This year, the program was able to fund over 70 students, with generous support from both the SSOE and the Office of the Provost. Additional support was provided by the Department of Bioengineering, Department of Electrical and Computer Engineering, Department of Chemical and Petroleum Engineering, and the Department of Biological Sciences in the School of Arts and Sciences. The following individual investigators also provided support: Eric J. Beckman, Bryan N. Brown, Andrew P. Bunger, Karen M. Bursic, Markus Chmielus, Xinyan Tracy Cui, Richard E. Debski, Robert M. Enick, William Federspiel, Neeraj J. Gandhi, Alex K. Jones, Lei Li, Michel M. Modo, Bryan A. Norman, Robert S. Parker, Anne M. Robertson, David V. Sanchez, Ian A. Sigal, George D. Stetten, Scott Tashman, Sachin S. Velankar, Goetz Veser, Chaim-Gadi Wollstein, Guofeng Wang, and Jorg M. Wiezorek.

Students also submitted poster abstracts to Science 2015 – Unleashed! in October. Almost sixty of the students were selected to present posters in a special undergraduate student research session at Science 2015. Students from all of the SSOE summer opportunities were invited to submit an abstract to be considered for expansion into a full manuscript for the second issue of Ingenium: Undergraduate Research in the Swanson School of Engineering. This provides undergraduates with the experience of writing manuscripts and graduate students – who form the Editorial Board of Ingenium – with experience in peer review and editing.

We hope you enjoy this compilation of the innovative, intellectually challenging research that our undergraduates took part in during their tenure at SSOE. In presenting this work, we also want to acknowledge and thank the faculty mentors who made available their laboratories, their staff, and their personal time to assist the students and further spark their interest in research.
David A. Vorp, Associate Dean for Research
Larry J. Shuman, Senior Associate Dean for Academic Affairs
Student
Student Department
Mentor(s)
Mentor Department(s)
Maria G. Gan
Bioengineering
Steven L. Orebaugh Anesthesiology
Sarah Shaykevich Bioengineering
Aaron P. Batista
Arjun K. Acharya Bioengineering
Kurt E. Beschorner Bioengineering
Ali O. Balubaid
Bioengineering
Bryan N. Brown
Bioengineering
Kelley A. Brown
Bioengineering
Bryan N. Brown
Bioengineering
Shweta Ravichandar
Mechanical Engineering
Bryan N. Brown
Bioengineering
Meredith P. Meyer
Bioengineering
Rakie Cham
Bioengineering
Gregory J. Brunette
Bioengineering
Xinyan Tracy Cui
Bioengineering
Bioengineering
William E. McFadden
Bioengineering
Xinyan Tracy Cui
Bioengineering
Laura E. Bechard
Bioengineering
Richard E. Debski
Bioengineering
Stephanie L. Sexton
Bioengineering
Richard E. Debski
Bioengineering
Joseph M. Takahashi
Bioengineering
Richard E. Debski
Lindsey J. Marra
Bioengineering
William Federspiel Bioengineering
Luke J. Drnach
Bioengineering
Neeraj J. Gandhi
Bioengineering
Joseph T. Samosky
Bioengineering
Michael R. Adams Bioengineering
Bioengineering
Title Mechanics of Anesthetic Needle Penetration into Human Sciatic Nerve Learning Coincides with Stability in Neural Tuning The Effect of Hardness and Contact Area on the Overall Hysteresis Coefficient of Friction in a Multi-Scale Computational Model Optimization of Intervertebral Disc Decellularization Evaluation of the Host Response to Mesh Implantation in Mice Loading and Release of MCP-1 from Coated Surgical Meshes for Pelvic Organ Prolapse The Effects of Central and Peripheral Visual Field Loss on Standing Balance in Adults Dexamethasone Attenuates Microglial Activation During Brain Microdialysis as Revealed by 2-Photon Microscopy Fabricating and Evaluating Carbon Fiber Microelectrode Arrays Recording and Stimulation of Muscle Activity Design of Customized Lower Leg Specimen Fixtures for 6 DOF Robotic Testing System Surface Strain in the Anterolateral Capsule of the Knee Quantifying Tibiofibular Kinematics Using DMAS7 Motion Tracking System to Investigate Syndesmotic Injuries Bicarbonate Hemodialysis for Low-Flow CO2 Removal: Dialysate Recycling Identifying Neuronal Pathways for Generating Saccades to Stationary Targets Evaluation and Optimization of Drug Recognition System for Simulation Based Learning
All mentors are faculty at the University of Pittsburgh unless otherwise noted *Denotes abstract withheld to protect intellectual property
Student
Student Department
Mentor(s)
Mentor Department(s)
Mingzhi Tian
Electrical Engineering
George D. Stetten
Bioengineering
Jennifer J. Zhuang
Bioengineering
Yadong Wang
Bioengineering
Thomas G. Kappil Bioengineering
Huaxiu Li
Bioengineering
Chuqi Liu
Electrical Engineering
Adam T. Payonk
Chemical Engineering
Emelyn E. Haft
Chemical Engineering
Justin S. Weinbaum Bioengineering
Leo Hwa Liang
Title Ridge Matching Based on Maximal Correlation in Transform Space Optimizing Porosity of Small Diameter Fast-Degrading Synthetic Vascular Grafts Computational Modeling of Wall Stress in Ascending Thoracic Aortic Aneurysms with Different Valve Phenotypes
Biomedical Engineering, National University of Singapore
The Flow Loop Study of Functional Tricuspid Regurgitation Valves*
Mary P. McDougall, Biomedical Engineering, Texas A&M University
Eric J. Beckman, Chemical and Petroleum Engineering
Di Gao, Chemical and Petroleum Engineering
PIN Diode Driver Design for Nuclear Magnetic Resonance Radiofrequency Lab Reactive Extraction as a Novel Means for Desalination* Embolic Microsphere with Sub Micrometer Pores
Karl J. Johnson
Chemical and Petroleum Engineering
Screening a Variety of Catalytic Lewis Pair Moieties for Their Hydrogen and Carbon Dioxide Binding Energies
Bioengineering
Lei Li
Chemical and Petroleum Engineering
Perfluoropolyether Polymers May Have Anti-Biofouling Applications
Chemical Engineering
Robert S. Parker
Chemical and Petroleum Engineering
A Cell-Scale Model of Pulmonary Epithelial Transport Dynamics in Cystic Fibrosis
Tai Xi Gentile
Chemical Engineering
Sachin S. Velankar, ~Chemical and Petroleum Engineering, SSOE; Tong Yeh Wah, ~Chemical and Biomolecular Engineering, National University of Singapore
Batch Anaerobic Digestion of Food Groups
Joseph A. Pugar
Chemical Engineering
Chemical and Sachin S. Velankar Petroleum Engineering
Chemical Benjamin Y. Yeh Engineering
Amy M. Howell
Nicholas W. Lotz
Topographic Control of Complex Surfaces*
Student
Student Department
Mentor(s)
Mentor Department(s)
Title
Goetz Veser and Zhao Dan
~Chemical and Petroleum Engineering SSOE ~Chemical and Biomolecular Engineering National University of Singapore
Postsynthetic Metal Ion Exchange of ZIF-8 for the Catalysis of the Oxygen Reduction Reaction
A Simple, Efficient and Transferable Approach for High-Yield Separation of Nanoparticles Structured Bed Reactors for Chemical Looping Processes
Jonathan M. Hightower
Chemical Engineering
Andrew P. Loughner
Chemical Engineering
Goetz Veser
Chemical and Petroleum Engineering
Anna E. Williams
Chemical Engineering
Goetz Veser
Chemical and Petroleum Engineering
Robert M. Enick
Chemical and Petroleum Engineering
Chemical James L. Sullivan Engineering Garrett E. Green
Chemical Engineering
Alexander Deiters
Chemistry
Alexander J. Szul
Computer Engineering
Alexander Deiters
Chemistry
Abraham C. Cullom
Civil Engineering Kyle J. Bibby
Danielle M. Broderick
Chemical Engineering
Daniel D. Budny
Civil and Environmental Engineering Civil and Environmental Engineering
Garrett S. Swarm Civil Engineering Andrew P. Bunger
Civil and Environmental Engineering
Rachel A. Upadhyay
Civil Engineering Andrew P. Bunger
Civil and Environmental Engineering
Hannah C. Fernau
Civil Engineering Andrew P. Bunger
Naomi E. Anderson
Civil Engineering Kent A. Harries
Taylor R. Shippling
Civil Engineering Kent A. Harries
Civil and Environmental Engineering Civil and Environmental Engineering Civil and Environmental Engineering
CO2 Thickener Design to Reduce Viscous Fingering for Enhanced Oil Recovery and Hydraulic Fracturing Testing of Biosensor Paper Diagnostic Strips Applying Cell Free, Paper Based Sensor for Biological Testing and Protein Transfer Φ6 Disinfection with
Hypochlorite in Deionized Water Cerro Patacón Water System and Mocambo Feasibility Analysis Interaction Between Hydraulic Fractures and Fully Cemented Natural Fractures of Varying Strength Well Plugging with Clay-Based Minerals: Characterizing the Intrusion of Bentonite Into Near-Wellbore Cracks Effect of Loading Rate on Breakage of Granite Considering Artificial Glaciers: Climate-Adaptive Design for Water Scarcity Considering Artificial Glaciers: Climate-Adaptive Design for Water Scarcity
Student
Student Department
Christopher J. Borland
Civil and Civil Engineering Piervincenzo Rizzo Environmental Engineering
Mentor(s)
Mentor Department(s)
~Civil and Environmental Engineering SSOE ~Civil and Environmental Engineering National University of Singapore
Joshua J. Hammaker
David V. Sanchez and Civil Engineering Oliver Patrick Lefebvre
Ni'a M. Calvert
Civil Engineering, Civil Engineering Ilinca Stanciulescu Rice University
Alec P. Rosenbaum
Computer Engineering
Title Non-Destructive Evaluation of Tennis Balls Using Highly Non-Linear Solitary Waves
Graphene-Based Electrodes in Electro-Fenton Process for Treatment of Synthetic Industrial Wastewaters
Responses of Hybrid Masonry Structures in 1994 Northridge Earthquake Simulated Using Finite Element Analysis Program
Murat Akcakaya and Yen Shih-Cheng
~Electrical and Computer Engineering, SSOE; ~Electrical and Computer Engineering, National University of Singapore
Murat Akcakaya and Yen Shih-Cheng
~Electrical and Computer Engineering, SSOE; ~Electrical and Computer Engineering, National University of Singapore
Spike Train Distance Analysis of Prefrontal Cortex
Electrical and Computer Engineering
A Software Interface to Complement Original Hardware Capable of 10 Channel Simultaneous Recording and Analysis
Stephen C. Snow
Computer Engineering
Matthew J. Sybeldon
Electrical Engineering
Murat Akcakaya
Computer Engineering
Alexander K. Jones and Ge Shuzhi Sam
~Electrical and Computer Engineering, SSOE; ~Electrical and Computer Engineering, National University of Singapore
Eradicating Bad Bit Patterns and Surrounding Corrupted Memory Cells in DRAM
Erin L. Higgins
Development of an Adaptive Brain Computer Interface To Automatically Address Nonstationary EEG Data
Student
Student Department
Mentor(s)
Mentor Department(s)
Title
Electrical Brian J. Rhindress Engineering
Alexander K. Jones and Ge Shuzhi Sam
~Electrical and Computer Engineering, SSOE; ~Electrical and Computer Engineering, National University of Singapore
Carl W. Morgenstern
Thomas E. McDermott
Electrical and Computer Engineering
Duquesne Light Power Distribution Model Creation
Lawrence Wong
~Electrical and Computer Engineering, National University of Singapore
Enhancement of The Mobile Application “Where Am I 2”--An Indoor Localization App Using Wireless Network
Karen M. Bursic and Ng Tsan Sheng Adam
~Industrial Engineering, SSOE; ~Industrial and Systems Engineering, National University of Singapore
Analyzing the Life Cycle Assessment of Waste Treatment Scenarios in Singapore
Industrial Alannah J. Malia Engineering
Karen M. Bursic and Ng Tsan Sheng Adam
~Industrial Engineering, SSOE; ~Industrial and Systems Engineering, National University of Singapore
Analyzing the Life Cycle Assessment of Waste Treatment Scenarios in Singapore
Mohamed A. Kashkoush
Industrial Engineering
Paul W. Leu
Industrial Engineering
Christopher M. Jambor
Industrial Engineering
Bryan A. Norman
Industrial Engineering
Shannon Biery
Materials Science Markus Chmielus
Mechanical Engineering and Materials Science
Materials Science Markus Chmielus
Mechanical Engineering and Materials Science
Yilun Xu
Electrical Engineering
Electrical Engineering
Industrial Diana B.T. Hoang Engineering
Eamonn T. Hughes
Design and Implementation of Portable, Social Robot on Android with Speech Recognition and Text to Speech
Black Silicon Fabrication for Photovoltaics Optimal Design of a Pharmaceutical Distribution Network Oxidation of Nickel-Based Superalloy 625 Prepared by Powder Bed Binder Jet Printing Influence of Powder Atomization Techniques and Sintering Temperature on Densification of 3D Printed Alloy 625 Parts
Mentor Department(s)
Title
Materials Science Markus Chmielus
Mechanical Engineering and Materials Science
Additive Manufacturing of Ni-Mn-Ga Magnetic Shape-Memory Alloys: The Influence of Linear Energy on the Martensite Phase Transformation
Emma K. Sullivan Materials Science Markus Chmielus
Mechanical Engineering and Materials Science
Influence of Sputter Power and Wafer Plasma Cleaning on Stress and Phase Formation of As-Deposited Tantalum Thin Films
Chuyuan Zheng
Materials Science Jung-Kun Lee
Mechanical Engineering and Materials Science
Isaac H. Wong
Bioengineering
Student
Yuval L. Krimer
Student Department
Mentor(s)
Mechanical Anne M. Robertson Engineering and Materials Science
Albert C.F. To and Lu Wen Feng
~Mechanical Engineering and Materials Science SSOE ~Mechanical Engineering National University of Singapore
Guofeng Wang
Mechanical Engineering and Materials Science
Adedoyin B. D. Ojo
Mechanical Engineering
Glen Ayes
Mechanical Engineering
Cyrus Eason
Materials Science Jorg M. Wiezorek
Mechanical Engineering and Materials Science
Johnathan A. Maynard
Bioengineering
Kang Kim
Medicine
Michael P. Jacus
Bioengineering
Marc A. Simon
Medicine
Michael P. Urich
Electrical Engineering
Bryan M. Hooks
Neurobiology
Garrett D. Grube
Computer Engineering
Ian A. Sigal
Ophthalmology
Parameter Study of Tin-Oxide Nanowire Growth on FTO and Stainless Mesh Substrates Effect of Variations in Blood Velocity Waveforms on Wall Shear Stresses in an Intracranial Aneurysm
A Novel Approach to Powder-Based 3D Printing: Designing a Compact Binder Jetting Food Printer
Optimization of Processing Parameters of Additively Manufactured Inconel 625 and Inconel 718 Miniaturized Shear Punch Testing of Plastic Flow Behavior of Metal and Alloy Thin Foil Specimens Visualization of Perfused Rabbit Heart Mechanics Via Ultrasound Elasticity Imaging* Implementation of Butterworth Filtering to Improve Beat Selection for Hemodynamic Analysis Quantification of Axonal Projections from Topologically-related Areas of Motor and Sensory Cortex in Transgenic Mice Developing Software to Parameterize Collagen Crimp Fibers*
Student
Student Department
Mentor(s)
Mentor Department(s)
Title
Deepa Issar
Bioengineering
Matthew Smith
Ophthalmology
Mapping and Modeling EEG Signals Before and After a Craniotomy Procedure
Ophthalmology
Using Optical Coherence Tomography to Evaluate Non-Linear In-Vivo Deformations of Optic Nerve Head Tissues With Changes in Intraocular Pressure*
Jeremy J. Teichmann
Andrew N. Sivaprakasam
Bioengineering
Bioengineering
Chaim-Gadi Wollstein
Scott Tashman
Patrick J. Haggerty Bioengineering
Srivatsun Sadagopan
Tatyana A. Yatsenko
Bioengineering
Shilpa Sant
Robert J. C. Donahoe
Bioengineering
Hanna Salman
Jessie Liu
Bioengineering
Contralateral Limb Differences in Knee Kinetics and Correlations to Kinematic Orthopedic Surgery Differences After Anterior Cruciate Ligament Reconstruction Optimal Receptive Fields for the Classification of Conspecific Otolaryngology Vocalizations Novel In-Vitro Biomimetic Pharmaceutical Mineralized Matrix Model for Science Studying Breast Cancer Metastasis Studying the Application of Physics and Synthetic, Paper-Based Sensors Astronomy for Biological Testing Mapping the Extracellular Matrix: An Automated Analysis of the Striatal Distribution of Thrombospondin Using Cellprofiler
Michel M. Modo
Radiology
Electrical and Computer Engineering, National University of Singapore
Robot Dancing Along a Song: Beat and Rhythm Detection in Social Robots
Sean M. Brady
Electrical Engineering
Ge Shuzhi Sam
Abigail E. Loneker
Bioengineering
Stephen F. Badylak Surgery
Enhancing Hepatocyte Function Using Liver Extracellular Matrix Derived from Various Species
MECHANICS OF ANESTHETIC NEEDLE PENETRATION INTO HUMAN SCIATIC NERVE Maria G. Gan1, Joseph E. Pichamuthu1,2,3, Steven L. Orebaugh5 and David A. Vorp1,2,3,4 1. Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 2. McGowan Institute for Regenerative Medicine, Pittsburgh, PA 3. Center for Vascular Remodeling and Regeneration, University of Pittsburgh, Pittsburgh, PA 4. Department of Surgery, University of Pittsburgh, Pittsburgh, PA 5. Department of Anesthesiology, University of Pittsburgh School of Medicine, Pittsburgh, PA Email: mag223@pitt.edu INTRODUCTION A main strategy used by anesthesiologists to control post-operative pain is the injection of local anesthetic agents in close proximity to peripheral nerves. Painful lower extremity procedures may benefit from nerve blocks with local anesthetic at the femoral and sciatic nerves [1]. The sciatic nerve, comprised of the tibial nerve (TN) and common peroneal nerve (CPN), is unique in its anatomy among peripheral nerves (Figure 1). To optimize onset and effectiveness of the injected local anesthetic, the dose is administered into the area between the paraneural sheath and the epineurium of the nerves. However, this space is quite limited in size, and there exists the significant possibility of the needle puncturing the epineurium, which may lead to nerve injury [3-4].
Figure 1. Histological image of the sciatic nerve. The nerve bundles, highlighted in pink, represent the TN and CPN. A connective tissue sheath, known as the paraneural sheath, holds both nerves together and is highlighted in yellow. The epineurium, a connective tissue lining the TN and CPN individually, is highlighted in the image with green fill surrounding the nerve bundles [2].
The puncture force necessary to penetrate the paraneural sheath, the epineurium of the nerve, and the nerve with overlying paraneural sheath has not been investigated. We believe that understanding the relationships of these forces will allow anesthesiologists to more accurately and safely deliver local anesthetics to this small space within the sheath, while avoiding nerve puncture or trauma. Therefore, the purpose of this work is to study the mechanics of anesthesia “block needle” penetration into cadaveric sciatic nerves, and to measure the penetration force for the paraneural sheath, isolated nerve, and nerve with overlying paraneural sheath.

METHODS
Five cadaveric sciatic nerves were harvested and stored in saline at 4°C. Three specimens were dissected from each nerve: 1. Isolated paraneural sheath (IPS), 2. Isolated nerve (IN) and 3. Nerve with overlying paraneural sheath (NPS). Specimens were mounted onto a 50g (IPS) or 500g (IN & NPS) load cell and secured with sutures onto the mounting stage of an ASTM standard calibrated micro indentation system. A Stimuplex A 21Gx4” nerve-block needle was mounted on the indenter using needle holding jaws. The indenter was driven towards the specimen by a stepper motor at a speed of 0.1 mm/sec and, after the initial puncture was made, the indenter was retracted. The indenter was inserted a total of three times at the same location in the specimen. Punctures were made on each specimen at three different locations. The needle tip force and displacement were continuously recorded using LabVIEW software, and calculation of the puncture force and energy released during each insertion was performed in MATLAB.
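For illustration, the following is a minimal Python sketch of the kind of puncture-force and energy calculation described above (the authors performed this analysis in MATLAB; the drop-detection threshold, function name, and variable names here are assumptions, not the study's actual code):

```python
import numpy as np

def puncture_metrics(force_mN, disp_mm, drop_frac=0.5):
    """Estimate puncture force and released energy from one needle insertion.

    force_mN, disp_mm : 1-D arrays sampled during a single insertion.
    drop_frac         : fraction of the running peak below which the force
                        must fall to count as the puncture event (assumed
                        threshold; the abstract only states a "sudden drop").
    """
    force = np.asarray(force_mN, dtype=float)
    disp = np.asarray(disp_mm, dtype=float)

    # Puncture event: first sample where force falls well below the running maximum.
    running_max = np.maximum.accumulate(force)
    drop_idx = np.argmax(force < drop_frac * running_max)
    puncture_force = running_max[drop_idx]      # peak force just before the drop

    # Energy released up to the drop: integral of force over displacement.
    # mN x mm = microjoules (uJ), matching the units reported in the abstract.
    energy_uJ = np.trapz(force[:drop_idx + 1], disp[:drop_idx + 1])
    return puncture_force, energy_uJ
```

In this sketch, the per-location values would then be averaged over the three locations per specimen and over the nerves tested, as described in DATA PROCESSING below.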
DATA PROCESSING
An average of the puncture force over the three locations was calculated and this average was used to represent the force required to puncture the specific portion (IN, NPS or IPS) of the nerve being tested. This puncture force was then averaged over the number of nerves tested.

RESULTS
The sudden drop in the load during the first insertion was identified as the puncture event and the puncture force was noted (Figure 2).

Figure 2. Force-displacement data of one trial performed on NPS tissue. The puncture force is circled in black and located on the first insertion curve in red. The second and third insertions into the same location are shown in blue and green, respectively.

The average penetration force was determined to be significantly different (p<0.05) between the IPS (120±21 mN, n=4) and the IN (1158±109 mN, n=3), as well as between the IPS and the NPS (1292±208 mN, n=3). On the other hand, there was no significant difference (p=0.60) between the puncture force of the IN and NPS (Figure 3). Confirmation of a puncturing event was determined via comparison of the energy released during each insertion. For the IPS, the average energy released was significantly higher for the first insertion (129±30 µJ) than the second insertion (10±4 µJ, p<0.05). Similarly, for the NPS, the average energy released was 2643±211 µJ for the first insertion and 288±113 µJ for the second insertion (p<0.05). For the IN, the average energy released was also significantly higher for the first insertion (1600±200 µJ) than the second insertion (437±238 µJ, p<0.05). This suggested that the tissue was successfully punctured during the first insertion for all tissue types.

Figure 3. Average puncture force (mN) for each tissue type (isolated paraneural sheath, isolated nerve, and nerve with paraneural sheath). The puncture force was averaged over the total number of nerves tested.
DISCUSSION
Since the force required to puncture the isolated nerve is approximately ten times greater than the force required to penetrate the paraneural sheath, it is possible that applying a force greater than 1 N may result in nerve trauma. Further investigations into the measurement of needle depth after penetration of the nerve with overlying paraneural sheath specimen could aid in determining whether needle-nerve contact is occurring. This would confirm our suggestion that, for optimal delivery of the anesthetic, a low puncture force on the order of 0.1 N should be used in order to avoid nerve trauma.

REFERENCES
1. Wegener et al. Regional Anesthesia and Pain Medicine 36, 481-488, 2011.
2. Perlas et al. Regional Anesthesia and Pain Medicine 38, 218-225, 2013.
3. Andersen et al. Regional Anesthesia and Pain Medicine 37, 410-414, 2012.
4. Tsui et al. Regional Anesthesia and Pain Medicine 39, 373, 2013.

ACKNOWLEDGEMENTS
This work was supported by the Swanson School of Engineering and Office of the Provost at the University of Pittsburgh.
LEARNING COINCIDES WITH STABILITY IN NEURAL TUNING Sarah F. Shaykevich, Emily R. Oby, and Aaron P. Batista Department of Bioengineering and Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
Email: sfs28@pitt.edu Web: http://smile.pitt.edu/

INTRODUCTION
Brain-computer interface (BCI) technology provides an avenue for insight into the neuroscience of learning. Users learn to use a BCI to move an onscreen cursor by modulating neural activity. Although we know that neural activity patterns change with learning, the details of the relationship between learning and changes in population-level neural activity are not well understood. Previous studies have indicated that as a monkey learns a BCI task, he forms and improves a mental map of the relationships between inputs from different neurons and the output on the screen [1]. To gain further insight into this process, we tested the hypothesis that an improvement in control will correspond to a change in neural tuning followed by a stabilization of the tuning.

METHODS
A Rhesus macaque (Macaca mulatta) was implanted with a 96-channel microelectrode array (Blackrock Microsystems) in the primary motor cortex. He was then trained to control a BCI cursor using 92 neural units. He performed a center-out task, in which he moved the cursor from the center of a screen to one of eight radial targets. In each experiment, he first learned to control the cursor under an “intuitive decoder,” a Kalman filter-based decoder determined from natural firing patterns generated while observing the computer perform the task. A “shuffled” decoder was then created by randomly shuffling the role of each channel in the relationship between neural activity and kinematics [2]. This manipulation impaired his control of the cursor. The monkey then practiced using the new decoder for multiple days to achieve a stable level of proficiency.

We assessed stability of neural activity patterns by the preferred directions (PDs) of the neurons in our population. Here, preferred direction for a channel is determined by observing that channel’s firing rates to all eight targets. The set of firing rates was fit with a cosine tuning curve, and the direction for which firing rate peaks was that channel’s PD (Figure 1A). A stable neuron would show no significant change in PD from one day to the next. Our metric of interest, ΔPD, was defined as the absolute value of the difference in PD for a channel between one day and the previous day (Figure 1B).
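As an illustration of the tuning analysis described above, here is a minimal Python sketch of fitting a cosine tuning curve to the eight target-direction firing rates, extracting the preferred direction, and computing ΔPD between days (function names and example firing rates are hypothetical, not the study's code):

```python
import numpy as np

def preferred_direction(target_dirs_rad, firing_rates):
    """Fit r(theta) = b0 + b1*cos(theta) + b2*sin(theta) by least squares
    and return the preferred direction atan2(b2, b1) in radians."""
    theta = np.asarray(target_dirs_rad, dtype=float)
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    b0, b1, b2 = np.linalg.lstsq(X, np.asarray(firing_rates, float), rcond=None)[0]
    return np.arctan2(b2, b1)

def delta_pd(pd_today, pd_yesterday):
    """Absolute angular difference between two preferred directions, wrapped to [0, pi]."""
    return np.abs(np.angle(np.exp(1j * (pd_today - pd_yesterday))))

# Example with the eight radial targets (hypothetical average firing rates in Hz).
dirs = np.arange(8) * np.pi / 4
rates = np.array([12.0, 18.0, 25.0, 20.0, 13.0, 8.0, 5.0, 7.0])
pd = preferred_direction(dirs, rates)
```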
Figure 1: A | For each channel, average firing rates over all trials are obtained to each target within a day. A cosine curve is fitted to this data, and the preferred direction is the direction at which the curve reaches its highest value. B | ΔPD is the distance between the preferred directions of a channel on different days.
To assess skill at the BCI task, we used relative success rate and relative time to target, as consistent and quick task completion indicates proficiency. Relative success rate was the ratio between the success rate on a day of an experiment and that achieved with the intuitive decoder for that experiment. Relative movement time was the difference between the average movement time achieved under the intuitive decoder and that on a certain day. Figure 2 shows the progress of one of the experiments as illustrated by success rates and movement times during each day, relative to those attained under the intuitive decoder at the start of the experiment.

RESULTS
As success rates increased and movement times quickened, ΔPDs decreased. The relationship was most evident during experiments that showed evidence of learning throughout (that is, those with significant positive slope in the learning metrics as in Figure 2) and nonexistent (with a very low R2 value) in those with minimal improvement or change (Figure 3).
Figure 2: This example experiment evinces learning, as success rate eventually neared 100% and movement time approached that attained under the intuitive decoder.
DISCUSSION
As the monkey’s control of the cursor improved and then stabilized, his neural tuning changed less from day to day. Initially, on less successful days, many neurons underwent a large change, suggesting a search for an effective combination of neural activity patterns. On more successful days, few channels showed changes, implying refined recall of the proper neural activity required to control the cursor. The greater change in neural tuning on less successful days perhaps indicates the monkey’s exploration in search of a working map of relationships between neural inputs and BCI outputs. In the future, we will investigate patterns in ΔPDs which may emerge across individual channels and groups of individual channels. We will also explore alternative metrics to quantify learning and stability to improve accuracy and visualization of neural and performance changes.

REFERENCES
1. Ganguly K and Carmena JM. PLoS Biol (2009) 7: e1000153.
2. Sadtler PT. Nature (2014) 512: 423-426.

ACKNOWLEDGEMENTS
We thank the Batista lab for discussions. This work was funded by Dr. Aaron Batista and the University of Pittsburgh’s Swanson School of Engineering Bioengineering Department.
Figure 3: As success rates increased, changes in PD decreased. In this plot, the relationship is evident across the combined set of all experiments. Each color represents one of eight experiments (mean experiment duration = 6.25 days). Each point represents the average ΔPD over all successful trials for one day.
THE EFFECT OF HARDNESS AND CONTACT AREA ON THE OVERALL HYSTERESIS COEFFICIENT OF FRICTION IN A MULTI-SCALE COMPUTATIONAL MODEL Arjun Acharya, Kurt E Beschorner, Seyed MR Moghaddam Human Movement and Balance Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: aka040@pitt.edu INTRODUCTION Slip, trip, and fall accidents account for a quarter of non-fatal occupational accidents [1]. Inadequate friction is commonly cited as the primary cause of slipping [1]. Specifically, slips typically occur when the coefficient of friction (COF) between the shoe and the floor is less than the COF required for walking [2]. In the presence of a liquid contaminant, the primary friction mechanism that contributes to the overall shoe-floor COF is hysteresis [3]. Hysteresis friction is the loss of energy during deformation of the soft elastomer when its asperities interact with asperities of the hard contacting surface. Preliminary multiscale computational models, developed at the Human Movement and Balance Laboratories, have demonstrated that hysteresis friction can be predicted, consistent with experimental outcomes, by simulating contact between rough shoe and floor surfaces [4]. Furthermore, previous literature has determined that certain macro-scale features, such as shoe hardness and contact area, are correlated with coefficient of friction [5, 6]. The purpose of this study is to quantify the effect of hardness and contact area on the predicted hysteresis COF using a multiscale computational modeling approach. METHODS A previously developed multi-scale model was applied to 5 existing shoes labeled by their manufacturer as “slip-resistant”. In the micro-scale model, contact between shoe and floor asperities quantified hysteresis COF as a function of contact pressure at the asperity scale (Figure 1). Preliminary simulations of the micro-model revealed that an exponential decay model described the relationship between contact pressure and hysteresis COF. The shoe model geometry was created based on the topography (i.e., roughness) and the material model was viscoelastic. The roughness in the model was consistent with the
average peak to valley roughness measured on the shoe tread with a surface profilometer. Viscoelastic material properties were derived from the measured values of hardness. A durometer was used to measure the hardness of each shoe. The needle of the durometer was pressed into the shoe tread for 2 minutes with measurements every 10 seconds. The hardness readings were fit with an exponential decay curve. Each trial was repeated 5 times and averaged. The floor model geometry was created based on the measured roughness of a vinyl floor. The floor material was assumed to be rigid. The shoe was translated down onto the floor to achieve a specific pressure, and then forward at a speed of 0.3 m/s. Hysteresis COF for the micro-model was determined as the ratio of the average shear to average normal force during sliding, and the simulation was repeated over a range of contact pressures to produce an exponential decay regression fit between contact pressure and hysteresis COF.

Figure 1. Micro-scale model of Shoe E. The shoe model (red) is three asperities wide (< 100 µm) and is dragged across the length of the floor model (blue).

Figure 2. Macro-scale model of Shoe E. The shoe model (red) is created to actual size, and is dragged along the floor model (green) for 2 ms.
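For illustration, a minimal Python sketch of fitting the exponential-decay relationship between contact pressure and hysteresis COF from micro-model output; the functional form, starting guesses, and data values are assumptions for demonstration, not the study's actual fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def cof_decay(p, a, b, c):
    """Assumed exponential-decay form: COF falls from (a + c) toward c
    as contact pressure p increases."""
    return a * np.exp(-b * p) + c

# Hypothetical micro-model output: contact pressure (kPa) vs. hysteresis COF.
pressure = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
cof = np.array([0.42, 0.35, 0.27, 0.19, 0.13, 0.10])

params, _ = curve_fit(cof_decay, pressure, cof, p0=(0.4, 0.005, 0.1))
cof_at = lambda p: cof_decay(p, *params)   # COF(p) used later in Eq. 1
```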
The macro-model geometry was created with ANSYS Mechanical APDL, and was modeled after the shoes’ tread pattern and heel geometry (Figure 2). The material was modeled as linear elastic and the surfaces were modeled as smooth for computational efficiency since viscoelastic effects and roughness were already accounted for in the micro-model. In the same manner as the micro-model, the shoe model was translated down onto the floor model, and then forward at a speed of 0.3 m/s. A normal force of 250 N was achieved. The overall hysteresis frictional force was calculated based on the average contact pressures during sliding, average contact areas during sliding, and the hysteresis COF functions from the micro-model (Eq. 1) [2]. The overall hysteresis COF was calculated as the ratio of hysteresis frictional force to the normal force (Eq. 2).
F_Friction = Σ_i COF(p_i) × p_i × A_i   (Eq. 1)
COF_hysteresis = F_Friction / F_Normal   (Eq. 2)

Contact area was calculated from the macro-model. Regression analyses were performed to correlate hardness and contact area with the predicted hysteresis COF.

RESULTS
The results indicate that an increase in hardness resulted in a decrease in hysteresis COF (Figure 3), and that an increase in contact area was associated with an increase in hysteresis COF (Figure 4). The positive correlation of COF with contact area is consistent with a previously conducted prospective study on geriatric slips [5], whereas a decrease in hysteresis COF with increased hardness is consistent with previous human slipping data [6].

Figure 3. Overall hysteresis COF plotted against Shore A hardness for the five shoes.

Figure 4. Overall hysteresis COF plotted against contact area for the five shoes.

DISCUSSION
The model predicted hysteresis COF trends that are largely consistent with previous literature. Roughness and shoe angle were not kept consistent across the 5 shoe models, since they were meant to replicate the actual shoe heel and its motion during sliding. A sensitivity analysis that systematically modifies roughness and shoe angle may reveal the impacts of these covariates on the results. Obtaining consistency between the model and previously observed experimental trends is an important step towards developing a valid computational tool for designing slip-resistant shoes.
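As a companion to Eqs. 1 and 2 above, the following is a minimal Python sketch of the overall hysteresis COF calculation from per-element contact pressures and areas; the array names and the COF(p) callable are assumptions (for example, the exponential-decay fit sketched earlier), and the pressure units simply need to match those used for that fit:

```python
import numpy as np

def overall_hysteresis_cof(pressures, areas, cof_at, normal_force=250.0):
    """Eq. 1: F_Friction = sum_i COF(p_i) * p_i * A_i
       Eq. 2: COF_hysteresis = F_Friction / F_Normal

    pressures : contact pressure of each macro-model contact element
    areas     : contact area of each element (consistent units so that
                pressure * area gives force in newtons)
    cof_at    : callable mapping contact pressure to hysteresis COF
    """
    p = np.asarray(pressures, dtype=float)
    a = np.asarray(areas, dtype=float)
    friction_force = np.sum(cof_at(p) * p * a)     # Eq. 1
    return friction_force / normal_force           # Eq. 2
```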
REFERENCES [1] Bureau of Labor Statistics (BLS). 2013. “Nonfatal Occupational Injuries and Illnesses Requiring Days Away From Work, 2012.” [2] Hanson, J.P., Redfern, M.S., Mazumdar, M. 1999, Ergonomics. 42 (12). pp 1619-1633. [3] Cowap, M.J.H. et al. (2015). Tribology Materials, Surfaces & Interfaces. 9 (2). pp 77-84. [4] Moghaddam S.R.M., Redfern, M.S., Beschorner, K.E. Tribology Letters, 2015. In Press. [5] Tencer, Allan F., et al. 2004. Journal of the American Geriatrics Society. 52.11. 1840-1846. [6] Tsai, Y.J., Powers, C.M. 2009. Gait and Posture. 30 (3). pp 303-306. ACKNOWLEDGEMENTS Funding was provided by the Summer Internship Program from the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh, as well as NIOSH (R01 OH008986)
OPTIMIZATION OF INTERVERTEBRAL DISC DECELLULARIZATION Ali O. Balubaid McGowan Institute of Regenerative Medicine University of Pittsburgh, PA, USA Email: alb315@pitt.edu, Web: http://www.mirm.pitt.edu/ INTRODUCTION Back pain is the second most common cause for a visit to a physician, and accounts for $50 billion of annual spending in America. Degenerative disc disease (DDD), the loss of functional properties of the intervertebral discs (IVD), is a leading cause of back pain [1]. Despite a wide range of treatment options available, no current treatment provides a satisfactory solution to DDD. Extracellular matrix (ECM) biomaterials have proven to be suitable for promoting the regeneration of many tissues in the body, including cartilage structures such as IVDs [2]. IVD ECM will be derived through several protocols from the annulus fibrosus (AF) and nucleus pulposus (NP) regions due to the site specificity of the ECM. This task proves challenging as the AF and NP regions are distinct in their structure and function. The proposed project will identify an optimized method for the production of AF and NP IVD ECM. METHODS Mechanical separation of the IVD into AF and NP regions was required to achieve the needed level of optimization. The fibrous build of the AF requires a harsh physical infusion method which would greatly damage the fragile NP. Decellularization protocols are aimed to maintain the collagenous structure of the AF and the glycosaminoglycan-rich properties of the NP. The study used young rabbit IVDs. Two protocols were used to decellularize the AF. The protocols applied in this study are adjustments to what Xu, H. et al suggested [3]. The AF was placed in a hypotonic Tris-HCl buffer (10 mM, pH 8.0) with 0.1% ethylenediamine tetraacetic acid (EDTA; Sigma) for 24 hours while shaking. The AF was then agitated in Tris-HCl buffer with 3% Triton X-100 and 0.1% EDTA for 72 hours. The washing solutions were changed every 24 hours for both steps. Finally, the decellularized AF was washed with PBS for 24 hours to remove residual reagents. The second protocol uses trypsin to
decellularize the tissue. The AF was incubated under continuous shaking in trypsin/EDTA (0.5% trypsin and 0.2% EDTA; both Sigma) in a hypotonic Tris-HCl buffer at 37°C for 72 hours. The solution was changed every 24 hours. After the 72 hours were completed, the decellularized AF was washed with PBS for 24 hours under shaking in order to remove any residual substances. The gelatinous NP region is sensitive to the shear stresses used in the mechanical infusion methods of standard decellularization protocols. As a result, a pressure chamber was used to infuse the decellularizing reagents into the tissue. The tissue was placed in a vacuum chamber and put through 30 cycles. During each cycle, the pressure is brought down to -80 atm within 30 seconds, and then released back to 1 atm within 30 seconds. The tissue was first put through 30 cycles in PBS, followed by 30 cycles in decellularizing reagent, and then 30 cycles in type I water. The tissue underwent this process 7 times. The first reagent used was Tris-HCl mixed with 2% (w/v) deoxycholate. The second was 50 mM Tris-HCl buffer, 0.1% (w/v) ethylenediamine tetraacetic acid, 0.6% (v/v) Triton X-100, and 1.0% (w/v) deoxycholic acid. The physical dimensions of the IVD discs and regions were assessed using a digital caliper. The optimality of the decellularization protocol was determined through histochemical staining including Hematoxylin and Eosin (H&E) and diamidino-2-phenylindole (DAPI). DATA PROCESSING Optimality of the decellularization protocol was determined through qualitative and quantitative means. The qualitative processes included histochemical staining with H&E, DAPI, Picrosirius red, and Safranin O. Both H&E and DAPI were used to show cell removal. The Picrosirius red was used to show collagen content for the AF, while the Safranin O was used to show glycosaminoglycan (GAG) content for the NP.
A quantitative hydroxyproline assay and a glycosaminoglycan assay were also conducted on the AF tissue to determine the collagen and GAG content, respectively.

RESULTS
The AF used in this study had a large diameter of 1.09 ± 0.16 cm and a small diameter of 0.57 ± 0.05 cm. The NPs of the IVDs had a large diameter of 0.54 ± 0.05 cm and a small diameter of 0.33 ± 0.08 cm. The IVDs had a thickness of 0.3 ± 0.12 cm. The results for the NP were not as successful as intended, as shown by Figures 1-A and 1-B. On the other hand, the DAPI stains of the decellularized AF show a significant difference in cell count, as shown by Figures 1-C and 1-D. Hydroxyproline and GAG assays were conducted to determine collagen and glycosaminoglycan retention in the AF tissue.

Figure 1. DAPI stained tissue. A. Decellularized NP. B. Native NP. C. Decellularized AF. D. Native AF.

Figure 2 demonstrates the effect of the decellularization treatments on the glycosaminoglycan content of the AF tissue. A significant decrease is observed for samples decellularized with trypsin, while no significant difference is observed in samples decellularized in Triton X-100.

Figure 2. GAG concentration (mg/ml) in decellularized IVD samples.

In Figure 3, none of the applied decellularization treatments has a significant effect on the collagen content of the AF tissue.

Figure 3. Collagen (hydroxyproline) concentration (mg/ml) in decellularized IVD samples.
DISCUSSION
The AF decellularization could be improved with a method to sustain 37°C throughout the trypsin treatment. The NP decellularization could be enhanced with the development of a positive pressure decellularization system. The delicateness of the NP made it susceptible to mechanical damage, causing significant portions to be lost due to the lack of a reagent replacement method between the cycles. Preliminary results have shown better cell removal for the trypsin protocol as opposed to the Triton X-100 AF decellularization protocol. However, Triton X-100 shows better GAG retention than trypsin. Further improvement in AF decellularization is expected if the EDTA wash is increased back to 48 hours in the Triton X-100 protocol, in order to better cleave the cells from the ECM. Lyophilizing the NP before treatment was suggested to improve mechanical handling.

REFERENCES
1. "Back Pain Facts & Statistics." American Chiropractic Association. American Chiropractic Association, 2012. Web. 16 Nov. 2012.
2. Brown et al. Inductive, Scaffold-Based, Regenerative Medicine Approach to Reconstruction of the Temporomandibular Joint Disk. J Oral Maxillofac Surg 2012.
3. Xu et al. Intervertebral Disc Tissue Engineering with Natural Extracellular Matrix-Derived Biphasic Composite Scaffolds. PLoS ONE 2015.

ACKNOWLEDGMENTS
Research was conducted at the McGowan Institute for Regenerative Medicine in the lab of Dr. Bryan Brown, and under the mentorship of Samuel LoPresti, with partial funding from the Department of Bioengineering, University of Pittsburgh.
EVALUATION OF THE HOST RESPONSE TO MESH IMPLANTATION IN MICE Kelley Brown, Deepa Mani, Samuel LoPresti, Daniel Hachim, Bryan Brown McGowan Institute of Regenerative Medicine, Department of Bioengineering University of Pittsburgh, PA, USA Email: kab261@pitt.edu INTRODUCTION Throughout medicine, biomaterials are used for aiding the healing process. One prototypical example, polypropylene mesh, is used in more than 1 million hernia repairs and surgeries for pelvic organ prolapse each year in the United States alone. [1] Implanting polypropylene mesh is most often used for providing or enhancing mechanical support of the surrounding tissues. Since these surgical mesh materials are used for permanent support, the material should ideally integrate with host tissue during the course of host healing, provide adequate long-term mechanical support, and not cause pain or discomfort in patients' daily lives. All biomaterials, when placed inside a biological system, trigger an immune response. How the immune system responds is crucial to the integration of the material during the healing process and therefore the long-term effectiveness of the biomaterial. Previous studies indicate the role of macrophages to be an essential part of the host response. [1] Macrophages are able to polarize into a spectrum of phenotypes, with the M1 phenotype being pro-inflammatory and causing tissue degradation, and the M2 phenotype being anti-inflammatory/pro-remodeling and promoting biomaterial integration. Little is known regarding differences in the host immune response to biomaterials in aged patients as well as the implications of this response for long-term functionality. However, it is known that as the host ages, the ability of the immune system to respond appropriately decreases in a process termed immune senescence. [2] As the average lifespan of the population continues to increase, developing an improved understanding of how the immune system is changing will provide crucial insight for the effective use of biomaterials in aged patients. This study focuses on the host immune response to biomaterials in young versus old subjects.
Histological differences of the host response were observed after implantation of mesh material in young and old mice over the course of acute and chronic phases of inflammation. Additionally, the different phenotypes of macrophages responding to implants in young and old animals were evaluated. METHODS Young and old mice were implanted with Gynemesh PS (ETHICON) over the course of acute and chronic phases of inflammation: 3 day, 7 day, 14 day, and 90 day time periods. Tissue samples, including the muscle, mesh, and skin layer complex, were extracted and cut onto slides. These samples, stained with H&E, Masson’s Trichrome, and Alcian Blue, allowed for visualization of any histological differences, which, once imaged at 20X, were compared. The Masson Trichrome images were used to measure the capsule and cell thickness surrounding a subset of three mesh fibers per sample. Additionally, the different phenotypes of macrophages responding to implants in the young and aged mice were evaluated. Specific antibodies that label for macrophages (pan-macrophage marker F4/80, M1 marker iNOS, and M2 marker Arginase) were used through immunofluorescent (IF) labeling to identify the specific polarization present in the tissue sections. RESULTS Slides were imaged and observed for any histological differences. There was a distinctive host response surrounding each individual mesh fiber, seen at all time points. Over time, connective tissue encapsulated groups of cells surrounding each individual mesh fiber. To gain a better understanding of the relationship between young vs. old mouse and each time period subgroup, the areas of the capsule formation and cells surrounding the fiber were measured from images of the Masson’s Trichrome stain. The average capsule areas around the fibers at 90 days in young and aged mice were 23657.48 ± 26378.53 μm² and 45812.84 ± 59651.45 μm², respectively.
These were obtained by measuring the areas around the capsule or the cells and subtracting the area of the fiber. Although cell areas were present for every sample the capsule formation became increasingly apparent over time. At 7 days, the capsule was nonexistent for both young and old mouse samples. Capsule formation started to appear in the 14 day samples but by the 90 day, both young & old samples had every fiber contained within a capsule.
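For illustration, a short Python sketch of the area bookkeeping described above (subtracting the fiber cross-section from the traced outer area and summarizing across fibers); all numbers are hypothetical, not measured values from the study:

```python
import numpy as np

def capsule_area(outer_area_um2, fiber_area_um2):
    """Capsule (or cell-layer) area around one mesh fiber:
    traced outer area minus the fiber cross-section, in um^2."""
    return outer_area_um2 - fiber_area_um2

# Hypothetical traced areas (um^2) for three fibers of one 90-day sample.
outer = np.array([61000.0, 54000.0, 70500.0])
fiber = np.array([31000.0, 29500.0, 33000.0])

areas = capsule_area(outer, fiber)
mean, sd = areas.mean(), areas.std(ddof=1)   # mean +/- SD across fibers
```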
Figure 1: A: 90 day aged mouse mesh fiber with an H&E Stain. B: 90 Day aged mouse mesh fiber in Masson Trichrome. C: 90 day aged mouse mesh fiber with Alcian Blue stain. D: 90 day young mouse mesh fiber with H&E stain. E: 90 day young mouse mesh fiber with Masson Trichrome. F: 90 day young mouse mesh fiber with Alcian Blue staining. All pictures are showing the fiber surrounded by cells and a capsule comprised of connective tissue.
DISCUSSION/CONCLUSION In each image taken of a single mesh fiber, within young and aged mice, a clear indication of an immune system response was evident. The cells clustering around the fiber had a thickness proportional to that of the time lapse. Early 7 day fibers had smaller capsule areas, whereas 14 & 90 day fibers increasingly grew larger. Since the 7 day
capsule were observed to a lesser amount than the 90 day capsules, this indicates regeneration of the tissue over time. Between young & old samples the sizes of the cell areas differed in that the young 7 and 14 days had greater areas than the old samples. The 90 day sample, in both cells and capsule areas, was reversed in that the 90 day young had smaller areas than the aged samples. The capsules started appearing around 14 day samples and became abundant by 90 day. The areas of the capsules were also larger in the young for the 14 day vs. old group, but smaller in the young 90 day vs old group. Staining for the specific macrophage polarization has begun with the iNOS staining for the M1 phenotype. The Arginase staining for M2 phenotype is optimized but still in the process of being stained. Future studies could include changing the mechanical properties of the mesh fiber to see if there is change in the immune response at each time interval. This would test if the response is only due to a chemical factor or if any forces influence the immune system response. REFERENCES [1] B.N. Brown, S.F. Badylak. Expanded Applications, Shifting Paradigms, and Improved Understanding of Host Biomaterial Interactions. Acta Biomater. 2013 Feb;9(2):4948-55. [2] Brusse, Paula J., and Sameer K. Mathur. "Agerelated Changes in Immune Function: Effect on Airway Inflammation." Journal of Allergy and Clinical Immunology 126.4 (2010): 690-99. Science Direct. Web. 5 Mar. 2015. ACKNOWLEDGEMENTS Thank you to the Swanson School of Engineering and the Department of Bioengineering for their support. Bryan Brown, Deepa Mani, Joey Kennedy, and Brown Lab for training & guidance throughout.
LOADING AND RELEASE OF MCP-1 FROM COATED SURGICAL MESHES FOR PELVIC ORGAN PROLAPSE Shweta Ravichandar, Daniel Hachim, Bryan Brown McGowan Institute of Regenerative Medicine, Department of Bioengineering University of Pittsburgh, PA, USA Email: shr43@pitt.edu INTRODUCTION Pelvic organ prolapse is a disorder affecting 30-50% of menopausal women, where tissue in the pelvic cavity loses mechanical support. Symptoms include pain, discomfort, and urinary incontinence. [1] When symptoms become severe, surgical polypropylene meshes can be implanted to maintain function of the muscular walls. However, mesh use is highly linked to complications such as severe pain, organ perforation, and mesh exposure. These complications have been described to be mostly due to the foreign body reaction, in which pro-inflammatory (M1) macrophages are key players. On the other hand, pro-remodeling (M2) macrophages are associated with healthy tissue regeneration. It has been hypothesized that recruitment and polarization of macrophages towards an M2 phenotype will lead to better implant integration and mitigation of the foreign body response. METHODS To confirm the hypothesis, sequential delivery of MCP-1 and IL-4 from coated surgical meshes is proposed. The present research studies the loading and release of MCP-1 (macrophage chemoattractant protein 1) on coated meshes. Additionally, the evaluation of the chemoattractant activity of released MCP-1 from coated meshes in an in-vitro macrophage migration assay was also performed.
Polypropylene meshes were irradiated with plasma to produce a negatively charged surface. To construct the coating, a layer-by-layer (LbL) procedure was performed using chitosan as the polycation and dermatan sulfate as the polyanion. Cycles were repeated with intermediate washing steps until the desired number of layers was reached. Loading of MCP-1 was performed by prior incubation with dermatan sulfate.

To corroborate MCP-1 loading and distribution on the mesh coating, fluorescent immunolabeling was performed and observed under confocal microscopy. Release assays were performed using an ELISA kit (Peprotech): MCP-1 loaded (20 and 40 bilayers) and coated (no MCP-1) meshes were assayed after 72 hours of release in 1X PBS containing chitosanase and chondroitinase ABC. To verify that MCP-1 bioactivity was retained after the LbL procedure and sterilization, a migration assay kit (Cell Biolabs) was used. Macrophages were seeded on the upper membrane insert, and MCP-1 loaded (40B) and coated (no MCP-1) meshes were placed in the bottom wells. Serum-free media and soluble MCP-1 (250 ng/mL) were used as controls. After 24 hours of incubation, migrated cells were detached from the membrane and combined with the cells of the bottom well. Cells were then fluorescently tagged and lysed prior to quantification.

DATA PROCESSING
Data from the ELISA assays were obtained as absorbance values. Data from the migration assay were obtained as fluorescent intensity units. Means and standard errors of these assays were analyzed by one-way ANOVA with Tukey post-tests to determine statistical differences (P < 0.05). Confocal microscope images were analyzed qualitatively by observation.
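For illustration, a minimal Python sketch of the statistical comparison described in DATA PROCESSING (one-way ANOVA at P < 0.05 with Tukey post-tests); the group values below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ELISA-derived release values (ng/cm^2) for three mesh conditions.
coated   = np.array([0.02, 0.03, 0.02])     # coated, no MCP-1
loaded20 = np.array([0.45, 0.52, 0.48])     # 20 bilayers
loaded40 = np.array([0.78, 0.85, 0.80])     # 40 bilayers

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(coated, loaded20, loaded40)

# Tukey HSD post-test for pairwise differences at alpha = 0.05.
values = np.concatenate([coated, loaded20, loaded40])
groups = (["coated"] * 3) + (["20B"] * 3) + (["40B"] * 3)
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)
print(tukey.summary())
```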
RESULTS AND DISCUSSION
Immunolabeling allowed qualitative observation of MCP-1 loading into the mesh coating under confocal microscopy (see Figure 1). First, red fluorescence intensity levels are evidently higher than controls, which confirms the loading of MCP-1 into the mesh. This red fluorescence also reveals that MCP-1 is loaded consistently and uniformly over the entire surface of the mesh.
The graph in Figure 2 shows the absorbance values obtained from the release studies performed on MCP-1 loaded (20 and 40 bilayers) and coated (no MCP-1) meshes, measured using ELISA assays. These results reveal that the number of layers in the coating containing MCP-1 can be modified to release defined amounts of MCP-1: as the number of layers increases, so does the amount of MCP-1 released.

To corroborate that MCP-1 still maintains bioactivity even after coating and sterilization, a migration assay was performed. Results (see Figure 3) are expressed as ratios relative to the serum-free media control, which was assigned a value of 1 to normalize the data for each experiment. An MCP-1 positive control was used to validate the methodology. Results revealed non-significant differences among conditions; however, a tendency toward increased migration is observed for MCP-1 loaded meshes and positive controls. Since the positive control was not significant compared to the serum-free control, there is clear evidence that the migration assay has to be optimized for further validation of MCP-1 loaded mesh studies.

Figure 1: Immunolabeling of pristine Gynemesh, coated (no MCP-1) mesh, and MCP-1 loaded mesh (10 B).
Figure 2: MCP-1 released (ng/cm² of mesh) from coated (no MCP-1) and MCP-1 loaded (20 B and 40 B) meshes.
Figure 3: Migration ratio relative to the serum-free media control for coated meshes, MCP-1 loaded meshes, and soluble MCP-1 controls.

REFERENCES
1. K.A. Jones, et al. Int Urogynecol J Pelvic Floor Dysfunct, 20: 847-53, 2009.
2. M.T. Wolf, et al. Biomaterials, 35: 6838-49, 2014.
3. M.L. McDonald, et al. Biomacromolecules, 11: 2053-59, 2010.
4. E. Dekker, et al. J Immunol, 180: 3680-88, 2008.
5. N. Aumsuwan, et al. Langmuir, 27: 11106-10, 2011.

ACKNOWLEDGEMENTS
I would like to thank Dr. Bryan Brown for the opportunity to work in his lab, as well as Daniel Hachim, my PhD student mentor. Funding was provided by the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh.
THE EFFECTS OF CENTRAL AND PERIPHERAL VISUAL FIELD LOSS ON STANDING BALANCE IN ADULTS Meredith P. Meyer Human Movement and Balance Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: mpm75@pitt.edu INTRODUCTION The purpose of this study is to determine the effect of different patterns of visual field occlusion, peripheral or central, on standing balance. In particular, we ask which visual field dominates in balance tasks and how the effects differ between older and younger adults. An understanding of these effects, when compared with results from subjects with ocular pathologies, will allow identification of the actual mechanisms of balance impairment resulting from vision loss. Two types of vision loss of particular interest are age-related macular degeneration (ARMD), which leads to central visual field loss, and glaucoma, which causes peripheral visual field loss. This research will be valuable in developing strategies for reducing fall occurrences in older adults possessing these visual impairments.
After the first visit, custom contact lenses were ordered for each eye for each control subject for central and peripheral occluding conditions. All contacts were made with the subjects' prescription. The centrally occluding pair of contacts had an 8 mm central opacity. The peripherally occluding pair of contacts had unobstructed centers of 1 mm in diameter with an opaque periphery [1]. Prior to balance testing, subjects were required to have their eyes dilated in order to ensure a standard pupil size throughout all balance tasks. During the second visit, subjects completed a balance test. The test was adapted from the Sensory Organization Test (SOT) and completed using an Equitest posture platform (NeuroCom, Inc.) at the University of Pittsburgh Medical Center for Balance Disorders.
METHODS Seventeen subjects were recruited to participate in this study. Older subjects were between the ages of 65 and 85 (n=7), while younger subjects were between the ages of 25 and 35 (n=9). All subjects went through phone screenings to determine eligibility and informed consent was obtained prior to enrolling in the study. Subjects were screened for vestibular and visual conditions impacting balance during their first visit.
Figure 2: Balance test 1 [2]
Six SOT trials were completed for each lens condition using combinations of the conditions: eyes open (EO)/eyes closed (EC), fixed floor (FF)/sway-referenced floor (SRF), and fixed visual scene (FV)/sway-referenced visual scene (SRV). Trial conditions are described in Table 1.
Figure 1: Contact lenses with opacities [1]
Table 1. Description of balance test conditions.
Trial 1: EO, FF, FV
Trial 2: EC, FF
Trial 3: EO, FF, SRV
Trial 4: EO, SRF, FV
Trial 5: EC, SRF
Trial 6: EO, SRF, SRV
DATA PROCESSING Center of pressure (COP) movement was used to quantify postural sway. Using MATLAB, the first and last 5 seconds of each trial were cut to eliminate factors relating to the starting and stopping of the testing equipment. The remaining data were filtered with a 4th-order Butterworth filter with a cutoff frequency of 2.5 Hz. The data were also down-sampled to 20 Hz. A mixed linear regression analysis was used to determine the effects of age, lens condition, and their interaction on postural sway magnitude (RMS) and sway velocity (MV). Subject was included as a random effect. The statistical analysis was conducted within each SOT trial. Statistical significance was set at 0.05. RESULTS MV was more sensitive to changes in testing conditions, and thus was primarily used for analysis. During trials 2 and 5, where subjects' eyes were closed, there was no effect of lens condition, as expected. Age had a significant effect on postural sway, with older subjects swaying more. Overall, lens condition had no significant effect on sway in
young adults. However, in trials 4 and 6 (sway-referenced floor), peripheral occlusion had a significant effect on sway in both age groups (p-values = 0.0359 and 0.0889). Additionally, for older adults in trial 3, when wearing peripheral occlusion lenses there was a significant increase in sway over all other conditions (p-value = 0.0407). DISCUSSION Vision is an important factor in maintaining balance. The results suggest that peripheral vision has a greater impact on balance than central vision, especially when proprioception is unreliable, as it is with the sway-referenced floor condition. Also, there is a possibility that with increasing age, the importance of peripheral vision over central vision intensifies. Further research will compare results from healthy adults to patients with ARMD and glaucoma in order to determine the mechanisms used by patients to maintain balance. REFERENCES 1. A. C. Nau, "A contact lens model to produce reversible visual field loss in healthy subjects," Optometry, 2012. 2. https://spinoff.nasa.gov/spinoff1996/images/52.jpg ACKNOWLEDGEMENTS Funding was provided by the National Institute on Aging (R03 AG04374). Credit is deserved by both the study participants and the Eye & Ear Institute at the University of Pittsburgh Medical Center.
Figure 3: Graph of the average mean velocity per subject group per lens condition per trial. Only trials with statistically significant differences are shown.
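As an illustration of the COP processing described in DATA PROCESSING above, the following is a minimal sketch, not the authors' MATLAB code. The raw platform sampling rate is not stated in the abstract, so fs_raw below is an assumed parameter; the trimming, 4th-order 2.5 Hz Butterworth filter, 20 Hz down-sampling, and RMS/mean-velocity outcomes follow the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process_cop(cop, fs_raw=100.0, fs_out=20.0, cutoff_hz=2.5, trim_s=5.0):
    """cop: (N x 2) array of COP samples; fs_raw is an assumed sampling rate."""
    # Trim the first and last 5 s to remove start/stop artifacts of the platform
    n_trim = int(trim_s * fs_raw)
    cop = cop[n_trim:-n_trim]

    # 4th-order low-pass Butterworth filter with a 2.5 Hz cutoff
    b, a = butter(4, cutoff_hz / (fs_raw / 2.0), btype="low")
    cop = filtfilt(b, a, cop, axis=0)

    # Down-sample to 20 Hz
    step = int(fs_raw / fs_out)
    cop = cop[::step]

    # Sway magnitude (RMS about the mean position) and mean sway velocity (MV)
    rms = np.sqrt(np.mean(np.sum((cop - cop.mean(axis=0)) ** 2, axis=1)))
    mv = np.mean(np.linalg.norm(np.diff(cop, axis=0), axis=1)) * fs_out
    return rms, mv
```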
DEXAMETHASONE ATTENUATES MICROGLIAL ACTIVATION DURING BRAIN MICRODIALYSIS AS REVEALED BY 2-PHOTON MICROSCOPY Gregory J. Brunette, Takashi D.Y. Kozai, Andrea S. Jaquins-Gerstl, Alberto L. Vazquez, Adrian C. Michael, X. Tracy Cui Neural Tissue Engineering Laboratory University of Pittsburgh, PA, USA Email: gjb23@pitt.edu, Web: http://engineering.pitt.edu/cui INTRODUCTION Brain microdialysis (BM) sampling allows researchers to monitor chemical neurotransmission for clinical and research applications [1]. BM allows nonspecific sampling of small molecular weight compounds from brain tissue and has demonstrated stability over days and weeks, making it an optimal method for determining basal levels and long term changes in tissue neurotransmitter concentrations. However, BM probe implantation elicits an inflammatory tissue response, damaging the tissue directly in the probe tract and resulting in substantial neuronal loss, blood-brain barrier opening, and glial activation in the surrounding tissue, all of which alter the sampled tissue environment, potentially confounding observations [2]. Microglia, the immune cells of the central nervous system, are key mediators of the inflammatory tissue response. Under resting physiological conditions, microglia sample their extracellular environment for signs of injury and invasion. In their activated state, microglia secrete neurotoxic cytokines, signal neuronal apoptosis, and engulf viable neurons. Upon activation in response to injury, microglia assume globular morphology, retracting their fine processes and extending thick new processes in the direction of injury [3]. These morphological indicators are used to characterize microglial activation in image analysis. Dexamethasone (DEX) is a synthetic glucocorticoid drug commonly used for its anti-inflammatory effects. Previous histological studies have demonstrated that DEX may be a promising therapeutic against the BM-induced tissue response [4]. This study assesses the potential of DEX to preserve the physiological state of the BM-sampled microenvironment through its effect on microglial activation.
METHODS Concentric style BM probes (280 µm o.d.) constructed from hollow fiber dialysis membrane (Spectra-Por RC Hollow Fiber, Spectrum Laboratories, Inc., Rancho Dominguez, CA) and fused silica outlet lines (Polymicro Technologies, Phoenix, AZ) were implanted in 6 mice expressing green fluorescent protein (GFP) on microglia under the CX3CR1-GFP promoter (Jackson Labs, Bar Harbor, ME). Probes were perfused with artificial cerebrospinal fluid (aCSF: 142 mM NaCl, 1.2 mM CaCl2, 2.7 mM KCl, 1.0 mM MgCl2, 2.0 mM NaH2PO4, pH 7.40) or 10 µM DEX (APP Pharmaceuticals LLC, Schaumburg, IL) in aCSF at 0.610 µL/min. Two-photon microscopy was performed, and Z-stacks of brain tissue were collected hourly up to 7 h post-implantation, capturing the three-dimensional structure of microglia. Dialysate was not further analyzed for the purposes of this study. DATA ANALYSIS Microglia at least 50 µm below the surface of the brain were analyzed to exclude surface macrophages. Microglia distance from the outer wall of the probe was measured using the 'Measure' function in ImageJ (National Institutes of Health). Microglia were then binned by distance from the probe. To quantify activation, each cell was assigned a transition (T)-stage morphology index and a microglial (M)-directionality index [5] 7 h post-probe implantation. T-stage morphology was calculated by measuring the longest process originating from the hemisphere facing the probe (n) and the longest process originating from the hemisphere opposite the probe (f). M-directionality was assessed by counting the number of processes facing the probe (n) and the processes opposite the probe (f). The same equation was used to calculate both indices: Index = (f − n) / (f + n) + 1
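A minimal sketch of how this index could be evaluated is shown below; the function name and example values are illustrative only, not part of the original analysis.

```python
def activation_index(f, n):
    """Index = (f - n)/(f + n) + 1, with n = probe-facing hemisphere, f = opposite hemisphere.
    For T-stage, f and n are longest-process lengths; for M-directionality, process counts."""
    return (f - n) / (f + n) + 1.0

# A symmetric (ramified) cell gives 1; a cell with processes only toward the probe gives 0.
t_stage = activation_index(f=12.0, n=12.0)         # -> 1.0, ramified
m_directionality = activation_index(f=0.0, n=5.0)  # -> 0.0, fully activated
```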
For both indices, a value of 1 indicates a ramified cell; 0 indicates the cell is fully activated. RESULTS Microglia were binned by distance from the BM probe and their index values 7 h post-implantation were plotted (Figure 1). T-stage index values were significantly higher for microglia receiving DEX at distances 25-225 µm from the outer wall of the probe (Figure 1a). Similarly, M-directionality index values were significantly higher for DEX microglia 75-175 µm from the probe (Figure 1b). These higher index values (closer to 1) indicate that microglia receiving DEX are in a more ramified state than those receiving only aCSF.
Figure 1: (a) T-stage and (b) M-directionality index 7 h post-implantation values versus distance from BM probe. Data presented as mean ± SEM. Asterisk indicates significant difference between DEX and aCSF (t-test with *p < 0.05 considered significant).
DISCUSSION Continuous DEX retrodialysis resulted in significantly higher T-stage index values than aCSF at distances 25-225 µm from the probe. Furthermore, DEX M-directionality index values are significantly closer to the ramified value at distances 75-175µm from the probe. Immediately adjacent to the probe, however, DEX does not have a significant effect on M-directionality index values. This may indicate that, closest to the probe, where tissue disruption is most pronounced, DEX has limited ability to mitigate microglial activation. Additionally, it should be noted that DEX, a glucocorticoid, may have additional effects on tissue not assessed in this study. With effects mediated by the widely expressed glucocorticoid receptor, DEX may affect the activity of surrounding neurons and astrocytes, altering the results collected through BM. Nonetheless, these results suggest that DEX preserves the resting state of tissue during BM and may be a promising agent to improve the validity of BM sampling. REFERENCES 1. Anderzhanova, E. and C.T. Wotjak, Brain microdialysis and its applications in experimental neurochemistry. Cell and Tissue Research, 2013. 354(1): p. 27-39. 2. Kozai, T.D.Y., et al., Brain Tissue Responses to Neural Implants Impact Signal Sensitivity and Intervention Strategies. Acs Chemical Neuroscience, 2015. 6(1): p. 48-67. 3. Nimmerjahn, A., F. Kirchhoff, and F. Helmchen, Resting microglial cells are highly dynamic surveillants of brain parenchyma in vivo. Science, 2005. 308(5726): p. 1314-8. 4. Jaquins-Gerstl, A., et al., Effect of dexamethasone on gliosis, ischemia, and dopamine extraction during microdialysis sampling in brain tissue. Anal Chem, 2011. 83(20): p. 7662-7. 5. Kozai, T.D., et al., In vivo two-photon microscopy reveals immediate microglial reaction to implantation of microelectrode through extension of processes. J Neural Eng, 2012. 9(6): p. 066001. ACKNOWLEDGEMENTS Funding was provided by NIH grants 5R01NS062019, R21NS086107, the Swanson School of Engineering, and the Office of the Provost, University of Pittsburgh.
FABRICATING AND EVALUATING CARBON FIBER MICROELECTRODES FOR RECORDING AND STIMULATION OF MUSCLE ACTIVITY William McFadden, Xin Sally Zheng, Vasil E. Erbas, Vijay S. Gorantla, X. Tracy Cui, Takashi D. Y. Kozai. Neural Tissue Engineering Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: wem25@pitt.edu, Web: http://www.engineering.pitt.edu/cui/ INTRODUCTION Many neural processes, such as memory formation and learning, occur over a relatively long time frame. Conventional neural electrodes lose much of their accuracy over these time periods, due to multiple factors including glial cell encapsulation, micromovement in the brain, and neuron cell death. To address this problem, electrodes for long-term neural stimulation or recording must be small, flexible, and compatible with the brain’s immune system. One such system involves chronically implantable carbon fiber microthread electrodes [1]. While the efficacy of carbon fiber microelectrodes has been explored in the brain, new challenges exist in the peripheral nervous system. Carbon fiber microelectrodes have the potential to finely stimulate small muscle groups with a limited physiological footprint. However, the electrode’s strength and durability also need to be considered in the periphery where macromotion is substantially greater than the brain micromotion. Here, we evaluate the efficacy of carbon fiber based microcable electrodes for muscle implantation and stimulation in vivo. METHODS Strands of continuous multi-filament 0.0048 mm diameter carbon fiber (Goodfellow) were cut to lengths of 25-300 mm. The carbon fibers were then separated and individually affixed to the ends of 30 mm stainless steel wires using conductive Ag epoxy (World Precision Instruments) as previously established [1]. An array system based on previously developed carbon fiber electrode arrays [2] was also explored. To assemble longer carbon fiber microcables, bundles of 10-50 fibers were cabled at two rotations per 100 mm. Assembled fibers and microcables were insulated with 800 micron thick non-conductive Parylene-C Dimer (SCS Coatings) via chemical vapor deposition.
Once each electrode was coated, the stainless steel wire was re-exposed and the tip of each fiber was cut, exposing the active recording/stimulation site. Electrode sites were exposed using a razor or microscissors. Electrical characteristics of the recording sites were measured via electrochemical impedance spectroscopy and cyclic voltammetry (Autolab). Carbon fiber tips were coated with the conductive polymer PEDOT using a Gamry potentiostat to further lower electrode impedance, as previously established [1]. Microcable electrodes had low enough impedance that PEDOT deposition was not necessary. Carbon fiber microcables were implanted via a needle shuttle, and were used to stimulate innervated and 1 wk deinnervated lateral gastrocnemius muscle of 2.5% isoflurane-anesthetized male Sprague-Dawley rats (Charles River). Once the electrodes were implanted, a cathodic-leading symmetric biphasic pulse (pulse width: 0.5 ms) was used to drive muscle contraction. Stimulus amplitude was changed to identify the lowest current threshold for evoked visible muscle contraction. These results were compared to a Pt/Ir control microwire. All experimental protocols were approved by the University of Pittsburgh, Division of Laboratory Animal Resources and IACUC. RESULTS Single-fiber electrodes could not be implanted into muscle with current insertion methods due to their brittleness and the greater resistance of muscle to implantation compared to brain. However, microcable bundles with > 20 fibers were easily implanted. In early results, medium-bundle carbon fiber microcable electrodes (~25 fibers) were able to evoke visible muscle twitch at 70 µA for innervated and 230 µA for deinnervated muscles, while large microcables (>50 fibers) were able to activate the muscle
at 90 µA for innervated, and 600 µA for deinnervated muscles. In contrast, Pt/Ir microwire required 1,400 µA to activate deinnervated muscles. DISCUSSION While the use of carbon fibers presents many distinct advantages over conventional electrode arrays, it remains to be evaluated whether the benefits of carbon fiber microelectrodes justify the difficulty of manipulating and implanting the carbon fibers and adapting them to a usable format. REFERENCES 1. Kozai TDY, et al. Nature Mater. 2012. 11, 1065– 1073. 2. Guitchounts G, et al. J Neural Eng. 2013. 10. ACKNOWLEDGEMENTS The author would like to thank NIH 5R01NS062019, US ARMYW81XWH-13-C-0157, the Swanson School of Engineering, and the Office of the Provost for summer funding for living expenses, and all advisors associated with the summer undergraduate research program. The authors would also like to thank James R. Eles, and Lee Fisher for assistance with the in vivo experiment.
DESIGN OF CUSTOMIZED LOWER LEG SPECIMEN FIXTURES FOR 6 DOF ROBOTIC TESTING SYSTEM Laura E. Bechard, Kevin M. Bell, Macalus V. Hogan, Richard E. Debski Orthopaedic Robotics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: leb94@pitt.edu, Web: http://www.engineering.pitt.edu/labs/ORL/ INTRODUCTION High ankle sprains, or ankle syndesmosis injuries, account for 10% of all ankle sprains [1]. The most common mechanisms of injury are external rotation and hyper-dorsiflexion of the ankle joint. The ankle syndesmosis restrains tibiofibular motion, which is vital to ankle stability [2]. Previous studies of ankle syndesmosis injuries have relied on static trials and manually applied loads. Additionally, previous studies have utilized sectioned lower leg specimens, which disrupt the proximal syndesmosis. The MJT Model FRS2010 robotic testing system is capable of dynamic manipulation repeatable to 0.001 mm. The robotic testing system is currently used to study biomechanics of the knee, shoulder, and spine. However, the current system is incapable of rigidly fastening a full lower leg specimen for testing. The objective of this project was to design and test the rigidity of a customized clamping system for fastening a full lower leg specimen to the robotic testing system. DESIGN CRITERIA In addition to accommodating a full lower leg specimen, the customized lower leg specimen fixtures must not disrupt the syndesmosis or inhibit fibular motion. Within the size constraints of the robotic testing system, the clamps must be adjustable to specimen height without preventing motion of the robotic testing system. The customized lower leg fixtures must rigidly fasten both the tibia and the calcaneus. Expected tibiofibular motion is approximately 0.43-4.28 mm with inhibited fibular motion [1]. Therefore, in order to qualify the fixture as rigid, movement of the specimen and fixture must be less than 0.5 mm during testing. Additionally, the fixture must also prevent bending of the tibia. Movement of the medial malleolus with respect to the tibia mid-shaft must be less than 0.5 mm during testing.
FINAL DESIGN The final clamp design consists of both mass-produced and custom-designed components. The lower leg specimen is fastened to the robotic testing system with the proximal tibia inferior to the distal tibia. The proximal tibia is fastened with two screws to a custom-designed plate (Figure 1D), which is fastened directly to the base of the robotic testing system.
Figure 1: Final customized fixture design to fasten a full lower leg specimen to the robotic testing system. A) 8020 T-slotted aluminum construct. B) Lockable ball joint. C) Small tibia plate. D) Proximal tibia plate.
Two small custom-designed plates (Figure 1C) are fastened to the tibia approximately mid-shaft using bone screws. The plates minimize the amount of soft tissue dissection and increase stability of the bone-plate joint. The two plates are attached to lockable ball joints (Figure 1B) to allow multiple fastening options. The lockable ball joint is fastened to a vertical construct which consists of a custom base, 8020 T-slotted aluminum, and a custom adapter plate (Figure 1A). The ankle joint is manipulated through the calcaneus. Screws are inserted in the calcaneus through the talus. Approximately 4 cm of the head of the screw is left exposed. The entire screw-calcaneus complex is potted in Bondo putty (3M, Saint Paul, MN). The cylindrical form is fastened to the robotic testing
system using an existing clamp originally designed for knee joint testing (Figure 2).
Figure 2: Existing clamp used to fasten potted calcaneusscrew complex to robotic testing system.
RIGIDITY QUANTIFICATION Methods One fresh frozen lower leg (shank and foot) cadaveric specimen was fastened to the MJT FRS 2010 robotic testing system (Technology Services Ltd., Chino, Japan) using the fabricated fixtures. At neutral position, a mechanical digitizer (Faro Arm, Lake Mary, FL) was used to collect the locations of anatomical and clamp landmarks: calcaneus clamp bolt, calcaneus anatomical bolt, tibia clamp bolt, medial malleolus, and tibia mid-shaft bolt. The mechanical digitizer is accurate to 0.05 mm. The locations of the landmarks were recollected at various loading conditions representing normal ankle motion and injury states: 10 Nm external rotation at 0° flexion, 10 Nm external rotation at 10° dorsiflexion, 10 Nm external rotation at 30° plantarflexion, and 10 Nm dorsiflexion. In order to quantify rigidity, the three-dimensional distance between calcaneal landmarks and between tibial landmarks was calculated. MATLAB (MathWorks, Natick, MA) was used to determine the distances between the calcaneus clamp bolt and calcaneus anatomical bolt, the tibia clamp and the medial malleolus, the tibia clamp and the tibia mid-shaft bolt, and the medial malleolus and the tibia mid-shaft bolt. Results The change in distance between clamp and specimen
landmarks ranged from 0.00-0.46 mm. The average change in distance was 0.29 mm. The change in distance between the medial malleolus and the tibia mid-shaft ranged from 0.03-0.26 mm. The average change was 0.12 mm. Results are summarized in Table 1. CONCLUSION The change in distance between clamp and specimen landmarks met the design criteria by remaining under 0.5 mm for all loading conditions. The fixture also prevented bending of the tibia in all trials. Therefore, the clamp can be considered a rigid fixture for full lower leg specimens. The study is limited by the working volume that the mechanical digitizer is able to reach. Due to the size of the specimen, it was impossible to digitize a landmark on the proximal tibia. A proximal tibia landmark would have been beneficial in further validating the use of the fixture to prevent tibia bending. FUTURE DIRECTION The design of a full lower leg specimen fixture enables utilization of the MJT Model FRS2010 robotic testing system in ankle syndesmosis studies. In conjunction with a motion tracking system, tibiofibular motion in intact and injured states can be compared. Further, different types of syndesmosis surgical fixation methods can be investigated in vitro. The study can help improve rehabilitation and patient outcomes after ankle syndesmosis surgeries. REFERENCES 1. Markolf et al. FAI 33, 779-786, 2012. 2. Van Heest et al. J Bone Joint Surg 96, 603-613, 2014. ACKNOWLEDGEMENTS Funding was provided by the Department of Bioengineering and the Department of Orthopaedic Surgery.
Table 1: Change in three-dimensional distance between lower leg specimen and fixture.
Flexion | External Rotation (Nm) | Calcaneus clamp-specimen (mm) | Tibia clamp-medial malleolus (mm) | Tibia clamp-tibia mid-shaft (mm) | Medial malleolus-tibia mid-shaft (mm)
0° | 10 | 0.38 | 0.00 | 0.13 | 0.03
10° | 10 | 0.30 | 0.02 | 0.12 | 0.04
-30° | 10 | 0.46 | 0.16 | 0.18 | 0.15
10 Nm dorsiflexion | 0 | 0.24 | 0.24 | 0.10 | 0.26
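To illustrate the rigidity computation described in RIGIDITY QUANTIFICATION above, the following is a minimal sketch rather than the authors' MATLAB code; the landmark names and coordinates are illustrative placeholders.

```python
import numpy as np

def distance(p, q):
    # Euclidean distance between two digitized 3D landmarks
    return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))

def distance_changes(neutral, loaded, pairs):
    """neutral/loaded: dicts of landmark name -> (x, y, z); pairs: landmark-name pairs."""
    return {(a, b): abs(distance(loaded[a], loaded[b]) - distance(neutral[a], neutral[b]))
            for a, b in pairs}

pairs = [("calcaneus_clamp", "calcaneus_bolt"),
         ("tibia_clamp", "medial_malleolus"),
         ("tibia_clamp", "tibia_midshaft"),
         ("medial_malleolus", "tibia_midshaft")]
# The fixture is considered rigid if every change stays below the 0.5 mm criterion:
# rigid = all(d < 0.5 for d in distance_changes(neutral, loaded, pairs).values())
```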
Surface Strain in the Anterolateral Capsule of the Knee Stephanie L. Sexton1, Daniel Guenther1,2, Kevin M. Bell1, Sebastian Irarrazaval1, Ata A. Rahnemai-Azar1, Freddie H. Fu1, Volker Musahl1, Richard E. Debski1 1 Orthopedic Robotics Laboratory, Departments of Orthopedic Surgery and Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA 2 Trauma Department, Hannover Medical School (MHH), Hannover, Germany Email: sls180@pitt.edu, Web: http://www.engineering.pitt.edu/labs/ORL/ INTRODUCTION Knee injuries are one of the most common reasons for physician appointments. The most commonly injured ligament in the knee requiring surgery is the anterior cruciate ligament (ACL) [1]. ACL reconstruction is the sixth most common orthopedic surgical procedure in the US, with an estimated 150,000 ACL reconstructions performed annually [2]. However, injuries of the anterolateral capsule are often underdiagnosed in conjunction with more common ACL injuries. Underdiagnosis of anterolateral capsule injuries can lead to decreased rotational stability of the knee and early arthritis. Normal structure and function of the knee was restored in only 37% of patients undergoing ACL reconstruction [3]. Significant clinical interest exists in anterolateral capsule injuries in ACL-deficient knees and in potential surgical treatment of the capsule at the time of ACL surgery [4]. Knowledge of the function of the anterolateral capsule can lead to development of an injury model and more informed choices of repair. Repair of the anterolateral capsule could increase ACL surgery success. The purpose of this study was to determine the surface strain of the anterolateral capsule in response to multiple loading conditions in the ACL intact and deficient knee during 30°, 60° and 90° of flexion. It was hypothesized that the greatest surface strain in the anterolateral capsule will be found at a 90-degree flexion angle with a combined internal rotation torque and anterior tibial load. METHODS Six fresh frozen cadaveric knees (mean age 53.7 years, range 46-59 years) were dissected until the anterolateral capsule was clearly visible. Forty black markers were placed on the anterolateral capsule in a 5 x 8 grid beginning from Gerdy's tubercle posteriorly to the LCL insertion and from
the LCL insertion superiorly to the LCL origin. The specimens were then loaded using a robotic testing system (MJT Model FRS2010, Chino, Japan). The femur was rigidly fixed relative to the lower plate of the robotic testing system and the tibia was attached to the upper end plate of the robotic manipulator through a 6-degree-of-freedom universal force/moment sensor (UFS, ATI Delta IP60 (SI660-60), Apex, NC). The robot was used to apply loads to the knee at 3 flexion angles and a DMAS 7 Motion Capture System (Spica Technology Corporation, Haiku, HI) was utilized to track motion of the markers attached to the surface of the capsule. The loads were: Anterior Tibial Load, Combined Anterior Tibial Load and Internal Rotation Torque, External Rotation Torque, Combined External Rotation Torque with Varus Torque, Internal Rotation Torque, Combined Internal Rotation Torque with Valgus Torque, Posterior Tibial Load, Combined Posterior Tibial Load with External Rotation Torque, Valgus Torque, and Varus Torque. The ten loading conditions were applied at 30°, 60°, and 90° of knee flexion for the intact and ACL deficient knee. DATA PROCESSING Peak maximum principal strain was computed by comparing the 3D marker positions in a non-strained reference configuration to the loaded configurations using ABAQUS modeling software (Dassault Systemes, Velizy-Villacoublay, France). The peak maximum principal strains computed by ABAQUS were averaged over the 6 specimens and compared with Student's t-tests. Two-tailed t-tests with Bonferroni corrections were utilized to determine significance between flexion angles at each loading condition in both the ACL deficient and ACL intact states. A one-tailed t-test compared the ACL deficient versus ACL intact state for each loading condition at each flexion angle.
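As a sketch of the statistical comparison just described, the snippet below assumes the per-specimen peak strains are stored in a dictionary keyed by (ACL state, flexion angle, load) and that paired tests are appropriate because the same six specimens were tested in every condition; both the data structure and the pairing are assumptions for illustration, not the authors' exact analysis.

```python
from scipy import stats

def compare_flexion_angles(strain, acl_state, load, alpha=0.05):
    """Two-tailed paired t-tests across flexion angles with a Bonferroni correction."""
    pairs = [(30, 60), (30, 90), (60, 90)]
    corrected_alpha = alpha / len(pairs)          # 0.05 / 3 = 0.0167, as in DISCUSSION
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(strain[(acl_state, a, load)], strain[(acl_state, b, load)])
        results[(a, b)] = (p, p < corrected_alpha)
    return results

def acl_deficient_greater(strain, angle, load):
    """One-tailed test: is strain greater in the ACL-deficient than the intact state?"""
    t, p = stats.ttest_rel(strain[("deficient", angle, load)], strain[("intact", angle, load)])
    return p / 2 if t > 0 else 1 - p / 2
```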
RESULTS Each specimen was found to have a different loading condition that produced the highest peak maximum principal strain. For all specimens, the highest peak maximum principal strain of the anterolateral capsule was found among four loading conditions: Anterior Tibial Load, Combined Anterior Tibial Load with Internal Rotation Torque, Internal Rotation Torque, and Combined Internal Rotation Torque and Valgus Torque (Figure 1). The highest peak maximum principal strains were found in both the ACL intact and ACL deficient states. The ACL deficient state was generally found to have higher maximum principal strain than the ACL intact state. Finally, higher flexion angles correlated with higher strains in the majority of loading conditions (Figure 2). For example, one maximum was found at 60° flexion with Internal Rotation Torque in the ACL deficient state.
Figure 1. Mean Maximum Principal Strain at flexion angle of 30 degrees. AT = Anterior translation. ATIR = Anterior Translation + Internal rotation. ER = External rotation. ERVR = External rotation + Varus torque. IR = Internal rotation. IRVG = Internal rotation + Valgus torque. PT = Posterior translation. PTER = Posterior translation + External rotation. VG = Valgus torque. VR = Varus torque. Graph is representative of difference between loading conditions and ACL state at the same flexion angle. Blue = ACL deficient Red = ACL Intact. Statistically significant difference (*p<0.05)
Figure 2. Mean Maximum Principal Strain during Anterior Tibial Load. Graph is representative of difference between flexion angle and ACL state during same loading condition with statistical difference between 30 and 60 degrees and 30 and 90 degrees. Blue=ACL deficient and Red = ACL Intact
DISCUSSION When comparing the ACL deficient and intact states, there was a significant difference (p < 0.05) in 14 of the 30 compared states. In the ACL deficient state, comparisons of the flexion angles 30° vs 60° and 30° vs 90° displayed statistical significance after application of a Bonferroni correction (p < 0.0167). The ACL intact knee flexion angles displayed few statistically significant differences, in only 4 of 30 compared states. When the ACL is deficient, the strains on the anterolateral capsule are greater compared to when the ACL is intact because the capsule is acting as a primary restraint. Also, as the flexion angle of the knee increases from 30° to 60° and from 30° to 90°, the strains also increase as more load is accepted by the capsule when the ACL is deficient. Overall, when the ACL is intact, less strain is placed on the anterolateral capsule. Finally, the values of the peak maximum principal strain were found to be much greater than the ultimate strains that can be withstood by tendons and ligaments. In conclusion, the anterolateral capsule in the ACL deficient state with higher flexion angles and four of the ten tested loading conditions results in the highest surface strain. Understanding the loading conditions that most greatly strain the anterolateral capsule in knees with and without ACL damage will improve injury models. Knowledge of maximum principal strain patterns will also help surgeons to more accurately repair damage when both the ACL and the anterolateral capsule are damaged [4,5]. REFERENCES 1. Miyasaka et al. Am J Knee Surg., 4, 43-48, 1991. 2. Spindler KP, Wright RW. N Engl J Med., 359(20):2135-2142, 2008. 3. Biau et al. Clin Orthop Relat Res., 458:180-187, 2007. 4. Guenther et al. Anterolateral rotatory instability of the knee. Knee Surg Sports Traumatol Arthrosc., 2015. 5. Moore et al. Ann Biomed Eng, 38(1):66-76, 2010.
ACKNOWLEDGEMENTS Thanks to the Department of Bioengineering and the Department of Orthopedics at the University of Pittsburgh for partial funding.
QUANTIFYING TIBIOFIBULAR KINEMATICS USING THE DMAS7 MOTION TRACKING SYSTEM TO INVESTIGATE SYNDESMOTIC INJURIES Joseph M Takahashi, Kevin M Bell, MaCalus V Hogan and Richard E Debski Orthopaedic Robotics Laboratory, Departments of Bioengineering and Orthopaedic Surgery University of Pittsburgh, PA, USA Email: joseph.takahashi@pitt.edu, Web: http://www.engineering.pitt.edu/labs/ORL/ INTRODUCTION Injuries to the ankle syndesmosis, commonly referred to as high ankle sprains, typically occur as a result of external rotation or hyper-dorsiflexion of the foot, which may disrupt the syndesmotic ligaments and compromise the integrity of the distal tibia, fibula, and talus [1]. Surgical treatment seeks to stabilize the distal tibiofibular structure to allow the syndesmosis to heal. Because an increase in fibular displacement relative to the tibia provides an effective method of identifying a syndesmotic injury, quantifying such displacements is of immense clinical interest [2]. To quantify the rigid body motion between the tibia and fibula, a robotic testing system was used to manipulate the foot relative to the tibia; however, an external system was necessary to track fibular motion. The objective of this study was to develop a protocol to measure fibular motion using a motion capture system (DMAS7, Spicatek, HI) and to assess the accuracy and repeatability of the methodology. METHODS In order to assess the DMAS7's ability to track kinematics:
• A mechanical digitizer was designed to register points in the DMAS7 system in order to create an anatomical coordinate system for the tibia and fibula.
• Marker triads were designed to track kinematics.
• After validating the system, a practice test on a specimen was performed to determine a baseline for physiologic kinematics of the fibula.
Develop Mechanical Digitizer The mechanical digitizer was validated based on its accuracy and repeatability. To evaluate repeatability, a bolt was digitized ten times and the difference between the x,y,z location of each digitized point and the average location of the ten points was determined. Accuracy was determined by digitizing a bolt and
then translating the bolt 2 mm (the distance was determined using a linear translator accurate to 10 µm). Develop Marker Triads A marker triad consisting of four contrast-based markers (the fourth marker for redundancy) was fixed to the robotic testing system, which displaced the triad in three trials of 50 mm (large displacement) and rotated the triad in three trials of 15° (large rotation). The camera system was validated based on its ability to accurately record these known displacements. Track Physiological Motion A cadaveric specimen with the syndesmosis intact was mounted on the robotic testing system, preserving the syndesmosis. Marker triads were rigidly attached to the bone (Figure 1). The medial and lateral malleoli and two points located inferiorly and superiorly along the tibial crest were digitized to define the tibia anatomical coordinate system. Four testing conditions were applied, each repeated three times to determine repeatability: 10 Nm external rotation (ER) torque at 0° of flexion, 10 Nm ER torque at 30° of plantarflexion, 10 Nm ER torque at 10° of dorsiflexion, and 10 Nm dorsiflexion torque applied to the ankle. The DMAS7 system was used to record the position of the marker triads at the endpoints of each testing condition.
Figure 1: Setup to validate the marker triads; triads attached to the tibia and fibula.
Data Analysis All video data were processed in the DMAS7 software to extract the x,y,z coordinates of the tracked markers. All marker data were analyzed in Matlab (MathWorks Inc., MA). For both the mechanical digitizer and the
marker triads, error associated with displacement and rotation was calculated as the absolute difference of the observed motion and the known motion. VALIDATION CRITERIA The average fibular displacement reported in the literature is 2.35 mm [3]. Therefore, the validation criterion for accuracy and repeatability was set at ≤0.20 mm, an order of magnitude better. This validation criterion was applied to both the mechanical digitizer and the tracking of marker triad kinematics. RESULTS Mechanical Digitizer The accuracy of the digitizer was determined to be 0.15 mm and its repeatability to be 0.11 mm. Marker Triads The error of the camera system for quantifying kinematics was found to be ≤0.66 mm for large displacements and ≤0.24 degrees for large rotations (Figure 2).
Figure 2: Error in translation (mm) and rotation (deg.) associated with large displacements (Lg. Disp.) and large rotations (Rot.).
Physiological Motion For each physiological testing condition, variation in translation ranged from 0.04 mm to 0.69 mm (Table 1). In all testing conditions, medial-lateral (ML) translation was observed, ranging from 0.66 to 1.51 mm (Table 1). Additionally, at least a 0.5 mm posterior displacement was observed during all testing conditions, which was especially prevalent during external rotation at 0° of flexion.
Table 1: Fibular translations due to external rotation and maximum dorsiflexion.
Translation direction | 30° Plantarflex. + 10 Nm ER | 0° Flex. + 10 Nm ER | 10° Dorsiflex. + 10 Nm ER | 10 Nm Dorsiflex.
ML (mm) (+ = lateral) | 1.36±0.06 | 1.51±0.06 | 1.60±0.08 | 0.66±0.04
AP (mm) (+ = anterior) | -4.45±0.36 | -6.02±0.38 | -4.91±0.69 | -0.55±0.30
SI (mm) (+ = superior) | -0.22±0.28 | -0.39±0.08 | -0.19±0.07 | 0.34±0.17
DISCUSSION The design criterion established for accuracy and repeatability of the mechanical digitizer was 0.20 mm. The mechanical digitizer met the established design criterion for both accuracy and repeatability. The error of tracking marker triads for large displacements and large rotations was greater than the established design criterion. However, 0.66 mm error for large translations and rotations results in 1.32% and 1.86% error, respectively, which was deemed acceptable at this time. Because expected fibular motion is small, further analysis will investigate the system accuracy for small translations and rotations. Physiological Motion Throughout all testing conditions, the variation between trials was on average 0.21 mm, indicating the ability to perform physiological testing conditions with a high degree of repeatability. In all testing conditions performed, medial-lateral displacement (mortise widening) was observed. This
is consistent with some literature findings that report lateral displacement of the fibula [2] during syndesmotic injuries. While all testing conditions resulted in larger posterior displacements than reported in the literature, a limitation of this study is that only one specimen was tested. Future Directions Upon completion of additional validation of the DMAS7 system, this methodology can be used in the analysis of additional ankle specimens to investigate syndesmotic injuries and repair procedures. REFERENCES 1. Van Heest, T., Injuries to the Ankle Syndesmosis, JBJS, 2014:96 603-13. 2. Porter D., OAJSM, 2014:5 173-182. 3. Markolf, K., Syndesmotic Injuries, FAI, 2012:33 779-786. ACKNOWLEDGEMENTS This project was supported by the Department of Bioengineering and the Department of Orthopaedic Surgery.
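As an illustration of how rigid-body motion of a marker triad could be recovered from the tracked coordinates described in METHODS, the sketch below uses a standard SVD-based (Kabsch) fit; this is a generic approach for reference and not necessarily the processing implemented in the DMAS7 software or the authors' Matlab code.

```python
import numpy as np

def rigid_transform(ref, cur):
    """ref, cur: (k x 3) arrays of the same markers (k >= 3) in two frames.
    Returns rotation R and translation t mapping ref onto cur."""
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)     # cross-covariance of centered markers
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cur_c - R @ ref_c
    return R, t
```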
BICARBONATE HEMODIALYSIS FOR LOW-FLOW CO2 REMOVAL: DIALYSATE RECYCLING Lindsey Marra, Alexandra May, and William J. Federspiel Ph.D. Medical Devices Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: ljm70@pitt.edu, Web: http://www.mirm.pitt.edu/medicaldevices/ INTRODUCTION Chronic obstructive pulmonary disease (COPD) and acute respiratory distress syndrome (ARDS) are substantial burdens on health care due to their high cost of treatment and prevalence in American patients.[1] In 2012, 15.6 million adults in the United States reported that they had been diagnosed with COPD.[2] For COPD, CO2 removal may negate the need for mechanical ventilation.[3] For ARDS, CO2 removal may allow for lung-protective ventilation.[4,5] There is a significant need for a less invasive treatment to manage hypercapnia and respiratory acidosis in patients experiencing acute respiratory failure. Severe cases of ARDS and COPD are most often the cause of acute respiratory failure. The use of continuous renal replacement therapy (CRRT) has potential as a relatively noninvasive, low-flow CO2 removal method via the dialysis of bicarbonate ion from blood; 95% of the CO2 in blood is in the form of the bicarbonate ion, which hemodialysis targets.[6] In our previous work, we demonstrated that, using a specially designed dialysate with no bicarbonate ion, bicarbonate hemodialysis resulted in clinically significant levels of CO2 removal from blood. In this study, we evaluate in a scaled-down model system whether recycling of dialysate in CRRT using a closed loop with a membrane oxygenator for CO2 removal can also produce clinically significant levels of CO2 removal. METHODS A phosphate buffered saline solution with physiological pCO2 and bicarbonate ion concentration was used to replicate blood. As shown in Figure 1, "blood" and dialysate flow countercurrent to each other through an M10 dialyzer (Gambro, Sweden). While blood is discarded after a single pass, dialysate flows in a closed loop through a customized mini-oxygenator (O2 sweep gas 0.2 L/min, surface area 0.04 m2), and is then recycled back to the dialyzer. Each of the four experimental groups (n = 3 each) used different blood and dialysate flow rates, as listed in Table 1. As indicated
in Figure 1, samples of blood were taken at the inlet and outlet of the dialyzer and samples of dialysate were taken at the inlet and the outlet of the oxygenator. From these samples, pCO2 (mmHg) was measured with the RAPIDlab 248 (Siemens, Germany). O2 sweep gas flows through the oxygenator countercurrent to the dialysate. CO2 concentration (ppm) in the sweep gas was measured with the WMA-4 CO2 Analyzer (PP Systems, MA) and used to calculate the CO2 removal rate from the dialysate (mL/min).
Figure 1. This figure depicts the schematic of the experimental test loop. DATA PROCESSING CO2 concentration (ppm) data were used to calculate the CO2 removal rate (mL/min) in each experimental group. This calculated value is derived from the multiplication of the sweep gas flow rate, a temperature correction factor, and the concentration of CO2 in the sweep gas. We calculated the average and standard deviation of the CO2 removal rate for each experimental group. The average and standard deviation of the pCO2 for each sample location in each experimental group were calculated. The standard deviation was used to assess the significance of the differences between each group. The dialyzer and oxygenator devices used in this experiment are 22 times smaller than clinical pediatric devices. In the results the CO2 removal rate from the dialysate will be evaluated by scaling the rate by 22. We make the assumption that these devices scale linearly.
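A minimal sketch of the removal-rate calculation described in DATA PROCESSING is shown below; the function name, the temperature correction value, and the example CO2 reading are illustrative assumptions, while the 200 mL/min sweep gas flow and the factor of 22 come from the text.

```python
def co2_removal_rate(sweep_flow_ml_min, co2_ppm, temp_correction=1.0):
    """CO2 removal rate (mL/min) = sweep gas flow x temperature correction x CO2 fraction."""
    co2_fraction = co2_ppm / 1.0e6               # ppm -> volume fraction
    return sweep_flow_ml_min * temp_correction * co2_fraction

rate = co2_removal_rate(sweep_flow_ml_min=200.0, co2_ppm=1500.0)  # hypothetical reading
scaled_rate = 22.0 * rate   # projected rate for pediatric-scale devices, assuming linear scaling
```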
RESULTS Dialysate recycling does achieve measurable CO2 removal (Figure 2). The CO2 removal rate increases with an increase in either the blood or the dialysate flow rate. The variability in the data alone does not explain the increases observed in the CO2 removal rate. The partial pressure of CO2 measurements at key locations in the hemodialysis loop confirm that CO2 is being removed by the devices used in this study. Table 1 details these measurements, including the average and standard deviation for each experimental group. The dialyzer is able to remove a significant amount of CO2 from the blood. Once this CO2 is transported into the dialysate, much of it is removed by the oxygenator. When the system is scaled up by a factor of 22 (corresponding to pediatric dialyzers and oxygenators), CO2 removal rates between 6.6 and 9.0 mL/min would be possible with clinical pediatric devices.
Figure 2. Higher dialysate flow rates achieve a greater rate of CO2 removal from the dialysate. BF: blood flow rate. DF: dialysate flow rate.
DISCUSSION While it is promising to see a reduction in dialysate and blood pCO2, our approach would not achieve a clinically relevant CO2 removal rate (80 mL/min) using an oxygenator alone.[4] Dialysate recycling will require the efficient conversion of bicarbonate to CO2 and removal of CO2 in the dialysate loop. A limitation of this study is that a direct method of measuring bicarbonate concentration was not available. While the partial pressure of CO2 data are useful, a more complete analysis will involve an additional method to measure bicarbonate concentration. Future studies will involve a packed bed CO2 scrubber that incorporates the enzyme carbonic anhydrase for improved conversion of bicarbonate to CO2 and greater CO2 removal. Materials are being evaluated for their potential as packing in this device. This will improve the feasibility of dialysate recycling and bicarbonate hemodialysis as a potential method for low-flow CO2 removal and the treatment of patients with COPD and ARDS.
REFERENCES 1. Halbert et al. Eur Respir J., 3, 523-532, 2006. 2. American Lung Association, May 2014. 3. Blankman et al. Acta Anaesthesiol Scand., 2015. 4. Hickling et al. Crit Care Med., 10, 1568-78, 1994. 5. Lund et al. Curr Respir Care., 3, 131-138, 2013. 6. Comroe et al. "Physiology of Respiration," Year Book Medical Pub., 1979. ACKNOWLEDGEMENTS The Swanson School of Engineering, the Office of the Provost, the McGowan Institute for Regenerative Medicine, and NIH grant R01HL117637-03. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Table 1: Partial pressure of CO2 in the blood (mmHg) at the inlet and outlet of the dialyzer and in the dialysate at the inlet and outlet of the oxygenator. Flow rates are in mL/min; pCO2 values are average ± standard deviation (mmHg), n = 3.
Group | Blood flow | Dialysate flow | Blood dialyzer inlet | Blood dialyzer outlet | Dialysate oxygenator inlet | Dialysate oxygenator outlet
Group 1 | 9 | 18 | 48.8 ± 3.94 | 21.6 ± 1.14 | 22.0 ± 0.83 | 6.80 ± 0.59
Group 2 | 9 | 28 | 48.5 ± 2.14 | 24.0 ± 1.96 | 22.5 ± 0.92 | 11.3 ± 1.35
Group 3 | 13 | 18 | 51.2 ± 2.50 | 36.6 ± 0.86 | 32.2 ± 0.17 | 24.2 ± 2.65
Group 4 | 13 | 28 | 48.8 ± 3.37 | 37.0 ± 0.37 | 31.2 ± 0.98 | 19.8 ± 2.19
IDENTIFYING NEURONAL PATHWAYS FOR GENERATING SACCADES TO STATIONARY TARGETS Luke Drnach, Uday Jagadisan and Neeraj Gandhi Cognition and Sensorimotor Integration Lab, Department of Bioengineering University of Pittsburgh, PA, USA Email: ljd32@pitt.edu INTRODUCTION The ability of foveate animals to locate and identify objects in their environment is critical for survival. Orientation to objects of interest is achieved through activation of appropriate neural circuits that generate a high-velocity eye movement (saccade) to align the target with the fovea [1]. Previous studies have demonstrated that the superior colliculus (SC) in the subcortex plays a significant role in generating saccades [2]. The superficial layers of SC are known to process visual information while the intermediate layers are involved in visuomotor processing; however, the functional network structure of the saccade-generating neural circuits within the SC remains largely unclear. We hypothesize that bidirectional communication between the superficial and intermediate layers plays an important role in the generation of saccades. To test our hypothesis, we performed coherence and Granger causality analyses on local field potential (LFP) recordings from SC to elucidate the directional influences among the connections within the SC. METHODS A 16-contact linear electrode array was used to simultaneously record local field potentials (LFPs) in the SC of a rhesus macaque monkey (male, Macaca mulatta) performing standard saccade tasks to stationary targets. All experiments were performed in accordance with the institutional animal care guidelines at the time. Channels 6-16 of the array recorded from the putative intermediate layers containing visuomotor neurons; the remaining channels recorded from the superficial layers containing visual neurons. 118 trials were recorded and analyzed. DATA PROCESSING LFPs were low-pass filtered at 250 Hz and digitized at 1 kHz. Recordings were analyzed from 400 ms before to 600 ms after the animal received a cue to make a saccade to the target (GO cue). LFPs from
each channel were normalized by subtracting the ensemble mean and dividing by the ensemble standard deviation. All pairwise combinations of channels were analyzed by fitting a multivariate autoregressive (MVAR) model to the data in successive 50 ms windows. For each window, model order was selected by minimizing the Akaike Information Criterion and model parameters were estimated via the Levinson-Wiggins-Robinson algorithm [3]. The transfer functions were computed from the MVAR model and averaged to estimate the power spectral density matrix of the LFPs. Granger causality spectra were computed from the power spectral density matrix according to:
f_{Y→X}(ω) = ln[ |S_XX(ω)| / |S_XX(ω) − H_XY(ω) Σ_{Y|X} H_XY(ω)*| ]
where f_{Y→X} is the Granger influence of channel Y on channel X, S_XX is the power spectrum of channel X, H_XY is the transfer function between the channels, and Σ_{Y|X} is the partial covariance matrix [4]. Coherences between channels were also calculated from the power spectral density matrix. Statistical significance thresholds for the coherence and causality spectra were assessed via a permutation procedure and were corrected for multiple comparisons. RESULTS Granger causality analyses revealed significant causal connections between all pairs of channels within SC; however, the greatest causal influences were observed for pairwise connections from channels 9-12 to channels 1-7. Figure 1 illustrates the Granger causal influences between channel 12 (intermediate layer) and channel 3 (superficial layer) and is representative of the Granger causality spectra observed for the mentioned connections. For the connection from channel 12 to channel 3, the Granger causal influence is greatest during the 300
ms before the GO cue (the motor planning epoch) and in frequencies <50 Hz, indicating a directional influence from the intermediate layers of the colliculus to the superficial layers. By comparison, the Granger causality spectrum from channel 3 to channel 12 demonstrates little causal influence within the same window. Coherence analyses also revealed significant coherences between all site pairs in the frequencies <50 Hz. Figure 2 illustrates the coherence between channels 3 and 12 and is representative of the coherences between individual site pairs.
Figure 1: Granger causality time-frequency spectra for the causal influence from channel 12 to channel 3 (top) and from channel 3 to channel 12 (bottom). The colors indicate the Granger causality values and time index 0 indicates the GO cue. Granger causality values greater than 0.01 are significant at the corrected significance level of p<0.0002.
Figure 2: Time-frequency coherence spectrum between channels 3 and 12. Colors indicate coherence values and time index 0 corresponds to the GO cue. Coherence values of 1 indicate perfect coherence; coherences of 0 indicate no coherence. Coherences greater than 0.03 are significant at the corrected significance level of p<0.0002.
DISCUSSION The strong coherences between the intermediate and superficial layers indicate the presence of a functional connection between the layers; however, coherence analysis does not indicate in which direction the information is transferred. The causal connection between the intermediate layers and the superficial layers indicates that the intermediate layers pass low-frequency information to the superficial layers during movement planning. Additionally, the lack of a reciprocal connection indicates that the information transfer is mainly one-way. The Granger causality spectra suggest that the visuomotor neurons in the intermediate layer inform the visual neurons in the superficial layer of the planned movement. Together, the shared spectral features between the coherence and Granger causal spectra support the notion that motor planning information is transferred between SC layers in the 0-50 Hz frequency range. Our findings indicate that causal connections exist throughout the layers of the superior colliculus during motor planning. The results are consistent with literature on the role of the SC in generating saccades. Future analyses will focus on studying the connection dynamics during different epochs, i.e., during the visual response and during motor execution. Additionally, future studies will employ conditional Granger causality analysis to distinguish between indirect and direct connections within SC. REFERENCES 1. Sheliga et al. Exp Brain Res. 98, 507-522, 1994. 2. Schiller and Stryker. J. Neurophysiol. 35, 915-924, 1972. 3. Marple. Prentice Hall, 1987. 4. Barnett and Seth. J. Neurosci. Methods 223, 50-68, 2014. ACKNOWLEDGEMENTS Experiments were performed at the Eye & Ear Institute in the University of Pittsburgh Medical Center. Funding was provided by the Swanson School of Engineering, the Office of the Provost, and the Cognition and Sensorimotor Integration Lab at the University of Pittsburgh.
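As a concrete illustration of the spectral Granger measure defined in DATA PROCESSING above, the bivariate sketch below assumes the MVAR fit has already produced, at each frequency, a 2x2 transfer function H and a 2x2 residual covariance Sigma (channel order [X, Y]); it is a minimal reference implementation of the formula, not the authors' analysis code, and a full analysis would loop over windows and channel pairs.

```python
import numpy as np

def granger_y_to_x(H, Sigma):
    """H: (F, 2, 2) complex transfer functions over F frequencies; Sigma: (2, 2) covariance.
    Returns f_{Y->X}(w), one value per frequency."""
    # Spectral density matrix S(w) = H(w) Sigma H(w)^*
    S = H @ Sigma @ np.conj(np.transpose(H, (0, 2, 1)))
    Sxx = S[:, 0, 0].real
    # Partial covariance of Y given X: Sigma_YY - Sigma_YX Sigma_XX^-1 Sigma_XY
    sigma_y_given_x = Sigma[1, 1] - Sigma[1, 0] * Sigma[0, 1] / Sigma[0, 0]
    Hxy = H[:, 0, 1]
    denom = Sxx - (np.abs(Hxy) ** 2) * sigma_y_given_x
    return np.log(Sxx / denom)
```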
EVALUATION AND OPTIMIZATION OF DRUG RECOGNITION SYSTEM FOR SIMULATION-BASED LEARNING Michael R. Adams, Douglas A. Nelson Jr., BSE, Joseph T. Samosky, PhD Simulation and Medical Technology R&D Center, Department of Bioengineering University of Pittsburgh, PA, USA Email: mra63@pitt.edu INTRODUCTION Adverse drug events (ADEs) are one of the most common preventable medical errors [1]. Simulation-based training has been shown to decrease the number of errors observed during medication administration by an order of magnitude [2]. Simulation also affords the opportunity to show clinicians the consequences of ADEs in a safe way without putting actual patients at risk. A novel simulated drug recognition system has been previously developed [3] as part of the BodyExplorer system [4]. The drug recognition system measures what simulated medications are administered to a simulated patient, and the BodyExplorer system automatically responds to the injected drug.
METHODS The drug recognition system leverages the conductivity of salt solutions. The simulated drugs are salt solutions of varying concentrations. The solution passes through an electrode chamber where an alternating current is applied across the electrodes. The ions in solution allow the current to pass through the water, and the conductance is measured based on the current induced across the electrode, which is converted to a voltage, amplified, rectified, RC-filtered, and then read digitally into a computer through an Arduino Nano interface (v3.0, ATmega328 microcontroller). Once read into the computer, there is further software filtering using an averaging buffer.
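The sketch below illustrates the software-side smoothing and classification step just described; the buffer length, the concentration windows, and the helper names are illustrative assumptions rather than the actual BodyExplorer implementation.

```python
from collections import deque

class AveragingBuffer:
    """Fixed-length moving average over raw conductivity readings from the Arduino."""
    def __init__(self, length=20):
        self.samples = deque(maxlen=length)

    def update(self, raw_value):
        self.samples.append(raw_value)
        return sum(self.samples) / len(self.samples)

def classify(conductivity, windows):
    """windows: list of (low, high, label) conductivity ranges for simulated drugs."""
    for low, high, label in windows:
        if low <= conductivity <= high:
            return label
    return "unknown"
```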
To identify opportunities to incorporate this technology in the classroom, meetings were held with 15 faculty, 3 deans, and more than 20 students at the University of Pittsburgh School of Nursing. Based on responses from these meetings, it was determined that the BodyExplorer drug recognition system would be ideal for the medication administration safety workshop offered to nurse anesthesia students. Due to the hands-on nature of medication administration, students could greatly benefit from the more immersive educational experience provided by this simulation-based learning technology.
The hardware in the drug recognition system consists of a custom-designed, printed circuit board (PCB), an electrode cell, a flow meter, and an Arduino Nano.
The medication administration safety workshop focuses on the five most frequently used drugs in the operating room. To serve effectively in its role, the drug recognition system must recognize not only these five drugs, but also varied concentrations within each drug.
Analysis of data from the injection trials revealed some conductivity value drift over time. Since this might reduce the number of conductivity levels that could be recognized by the system, more trials were conducted to determine the cause. Tests were performed examining the salt solutions, electrode cell, circuit components, and the software that inputs and smooths the data.
To ensure that the drug recognition system would meet the curricular needs determined by faculty interviews, it was necessary to evaluate and optimize the drug recognition system.
To verify the drug recognition systemâ&#x20AC;&#x2122;s functionality, we first tested the voltage signal at each component in the drug measurement circuit with an oscilloscope (Agilent MSO-X 2004A). This was done to ensure each component was behaving as intended. For a more comprehensive analysis, more than 95 drug injection trials were performed and data regarding system performance was collected and analyzed.
DATA PROCESSING Data was analyzed in Excel and Matlab. A Matlab script was created to filter raw data collected during
Both raw and filtered data were imported into Excel for organization, graphing, and analysis. Multiple iterations of the same trial were compared and significant data trends identified. RESULTS Observation of the voltage output signal of the conductance measurement circuit showed that the output of the full wave rectifier was not the signal expected. Instead of reflecting the negative values of the AC wave to be positive, the negative portions of the wave were being filtered out completely. This yielded a positive AC wave signal with only half the expected values. After troubleshooting each component in the rectifier circuit, two resistor values were adjusted yielding a doubling of the rectifier output and achievement of the designed performance. After circuit fine tuning and adjustment, salt concentrations between 0 g/L and 250 g/L were tested, and it was determined that 1-6 g/L produced the most reliable conductivity values. Trial data showed that the drug recognition system produced the most stable conductivity values when the gain was set to amplify the signal such that the highest concentration would correspond to the maximum possible value readable by the Arduino. To verify that the drug recognition system was now functioning properly, another injection trial was completed. The raw data was filtered to show only the portions of injections expected to produce a steady state (delta conductivity < 1 and flow rate > 1 mL/s). The results graphed in Figure 1 show clearly defined, stable conductivity values for each concentration. Nine more trials were performed and each verified that the system was functioning reliably. DISCUSSION A goal of this project was to verify that the drug recognition system could identify the five drugs involved in the medication administration safety workshop.
[Figure 1 plot: Steady State Conductivity Values (1-6 g/L); Arduino conductivity value (0-1200) versus time (0-450 seconds), with flow rate and conductivity traces]
Figure 1: The orange lines represent the conductivity value read by the Arduino. The blue lines represent flow rate.
Figure 1 shows that the system can identify six distinct concentrations, and the data suggest the ability to recognize even more. Future work will finalize the setup for the medication administration workshop and verify system performance for the specific drug injection scenarios to be employed. This will include defining exact conductivity windows for each simulated medication. Additionally, based on the interviews with the Nursing School faculty, an area for continued exploration will be the ability to identify not only simulated drugs, but also to recognize different concentrations of each drug. This may involve measuring another property of the simulated drug solutions or expanding the range through which salt concentrations are reliably measured. More work will be done in this area to finalize a solution.
REFERENCES
[1] Kohn LT, Institute of Medicine, 2000
[2] Ford DG, Intensive Care Medicine, 2010, 36(9): 1526-31
[3] Samosky JT, et al, Proceedings TEI '12, 2012, 263-270
[4] Samosky JT, MMVR 19, 2012, 430-432
ACKNOWLEDGEMENTS This work was supported by a Coulter Translational Research Partners II Award, University of Pittsburgh, Department of Bioengineering, Swanson School of Engineering, and the Office of the Provost.
Ridge Matching Based on Maximal Correlation in Transform Space Mingzhi Tian, Jihang Wang, John Galeotti, Samantha Horvath, George Stetten VIA Lab, Department of Bioengineering University of Pittsburgh, PA, United States Email: mit46@pitt.edu
INTRODUCTION:
Image matching, a common technique in Computer Vision to identify objects, persons, locations, etc., is widely used in both military and civilian applications. Depending on the specific application, different image matching approaches are applied. In the current project, which we call ProbeSight, the construction of 3D ultrasound models relies on the location data found by matching camera images to a pre-acquired image of the skin [1]. For common image matching algorithms, the precision of the location data can be compromised when changes in ambient lighting conditions affect the camera images. Motivated by the need to reduce the unwanted influence from the ambience, a novel method is proposed to match images that contain features associated with an inherent direction. Since these features often represent real physical structures, they should be consistently captured by the camera under normal variations in ambient light.
METHODS:
Our new method first extracts ridge features in the images, using preprocessing algorithms based on the scale-invariant ridge detection method [2]. The ridge extraction algorithm produces a black and white image BW, in which binary 1's represent ridge points. For each ridge point, an orientation θ is calculated from the eigenvectors of the Hessian matrix of image intensity. The set of ridge points is represented as a matrix S, with each row containing a ridge point location (x, y) and orientation θ:

$$S = \begin{bmatrix} x_1 & y_1 & \theta_1 \\ x_2 & y_2 & \theta_2 \\ \vdots & \vdots & \vdots \\ x_n & y_n & \theta_n \end{bmatrix} \quad \text{s.t. } BW(x_i, y_i) = 1 \qquad (1)$$
The resulting matrix S is used as input to the core matching algorithm. To match two images with a rigid transform, we need to find the best overall translation and rotation between the two images.
We define a Transform Space K as the set of all possible rigid transforms {Δx, Δy, Δθ} between the two images; the best match is represented by a vector in K. Since the images have been reduced to two sets of vectors S1 and S2, each pair of vectors, v1 = (x1, y1, θ1) ∈ S1 and v2 = (x2, y2, θ2) ∈ S2, is correlated by a transform given by:

$$t(v_1, v_2) = (\Delta x, \Delta y, \Delta\theta) \qquad (2)$$

$$\Delta\theta = \theta_2 - \theta_1 \qquad (3)$$

$$\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} - \begin{pmatrix} \cos\Delta\theta & -\sin\Delta\theta \\ \sin\Delta\theta & \cos\Delta\theta \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \qquad (4)$$

We map every such pair to a vector in the Transform Space, and thus generate a cloud of points in K. The density of the point cloud reaches a global maximum at the optimal transform. To measure the density, we treat the point cloud in Transform Space as a sum of impulse functions of 3 variables (see Eq. 5), and convolve it with a blurring kernel f(Δx, Δy, Δθ) to yield the density function D:

$$D(\Delta x, \Delta y, \Delta\theta) = \left[ \sum_{v_i \in S_1} \sum_{v_j \in S_2} \delta\big(\Delta x - \Delta x(v_i, v_j)\big)\, \delta\big(\Delta y - \Delta y(v_i, v_j)\big)\, \delta\big(\Delta\theta - \Delta\theta(v_i, v_j)\big) \right] * f(\Delta x, \Delta y, \Delta\theta) \qquad (5)$$
Here, the function f has the value 1 in the cuboidal region of size 1×1×0.2 centered at (0, 0, 0) and has the value 0 elsewhere. The best match is found as the transform at which D obtains its maximal value.
RESULTS:
We used a pair of images (Fig. 1) sampled from a large high resolution image at known locations and known angles. The offsets are (80, -20) and the rotation between them is 80 degrees. There is significant overlap between the images. These images were preprocessed to find ridges and then matched as described above. The preprocessing algorithm produced the binary images shown in Figure 2. The black ridge points and their orientations were used in the matching process.
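As a rough illustration only, the voting-and-blurring scheme of Eqs. (2)-(5) can be sketched as below. This is not the authors' implementation; the bin sizes, array names, and the use of a simple box blur in place of the exact 1×1×0.2 kernel are assumptions:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def best_rigid_transform(S1, S2, xy_bin=1.0, theta_bin=0.2):
        # S1, S2: N x 3 arrays of ridge points (x, y, theta).
        v1 = S1[:, None, :]                       # (N1, 1, 3)
        v2 = S2[None, :, :]                       # (1, N2, 3)
        dtheta = v2[..., 2] - v1[..., 2]          # Eq. (3)
        c, s = np.cos(dtheta), np.sin(dtheta)
        dx = v1[..., 0] - (c * v2[..., 0] - s * v2[..., 1])   # Eq. (4)
        dy = v1[..., 1] - (s * v2[..., 0] + c * v2[..., 1])
        votes = np.stack([dx.ravel(), dy.ravel(), dtheta.ravel()], axis=1)
        # Histogram the point cloud in transform space, then blur it (stand-in for f in Eq. 5).
        edges = [np.arange(votes[:, i].min(), votes[:, i].max() + b, b)
                 for i, b in enumerate((xy_bin, xy_bin, theta_bin))]
        H, edges = np.histogramdd(votes, bins=edges)
        D = uniform_filter(H, size=3)             # density map
        idx = np.unravel_index(np.argmax(D), D.shape)
        return tuple(e[i] for e, i in zip(edges, idx))   # (dx, dy, dtheta) at the density peak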
Figure 1: Sampled images of the human palm.
Figure 2: Binary images showing the locations of detected ridge points. The orientation of each ridge point was also found, but is not shown.
Figure 5: A cross section of the density map at Δθ = 80 degrees. The maximal correlation is found at (80, -19).
Once all the correlating pairs of ridge points from Figure 2 were mapped to the Transform Space, a point cloud (Fig. 3) was generated. The vertical axis of the graph represents the Δθ dimension. No clear maximum density is evident in Figure 3 because overlapping points obscure each other. To demonstrate the maximum density, we convolved it with the blurring kernel f to obtain the density map D displayed in Figures 4 and 5.
The density map D is a function of three variables. Figure 4 is a projection of D onto the Δθ axis. Figure 5 displays a cross section of D at the peak rotation Δθ=80° shown in Figure 4. The maximal correlation occurs at (80, -19, 80°), accurate to a single pixel, given sampling error. DISCUSSION: The new matching method takes advantage of features that should be resistant to changes in ambient lighting. The inherent orientation of the ridge points provides additional constraints to the matching problem. The result is a closed form solution that is both fast and reliable. While the preprocessing algorithm is the most time consuming part, often images are matched to a known model, which can be precomputed prior to any real time acquisition. The current matching algorithm is limited to images with constant scales, and the addition of scale variation may increase the computation costs.
Figure 3: Point cloud in transform space, representing regions where the ridge features correlate.
This method is also generalizable to any features with direction information. To accommodate other types of features, modifications to the preprocessing code are required.
REFERENCES:
[1] Galeotti et al, Image Guided Therapy Workshop, Oct 2011.
[2] Lindeberg, International Journal of Computer Vision 30(2), 79-116 (1998)
Figure 4: Correlation density as a function of Δθ; a prominent global peak occurs at 80 degrees.
ACKNOWLEDGEMENTS:
This research was funded by a summer internship through the Swanson School of Engineering and the Office of the Provost, NIH grant R01EY021641, US Army grants W81XWH-14-1-0370 and -0371, and a NSF Graduate Research Fellowship under Grant #DGE-1252522.
OPTIMIZING POROSITY OF SMALL DIAMETER FAST-DEGRADING SYNTHETIC VASCULAR GRAFTS Jennifer J. Zhuang, Robert A. Allen, Chelsea E.T. Stowell, Yadong Wang Biomaterials Foundry, Department of Bioengineering University of Pittsburgh, PA, USA Email: jjz11@pitt.edu, Web: http://www.biomaterialsfoundry.pitt.edu/
INTRODUCTION
In severe cases of cardiovascular disease, replacement or use of a bypass to redirect blood flow around the damaged or obstructed area is often necessary for treatment. Synthetic resorbable grafts, a promising alternative to traditional vascular bypasses, are cell-free polymer scaffolds that induce neovessel formation as they degrade in vivo. Sufficient pore size and interconnectivity within the graft allow for infiltration of cells and nutrients that promote remodeling into vascular tissue. Wang et al. demonstrated that thicker fibers (~5-6 μm) and larger pores (~30 μm) enhanced vascular remodeling; macrophages cultured on scaffolds with thicker fibers tended to polarize into the tissue remodeling M2 phenotype, whereas macrophages cultured on thinner-fiber scaffolds induced the M1, proinflammatory phenotype [1]. Therefore, the goal of this project was to explore methods to optimize initial cell infiltration and graft remodeling by manipulating graft fabrication techniques to increase porosity.
MATERIALS AND METHODS
Grafts were fabricated by electrospinning, as outlined in Figure 1A, a prepolymer poly(glycerol sebacate) and polyvinyl alcohol (pPGS:PVA) solution onto a rotating steel mandrel, then crosslinking and washing out residual PVA as described in Jeffries et al. [2]. To investigate the effects of electrospinning parameters on graft properties, we manipulated: (1) flow rate of pPGS:PVA solution from the syringe needle (29 and 55 μL/min), (2) angular velocity of the mandrel (200 and 600 RPM), and (3) distance between the syringe pump needle and mandrel (25 and 35 cm).
Porosity was measured using gravimetric analysis on at least three 5 mm grafts per mandrel. Grafts were lyophilized and massed; volume was calculated by measuring the graft dimensions under a dissecting microscope. The porosity φ was calculated using the equation

$$\phi = 1 - \frac{\rho_{graft}}{\rho_{material}} \qquad (1)$$
where ρ_graft represents the bulk mass density of the graft wall, obtained by dividing the mass of the graft by its volume, and ρ_material represents the mass density of nonporous PGS, which is reported to be 1.1 g/mL [3]. Average fiber diameter was measured using scanning electron microscopy (SEM) images. For each graft type, two SEM images at 450x were analyzed, and at least 30 fibers were measured per image using ImageJ software (NIH USA, 2008).
RESULTS AND DISCUSSION
We found that increasing flow rate led to larger fibers. The average fiber diameter was 1.77 ± 0.40 μm (n = 60 fibers) for grafts spun at 29 μL/min, and increased to 2.74 ± 0.83 μm (n = 60 fibers) at a flow rate of 55 μL/min. We hypothesize this was due to a higher volume of solution being drawn from the needle, forming thicker fibers on the mandrel. Reducing angular velocity of the mandrel had an impact on porosity. At 600 RPM, the porosity was 59.5 ± 0.02% (n = 3 grafts); at 200 RPM, the porosity was higher at 63.1 ± 0.04% (n = 3 grafts). Fibers being spun at a slower RPM would be drawn onto the mandrel under less tension, thus minimizing compression. Quantitative assessment of the grafts' luminal surfaces showed that increasing needle to
mandrel distance led to less fusion and more pores in the graft lumen (Fig 1B and 1C). Spinning at a farther distance from the needle would allow the polymer solution to dry after being dispensed, and fibers landing on the surface of the mandrel would be less likely to fuse together. CONCLUSIONS The effects of electrospinning parameters on graft properties were investigated. Each parameter tested resulted in a noticeable difference in porosity or fiber diameter of the graft. By increasing polymer flow rate, reducing angular velocity of the mandrel, and increasing distance for fibers to travel, more porous grafts were produced; we believe this will improve cell infiltration and remodeling in vivo. Future work will include further optimization of porosity as well as assessment of graft degradation in small animal models.
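As a small worked example of the gravimetric porosity calculation in Eq. (1) (the mass and volume below are made-up numbers, not measurements from this study):

    def porosity(mass_g, volume_mL, rho_material_g_per_mL=1.1):
        # Eq. (1): phi = 1 - rho_graft / rho_material, with rho_graft = mass / volume.
        rho_graft = mass_g / volume_mL
        return 1.0 - rho_graft / rho_material_g_per_mL

    # Hypothetical lyophilized graft segment: 2.2 mg wall mass, 0.005 mL wall volume
    print(f"{porosity(0.0022, 0.005):.1%}")   # -> 60.0%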
REFERENCES
[1] Wang, Z., et al. The effect of thick fibers and large pores of electrospun poly(epsilon-caprolactone) vascular grafts on macrophage polarization and arterial regeneration. Biomaterials 2014; 35(22): 5700-5710.
[2] Jeffries EM, Allen RA, Gao J, Pesce M, Wang Y. Highly elastic and suturable electrospun poly(glycerol sebacate) fibrous scaffolds. Acta Biomater 2015;18:30-9.
[3] Pomerantseva I, Krebs N, Hart A, Neville CM, Huang AY, Sundback CA. Degradation behavior of poly(glycerol sebacate). J Biomed Mater Res A 2009;91:1038-47.
ACKNOWLEDGEMENTS
Imaging instruments were provided by the University of Pittsburgh Center for Biologic Imaging. This work was funded by the University of Pittsburgh Department of Bioengineering.
Figure 1. (A) A schematic outlining the graft fabrication process. A prepolymer PGS (pPGS) and PVA solution was ejected through a positively charged syringe needle. The solution formed fibers after ejection, which deposited onto a rotating stainless steel mandrel. The result was a microfibrous pPGS/PVA graft, which was then thermally crosslinked under vacuum at 120°C for 48 hours to convert pPGS to PGS. Grafts were then washed in a solution of water and ethanol to remove the mandrel and PVA, resulting in a microfibrous PGS core. (B) SEM visualization of the luminal surface of a graft spun 25 cm from the syringe pump needle. The scale bar in the image represents 100 μm. (C) SEM visualization of the luminal surface of a graft spun 35 cm from the syringe pump needle. The scale bar in the image represents 100 μm.
COMPUTATIONAL MODELING OF WALL STRESS IN ASCENDING THORACIC AORTIC ANEURYSMS WITH DIFFERENT VALVE PHENOTYPES Thomas G. Kappil1, Joseph E. Pichamuthu1,2,3, Julie A. Phillippi1,2,3,4, Thomas G. Gleason1,2,3,4, David A. Vorp1,2,3,4,5 1. Department of Bioengineering, University of Pittsburgh, Pittsburgh PA, USA 2. McGowan Institute for Regenerative Medicine, Pittsburgh PA, USA 3. Center for Vascular Remodeling and Regeneration, University of Pittsburgh, Pittsburgh PA, USA 4. Department of Cardiothoracic Surgery, University of Pittsburgh, Pittsburgh, PA, USA 5. Department of Surgery, University of Pittsburgh, Pittsburgh, PA, USA Email: tgk10@pitt.edu INTRODUCTION Ascending thoracic aortic aneurysms (ATAA) affect about 15,000 people annually in the United States [1]. Aneurysm formation is associated with the stiffening and weakening of the aortic wall, leading to potential rupture [2]. Weakening of the wall occurs as a consequence of medial degeneration, deriving from apoptotic loss of smooth muscle cells and fragmentation of elastin and collagen fibers. Once the ascending aorta’s diameter reaches 6.0 cm, the risk of rupture or dissection becomes 31%, therefore normal surgical intervention for ATAAs is suggested once the diameter of the aorta reaches 5.5 cm [3]. Bicuspid aortic valve (BAV) is the most common congenital heart malformation, occurring in 1% to 2% of the population. ATAAs develop earlier in patients with a BAV than in those with a tricuspid aortic valve (TAV) [4]. Studies have shown that patients with different valve morphologies have different material properties for the ascending aorta [5]. Dissection and rupture are biomechanical phenomena occurring when the aortic wall stress due to hemodynamic factors exceeds local wall strength. Hence, the goal of the study was to evaluate the spatial and temporal variation in wall stress between BAV and TAV ATAAs. METHODS Virtual 3D aortic geometries were reconstructed from pre-operative CT scans of patients (n=19) undergoing elective surgery (after IRB approval) using computational tools including pixel thresholding (in Mimics) and smoothing (in Geomagic). For most patients (n=16), there were additional preceding scans that were analyzed. The
reconstructed ATAA wall surface geometry was then meshed and discretized into finite elements (in Abaqus 6.13). The ATAA was modeled as a shell of homogeneous material with a uniformly distributed thickness of 2.25mm. The wall was considered nonlinear, isotropic, hyper-elastic and incompressible, and under a constant pressure of 120 mmHg. The strain energy function (W) used was previously reported by our group for BAV and TAV: In this function, α and β are model parameters (in N/mm2) characteristic of the tissue’s material properties and I1 is a strain invariant. The model parameter set [α, β] was [0.0465, 0.152] for BAV models and [0.065, 0.955] for TAV models [5]. The output of our computational modeling was a stress map across the wall surface. DATA PROCESSING The measure of transmural wall stress used is the “equivalent” or von Mises stress, which is frequently used to describe the stress field of materials under multi-axial loading conditions. We define the peak wall stress (PWS) for a given model as the maximum von Mises stress acting anywhere on that model. Both mean wall von Mises stress (MWS) and PWS of the ascending aorta were calculated, and a two sample t-test was performed to evaluate the differences between TAV and BAV. RESULTS From the data collected in our study, patients with a BAV had a MWS ranging from 14.2 N/cm2 to 21.0 N/cm2, and patients with a TAV had a MWS ranging from 10.2 N/cm2 to 17.6 N/cm2. Similarly, BAV patients had a PWS between 23.6 N/cm2 and
45.6 N/cm2, while TAV patients had a PWS ranging from 19.0 N/cm2 to 48.9 N/cm2. At the time of elective surgery (the final time-point in each case), the average MWS in BAV patients (17.27 ± 2.00 N/cm2) was significantly higher than the MWS in TAV patients (15.04 ± 2.16 N/cm2), with p=0.016, as seen in Figure 1. However, the difference in PWS between TAV and BAV patients was not statistically significant. The location of peak stress within the ascending aorta was consistent between BAV and TAV groups, with the maximum wall stress localized above the left coronary artery, as seen in Figure 2. The area of peak stress (shown in red) occurs in the lower half of the lesser curvature of the ascending aorta, above the left coronary artery (indicated with a white asterisk).
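For reference, the equivalent (von Mises) stress reported above is the standard combination of the principal stresses:

$$\sigma_{vM} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]}$$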
Figure 1: Box and whisker plot of peak wall stress (PWS) and mean wall stress (MWS) for the final scan before surgical intervention for TAV and BAV ATAA patients.
DISCUSSION The results from the study indicate a difference in stress within the aortic wall between the valve morphologies, but no difference between the two valve types when looking at the location of peak stress within the ascending aorta. While the peak wall stress was equivalent between the two valve morphologies, the mean wall stress was different. However, as BAV aortic valve tissue has a greater tensile strength than TAV tissue [5], judgements concerning relative risk of dissection and rupture between TAV and BAV ATAA patients cannot be made. Improvements to this study could be made by including the blood pressure at the time of scan (instead of assuming maximum normal physiological conditions), as it would more accurately characterize patient stress. Stratifying our
groups according to gender and age would add a new dimension to the study, potentially teasing out differences between these patient groups.
Figure 2: Location of high stress locations (shown in red) for 8 ATAA models. BAV subjects are shown on the left, TAV subjects on the right. The left coronary artery is marked with a white asterisk.
REFERENCES 1. "Diseases & Conditions." Cleveland Clinic. N.p., n.d. Web. 17 July 2015. 2. Vorp, David A, et al. "Effect of Aneurysm on the Tensile Strength and Biomechanical Behavior of the Ascending Thoracic Aorta." The Annals of Thoracic Surgery 75.4 (2003): 1210-214. Web. 3. Elefteriades, John A. "Natural History of Thoracic Aortic Aneurysms: Indications for Surgery, and Surgical versus Nonsurgical Risks." The Annals of Thoracic Surgery 74.5 (2002): n. page. Web. 4. Fedak, P. W. M., et al. "Clinical and Pathophysiological Implications of a Bicuspid Aortic Valve." Circulation 106.8 (2002): 900-04. Web 5. Pichamuthu, Joseph E., et al. "Differential Tensile Strength and Collagen Composition in Ascending Aortic Aneurysms by Aortic Valve Phenotype." The Annals of Thoracic Surgery 96.6 (2013): 2147-154. Print. ACKNOWLEDGEMENTS Patient images were obtained from the Department of Cardiovascular Surgery, University of Pittsburgh Medical Center. Funding was provided through the Swanson School of Engineering, at the University of Pittsburgh.
PIN Diode Driver Design for Nuclear Magnetic Resonance Radiofrequency Lab Chuqi Liu, Edwin Eigenbrodt and Mary P. McDougall Nuclear Magnetic Resonance Radiofrequency Lab, Department of Biomedical Engineering Texas A&M University, TX, USA
INTRODUCTION
The purpose of this project was to design and construct a PIN diode driver to control the PIN diode for use as an RF switch in a Magnetic Resonance Imaging (MRI) coil device. A PIN diode is made of P-region silicon, pure silicon, and N-region silicon. Its performance mainly depends on its geometry and material. When forward DC is applied to the PIN diode, it allows an RF signal to pass through; the forward RF resistance can reach less than 0.5 Ohm, so the PIN diode can handle more RF power. When the DC is disconnected, the RF signal can still pass for a short period of time if no reverse DC is applied. This is called the carrier lifetime, and this property is utilized in one of the detuning elements later [1]. Our MRI device has six receive coils and one transmit coil. We turn on the receive coil and the transmit coil separately. The first step is to send RF energy into the phantom. The transmit coil sends RF magnetic field energy at the specific frequency into the experiment environment. At the same time, we turn on the PIN diode for the receive coil so that the receive coil is connected to a trap circuit [2], which is tuned to the receive coil's resonant frequency. When the trap is active, most of the RF energy at that frequency will be absorbed by the phantom and not received by the receiving coil. After the phantom receives enough energy, it keeps releasing the energy through a radio frequency electromagnetic field at the same specific frequency. Then, we turn off the PIN diode for the receiving coil, to make it resonate at that specific frequency, and turn on the PIN diode for the transmit coil, to make it not resonate at that frequency. At this point most of the energy will be absorbed by the receive coil with little interference from the transmit coil.
Figure 1: Trap Circuit
METHODS
The PIN diode we used in this experiment is the UM9401F. The condition to make it forward biased is pushing 100 mA of current in the forward direction. A negative voltage across the PIN diode will make it reverse biased [3]. As Figure 2 shows, the PIN diode is in the middle; the upper loop is the AC loop, in series with two capacitors to let the AC signal through but cut off the DC current, and the lower loop is the DC loop, which has a 20 ohm resistor connected in series to limit the current going through the PIN diode.
Figure 2: PIN diode sample used for testing
The PIN diode driver we designed has two modes, one manual mode and one system mode. The system mode can use the trigger signal (+5V as 1 and 0V as 0) as an input signal to control the PIN diode on or off. The goal for our output is +5V for forward bias and -12V for reverse bias. We designed two types of PIN diode driver.
The first type is constructed from two op-amps. We designed and simulated it in Pspice first to make sure it works. The circuit diagram is shown in Figure 3.
Figure 3: Pspice circuit diagram for Op-amp design
The right op-amp (L165, 3A max current output [4]) is used as a comparator; the voltage source on the top going to the positive pin is used as the reference voltage, which is -2.5V for our design. The input to the negative pin will be compared with the reference voltage, and the op-amp will amplify
the difference between the input and the reference voltage. Since the gain is huge, the output will hit the Vcc (5V) and Vee (-12V) limits. Thus, when the input is below -2.5V, the circuit outputs 5V; when the input is above -2.5V, it outputs -12V. The other op-amp is used as a -1 V/V amplifier, which changes the trigger signal from 5V and 0V to -5V and 0V. The second type of PIN diode driver is described in the PIN Diode Driver Handbook [5]. The Pspice circuit diagram is shown in Figure 4.
Figure 4: Pspice circuit diagram for BJT design
The BJT we used is the TIP32C, with a maximum output current of 3 A and a current gain of around 100 A/A from 0 to 2 A [6]. The value of each component is shown in Figure 3. Both boards were designed in Eagle CAD software and then printed with an LPKF circuit printer.
RESULTS
Figure 5 shows the oscilloscope measurement for the op-amp design: a rising edge delay time of 6 µs (left) and a falling edge delay time of 3 µs (right).
Figure 5
Figure 6 shows the oscilloscope measurement for the BJT design: a rising edge delay time of 200 ns (left) and a falling edge delay time of 4 µs (right).
Figure 6
DISCUSSION
As the results show, the BJT design's rise time is much faster than the op-amp design's. Thus, we chose the BJT-based PIN diode driver for the following experiment. After connecting it to the PIN diode shown in Figure 1, we used the logic analyzer to test whether the PIN diode driver was working correctly.
Figure 7: Left: PIN diode turned off, -19.92 dB of the signal passing through the PIN diode. Right: PIN diode turned on, -0.69 dB of the signal passing through the PIN diode.
As Figure 7 shows, when the PIN diode driver is in the off mode, only -19.92 dB (1%) of the energy passes through the PIN diode, which proves the PIN diode was cut off. When the PIN diode driver is in the on mode, about -0.69 dB (85.3%) of the energy passes through the PIN diode, which shows the PIN diode was in the on mode.
REFERENCES
1. Xiaoyu Yang, Tsinghua Zheng, Hiroyuki Fujita et al. T/R Switches, Baluns, and Detuning Elements in MRI RF Coils, 2, 2004.
2. Joëlle Barral et al. Building RF Surface Coils for MRI, 2009.
3. Microsemi et al. UM9401F Datasheet, 2, 2006.
4. ST et al. L165 Datasheet, 1-4, 2003.
5. W. E. Doherty, Jr. & R. D. Joos et al. The PIN Diode Circuit Designers' Handbook 40, 431-435, 1996.
6. ST et al. TIP32C Datasheet, 1-4, 2006.
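The percentages quoted above follow directly from the decibel values; a one-line check (illustrative only, not part of the original analysis):

    def db_to_fraction(db):
        # Convert a power ratio in decibels to a linear fraction.
        return 10 ** (db / 10)

    print(round(db_to_fraction(-19.92), 3))   # ~0.01  -> about 1% passes when the diode is off
    print(round(db_to_fraction(-0.69), 3))    # 0.853  -> about 85.3% passes when the diode is on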
ACKNOWLEDGEMENTS I would like to thank Dr. McDougall, Edwin Eigenbrodt and all the members in Dr. McDougall’s lab for their extraordinary support. Also, I am grateful to the Swanson School of Engineering & Office of the Provost for providing me the summer research funding.
EMBOLIC MICROSPHERES WITH SUB MICROMETER PORES Emelyn Haft Benedum Hall, 9th Floor, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: eeh40@pitt.edu
INTRODUCTION
Embolic microspheres have many uses in the biological industry. They can be used to deliver medicine for treatment of a variety of different diseases. These embolic materials allow for smoother injection and more precise control of the level of occlusion [1]. The main purpose of this research is to create microspheres that are uniform in size and that also have sub micrometer pores, which will help to deliver drugs into a patient. The goal is to create biodegradable microspheres that are approximately 100-300 micrometers in diameter with pores that are approximately 50-300 nanometers. It is important that these microspheres be biodegradable because it is reported that long-term presence of these microspheres in the human body may be harmful [1].
METHODS
In order to create microspheres, an oil phase and water phase were created using a polymer called poly(lactic-co-glycolic acid) or PLGA. PLGA is a common choice for use with biomedical devices because of its biodegradability. The goal was to morph the shape and size of the PLGA so that the microspheres would be uniform and the desired size of 100-300 micrometers. The PLGA solution is diluted using dichloromethane. The water phase used consisted of polyvinyl alcohol, or PVA. For each trial, 1% PVA is used. It is created by using 1 gram of PVA for every 100 mL of water. PVA has great emulsifying properties, which makes it very useful in the creation of microspheres. 20 mL of the PVA solution was used for every experiment. The oil phase (Table 1) was then mixed until it was homogeneous and slowly dropped into the water phase using a syringe, one drop at a time. The solution was then allowed to sit overnight. While mixing overnight, the dichloromethane will dissolve and a particle of
PLGA will form. The particles would be cleaned the next day to remove the PVA from the microparticles. To clean the particles, we varied between using two different methods. In the quicker, more efficient method, a filtration flask was used which would allow the PVA to pass through the film while keeping the microspheres on the film. It was cleaned using deionized water. In the other method, we would clean the particles by using a centrifuge and deionized water. After these were cleaned, the particles would freeze dry overnight. PLGA alone is not ideal for microspheres because it is not ideal for cell growth and is unable to interact with cells [2]. With later samples, we used surface modification in order to further optimize the features of the microspheres. The surface modification also helped to increase the number of pores on the microspheres. Table 2 shows the procedures used for each surface modification experiment.
DATA PROCESSING
After freeze drying the particles, they could then be analyzed under a Scanning Electron Microscope (SEM). This would allow for further analysis of the shape and size of the microspheres. After analyzing each sample, a new formula would be created to create the next sample.
RESULTS
The microspheres yielded from PLGA-7 seemed to be the best. They were the most uniform in size and contained more pores than the other samples. Screenshots of images from the SEM are shown below; as can be seen, the spheres are consistent with the desired size.
Figure 2: A closer look at PLGA-7; the pores are distinctly captured.
DISCUSSION
While the results yielded from PLGA-7 seemed to be the most beneficial, testing still needs to be done in order to confirm that these microspheres will be beneficial to the health field. It is also important to consider costs while deciding on a final design. In this case, the cost did not fluctuate much between different samples.

Table 1: Samples of PLGA created
SAMPLE NAME | MIXING SPEED | PLGA AMT | OTHER SOLVENT
PLGA-1 | 325 RPM | 0.6 mL 10% PLGA | 0.06 mL 2-methylpentane
PLGA-2 | 350 RPM | 0.6 mL 10% PLGA | 0.12 mL 2-methylpentane
PLGA-3 | 400 RPM | 0.6 mL 10% PLGA | 0.12 mL 2-methylpentane
PLGA-4 | 400 RPM | 0.6 mL 10% PLGA | 0.1 mL 2-methylpentane
PLGA-5 | 500 RPM | 0.6 mL 10% PLGA | 0.18 mL 2-methylpentane
PLGA-6 | 500 RPM | 0.6 mL 10% PLGA | 0.24 mL 2-methylpentane
PLGA-7 | 500 RPM | 0.6 mL 10% PLGA | 0.06 g Pluronic F-127
PLGA-8 | 500 RPM | 0.6 mL 10% PLGA | 0.03 g Pluronic F-127
PLGA-9 | 500 RPM | 0.6 mL 5% PLGA | 0.03 g Pluronic F-127
PLGA-10 | 500 RPM | 0.6 mL 5% PLGA | 0.06 g Pluronic F-127
PLGA-11 | 550 RPM | 0.6 mL 5% PLGA | 0.18 g Pluronic F-127
PLGA-12 | 550 RPM | 0.6 mL 5% PLGA | 0.12 g Pluronic F-127
PLGA-13 | 550 RPM | 0.6 mL 5% PLGA | 0.21 g Pluronic F-127

Table 2: Samples with surface modification treatment
Sample Name | Surface Modification Type | Length of Time
PLGA-2 | 0.05 M ethylenediamine | 1 hr
PLGA-2 | 0.05 M ethylenediamine | 2 hr
PLGA-5 | 6% hexamethylene diamine | 2 min
PLGA-5 | 6% hexamethylene diamine | 5 min
PLGA-5 | 6% hexamethylene diamine | 10 min
PLGA-6 | 6% hexamethylene diamine | 2 min
PLGA-6 | 6% hexamethylene diamine | 5 min
PLGA-6 | 6% hexamethylene diamine | 10 min

REFERENCES
1. Weng, Lihui, et al. In vitro and in vivo evaluation of biodegradable embolic microspheres with tunable anticancer drug release. Elsevier. Minneapolis, MN.
2. Croll, Tristan, et al. Controllable Surface Modification of Poly(lactic-co-glycolic acid) (PLGA) by Hydrolysis or Aminolysis I: Physical, Chemical and Theoretical Aspects. Biomacromolecules 5(2). University of Melbourne, Australia.

ACKNOWLEDGEMENTS
Research was conducted under Dr. Gao with the help of Dr. Jiamin Wu in Dr. Gao's laboratory. We also used the Biotech center to conduct experiments. The Swanson School of Engineering and the Office of the Provost provided the necessary funding to conduct this research.
SCREENING A VARIETY OF CATALYTIC LEWIS PAIR MOIETIES FOR THEIR HYDROGEN AND CARBON DIOXIDE BINDING ENERGIES Benjamin Yeh Department of Chemical Engineering University of Pittsburgh, PA, USA Email: byy5@pitt.edu
INTRODUCTION
The global growth in the demand for energy means that fossil fuels will continue to be used for the next several decades and that CO2 will continue to be released into the atmosphere faster than it can be recycled by the natural carbon cycle. CO2 is considered the major greenhouse gas generated from fossil fuel usage. Therefore, CO2 capture and conversion is a promising way to reduce CO2 emissions. Ye et al. [1] used computational modeling to design a novel catalyst for CO2 hydrogenation to produce formic acid. The catalyst consists of a microporous metal organic framework (MOF) containing functional groups having both Lewis acid and Lewis base sites (Lewis pairs). These Lewis pairs can heterolytically dissociate H2 into a proton and a hydride, which react with CO2 to form HCOOH. However, the problem is that CO2 binds more strongly to the catalytic Lewis pairs, poisoning the active sites. Therefore, the goal of this project is to use computational simulation to screen hundreds of Lewis pair catalysts that could be incorporated in MOFs and bind H2 more strongly than CO2.
METHODS
Various intramolecular catalytic Lewis pairs (LP) were built using the Avogadro molecule editor and visualizer software. Each catalytic Lewis pair consists of a Lewis base site, either phosphorus or nitrogen, and a Lewis acid site, boron. The skeletal structures of sample intramolecular Lewis pairs are shown in Figure 1. We then varied the R groups with various electron donating (CH3, OH, NH2, O-CH3, benzene) and electron withdrawing groups (H, F, Cl, Br, CN, CF3, NO2) using combinatorial techniques to modify the acidity and, sometimes, the basicity of the Lewis pairs.
Figure 1: Three families of Lewis pairs considered in this work.
To calculate the H2 binding energy, we added one hydrogen atom to the Lewis acid site and another hydrogen to the Lewis base site. Similarly, to calculate the CO2 binding energy, we attached the carbon to the Lewis base site and attached one oxygen to the Lewis acid site. Once molecules were built, we exported the Cartesian coordinates from Avogadro and ran density functional theory (DFT) calculations to determine binding energies of H2 and CO2 using the Gaussian 09 software package. We used the M062X density functional with the 6-311g basis set for all the calculations. We used the Molden software to visualize the optimized structures by measuring the bond lengths and angles to check if the final structure was reasonable.
DATA PROCESSING
Gaussian's output file returned the energy (E), zero point corrected energy (EZPE), and Gibbs free energy (G) for each system. The H2 or CO2 binding energies on each Lewis pair are defined by the following equations:
ΔE = E(M/LP) - E(LP) - E(M)
ΔEZPE = EZPE(M/LP) - EZPE(LP) - EZPE(M)
ΔG = G(M/LP) - G(LP) - G(M)
where M represents H2 or CO2 and LP represents the Lewis pair.
RESULTS
The results of H2 and CO2 binding energies for specific families of Lewis pairs are shown in Figure 2. 1 with CH3 as R1 is in blue, 2 with CH3 as R1 and R2 is in red, and 3 with R1=R2 is in green. The other R groups are not shown for clarity, but the graph shows the trends of each skeletal structure when changing the unidentified R groups. In the yellow box is the target region for H2 and CO2 binding energies (0.0 > ΔE(H2) ≥ -0.6; 0.0 < ΔE(CO2) ≤ 0.3 in eV).
Table 1 summarizes the binding energies for H2 and CO2 on the different Lewis pairs shown in Figure 2, as well as ΔE_DFT, ΔE_ZPE, and ΔG.
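As a small illustration of the binding-energy bookkeeping defined in the Data Processing section (the energies below are placeholder values in hartree, not results from Table 1):

    HARTREE_TO_EV = 27.2114   # Gaussian reports total energies in hartree

    def binding_energy_eV(e_complex, e_lp, e_molecule):
        # dE = E(M/LP) - E(LP) - E(M), converted to eV
        return (e_complex - e_lp - e_molecule) * HARTREE_TO_EV

    # Placeholder totals for a Lewis pair, H2, and the H2/LP adduct
    print(round(binding_energy_eV(-486.642, -485.450, -1.170), 2))   # -0.6 eV, inside the target window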
Figure 2: Calculated H2 binding energy is on the x-axis while calculated CO2 binding energy is on the y-axis.
DISCUSSION
We calculated H2 and CO2 binding energies on 1 while varying the R-groups. We found that there is a general trend between the electron donating and electron withdrawing groups on the acid site. The stronger the electron withdrawing group (F < Cl < Br < CN < CF3 < NO2), the more strongly both H2 and CO2 will bind. Conversely, the weaker the electron donating group (benzene ≈ CH3 < OCH3 < OH < NH2), the more likely the molecule will bind H2 and CO2, but the trend is not as clear as the one mentioned earlier. These trends can be used to help modify the Lewis pair to obtain a certain H2 and CO2 binding ability. It is important to note that when using combinatorial techniques with electron donating and electron withdrawing groups, the Lewis pair always bonded to H2 and CO2 more strongly when the electron withdrawing group was closer to the base site. Nevertheless, the problem with 1 is that it either bound H2 and CO2 too strongly, or bonded to CO2 more strongly than H2, potentially poisoning the active site.
For 2, the CO2 binding energy is significantly higher than that of 1 because the distance between the Lewis acid and base sites is smaller, making CO2 binding unfavorable on the Lewis pair. However, the H2 binding energy was still too high. This could be due to 2's structure, which has a dative bond between the Lewis acid and Lewis base. Decreasing the electron withdrawing ability of the R groups and the acid-base distance were two ways to modify the Lewis pair and helped us create 3. For 3, we created a structure where the Lewis acid and Lewis base are next to each other without a dative bond. Therefore, 3 is promising because its H2 binding energy is in a reasonable range and its H2 binding is stronger than its CO2 binding.
Further work includes testing the CO2 hydrogenation reactions on these promising Lewis pairs. We will also functionalize these Lewis pairs in MOFs to test their ability for H2 and CO2 capture together with CO2 hydrogenation reactions. REFERENCES [1] Ye et al. “Design of Lewis Pair-Functionalized Metal Organic Frameworks for CO2 Hydrogenation.” (2015 April 1). ACS Catalysis. DOI: 10.1021/acscatal.5b00396 ACKNOWLEDGEMENTS I would like to thank Dr. Karl Johnson and Jingyun Ye for mentoring me in my first research project. I would also like to thank the Swanson School of Engineering and the Office of the Provost for giving me this opportunity and funding me on this project.
PERFLUOROPOLYETHER POLYMERS MAY HAVE ANTI-BIOFOULING APPLICATIONS Amy Howell and Lei Li Departments of Bioengineering and Chemical Engineering University of Pittsburgh, PA, USA Email: amh188@pitt.edu
INTRODUCTION
Biofouling is the accumulation of biological matter (proteins, cells, etc.) on a fluid-contacting surface [1]. Anti-biofouling surfaces are those which resist bio-debris accumulation. The exploration of anti-biofouling surfaces has been partially motivated by medical device development [1,2,3]. Devices such as catheters, blood vessel grafts, vascular stents, artificial heart valves, and dialysis membranes all come in direct contact with blood during normal use. Bio-fouling of these surfaces begins by adsorption of blood albumin and other blood proteins to the surface [2]. In extreme cases, protein accumulation can trigger blood coagulation, leading to device failure or other serious complications [3]. One common strategy to prevent biofouling is to alter the surface energy of the fluid contacting material by adding a polymer coating [1]. Materials with low water interfacial-energy (hydrophilic surfaces) have, in some cases, been shown to reduce the adsorption of proteins to the surface. However, when protein adsorption does occur on these surfaces, it is often increasingly difficult to remove the adsorbed proteins. Hydrophobic surfaces, on the other hand, have characteristically high interfacial-energy. For these surfaces, there is an entropic benefit for protein adsorption, which often causes an increase in protein adsorption on these surfaces. However, hydrophobic surfaces often also exhibit high "fouling release," or easy removal of adsorbed proteins under low shear stress [1]. The fouling release property of hydrophobic surfaces arises from the favorable conformation change of proteins back to the soluble state upon removal from the hydrophobic surface [4]. Researchers in the Li lab have recently reported unique surface properties of materials coated in a perfluoropolyether (PFPE) polymer commercially known as ZDOL [5]. The ZDOL coating creates surfaces that are simultaneously hydrophilic and oleophobic (oil repelling), or more attractive to water than oil. Very few surfaces have thus far been
identified with this quality; in fact, even extremely hydrophilic surfaces are usually more oleophilic than hydrophilic [5]. Surfaces that are more attractive to water than oil have the potential to display anti-biofouling properties. If a surface repels the hydrophobic domains in a protein, the protein may be more likely to remain in its soluble conformation and less likely to adhere to the surface. The research here presents the preliminary evaluation of ZDOL and other PFPE polymers as potential anti-biofouling coatings.
METHODS
This study evaluated the anti-biofouling potential of two PFPE polymer coatings, ZDOL and Ztetraol, by quantifying the adsorption of bovine serum albumin (BSA) on coated or uncoated silica wafers after a 24 hour incubation in a 5 mg/mL buffered BSA solution. Silica wafers were cut into appropriate shapes and cleaned by ultraviolet light and ozone exposure. Wafers were optionally coated in ZDOL or Ztetraol via dip coating. ZDOL was applied from a 1.5 g/L solution to attain a polymer thickness of approximately 1.5 nm. The Ztetraol solution was prepared to be 0.75 g/L to achieve a coating thickness of approximately 2 nm. The polymer coating thickness as well as the native oxide layer on the wafers were quantified by ellipsometry. Following coating, the substrates were aged for one week to allow the polymer to reach equilibrium on the substrate. Immediately prior to protein adsorption testing, all substrates were tested for water contact angle (WCA) using a single drop goniometer. Hexadecane contact angles (HCA) are typically used to determine oleophilicity of the polymer coatings. HCAs were not determined for the substrates in this test group, but can be found elsewhere for the same polymer coatings and sample preparations [5]. Samples were then incubated at room temperature for 24 hours in darkness in a 5 mg/mL solution of fluorescently labeled BSA in 0.1 M phosphate buffered saline (PBS). After incubation, the substrates were washed with PBS and deionized water and imaged on a fluorescent microscope. Protein adsorption levels were qualitatively compared using the fluorescent images. Quantitative analysis was performed using x-ray photoelectron spectroscopy (XPS). Surface nitrogen levels reported by XPS were understood to directly correlate to protein levels on the surface since nitrogen is found in all proteins but is not native to the silicon wafers or the PFPE polymer. The XPS data for many samples were collected and compared for average surface nitrogen percentage. An additional set of blank silicon wafers (not incubated with BSA) were also analyzed and compared to establish a baseline nitrogen percentage due to natural contamination over time.
RESULTS
The WCA for the ZDOL group was 32.37° ± 2.18°, slightly higher than the uncoated/blank WCA of 25.69° ± 3.39°. The Ztetraol coated substrates were substantially more hydrophobic, with an average WCA of 59.41° ± 1.99° (Table 1).
Table 1: Average water contact angle (WCA) for the substrates and coatings used in this study, reported in degrees.
Substrate | Average WCA (°)
ZDOL | 32.37 ± 2.18
Ztetraol | 59.41 ± 1.99
Uncoated/Blank | 25.69 ± 3.39
The average nitrogen percentage by XPS was highest in the Ztetraol coated group at 2.04% ± 0.86% and lowest (not considering the blank group) in the ZDOL coated group at 1.54% ± 0.86%. The uncoated silicon wafers had an average nitrogen percentage of 1.64% ± 1.21%, slightly higher than the ZDOL coated group. The blank control group had an average nitrogen percentage of 0.48% ± 0.26%.
[Figure 1 bar chart: Adsorbed Nitrogen Percentage on Coated or Uncoated Silicon Wafers; surface nitrogen percentage (%) for the ZDOL, Ztetraol, Uncoated, and Blank surface treatments]
Figure 1: Average nitrogen percentage on the surface of coated or uncoated silicon wafers with or without BSA adsorption. Standard deviations are represented by error bars.
DISCUSSION
The adsorbed nitrogen percentage on the blank substrates was significantly lower than all the BSA incubated substrates. This finding suggests that only a small amount of protein contamination occurs during the various equilibration periods in the test method and supports the conclusion that the nitrogen percentages reported for the other test groups are a direct result of BSA exposure. The data also reveal a trend towards higher BSA adsorption in the Ztetraol group and lower BSA adsorption in the ZDOL group; however, statistical significance cannot be determined within this sample set due to the small sample size and high standard deviation in the test groups. Additionally, the XPS data reported low surface fluorine percentages on the PFPE coated substrates, which is inconsistent with the polymer coating. Additional studies will be needed in order to understand this finding and its implications on the data presented above. Further studies should also be performed to expand upon the current data. Additional trials may help strengthen the trends observed here. An HCA analysis could also be added to evaluate the correlation between HCA and protein adsorption.
REFERENCES
1. Krishnan et al. J. Mater. Chem. 18, 3405-3413, 2008.
2. Felgueiras et al. IFMBE Proceedings 41, 1597-1600, 2013.
3. Werner et al. J. Mater. Chem. 17, 3376-3384, 2007.
4. Gao et al. Journal of Colloid and Interface Science 344, 468-474, 2010.
5. Li et al. J. Mater. Chem. 22, 16719-16722, 2012.
ACKNOWLEDGEMENTS
Amy would like to thank Dr. Lei Li, the Swanson School of Engineering and the Office of the Provost for helping fund this research.
A CELL-SCALE MODEL OF PULMONARY EPITHELIAL TRANSPORT DYNAMICS IN CYSTIC FIBROSIS Nicholas W. Lotz, Matthew R. Markovetz and Robert S. Parker Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: nwl4@pitt.edu
INTRODUCTION
Improved disease treatments based on individual patient conditions may be realized through systems medicine, which incorporates systems biology and systems engineering to aid clinicians in treatment implementation [1]. One potential area for improved patient care is in treating cystic fibrosis (CF). CF is a genetic disorder characterized by a malfunctioning cystic fibrosis transmembrane conductance regulator (CFTR) [2]. The disease causes dehydration of the extracellular surface, leading to infection, impaired physiological development, and early death. Although previous studies have modeled improved mucociliary clearance through the administration of hypertonic saline, a robust cell-scale model of ion transport dynamics is necessary to develop a clinically viable model-based decision support system [3]. A deterministic model structure coupled with a sufficient optimization algorithm should provide patient-specific parameters that accurately predict treatment effects.
METHODS
A cell-scale model of an epithelial cell was constructed using the MatLab software suite (© 2014, The MathWorks, Natick, MA). Systems of ordinary differential equations described the transport dynamics of chloride, sodium, potassium, water, and DTPA in the apical surface layer (ASL), cell interior, and blood. A binary switch on CFTR terms differentiated CF (off) and non-CF (on) cells in the model. An affine parallel-tempering Markov chain Monte Carlo (APT-MCMC) algorithm fit the model parameters to experimental DTPA transport and ASL volume data using a least-sum of squares objective function. The model was then run to assess agreement with experimental conditions.
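The model code is not reproduced in the abstract; the sketch below (Python, not the authors' MatLab) only illustrates the least-sum-of-squares objective used to score a candidate parameter set against the measured DTPA and ASL volume trajectories. The simulate() interface and the equal weighting of the two signals are assumptions:

    import numpy as np

    def objective(params, t_obs, asl_obs, dtpa_obs, simulate):
        # simulate(params, t_obs) is assumed to integrate the transport ODEs and
        # return (ASL volume, DTPA counts) sampled at the observation times.
        asl_sim, dtpa_sim = simulate(params, t_obs)
        return np.sum((asl_sim - asl_obs) ** 2) + np.sum((dtpa_sim - dtpa_obs) ** 2)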
RESULTS Calculated parameter fits are presented in Table 1.1. CF cells exhibit higher ENaC and CaKC permeabilities but lower transcellular and paracellular hydraulic permeability than non-CF cells. Transcellular permeability is four orders of magnitude lower in CF cells (see discussion). Figures 1 and 2 show ASL regulation and simulated DTPA transport dynamics in CF and non-CF cells in response to 10 µL hypotonic challenge. Figure 3 presents cell volume regulation in response to 10 µL hypotonic challenge in CF and non-CF cells.
Figure 1: Apical volume response to 10 µL hypotonic challenge. Experimental means plotted with standard error, with n = 112 lines and n = 93 lines (CF and non-CF, respectively).
Figure 2: Transport dynamics after 13,000 counts DTPA added to apical layer. Experimental means plotted with standard error.
Figure 3: Simulated cell volume response to 10 µL hypotonic challenge.
DISCUSSION
Increased ENaC permeability in CF agrees with existing literature, and occurs to preserve cell electroneutrality in response to increased chloride retention [4]. However, lower transcellular permeability in CF relative to non-CF contradicts the experimental literature [5]. The discrepancy between CF and non-CF paracellular hydraulic permeability raises the possibility of parameter unidentifiability in the optimization procedure. The parameter space should be further constrained to limit optimized parameters to physiologically reasonable values.
[2] “Cystic Fibrosis.” Genetics Home Reference. U.S. National Library of Medicine, Aug. 2011. Web. <http://ghr.nlm.nih.gov/condition/cystic-fibrosis [3] Markovetz MR, Corcoran TE, Locke LW, Myerburg MM, Pilewski JM, Parker RS (2014) A Physiologically-Motivated Compartment-Based Model of the Effect of Inhaled Hypertonic Saline on Mucociliary Clearance and Liquid Transport in Cystic Fibrosis. PLoS ONE 9(11): e111972. doi: 10.1371/journal.pone.0111972 [4] O’Donoghue, Donal L., et al. "Increased apical Na+ permeability in cystic fibrosis is supported by a quantitative model of epithelial ion transport." The Journal of physiology 591.15 (2013): 3681-3692. [5] Matsui H, Davis CW, Tarran R, Boucher RC. Osmotic water permeabilities of cultured, welldifferentiated normal and cystic fibrosis airway epithelia. Journal of Clinical Investigation. 2000;105(10):1419-1427.
DTPA dynamics and ASL volume are well captured by the model, but further analysis must be performed to determine whether any experimental increases in apical volume during hypotonic challenge are truly physiological. Finally, a simulated spike in cell volume in non-CF cells followed by a sharp decrease and leveling occurs due to feedback control terms built into basolateral rectifying channels. The rectifiers respond to increasing cell volume by increasing the rate of ion secretion. Cell osmolarity drops, water leaves the cell, and overall cell volume decreases.
ACKNOWLEDGEMENTS The author wishes to sincerely thank Matthew Markovetz and Dr. Robert Parker for their mentorship during this project. The authors also wish to thank the University of Pittsburgh Swanson School of Engineering and the Office of the Provost, as well as the National Science Foundation (EEC-1156899) for financial support.
BATCH ANAEROBIC DIGESTION OF FOOD GROUPS Tai Xi Gentile Polymers for BioApplications Lab, Chemical & Biomolecular Department National University of Singapore Email: tag54@pitt.edu INTRODUCTION Globally, a large percentage of food produced is disposed of in landfills. Especially for the compact island nation of Singapore, management of food waste is an important issue. One solution employed today is subjecting food waste to the process of anaerobic digestion (AD). AD is the process by which a substrate is digested by microorganisms in the absence of oxygen, thus producing biogas. Biogas produced from AD is composed of methane, carbon dioxide, and other gases, and can therefore be used as combustible fuel. Bio-compost is produced by composting the digestate, the residue from the main anaerobic process, and composting it to form high-nutrient fertilizers [1]. The author carried out two batch experiments on different foods with the purpose of better characterizing what type of foods would produce the most biogas. METHODS Three foods were obtained from each of three food groups. The carbohydrate-rich samples selected were rice, noodles, and steamed chapathi; the protein-rich samples were fish, steamed beef, and fried chicken; and the fruits included banana, apple, and papaya. Food samples were obtained from the National University of Singapore Koufu food court on May 25th, 2015. The food samples were blended in a consumer blender until homogenous. Anaerobic sludge from the PUB Ulu Pandan Water Reclamation Plant was used as inoculum. The experiment was divided into two batch digestion trials: Batch 1 and Batch 2. For Batch 1, the bioreactors were prepared using 250 mL Erlenmeyer flasks with 24/29 socket joints and sealed rubber stoppers with turnover flanges and serrations (22.0 mm plug diameter). An 18G needle was mounted on a syringe cut off to fit into Masterflex C-Flex Ultra L/S 16 tubing, which was connected to Tedlar gas bags. Two trials of each food sample were used, for a total of 18: rice, noodles, chapathi, fish, beef, chicken, banana, apple, and papaya. The samples were
weighed based on a 1:1 ratio of 2.5g volatile solids (VS). An anaerobic chamber was used to assemble the bioreactors: the samples were combined with 100 mL of sludge (2.5g volatile solids) in the Erlenmeyer flasks, then plugged with the stoppers and shaken to mix. A blank control with 100 mL sludge (no substrate) was also assembled within the anaerobic chamber to estimate the amount of biogas produced from the sludge alone. Tedlar bags were connected to the bioreactors via needles stuck into the stoppers. Parafilm and grease were applied to each bioreactor around the stopper and needle-tube joint. The reactors were placed in a warm room set at 37°C. For Batch 2, a different bioreactor was used: 22.0mm open rods instead of syringes were drilled through the rubber stoppers used to seal the 250mL Erlenmeyer flasks. Tedlar bags were attached directly to the open rods. 200 mL sludge was added in a 1:1 substrate to inoculum ratio with 2.5g VS. Three trials of each food sample (18 total) were conducted on rice, noodles, fish, beef, banana, and apple. Once the reaction had terminated (when no more biogas was shown to be produced), 5-7 mL liquid samples were taken from each reactor within the anaerobic chamber. The liquid samples were centrifuged at 10,000 RPM for 5 minutes. Using a filter, the liquid from each sample was separated from the solids and stored in a cold room.
DATA PROCESSING
All food samples and sludge underwent total solids and volatile solids analysis. To determine total solids, a known mass of each sample was placed in a crucible and dehydrated overnight at 105°C. The samples were then weighed, dehydrated again for 1 hour, and weighed again. To determine volatile solids, the solids left from the total solids analysis were placed into a 550°C furnace for 2 hours, then weighed. Biogas volume was measured every 2 days using a 60 mL syringe. Also analyzed were volatile fatty acids (VFA), and the methane and carbon dioxide concentrations using a gas chromatograph.
Concentrations were calculated using a standard curve.
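For readers who wish to reproduce the data processing, the short sketch below illustrates the total solids, volatile solids, and cumulative specific biogas yield calculations described above; all masses and volumes are hypothetical placeholders, not measurements from this study.

# Illustrative TS/VS and cumulative biogas yield calculation (placeholder values).
wet_mass = 25.0        # g of blended food sample placed in the crucible
dried_mass = 6.0       # g remaining after drying overnight at 105 C
ash_mass = 0.5         # g remaining after 2 h in the 550 C furnace

ts_fraction = dried_mass / wet_mass                 # total solids fraction
vs_fraction = (dried_mass - ash_mass) / wet_mass    # volatile solids fraction

# Biogas volumes (mL) measured with the 60 mL syringe every 2 days,
# corrected by subtracting the sludge-only blank.
sample_biogas = [55.0, 48.0, 30.0]   # hypothetical readings
blank_biogas = [10.0, 9.0, 8.0]
vs_loaded = 2.5                       # g VS of substrate added to each bioreactor

cumulative_yield = sum(s - b for s, b in zip(sample_biogas, blank_biogas)) / vs_loaded
print(f"TS = {ts_fraction:.2%}, VS = {vs_fraction:.2%}, "
      f"cumulative yield = {cumulative_yield:.1f} mL/g VS")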
RESULTS Figure 1 below shows the cumulative biogas yield for each substrate over the 6-day Batch 1 digestion period, and Figure 2 shows the cumulative biogas yield for each substrate over the 16-day Batch 2 period (digestion of the grains and fruits ended after only 8 days).
Figure 1: Cumulative biogas yield (mL/g VS) versus time (days) for each substrate in Batch 1.
The pH of the liquid samples was measured using a pH meter (ExTech Instruments). Chemical oxygen demand, ammonium, and total nitrogen were analyzed with Dr. Lange cuvette tests (Dr. Bruno Lange, GmbH & CO. KG, Dusseldorf, Germany) and measured spectrophotometrically with a HACH XION 500 spectrophotometer. COD was analyzed by reacting 2 mL of extraction solution with sulphuric acid–potassium dichromate in the presence of silver sulphate as a catalyst. The solution was kept at 150°C for 2 hours in an LT 100 thermostat before measuring COD values. Nitrate was analyzed by reacting 0.2 mL of the extraction samples with 2,6-dimethylphenol to form 4-nitro-2,6-dimethylphenol. Ammonium samples were analyzed after reaction at pH 12.6 with hypochlorite ions and salicylate ions in the presence of sodium nitroprusside as a catalyst to form indophenol blue. All Lange methods are validated according to ISO 8466-1 (ISO, 1990), DIN 32645 (DIN, 1996) and DIN 38402 A51 (DIN, 1986) (Fabio Kaczala, 2010).
Figure 2: Cumulative biogas yield (mL/g VS) versus time (days) for each substrate in Batch 2 (noodles, rice, beef, fish, banana, and apple).
DISCUSSION Batch 1 biogas yield per substrate is shown in Figure 1. Fruits (apple, banana) produced the most biogas, followed by noodles and papaya. Below these were roti, chicken, and rice. The beef and fish produced the least amount of biogas. The highest cumulative biogas yield was 315.6 mL/g VS for the apple, while the lowest was 114.8 mL/g VS for the fish. It is important to note that the digestion was ended after 6 days since this was a preliminary experiment, and it is possible that certain substrates were not fully digested. Batch 2 biogas yield per substrate is shown in Figure 2. The digestion of grains and fruits ended on the 8th day, whereas the meat produced biogas for 16 days total. The beef and fish produced the most biogas by far, indicating that our preliminary Batch 1 was likely ended prematurely, and the digestion process was not allowed to run to completion. As in Batch 1, the fruits produced more biogas than grains. REFERENCES
1. Komatsu et al. Anaerobic Codig 2-4, 2012. 2. U.S. EPA. Total, Fixed, and Volatile 1-4, 2001.
ACKNOWLEDGMENTS This award was funded by the Swanson School of Engineering and the Office of the Provost. Laboratory use was courtesy of the SERIUS program and the National University of Singapore. Special thanks to Professor TONG Yen Wah, Dr. LI Wangliang, and Jonathan Lee.
POSTSYNTHETIC METAL ION EXCHANGE OF ZIF-8 FOR THE CATALYSIS OF THE OXYGEN REDUCTION REACTION Jonathan M. Hightower National University of Singapore, Department of Chemical and Biomolecular Engineering University of Pittsburgh, PA, USA Email: jmh216@pitt.edu INTRODUCTION Catalytic materials for use in fuel cells and metal-air batteries are being researched in order to make more sustainable methods of renewable energy production and storage more commercially viable. The hydrogen oxidation reaction (HOR) at the anode of these devices is very fast, but the oxygen reduction reaction (ORR) at the cathode is very slow, making these technologies inefficient [1]. The aim of this project is to develop a cost-effective, efficient, and high-performing catalyst in order to speed up the ORR. Platinum has been tested as a possible catalyst for these technologies and has the best performance results so far, but it is also an expensive and scarce material [2]. Metal-organic frameworks (MOFs) can serve as precursors to the catalysts that we are developing because MOFs have good stability, high porosity, and high surface area [1]. These characteristics make MOF-derived catalysts very attractive because they are good for electron transport, catalysis, and maintaining structure, which would increase the durability, performance, cost-effectiveness, and efficiency of fuel cells and metal-air batteries [1].
bath container. The 3.266 g of Mn(Ac)2·4H2O was then taken out of the vacuum oven and placed in the round-bottom flask with 1.000 g of ZIF-8 and 50 mL of MeOH at 55°C and 300 rpm for 6, 12, 18, and 24 hours. This mixture was then collected and washed with 5 × 20 mL of distilled water. After washing, centrifuging, and decanting, the mixture was then dried in a vacuum oven at 100°C overnight [3]. Then 0.007 g of ZIF-8 (Zn/Mn), after pyrolysis and ball-mill grinding, was placed into a small container along with 7.000 mL of primer solution. The container was then sonicated 2 × 30 minutes with a 5 minute break in between. Once sonicated, the sample was then used in a Cyclic Voltammetry experiment to determine its reactivity and efficiency. RESULTS Successful Postsynthetic Exchange of ZIF-8 was verified using elemental analysis, which showed that manganese was between about 3-8% by weight of the sample. Based on the Cyclic Voltammetry data that was collected, our catalyst was indeed reactive when O2 was introduced into the system, as seen by the peak that appears on the graph in Figure 1.
METHODS ZIF-8 was prepared using 1.630 g of ZnO and 3.612 g of 2-methylimidazole. This mixture was then placed in an oven at 200°C for 24 hours and then washed with 3 × 20 mL of EtOH. After washing, centrifuging, and decanting, the mixture was then dried in an oven at 75°C in air overnight. Postsynthetic Exchange (PSE) of ZIF-8 to ZIF-8 (Zn/Mn) was performed by first placing 3.266 g of Mn(Ac)2·4H2O in a vacuum oven at 100°C for 3 hours. A 150 mL round-bottom flask and a temperature sensor were then placed in a silicone oil bath that was set on a hot plate so that the apparatus did not touch the bottom or the sides of the silicone oil
Figure 1: CV graph where the black line is the experimental catalyst in an Argon atmosphere and the red line is the experimental catalyst in an Oxygen atmosphere. Peaks in graph represent moments of reactivity.
The nature of the reaction that took place during this experiment was also documented, and the results shown in Figure 2 indicate that when O2 was introduced into the system, it reacted to form a significant amount of H2O2 instead of the initially predicted H2O.
that are used during the synthesis of the catalyst in future experiments. REFERENCES 1. Hou et al. Adv. Energy Mater. 4, 1-8, 2014. 2. Roche et al. J. Phys. Chem., 111, 1434-1443, 2007. 3. Fei et al. Inorganic Chemistry 52, 4011-4016, 2013. ACKNOWLEDGEMENTS This award was funded by the Swanson School of Engineering and the Office of the Provost. Also, special thanks to the National University of Singapore, Dr. Dan Zhao, and Qian Yuhong.
Figure 2: Percentage yields of peroxide where the black line is the experimental catalyst and the red line is a platinum-based catalyst.
DISCUSSION The MOF-derived catalyst was reactive in an O2 atmosphere, but the product was hydrogen peroxide instead of water, which would be a more stable and environmentally-friendly byproduct of this alternative method of energy production. This indicates that the reaction that took place in the system only transferred 2 electrons to form H2O2 instead of the desired 4 electron reaction pathway to form H2O. Transferring fewer electrons also means that this catalyst is less efficient in catalyzing ORR than platinum-based catalysts, which means that improvements have to be made if it is to become a cost-effective, high-performing, environmentally-friendly catalyst that can be used to make fuel cells and metal-air batteries more commercially-viable alternative energy options.
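The abstract does not state how the peroxide yields in Figure 2 were obtained; if, as is common in ORR studies, they came from rotating ring-disk electrode (RRDE) measurements, the standard relations below would apply. This is only an illustrative sketch under that assumption, and the currents and collection efficiency are placeholders.

# Hypothetical RRDE analysis (an assumption -- the measurement technique is not
# specified in the abstract). Standard relations for the ORR electron-transfer
# number n and %H2O2 from disk current i_disk and ring current i_ring.
def orr_selectivity(i_disk, i_ring, collection_efficiency):
    """Return (n, %H2O2) from RRDE currents; both currents in the same units."""
    ring_corrected = i_ring / collection_efficiency
    n = 4.0 * i_disk / (i_disk + ring_corrected)            # electrons transferred
    h2o2_percent = 200.0 * ring_corrected / (i_disk + ring_corrected)
    return n, h2o2_percent

# Placeholder values, not data from this study:
n, h2o2 = orr_selectivity(i_disk=1.0e-4, i_ring=1.5e-5, collection_efficiency=0.37)
print(f"n = {n:.2f} electrons, H2O2 yield = {h2o2:.0f}%")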
In order to tailor the catalyst to only allow the 4 electron reaction pathway to take place, some adjustments have to be made in the design of the catalyst. As shown by Roche et al., manganese is a very inexpensive yet effective catalyst for ORR [2]. However, manganese only comprised about 3-8% by weight of our sample as shown through elemental analysis. Perhaps by increasing the percentage by weight of manganese in the sample, the reactivity and selectivity of the experimental catalyst would also be increased. This can be accomplished by modifying the amount of reactants
A SIMPLE, EFFICIENT AND TRANSFERABLE APPROACH FOR HIGH-YIELD SEPARATION OF NANOPARTICLES Andrew Loughner, Chris Ewing, and Götz Veser Department of Chemical & Petroleum Engineering University of Pittsburgh, PA, USA Email: apl34@pitt.edu INTRODUCTION Nanomaterials are used in a wide range of applications including pharmaceuticals [1], optics [2], coatings [3], and catalysis [4]. This has given rise to the desire to produce smaller, monodisperse nanoparticles (NPs) with tailored functions and increased reactivity [5]. Producing particles of such size, however, poses challenges in restricting growth of particles once they reach a desired size and maintaining stability of these particles for both short and long periods of time. To overcome this, ligand capping agents have been used, which surround NPs in their growth medium and inhibit agglomeration beyond a certain size. While effective in producing monodisperse NPs of a specific size, using capping agents introduces problems with process efficiency. Removing NPs (< 10 nm) surrounded by capping agents from solution – typically conducted via centrifugation with a membrane filter – is very time and energy consuming. The capping agents themselves also need to be removed (decomposed) at high temperatures in order to fully expose the surface of the encapsulated NPs, which often leads to sintering and loss of activity. Recent work by DiSalvo and coworkers demonstrated a novel approach to produce NPs without the use of capping agents entirely. Pt3Fe NPs were trapped in KCl during synthesis and were shown to resist agglomeration at temperatures up to 600 °C while encased in the salt [6]. However, their approach used a rather complex separation procedure, partly defying the purpose of developing a simple and widely applicable approach. Our work hence focused on developing a novel approach via “salt recrystallization”, which provides a much more general approach to NP recovery that is simpler and more efficient than traditional separation methods.
EXPERIMENTAL Silica NPs with controlled particle sizes (6 nm and 120 nm) were produced using a modified Stöber method. Briefly, ammonia and deionized (DI) water (with pH = 11.4) were mixed with tetraethyl-orthosilicate (TEOS) and ethanol and stirred for 3 hours at 60 °C. The resulting solution was centrifuged and rinsed with ethanol several times to separate the NPs. Similarly, size-controlled Pt NPs (3-5 nm) on silica supports were synthesized from a solution containing 1.25 mL 10 mM chloroplatinic acid, 2.5 mL of 1.39 mM polyvinylpyrrolidone (PVP, ~10,000 MW), and 1.25 mL of 0.1 M sodium borohydride at 0 °C. The solution was left to react for 30 minutes, centrifuged, rinsed several times with DI water, and then added to 240 mg of 120 nm silica support and stirred for 1 hour at room temperature. Transmission electron microscopy (TEM) images were taken on a JEOL JEM2100F with an accelerating voltage of 200 kV. RESULTS AND DISCUSSION Our salt recrystallization method begins where the conventional method would require a membrane centrifuge. First, we applied this method to the separation of 6 nm silica NPs. Once the synthesis solution finished reacting, it was saturated with ammonium chloride salt. To this, ethanol was added to reduce the solubility of NH4Cl and hence recrystallize the salt, resulting in encapsulation of the silica NPs. This resulting white solid, composed of NH4Cl and silica, was centrifuged conventionally (i.e. without the need for a membrane filter) in order to recover the solid. The obtained solid was then heated to 500 °C to dissociate the salt and burn off any residual TEOS, leaving behind 6 nm silica NPs. Applying this method led to identical size distributions between conventional and salt recrystallization recovery methods (Figure 1). However, the salt recrystallization method required
fewer steps to complete and resulted in a 43–59% reduction in the amount of time required to complete NP recovery.
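As an illustration of how the size distributions in Figure 1(c) can be compared, the sketch below computes summary statistics for two sets of TEM-measured particle diameters; the diameter lists are hypothetical stand-ins, not the measured data from this work.

# Illustrative comparison of NP size distributions from TEM measurements.
# Diameter lists below are hypothetical placeholders (units: nm).
from statistics import mean, stdev

conventional = [5.8, 6.1, 6.3, 5.9, 6.0, 6.2]   # the study measured N = 249 particles
salt_method = [6.0, 6.2, 5.9, 6.1, 5.8, 6.3]    # the study measured N = 167 particles

for label, diameters in [("membrane centrifugation", conventional),
                         ("salt recrystallization", salt_method)]:
    print(f"{label}: {mean(diameters):.1f} +/- {stdev(diameters):.1f} nm "
          f"(N = {len(diameters)})")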
Finally, the use of thermally stable salts for recovery of NPs from solution was investigated. KCl (778 °C decomposition temperature) was used to separate Pt NPs the same way as described above. The obtained solid, containing Pt NPs embedded in KCl, was then heated to 500 °C and compared to the same sample from before heating. Size distributions showed that the Pt NPs were identical in size between samples and did not agglomerate (Figure 3).
Figure 3. TEM images of 2.5 nm Pt embedded in KCl (a) before and (b) after calcination at 500 °C. NP diameters were (a) and (b) nm
Figure 1. TEM images of silica NPs separated using (a) membrane centrifugation and (b) our salt recrystallization method. (c) NP size distributions taken from TEM images. NP diameters for the conventional (N=249) and salt (N=167) methods were nm, respectively.
Figure 2. Representative TEM images of Pt/SiO2 synthesized using (a) the conventional method and (b) the salt recrystallization method. NP diameters were (a) and (b) nm.
To demonstrate the flexibility of this approach, the same method was applied to recovering Pt NPs and depositing them onto 120 nm silica. Again, after the solution of Pt NPs had finished reacting, it was saturated with ammonium bicarbonate. Next, silica dispersed in isopropanol was added in order to recrystallize the salt and encapsulate the NPs. The resulting solid was centrifuged conventionally and heated in a vacuum oven at 60 °C to dissociate the NH4HCO3, yielding Pt NPs deposited on silica. Again, the size distribution was identical to that for the conventional method (Figure 2).
CONCLUSIONS The ability to saturate a synthesis solution with a salt, independent of the salt choice or the type of NP in question, is a testament to the generality of our salt recrystallization method. It is much simpler and more efficient than conventional membrane centrifugation. It is further noteworthy that the salt that is used and decomposed in the recrystallization method could be recycled and reused, making the method more sustainable. As there are problems with scaling up current methods for producing small NPs with ligand capping agents, our method provides a novel way of possibly bringing NP production to an industrial scale. REFERENCES 1. Yaguee et al. ChE Journal 137, 2008. 2. Deak et al. Colloids and Surfaces 278, 10-16, 2006. 3. Wang et al. Materials Science and Engineering 395, 148-152, 2005. 4. Korach et al. European Polymer Journal 44, 889-903, 2008. 5. Bishop et al. Small 14, 1600-1630, 2009. 6. Chen et al. Journal of the American Chemical Society 134, 18453-18459, 2009. ACKNOWLEDGEMENTS Joint funding was provided by Dr. Götz Veser, the Swanson School of Engineering, and the Office of the Provost.
STRUCTURED BED REACTORS FOR CHEMICAL LOOPING PROCESSES Anna Williams, Amey More, and Götz Veser Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: aew36@pitt.edu INTRODUCTION According to the Environmental Protection Agency (EPA), the release of CO2 into the atmosphere causes environmental problems including: global temperature changes, the rising of sea levels, an increase in intensity of storms and heat waves, and harm to water supplies, agriculture and wildlife. Therefore, new technologies need to be developed and implemented in order to help decrease the amounts of CO2 released into the atmosphere. A promising new technology, Chemical Looping Combustion (CLC), allows for the sequestering of pure CO2 gas before it enters the atmosphere. CLC uses two separate reaction stages, oxidation and reduction, which are realized in two separate reactors: an oxidizer, or “air reactor”, and a reducer, or “fuel reactor”. The main components or species used during CLC are a metal oxide, fuel, and the oxygen source. In the fuel reactor, the fuel is oxidized in contact with the metal oxide, which is reduced in this process, and hence, needs to be reoxidized in the air reactor. The overall net reaction, therefore, yields conventional fuel combustion, now split into two separate reaction half steps. The metal oxide used during CLC is often in the form of nanoparticles, to enhance reactivity. However, nanoparticles have the tendency to sinter, or clump together, when heated to high temperatures, causing the metal oxide to become less reactive, which decreases the efficiency of CLC. To help alleviate and ultimately avoid the metal oxide from clumping together, a solid support is often utilized, which prevents the particles from sintering. In this project the solid support utilized was cerium oxide (CeO2). Beyond combustion, the CLC principle can be used for partial oxidation reactions (“CLPO”). CLPO can be employed to produce synthesis gas (syngas) from natural gas feeds. Syngas, a gaseous mixture of hydrogen (H2) and carbon monoxide (CO) has
many industrial uses that make it a desirable product. The aim of the present project was to use CLPO utilizing a “structured bed”, i.e. a packed bed reactor with several packed sections, separated by quartz wool, containing different components able to undergo oxidation and reduction. A simplified representation of the structured bed can be seen in Figure 1. The basic concept behind the structured bed is that the products from the first bed will react with the second bed to yield the desired product: syngas. By varying the gas flow rates or the type and amount of material in each ‘packing’, the structured bed can be tailored to control the final products produced in CLPO.
Figure 1. The structured bed.
METHODS Much of this project was conceptual and theoretical. Before running actual experiments in a structured bed, a conceptual analysis of different configurations was conducted. These analyses involved completing total mass and energy balances, as well as listing possible side reactions or unwanted reactions. If a concept proved promising following this analysis, the actual experimentation began. The nanoparticles required for this project included: Mn-CeO2, Cu-CeO2, Fe-CeO2 and Ni-CeO2. These four materials were synthesized via wet impregnation at a 40:60 wt% ratio (metal: supporting oxide, respectively). Following the synthesis, the nanoparticles were tested in a packed bed reactor and reaction products were
analyzed via mass spectrometry (MS). The MS data was then imported into Microsoft Excel for further analysis, including calculation of selectivity, yield, conversion, etc. (an illustrative calculation follows Table 1). RESULTS The conceptual analysis started with five potential structured bed configurations, which can be found in Table 1. Each of these five configurations was chosen based on unique characteristics for each bed, which ideally would yield syngas as the final product. An overall mass and energy balance was applied to each configuration to ensure viability.
Table 1: Potential structured bed configurations.
Configuration # | Bed 1 | Bed 2
1 | Fe2O3 | Ni
2 | CuO | Ni
3 | MnO | Ni
4 | NiO | Fe
5 | NiO | Ni
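As a minimal sketch of the Excel analysis mentioned in the Methods, the snippet below computes methane conversion, CO selectivity, and CO yield from molar flows inferred from the MS signals; the flow values are placeholders, and the exact definitions used in the study may differ.

# Illustrative conversion/selectivity/yield calculation from molar flows
# (mol/min) inferred from MS data. Values are hypothetical placeholders.
ch4_in = 1.00          # methane fed
ch4_out = 0.35         # unconverted methane
co_out = 0.50          # carbon monoxide produced
co2_out = 0.15         # carbon dioxide produced

conversion = (ch4_in - ch4_out) / ch4_in            # CH4 conversion
co_selectivity = co_out / (co_out + co2_out)        # carbon selectivity to CO
co_yield = conversion * co_selectivity              # yield of CO on CH4 fed

print(f"X_CH4 = {conversion:.2f}, S_CO = {co_selectivity:.2f}, Y_CO = {co_yield:.2f}")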
Initially the iron oxide followed by nickel structured bed (Configuration 1) seemed promising due to combining the selectivity of iron oxide with the reactivity of nickel to produce syngas. Another quality that was attractive about this configuration was the idea that carbon deposition on nickel would be limited, due to the methane reforming reaction, which can be found in Equation 1. 2CH4 + CO2 + H2O → 3CO + 5H2 Equation 1: Overall methane reforming reaction over a nickel catalyst.
The methane feed would react first with the iron oxide to produce carbon dioxide and steam, which would then react with unconverted methane over the (reduced) nickel catalyst to form syngas. However, experimental tests of an iron oxide (single) packed bed yielded a product distribution with insufficient CO2 and H2O to perform the reforming reaction over the nickel bed.
The ideas behind copper oxide or manganese oxide followed by a nickel bed (Configurations 2 and 3) were very similar to the Fe-Ni bed discussed above. Both copper oxide and manganese oxide spontaneously release their oxygen at sufficiently high temperatures, resulting in a possible gas phase oxidation of the fuel (rather than a gas-solid reaction). The structured bed would hence be heated to high temperatures, which would allow for the release of oxygen from the copper or manganese bed, which would oxidize the nickel bed. Next, methane would be flowed over the nickel oxide to produce carbon dioxide and steam, which would oxidize the copper or manganese bed, and hence, result in the production of syngas. In this way, this configuration would “shuttle” oxygen back and forth between the two beds. However, experimental tests showed that both copper and manganese showed insufficient reduction of carbon dioxide, and hence also were unable to release sufficient oxygen at the experimental temperatures. Finally, nickel oxide followed by iron or nickel beds (Configurations 4 and 5) both offer promising paths in producing syngas. One of the main concerns for each of these beds was the amount of time needed to oxidize the first bed, without affecting the iron or nickel second bed. Therefore, on-going experiments focus on the required timing to achieve efficient operation. DISCUSSION The project identified two potentially promising configurations for clean and efficient production of synthesis gas from methane. The next steps will involve further experimental evaluation of these configurations (4 and 5) in order to evaluate the efficiency of these structured beds and eventually demonstrate their operability. ACKNOWLEDGEMENTS Partial funding was provided by Dr. Götz Veser, the Swanson School of Engineering, and the Office of the Provost.
CO2 THICKENER DESIGN TO REDUCE VISCOUS FINGERING FOR ENHANCED OIL RECOVERY AND HYDRAULIC FRACTURING James L. Sullivan Benedum Hall of Engineering, Department of Chemical and Petroleum Engineering University of Pittsburgh, Pittsburgh, PA, USA Email: jls326@pitt.edu INTRODUCTION Research on efficient and environmentally safe energy supplying techniques is growing in tandem with the growing global demand for energy. One such technique is Enhanced Oil Recovery (EOR) in which CO2 floods are common. By improving the mobility ratio of CO2 through oil and applying conformance control techniques to CO2 floods, oil extraction rates could increase in EOR and hydraulic fracturing. Methods of thickening CO2 have been researched for over 40 years in order to discover a CO2 thickener that is an “affordable, safe, water-insoluble additive that can dissolve in CO2 at typical well head and reservoir conditions during CO2 EOR and increase the viscosity of CO2 to nearly the same viscosity as the oil” while using only a small weight percent according to Enick et al.1 There will be many opportunities in the future for the United States to be a leader in energy with CO2 floods in EOR. METHODS A Brookfield viscometer was used to measure the viscosity of liquid solutions at room temperature and atmospheric pressure. The Brookfield viscometer measured the viscosities of fluids by loading a small plate of sample against an impeller that measured resistance of flow i.e. viscosity. By comparing the sample to the pure solvent, we were able to make judgments on viscosity increases. The fundamental concept behind the viscometer was a drag force at the liquid surface, which opposed rotation, thereby exerting a drag against the spindle. This drag was calibrated to output a digital value of viscosity on its screen. Measurements were repeated multiple times (4-5 times) to get consistent results.
Daily experiments were run in the DBR cell by (1) increasing temperature and pressure to induce solubility for additives that are not readily soluble and (2) using falling-ball viscometry tests. The DBR cell can reach a maximum temperature of 160°C and a maximum pressure of 10,000 psi, so that reservoir conditions can be reached. A schematic of the DBR cell is shown below. A supercritical solvent acts as the injection fluid; solubility was measured in CO2 as well as NGLs like ethane, propane and butane. The piston-like displacement went against silicone oil, our overburden fluid.
Figure 1: Schematic of cylinder in DBR high-pressure cell.
DATA PROCESSING Our foremost data processing technique was converting the fall time of the ball into relative viscosity plots. The results were more qualitative, to get an idea of a thickening compound's effect on viscosity. We produced relative viscosity data from the falling-ball viscometer measurements by dividing the time taken for the Pyrex ball to fall through the thickened fluid by the time taken for it to fall through pure solvent.
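A minimal sketch of that calculation, assuming relative viscosity is taken as the ratio of fall times (thickened fluid over pure solvent); the times and concentrations below are placeholders, not measurements from this work.

# Illustrative relative-viscosity calculation from falling-ball measurements.
# Fall times (seconds) are hypothetical placeholders.
solvent_fall_time = 4.2                 # Pyrex ball falling through pure solvent
sample_fall_times = [4.5, 6.3, 9.8]     # same ball in fluid with increasing thickener loading
concentrations_wt = [0.05, 0.10, 0.20]  # thickener loading, wt%

for c, t in zip(concentrations_wt, sample_fall_times):
    relative_viscosity = t / solvent_fall_time   # mu_sample / mu_solvent in the Stokes regime
    print(f"{c:.2f} wt%: relative viscosity = {relative_viscosity:.2f}")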
producing plots from our experiments of industrially sold polymers in CO2, light alkanes, and heavier alkanes such as C5-C12 at various concentrations. For example, one can see the effect that this particular viscoelastic polymer had on C5-C12 at increasing wt% and ambient temperature and pressure in the Brookfield Viscometer (Figure 2).
Figure 2: Relative viscosity of a DRA polymer in a variety of alkanes as a function of polymer weight percent (0 to 0.2 wt%).
Other parts of our research using the DBR cell and Brookfield viscometer can be seen in our published articles for the drag-reducing agent and tri-butyl tin fluoride. DISCUSSION The mobility ratio of CO2 displacing oil is M = (mobility of CO2) / (mobility of oil), which simplifies to M = (viscosity of oil) / (viscosity of CO2). Thus increasing the viscosity of CO2 with a thickening agent would allow for a better sweep of the well in a Five-Spot Injection-Production Pattern by reducing the likelihood of viscous fingering. Below, I break down the diagram to the smaller square, showing only a corner from the injection well to the production well:
Figure 3: Shows corners of injection and production wells
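Before turning to Figure 4, a quick numeric illustration of the mobility ratio defined above; the viscosities are placeholder values, not measurements from this study.

# Illustrative mobility-ratio calculation, M = viscosity_oil / viscosity_CO2.
# Viscosities (cP) are hypothetical placeholders.
oil_viscosity = 5.0
co2_viscosity_neat = 0.06        # unthickened supercritical CO2
co2_viscosity_thickened = 1.0    # target after adding a thickener

print(f"M (neat CO2)      = {oil_viscosity / co2_viscosity_neat:.0f}")
print(f"M (thickened CO2) = {oil_viscosity / co2_viscosity_thickened:.0f}")
# A lower M means a more stable displacement front and less viscous fingering.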
Figure 4: Shows sweep efficiency from the injection well to the production well. The x-axis indicates the mobility ratio while the y-axis indicates pore volumes.
The upper left corner of Figure 4 is ideal; it has a low mobility ratio and a great sweep efficiency. The lower right corner's fingering is what thickening CO2 can avoid, as it leaves vast amounts of oil in place. The most successful thickening polymers are polyfluoroacrylates. One such sample that I worked with, designed by Dr. Enick at the University of Pittsburgh, is poly FAST, a fluoroacrylate-styrene co-polymer that induces solubility while enhancing viscous tendencies. REFERENCES 1. Enick, R., Olsen, D., Ammer, J., Schuller, W., Mobility and Conformance Control for CO2 EOR via Thickeners, Foams, and Gels – A Literature Review of 40 Years of Research and Pilot Tests, paper SPE 154122, presented at the SPE Improved Oil Recovery Symposium, April 14-18, 2012, Tulsa OK. ACKNOWLEDGEMENTS Dr. Robert Enick, the Swanson School of Engineering and the Office of the Provost jointly funded this research internship. I owe much gratitude, as well, to Dr. Enick's team: Jason Lee, Aman Dhuwe, Stephen Cummings, and GE Global Research for allowing me to be a member of their team. I learned valuable information and gained unique insight into the petroleum industry.
Testing of Biosensor Paper Diagnostic Strips Garrett Green, Konstantin Borisov, Robert Donahoe, Alexander Szul, Apurva Patil Mentor: Jason Lohmueller; Faculty advisors: Alexander Deiters, Hanna Salman, Sanjeev Shroff Dr. Hanna Salman’s Biology Lab, Room 214, Old Engineering Hall University of Pittsburgh, PA, USA Email: geg36@pitt.edu; Pitt iGEM 2015 Website: 2015.igem.org/Team:Pitt Introduction The primary objective of this research is to test a biosensor capable of detecting estradiol in a cell-free system; the biosensor will report the presence of the target analyte through selective transcription of a reporter protein—eGFP. The estradiol sensitive system was based on an estradiol-responsive T7 polymerase (ERT7) that should have a higher transcriptional activity in the presence of the small molecule estradiol. This research also addresses one major problem with previously reported cell-free extract systems—leaky expression of reporter proteins that can yield a false-positive result [1]. T3 and T7 polymerase decoy oligonucleotides were designed that mimic T3 and T7 RNA polymerase (RNAP) promoter regions. The RNAPs bind to the decoys which limits the amount of transcribed reporter protein, potentially reducing noise. Pardee et al. found that cell-free expression systems can be freeze dried onto paper and stored for extended periods of time while maintaining their activity [1]. By combining Pardee’s advancement with the estradiol-sensitive mechanism, this project aims to create unique lowcost rapid-testing paper diagnostic strips. Methods The study used a modified S30 protocol [2] to create cell extracts of the estradiol-detection system. Estradiol sensitive T7 polymerase and wild-type T7 polymerase plasmids were provided by Cheryl Telmer at Carnegie Mellon University (CMU). This plasmid was transformed into NiCo21 (DE3) cells then lysed by sonication (20% amplitude, 10 cycles of 10s sonication followed by 30s rest period) at Cheryl Telmer’s CMU laboratory. The lysed products were then purified and dialyzed according to the modified S30 protocol [2]. The lysates were tested with varying concentrations of estradiol (1nM-100uM with 10x intervals) and with 6ng/ul of pT7-GFP. Kinetic fluorescence data was gathered over 4 hours with a Tecan M200 with excitation and emission wavelengths of 395nm and 509nm, respectively. Decoy binding sites for polymerases were designed with ApE software and purchased from IDT. Hairpin structured RNAs were designed to mimic T3 and T7 RNAP binding sites. This RNA was tested at varying concentrations (0nM, 4-4000nM with 10x intervals and then again with 0-1.2uM with .2uM intervals) with a positive control lysate (made with NiCo21 (DE3) cells with T7 polymerase induced by IPTG) and 6ng/ul pT7-GFP. Kinetic fluorescence data was gathered over 3 hours with a Tecan M200 with excitation and emission wavelengths of 395nm and 509nm, respectively. Paper based cell-free system behaviors were tested. .25 cm2 (.5cm x .5 cm) pieces of chromatography paper were cut, and then NEB PURExpress Protein Synthesis formula (a commercially available pT7 cell-free extract) plus 10 ng/ul of pT7-GFP was aliquoted onto the paper with 2ul per piece of paper. These papers were immediately
transported to a -80°C freezer in order to minimize transcription. After freezing (~3 hours), these papers were freeze dried in a lyophilizer. In order to test the effectiveness of the technique of storing cell-free systems on paper, the GFP translating papers were tested with and without rehydration. Only one sample was rehydrated with 2ul of water to test for pre-rehydration translation of GFP. Both samples were incubated at 37°C for an hour. The results of the freeze-drying methods were examined by using a ChemiDoc to take pictures of the samples with the emission and excitation wavelengths of GFP. Two methods of transferring GFP from one paper to another were tested. GFP was placed on one piece of .25 cm2 chromatography paper (the source paper) by applying 2ul of a GFP-rich cell lysate (diluted with water at ratios of 1:5, 1:10, 1:20, 1:200) to it, and the other piece of paper (the destination paper) was blank chromatography paper. The first method (equilibrium method) of GFP transfer consisted of hydrating both pieces of paper with 5ul of water and putting them together for 10 minutes. The second method (forced-transfer method) consisted of placing the source paper on top of the destination paper and adding 10ul of water to the back of the source paper. The results of the transfer methods were examined by using a ChemiDoc to take pictures of the samples with the emission and excitation wavelengths of GFP. Data Processing Quantitative data was obtained using a Tecan M200 set at 100x gain. All useful data from the Tecan was obtained using a fluorescence intensity measurement with the primary excitation and emission wavelengths of GFP (395nm and 509nm respectively). Data was recorded under kinetic cycles with intervals of 5 minutes in order to show the evolution of GFP expression over time. Qualitative data for preliminary testing of paper-based cell free systems was obtained using a ChemiDoc imager. Pictures were taken using the emission and excitation wavelengths of GFP. Results Preliminary testing with ERT7 and T7 lysates failed because no signal was received from T7 lysate, the positive control (Fig. 1). Testing of the DNA decoys (Fig. 2) showed that there is a suppression of polymerase activity when concentrations of the decoy hairpins greater than .2uM were used. The images of GFP translation squares (Fig. 3) show that both the rehydrated and non-rehydrated samples produced comparable GFP expression. The images of GFP transfer (Fig. 4) show that the forced-transfer method is superior to the equilibrium method (source papers are at the top of each set and the destination papers are at the bottom). The equilibrium method left roughly equivalent amounts of GFP on both the source paper and the destination paper. However, the forced-
transfer method produced results which show that nearly all of the GFP was transferred from the source paper to the destination paper.
incapable of altering the activity of the polymerase, then the rate constant of the polymerase transcription reaction is constant.
Rate_decoy = k_decoy*[T7 RNAP]*[pT7-GFP]
and:
Rate_no decoy = k_no decoy*[T7 RNAP]*[pT7-GFP]
Figure 2: Induced N21 Extract with 150ng pT7-GFP
Figure 1: Positive Control for ERT7 (T7 Extract) with 150ng pT7 GFP
(Figures 1 and 2 plot GFP fluorescence (RFU) versus time (min). Figure 1 series: no pT7-GFP control and 0, 1, 10, 100, 1000, and 10000 nM estradiol. Figure 2 series: no pT7-GFP control and 0, 0.20, 0.40, 0.61, 0.79, 1.00, and 1.22 uM decoy.)
Rate_decoy < Rate_no decoy
Then [T7 RNAP]_decoy < [T7 RNAP]_no decoy since the concentration of pT7-GFP plasmid was constant. This suggests that the effective concentration of T7 RNAP is reduced by the decoys. In other words, the T7 RNAP is recruited by the binding sites on the hairpin decoys and not released. However, this only accounts for a decrease in reaction rate and does not explain why the samples with lower concentrations of decoys had larger GFP fluorescence at the plateaus. It can be seen in (Fig. 2) that the fluorescence curves each tend to reach a plateau at Time=100 minutes. Since this trend is consistent throughout each sample, it can be inferred that they all have a similar time-sensitive component. This can potentially be explained by the heat-sensitivity of enzymes involved in the cell-free reactions such as creatine phosphokinase which is a necessary additive in the reactions. The reduction in enzyme activity would essentially bring transcription and other cellular functions to a halt—yielding a plateau in fluorescence. The comparable GFP expression between rehydrated and non-rehydrated samples seen in (Fig. 3) suggests that cell-free systems should be frozen using a faster method to prevent pre-hydration translation in the systems. Flash freezing with liquid nitrogen could potentially halt the translation process and also prevent the denaturation of proteins in the systems.
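A minimal sketch of how the transcription rates discussed above could be estimated from the kinetic fluorescence data, taking the slope over the roughly linear 20-50 minute window; the readings below are placeholder values, not the recorded data.

# Illustrative estimation of an initial transcription rate as the slope of GFP
# fluorescence versus time over the ~20-50 min window. Values are placeholders.
times_min = [20, 25, 30, 35, 40, 45, 50]
fluorescence_rfu = [400, 700, 980, 1250, 1500, 1730, 1940]   # hypothetical readings

n = len(times_min)
mean_t = sum(times_min) / n
mean_f = sum(fluorescence_rfu) / n
# Ordinary least-squares slope: sum((t - mean_t)(f - mean_f)) / sum((t - mean_t)^2)
num = sum((t - mean_t) * (f - mean_f) for t, f in zip(times_min, fluorescence_rfu))
den = sum((t - mean_t) ** 2 for t in times_min)
slope_rfu_per_min = num / den
print(f"Estimated rate ~ {slope_rfu_per_min:.1f} RFU/min")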
Figure 3. Left: Rehydrated sample Right: Non-rehydrated sample
Figure 4. Top two sets: Equilibrium method. Bottom two sets: Forced-transfer method. From left to right 1:5, 1:10, 1:20, 1:200 dilutions of GFP-rich cell lysate.
Discussion The lack of GFP expression in (Fig. 1) suggests that there was either a complication in the creation of T7 lysate, that an additive component of the reaction mix was not active, or that the DH5a cell line is not suitable for cell free reactions. However, a lysate made from NiCo21 (DE3) cells with T7 polymerase induced by IPTG yielded a highly active cell free reaction. This was the same lysate used for testing the activity of RNA hairpin decoys. The high activity of the N21 lysate verifies that a correct protocol for creation of cell-free lysate was achieved and suggests that DH5a cells are not suitable for cell-free systems. The fluorescence data plotted in (Fig. 2) illustrates that the lysates with .2uM of decoy added produced fluorescence levels similar to the lysates with no decoy. This suggests that there is a minimum threshold concentration of decoys for the effective diversion of polymerases from transcription. It can also be seen in (Fig. 2) that from Time=20 minutes to Time=50 minutes, the slopes of the fluorescence curves decrease with increasing concentration of decoy. If it is assumed that the hairpin decoys are
The effectiveness of the forced-transfer method of protein transfer seen in (Fig. 4) seems obvious because as the added water flows through the porous paper, it will tend to carry any polar components with it. This yielded a transfer of the majority of the GFP from the source paper to the destination paper. This suggests that multiple cell-free paper-based systems may be coordinated to produce more complex reactions and tests. References 1. Pardee, K., et al. “Paper-Based Synthetic Gene Networks.” Cell. 2014. Vol 159. 940-954. 2. Kigawa, T., et al. “Preparation of E. coli cell extract for highly productive cell-free protein expression.” Journal of Structural and Functional Genomics. 2004. Vol 5. 63-68.
Acknowledgements Biosynthesis, experiments, and consumption of disposables were performed under the guidance and sponsorship of Dr. Hanna Salman. Significant reagent contributions to the Pitt iGEM program were received from Thermo Fisher Scientific Incorporated. Reagents and general project advice were provided by Dr. Alexander Deiters. Leadership, consulting, reagents, and laboratory training were provided by Dr. Jason Lohmueller. This research was funded jointly by the Swanson School of Engineering, the Office of the Provost of the University of Pittsburgh, the Pittsburgh Tech Council, and Alethia Weiland. Estradiol, estradiol-sensitive T7 RNAP plasmids, wild-type T7 RNAP plasmids, and use of cell-lysing equipment were provided by Carnegie Mellon University.
APPLYING CELL FREE, PAPER-BASED SENSORS FOR BIOLOGICAL TESTING AND PROTEIN TRANSFER Alexander J. Szul, Robert J. Donahoe, Konstantin A. Borisov, Garrett E. Green and Apurva E. Patil The Hanna Salman Physics Lab, Department of Physics University of Pittsburgh, PA, USA Email: AJS239@pitt.edu, Web: http://www.physicsandastronomy.pitt.edu/ INTRODUCTION Freeze-dried, paper-based, cell free extracts have a multitude of applications including rapid disease detection, water purity analysis and small molecule examination. One facet of this study examined the in vitro translation of green fluorescent proteins (GFP) and their fluid dynamics on and between sections of filter paper. Studying this will help determine the best possible implementation of an in vitro signal amplification circuit. Using a series of interdependent plasmids, this amplification circuit acts as a high pass filter in limiting unwanted GFP expression. The circuit can be broken into different sections and freeze dried onto separate pieces of paper for precisely controlled activation. Yetisen et al. studied fluid dynamics between multiple pieces of paper as miniature, point of care diagnostic and filtering devices [1]. They used multiple layers of filter paper as their substrate and varying color dyes as their analyte. After constructing different multilayered systems they determined that a greater number of layers exponentially increased the difficulty of containing fluids, however multi-layered paper sensors still are applicable. They also note that a large amount of separation increases the likelihood of cross contamination of samples. METHODS The initial procedure consisted of determining the dispersion pattern of dye on filter paper. Pure water with blue dye was added in varying amounts to 2mm square pieces of paper. Samples incrementing from 1,2,3,5, to 10 uL in volume were added to the paper and given 15 minutes to dry. After comparing the results it was observed that all samples, excluding the 1 uL square, gave relatively similar results. The failed test simply did not have enough dye for a solid conclusion. In all other cases, dye tended to migrate towards the circumference of the drop.
The next step was to determine the observable levels of GFP visible to a Bio-Rad ChemiDoc. Typically used for gel or western blot imaging, the ChemiDoc gave high definition fluorescence read outs for paper samples of GFP. Pure GFP was obtained from Alexander Deiters' chemistry lab and diluted at ratios of 1:10, 1:20, 1:50, 1:100, and 1:1000 with double deionized water. After imaging, all samples gave positive results as compared to controls and mimicked the circumferential localization seen in the blue dye test. Research moved on to how well GFP translated on paper before and after freeze drying. 1.8 uL samples of constitutively producing GFP extract were added to 2mm square pieces of filter paper. Half were then freeze dried with liquid nitrogen while the other half were placed in Eppendorf tubes to incubate. After the hour incubation, the freeze dried samples were hydrated and left to incubate. Once imaged, the rehydrated samples had similar GFP fluorescence to the freeze dried samples. Two methods of hydration were compared to study how fluid dynamics affected protein transfer between two sheets of paper. The first method hydrated a blank piece of paper with water and a second piece of paper with GFP separately. The second piece was placed on top of the first, pressed down with tweezers, then left for 30 minutes. The other method hydrated a blank piece of paper with water, but placed a dry, GFP-containing piece on top and water was added from above. This setup was pushed together and sat for 30 minutes. DATA PROCESSING All data was obtained through unaided visual confirmation or through the use of a Bio-Rad ChemiDoc. As this data is completely qualitative, analytical charts of results are unnecessary.
RESULTS Results obtained from the protein transfer test indicate a deviation in the GFP concentrations that is dependent upon the method by which the paper is hydrated. If the two pieces are hydrated independently and then pushed together, an equilibrium of protein concentration is reached. Regardless of which piece has more GFP present, both will contain a relatively similar amount given enough time. If one piece is hydrated, then the other put on top and hydrated, almost all of the GFP will move to the bottom piece. These results held for varying concentrations of GFP. In regard to the varying ratios of GFP, having all the samples pass indicates a low threshold of detection; the GFP dispersion mimicked that of the dye. DISCUSSION The results obtained from the protein transfer test were fairly surprising but not unexpected. In Figures 1 and 2 the top piece is on the top row while the bottom is the second row. The bright white and light grey portions of the paper correspond to high GFP presence while the dark grey and black portions of paper lack significant, if any, GFP. The left most piece in Figure 2 had the highest concentration of any set and therefore has excess GFP that was not transferred. The differences in these hydration methods are important depending upon which type of circuit is implemented within this cell free system.
Figure 1. Hydration of separate pieces
Figure 2. Hydration of pieces together
The circuit being applied by our group uses pT7 to drive pT3 which then drives additional pT3 and GFP. A setup in which pT7 is freeze dried on one piece of paper and pT3 with pT3-driven GFP on another would fully encompass the usefulness of the second hydration method. This would push all the pT7 onto one piece and activate the second component in the circuit. Further testing in solution has to be done to apply this amplification circuit, but the results will be useful when finished. REFERENCES 1. Yetisen et al. Lab Chip 13, 2210-2250, 2013. ACKNOWLEDGEMENTS This research was made possible by the University of Pittsburgh's Provost's office, Swanson School of Engineering, and the Bioengineering, Biological Sciences, Chemical & Petroleum Engineering, Chemistry, and Electrical & Computer Engineering departments. We appreciate Thermo Fisher and the Pittsburgh Life Sciences Greenhouse's fiscal contributions. We would especially like to thank Jason Lohmueller for his support and guidance, Alexander Deiters for letting us use his facilities and loaning us materials, Hanna Salman for letting us use his laboratory, and Sanjeev Shroff for making this research possible.
Φ6 DISINFECTION WITH HYPOCHLORITE IN DEIONIZED WATER Abraham Cullom and Kyle Bibby Department of Civil and Environmental Engineering University of Pittsburgh, PA, USA Email: acc69@pitt.edu For the experimental medium, demand-free water was autoclaved and then allowed to cool to room temperature. The pH of this water was approximately 6.3.
BACKGROUND Water treatment research has historically focused on the disinfection of non-enveloped viruses such as noroviruses, enteroviruses, and rotaviruses. Recently, notable public health concerns have been caused by enveloped viruses such as severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS), and Ebolavirus. The 2014 outbreak of Ebolavirus in Western Africa has called into question assumptions regarding the appropriate disposal methods and disinfection kinetics of enveloped viruses. This may be of particular importance for Ebolavirus, as a single treatment bed may create 300 liters of liquid waste per day (1). In order to study the environmental persistence and treatment response of Ebolavirus, surrogates must be developed that do not require Biological Safety Level 4 access and the associated limitations and costs (2). Bacteriophage Phi6, a virus of Pseudomonas syringae pv. phaseolicola, has been proposed as one possible research surrogate because it is also enveloped and similar to Ebolavirus in genome size (2).
Quenching Experiments The ratio of free chlorine to sodium thiosulfate and the effectiveness of new sodium thiosulfate solutions were verified to ensure total quenching of chlorine at sampling time. Below are the results of two such tests with free chlorine concentrations similar to those of the disinfection experiments performed towards the end of this research. Disinfection Experiments Ten μL of 10^10 pfu/mL phage stock were thawed and added to 990 μL of PBS and then vortexed to mix. Sampling test tubes for each time point were filled with 9 mL of PBS and sodium thiosulfate at ten times the concentration of the free chlorine desired and then vortexed. The 10^8 pfu/mL solution was added to 40 mL of deionized (DI) water in an autoclaved flask and then vortexed for a starting concentration of 10^6 pfu/mL. 20 mL of this was pipetted into the experimental test tube, and 20 mL into the control test tube. Immediately following, 1 mL samples from each were pipetted into sampling tubes and vortexed briefly for a time zero sample. The desired amount of hypochlorite solution was added to the experimental test tube. 1 mL samples were taken from the experimental and control test tubes at the desired sampling times and vortexed to ensure complete quenching.
MATERIALS AND METHODS The bacteriophage Phi6 and host P. syringae strain (HB10Y) were acquired from Dr. Leonard Mindich. The concentration of phage in stock solutions was determined in previous persistence studies. Bacteria were cultivated for plating overnight (approximately 20 hours) by adding frozen concentrate to LB media.
Table 1: Results of quenching tests to verify the concentration of sodium thiosulfate used.
Test | Sodium Thiosulfate Concentration | Initial Free Chlorine Concentration | Free Chlorine Concentration After Sampling
Test 1 | 1 mg/L | 0.15 mg/L | 0.00 mg/L
Test 2 | 1 mg/L | 0.05 mg/L | 0.00 mg/L
The appropriate dilutions of each sample were completed by adding 0.1 mL of the original sample to 0.9 mL of DMEM in a microcentrifuge tube, vortexing briefly, and continuing in this manner until all desired serial dilutions were made. These were then plated onto LB agar plates along with the cultivated P. syringae host and counted on plates with 10 to 100 plaques. For each experiment, a control plate was used to check for contamination by replacing the sample with the DMEM used to make dilutions.
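For reference, a minimal sketch of how a titer (pfu/mL) can be back-calculated from a countable plate in such a dilution series; the plate count and dilution are placeholders, not data from this study.

# Illustrative titer calculation from a plaque assay (placeholder values).
plaques_counted = 42
dilution_factor = 1e-5      # each 0.1 mL + 0.9 mL step is a 10-fold dilution
volume_plated_ml = 0.1

titer_pfu_per_ml = plaques_counted / (dilution_factor * volume_plated_ml)
print(f"Titer ~ {titer_pfu_per_ml:.2e} pfu/mL")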
This study determined a minimum CT value for 99.9% Phi6 inactivation using hypochlorite (0.025 mg-min/L). This study validates the efficacy of hypochlorite disinfection for Phi6 and suggests rapid inactivation of enveloped viruses by disinfectant in deionized water.
However, samples from the control were not necessarily taken at each time point due to time constraints. The primary test tubes were hand vortexed before sampling if more than 3 minutes passed between sampling times.
RESULTS AND DISCUSSION Initially, a higher concentration of virus was used in experiments to mirror the aforementioned persistence studies; however, the media in which the virus was suspended eliminated too much of the free chlorine to maintain a stable dose concentration and thus obtain a reliable concentration-time (CT) curve. At lower concentrations, the phage appeared to be extremely labile, exhibiting a greater than 99.9% reduction at even a 0.025 mg-min/L dose. These results vary from a previous study of the disinfection of Phi6, in which the die off was not as quick (3), but the inconsistencies might be partially explained by the different experimental conditions. This study used water at a lower pH and at a higher temperature, both of which would be expected to increase the rate of disinfection. The rapid die off below the detectable limit did not appear to result from experimental conditions, as control studies showed a consistently smaller (less than one log10), inactivation than that observed due to chlorine dose, as shown in Figure 1. Additionally, quenching experiments showed that the free chlorine to sodium hypochlorite ratio used and the sampling method resulted in no detectable chlorine in the sample. Therefore, it seems unlikely that inactivation continued after sampling as a result of residual chlorine.
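To make the CT and log-reduction quantities concrete, the sketch below shows the standard definitions (CT as free chlorine concentration multiplied by contact time; log10 inactivation from initial and surviving titers); the numbers are placeholders, not data from these experiments.

# Illustrative CT and log-inactivation calculation. Values are placeholders.
import math

free_chlorine_mg_per_l = 0.1    # residual free chlorine during the contact period
contact_time_min = 1.0          # exposure time before quenching
ct = free_chlorine_mg_per_l * contact_time_min   # mg-min/L

n0 = 1.0e6      # pfu/mL at time zero
nt = 5.0e2      # pfu/mL after exposure
log_inactivation = math.log10(n0 / nt)

print(f"CT = {ct:.3f} mg-min/L, log10 inactivation = {log_inactivation:.1f}")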
Figure 2: Log10 survival of Phi6 in DI water (control test); pfu/mL versus time (minutes).
REFERENCES 1. Sorensen, R. Ebola in Liberia: Keeping communities safe from contaminated waste. World Health Organization. 2015. 2. Bibby, Kyle; Casson, Leonard W.; Stachler, Elyse; Haas, Charles N. Ebola Virus Persistence in the Environment: State of the Knowledge and Research Needs. Environmental Science and Technology Letters. 2015. 3. Adcock, Noreen J.; Rice, Eugene W.; Sivaganesan, Mano; Brown, Justin D.; Stallknecht, David E.; Swayne, David E. The use of bacteriophages of the family Cystoviridae as surrogates for H5N1 highly pathogenic avian influenza viruses in persistence and inactivation studies. Journal of Environmental Science and Health. 2015. ACKNOWLEDGEMENTS The research reported above was completed in the Environmental Engineering Laboratory and supported by the Swanson School of Engineering and Office of the Provost.
Cerro Patacón Water System and Mocambo Feasibility Analysis Danielle Broderick Swanson School of Engineering, Department of Civil Engineering University of Pittsburgh, PA, USA Email: dmb149@pitt.edu INTRODUCTION Thanks to the tireless efforts of the University of Pittsburgh's Human Engineering and Design club, in combination with senior design projects across multiple engineering disciplines such as Mechanical, Industrial, and Civil, over 4,000 underprivileged citizens living in the Panamanian villages of Kuna Nega, La Paz, and San Francisco (located in the area of Cerro Patacón) have been given access to clean water with roughly 80,000 gallons of water tank storage in their communities. Over the course of four years, these students have constructed a water pipe system totaling approximately 3,000 meters of pipe. These projects have also allowed over 100 young engineering students the opportunity to travel to a foreign country and gain real-world engineering experience that directly impacts people's quality of life. REQUEST FOR AS BUILTS Unfortunately, this still leaves many villagers, in the aforementioned communities as well as others, without the luxury of clean water. The local government has recognized the incredible work of University of Pittsburgh students and would like to support their work on future projects. However, in order to obtain this support, the local engineering and water supply company, IDAAN, requires as-built designs and an inventory of the work already completed. Prior to this summer, no such files existed or were kept on file for future use in any sort of organized manner. This required a new trip to Panama in order to become familiar with and record the existing water systems in Kuna Nega and La Paz. I spent many weeks leading up to our departure exploring the AutoCAD software, meeting with
experienced professors, and organizing all of the previous documents created regarding this project. TRIP DETAILS We arrived in Panama City on Wednesday, June 3, 2015 and began our work in the villages the following day. The International Maritime University of Panama (UMIP) volunteered numerous cadets and their facilities to accommodate our work. Since the only helpful resource we had was a paper map of the Kuna Nega community, the first day was spent walking through the entire existing water system – making general markings throughout the process. Subsequent days were spent pacing out exact distances of the varying diameters of pipe as well as accounting for all valves and pumps on our maps. Topographic data was also collected for use in future projects and for additional accuracy of our records. CAD files were created for Kuna Nega and La Paz that detailed the lengths and locations of pipe, ranging from 1 to 3 inches, and different valves (ball and check). From these files, we were able to sum up all of the different materials in order to create an inventory of all the material used. After the designs were completed, we began planning our presentation for IDAAN. The final posters (see Figure 1) included a map of the communities, visuals of the pipe system, close-ups of key areas in the system, inventory totals, a legend for the different materials, and logos of the numerous organizations that have helped make these projects a reality. On Friday, June 12, 2015, final deliverables including the CAD and as-built files, project summaries, and detailed photos were presented at a meeting to representatives from IDAAN as well as citizens of the community.
and feasible solution would be to put pipe as far up the mountain as possible, without exceeding the height of the existing La Paz tank—approximately 100 meters. Though it would take considerable time for a new tank to fill, it could be done using only gravitational force in place of costly alternatives.
Figure 2: Mocambo distance vs. elevation
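For a rough sense of the gravity-fed option described above, the short sketch below estimates the static head available from an elevation difference; the elevation value is illustrative only, not survey data from this trip.

# Illustrative static-head estimate for a gravity-fed line (placeholder elevation).
rho = 1000.0               # kg/m^3, water density
g = 9.81                   # m/s^2
elevation_drop_m = 60.0    # height of supply tank above the delivery point

pressure_pa = rho * g * elevation_drop_m
pressure_psi = pressure_pa / 6894.76
print(f"Static head ~ {pressure_pa/1000:.0f} kPa ({pressure_psi:.0f} psi)")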
Figure 1: Kuna Nega (top) and La Paz (bottom) as-built posters FUTURE SYSTEM FEASIBILITY Also present at this meeting were leaders of a nearby village, Mocambo. After presenting our current successes, we began discussing expanding the existing water network to this community. During our work in Kuna Nega and La Paz, we also collected topographic data of the new area in question. It was now my next task to bring this data back to the University of Pittsburgh and discuss the feasibility and required materials to provide this village with clean water. As can be seen in Figure 2, we used the topographic data to plot distances from the existing water tanks against the elevation of points between the two communities. There is a significant difference between the heights of the existing tank and the location of the Mocambo population (circled in red). Getting water to the higher parts of this community would require a substantial pump, which would quickly escalate the cost of this project. The best low-cost
CONCLUSION
The Pitt HEAD projects have been extremely beneficial not only for the villagers these projects directly affect, but for the students involved in them as well. One of the main goals of my work with Dr. Budny over the course of this summer was to continue the engineering senior design projects that have positively impacted the lives of so many Panamanian villagers living in extreme poverty. The work we have recently done has helped foster the university's relationship with the local government as well as collect data to use as the basis for future projects.
ACKNOWLEDGEMENTS
I would first like to thank the Swanson School of Engineering and the Office of the Provost. Without such generous financial support, I never would have been able to travel to Panama and complete my projects. I would also like to thank Professor Budny, who is at the center of these amazing service projects. Without his passion for education and for those living in poverty-stricken communities, these projects would not exist and thrive as they do.
INTERACTION BETWEEN HYDRAULIC FRACTURES AND FULLY CEMENTED NATURAL FRACTURES OF VARYING STRENGTH Garrett S. Swarm, Wei Fu, Andrew P. Bunger Hydraulic Fracturing Laboratory, Department of Civil and Environmental Engineering University of Pittsburgh, PA Email: gss18@pitt.edu
INTRODUCTION
Hydraulic fracturing is a popular process for extracting oil and natural gas from unconventional reservoirs. Continuing efforts are being made to better understand and predict the path of a fracture during reservoir stimulation. One important aspect to consider involves the interaction between a growing hydraulic fracture (HF) and pre-existing natural fractures (NFs). While a number of experimental and modeling studies have addressed the role of friction on uncemented NFs [1,2,3], less is known about the role of the cement. The hypothesis is that strong cement will promote direct crossing of NFs while weak cement will lead to the HF being diverted to grow along the NF. Here we present a laboratory investigation seeking an experimental threshold between behavior associated with weak and strong cement.
METHODS
The experiment was conducted on mortar specimens which included two glued interfaces to simulate naturally occurring fractures. The glued interfaces were fully bonded in order to mimic a natural fracture that is completely cemented with a material of different strength. The specimens were constructed with three mortar blocks that had identical dimensions (3''x3''x2''). The first part of the experimental procedure consisted of selecting adhesives to use for the interfaces. Four different types were tested with a DYNA Z16 Pull-off Tester in order to categorize their strength in relation to the strength of the mortar samples. Two concrete blocks, with a 2"x2" cross sectional area, were bonded together using a specified adhesive. The adhesive cured for 24 hours to reach maximum strength, after which the samples were loaded in tension until failure. Figure 1 depicts the machine and experimental setup.
Figure 1: Hydraulic Fracture Experimental Setup [4]
The mortar samples were assembled by bonding the interfaces in the same manner as for the pull-off tests, while a 3/8" diameter wellbore was installed in a hole drilled through the central mortar block. Confining stresses on the mortar samples were applied using a tri-axial loading frame. Stresses in the vertical and horizontal directions were 1.2 MPa and 0.8 MPa, respectively. A syringe pump pumped a glycerin-food dye mixture at a constant flowrate of 6 mL/min into the test samples. As a result, the pressure inside the wellbore increased until the fluid induced a fracture. Once the fracture reached the interfaces, the fluid either crossed directly through or debonded the cemented region. After completion of the test, the specimen was disassembled for direct observation. The HF will reach the full height of the specimen as it progresses towards the interface. This makes it possible to determine the direction of the HF as well as its stopping point by observing the top and bottom surfaces. The glycerin-food dye mixture stains the mortar, and manual separation along the surface HF path and the interfaces exposes the internal geometry.
Figure 2: Elmer’s All Purpose Stronger Formula. a) Surface view of HF path, b) Internal HF path. Elmer’s School Glue. c) Surface view of HF path, d) Internal HF path and debonded interface
RESULTS
Sikadur 32 and Elmer's All Purpose Stronger Formula adhesives both resulted in crossing. The HF directly crossing the interface for the Elmer's All Purpose Stronger Formula can be observed in Figure 2a and b. Figure 2c and d illustrate the debonding which occurred during the Elmer's School Glue test. The results from the adhesive tensile strength tests are listed in Table 1. A series of tests were conducted with each adhesive type to verify the results. Figure 3 categorizes the results into two outcomes, crossing versus non-crossing.

Table 1: Adhesive Tensile Strength
Adhesive type | Tensile Strength (MPa)
Sikadur 32 | >3.3
Elmer's All Purpose Stronger Formula | 1.2
Elmer's School Glue | 0.82
Weldwood Carpenter's Wood Glue | 0.76
Quikrete Concrete Acrylic Fortifier | 0.31
Tensile strength of block | 1.5

Figure 3: Interaction of the HF at the interface: crossing and non-crossing test outcomes plotted against the tensile strength of the bonding material (MPa), with the interaction transition indicated.

DISCUSSION
The results demonstrate that HFs will not cross a NF region if the tensile strength of the cement material is less than approximately 55 percent of the block strength. On the other hand, cement material with a tensile strength of at least 80 percent of the block strength does promote HF crossing. Just as various rock materials possess different properties, various types of adhesives possess different properties. For example, although Elmer's School Glue and Weldwood Carpenter's Wood Glue have very similar tensile strengths, the wood glue is more ductile. Arriving at the same outcome for the two adhesives provides additional justification that the ratio of adhesive to block tensile strength is the dominant feature influencing HF crossing versus non-crossing. Testing additional adhesives with tensile strengths between 0.82 and 1.2 MPa will provide a more exact boundary separating the two categories.
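As a rough sketch of how the reported thresholds could be applied, the snippet below classifies each adhesive by its strength ratio to the block, using the approximately 55 percent (non-crossing) and 80 percent (crossing) limits stated above and the Table 1 values. Treating the Sikadur 32 lower bound (>3.3 MPa) as 3.3 MPa is an assumption made only for this illustration.

```python
BLOCK_STRENGTH_MPA = 1.5  # mortar block tensile strength (Table 1)

# Adhesive tensile strengths from Table 1 (Sikadur 32 reported only as >3.3 MPa).
adhesives = {
    "Sikadur 32": 3.3,
    "Elmer's All Purpose Stronger Formula": 1.2,
    "Elmer's School Glue": 0.82,
    "Weldwood Carpenter's Wood Glue": 0.76,
    "Quikrete Concrete Acrylic Fortifier": 0.31,
}

def predict_interaction(adhesive_strength, block_strength):
    """Classify HF behavior at a cemented interface by the strength ratio."""
    ratio = adhesive_strength / block_strength
    if ratio >= 0.80:
        return "crossing"
    if ratio <= 0.55:
        return "non-crossing"
    return "transition (untested range)"

if __name__ == "__main__":
    for name, strength in adhesives.items():
        ratio = strength / BLOCK_STRENGTH_MPA
        print(f"{name}: ratio {ratio:.2f} -> {predict_interaction(strength, BLOCK_STRENGTH_MPA)}")
```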
1. Blanton, T. L. 1982. An experimental study of interaction between hydraulically induced and pre-existing fractures. SPE-10847-MS. Presented at the SPE/DOE Unconventional Gas Recovery Symposium, Pittsburgh, PA, 16-18 May.
2. Renshaw, C.E., and D.D. Pollard. 1995. An experimentally verified criterion for propagation across unbounded frictional interfaces in brittle, linear elastic materials. Int. J. Rock Mech. Min. Sci. & Geomech. Abstr. 32(3): 237-249.
3. Gu, H., X. Weng, J. Lund, M. Mack, U. Ganguly and R. Suarez-Rivera. 2011. Hydraulic Fracture Crossing Natural Fracture at Non-Orthogonal Angles, A Criterion, Its Validation and Applications. SPE-139984-MS. Presented at the SPE Hydraulic Fracturing Technology Conference, The Woodlands, TX, 24-26 January.
4. Fu W. et al. 2015. Presented at 49th US Rock Mechanics Symposium, San Francisco, CA, USA. Paper 132.
ACKNOWLEDGEMENTS I would like to thank Alexei Savitski and Shell International Exploration and Production Inc., the Swanson School of Engineering, and the Office of the Provost at the University of Pittsburgh for partial funding for this project.
WELL PLUGGING WITH CLAY-BASED MINERALS: CHARACTERIZING THE INTRUSION OF BENTONITE INTO NEAR-WELLBORE CRACKS Rachel Asit Upadhyay and Andrew P. Bunger Department of Civil & Environmental Engineering University of Pittsburgh, PA, USA Email: rau5@pitt.edu
INTRODUCTION
A significant challenge to the future of nuclear energy is the responsible disposal of high-level nuclear waste. Following the indefinite suspension of Yucca Mountain as a repository for spent nuclear fuel, the US Department of Energy's Blue Ribbon Commission proposed the option of deep borehole disposal [1]. This involves drilling a well to a depth of 5000 meters, using canisters to store the waste in the bottom 2000 meters, and then plugging the well with various materials such as bentonite clay [2]. The proposed method of bentonite emplacement involves dropping compacted clay pellets down the borehole, allowing them to hydrate and swell, thus forming a plug [2]. Deep boreholes commonly exhibit near-wellbore cracks which form as a result of the drilling process [3]. Thus, the success of the bentonite plug hinges critically on its ability to effectively and permanently plug near-wellbore cracks. Prior studies of borehole plugging with bentonite were done at relatively shallow depths with few if any near-borehole cracks; even in studies which directly address crack intrusion, researchers utilized liquid-consistency drilling fluid rather than semisolid or plastic clay [4, 5]. The goal of this research is to understand and optimize the ability of hydrated bentonite clay pellets to plug near-borehole cracks.
METHODS
A device dubbed the "swell cell" was constructed such that the borehole is represented by a cylindrical chamber, and a near-borehole crack is represented by a slot adjacent to the center chamber, as shown in Figure 1. The sodium montmorillonite-based bentonite clay pellets pass a 3/8 in. sieve and are retained on a 1/4 in. sieve, with an average pellet size of 0.3125 in. The experiments consist of placing the pellets into the center chamber and filling the entire
cavity with distilled water so that the pellets hydrate and swell, intruding into the slot because the cell prohibits swelling in the vertical direction along the borehole. With the use of removable spacers, the slot's width can be adjusted in order to examine the relationship between slot width w and clay intrusion length l.

Figure 1: A top view sketch of the swell cell, with the face seal, center chamber, and slot labeled (l = 105 mm, w = 19 mm).
DATA PROCESSING The volumetric swelling of the pellets is quantified by percent change in volume. The initial volume is determined by measuring the mass of the pellets and then using their density to obtain a volume. The final volume of the pellets is determined by measuring their geometry after swelling. RESULTS Results indicate that the bentonite clay pellets do not fully plug the slot, as demonstrated by Figure 2.
Figure 2: A side view of the swell cell exhibiting minimal slot intrusion length at w = 1.53 mm.
It is proposed that the penetration is limited by (1) the free swelling potential intrinsic to the system comprised of the bentonite pellets and the hydrating fluid and (2) resisting shear force along the walls of the slot. These two limiting factors work against each other, leading to a non-monotonic relationship between slot width and intrusion length as demonstrated by Figure 3. For narrow slots, the resisting shear force is more pertinent than the swelling capacity, and clay intrusion length is represented by the linear equation l = 3.3w + 5.9. The theoretical intrusion length dependent on only the force equilibrium is demonstrated by the equation l = (P/2τ)w. Here P is the pressure exerted by the pellets on the swell cell; it is approximated by the applied force an equivalent height column could sustain before failing. The quantity τ is the shear stress along the pellet-wall interface; it is estimated by the force a 1 m column of expanded pellets could support before shear failure occurs. Bounding the order of these quantities leads to the approximate theoretical equation for narrow crack intrusion length l = 5w.
In the case of wide slots, the volumetric expansion capacity dominates the shear stress from the wall, as demonstrated by the empirical power law relationship l = 54.6w^-0.5. Theoretically, if volumetric expansion is the only consideration and shear resistance is neglected, intrusion length is expressed as l = V(wh)^-1. Using data from separately conducted free swelling experiments to estimate the maximum swelling volume as well as the geometry of the swell cell (with h as height), the intrusion length is predicted to approach l = 590w^-1 in the limit for large width.
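A minimal numeric sketch of these two theoretical bounds is shown below. It assumes, as a simplification not stated in this abstract, that the realized intrusion is limited by the smaller of the shear-limited and volume-limited estimates, and it uses the approximate coefficients quoted above (l = 5w for narrow slots and l = 590w^-1 for wide slots, with w and l in mm).

```python
def predicted_intrusion(w_mm):
    """Theoretical clay intrusion length (mm) for slot width w (mm).

    Combines the shear-limited narrow-slot bound l = 5w with the
    volume-limited wide-slot bound l = 590/w by taking the smaller of the two.
    """
    shear_limited = 5.0 * w_mm
    volume_limited = 590.0 / w_mm
    return min(shear_limited, volume_limited)

if __name__ == "__main__":
    for w in [1, 2, 5, 10, 15, 20]:
        print(f"w = {w:2d} mm -> predicted l ~ {predicted_intrusion(w):.1f} mm")
```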
DISCUSSION
The qualitative similarities between the theoretical predictions and the data are promising, while the differences demonstrate a need for further characterization of the bentonite properties. Furthermore, the projected inability of bentonite clay to fully plug a near-wellbore crack, coupled with the lack of comparable research, mandates further study to optimize the plugging efficiency of semi-solid or plastic clay in deep boreholes.
REFERENCES
[1] Blue Ribbon Commission. The Blue Ribbon Commission on America's Nuclear Future - Report to the Secretary of Energy, 2012. pp. 29-30.
[2] Arnold et al. Research, development, and demonstration roadmap for deep borehole disposal. Technical Report SAND2012-8527P, Sandia National Laboratories, 2012.
[3] Tingay et al. Borehole breakout and drilling-induced fracture analysis from image logs, 2008. World Stress Map Project Stress Analysis Guidelines.
[4] Garagash et al. Int J Solids Struct 49, 197-212, 2012.
[5] Sun et al. J Energy Challenges and Mechanics 1, 1-5, 2014.
ACKNOWLEDGEMENTS
This summer research fellowship award was funded by the University of Pittsburgh's Swanson School of Engineering and the Office of the Provost. Additional funding was provided by the University of Pittsburgh Department of Civil and Environmental Engineering.
Figure 3: Clay intrusion length l [mm] versus slot width w [mm]: experimental data and theoretical estimates. Experimental data are shown in blue, while the derived theoretical relationships (l = 5w, l = 590w^-1, l = 54.6w^-0.5, and l = 3.3w + 5.9) are shown in red.
EFFECT OF LOADING RATE ON BREAKAGE OF GRANITE Hannah C. Fernau, Andrew P. Bunger Hydraulic Fracturing Laboratory, Department of Civil and Environmental Engineering University of Pittsburgh, PA, USA Email: hcf8@pitt.edu
INTRODUCTION
It has long been known that loading rate affects the failure stress of a sample in tension. ASTM standards for testing have been established to account for this phenomenon and to ensure that results from individual tests can be compared accurately. Zhurkov (1984), in his paper on the kinetic concept of the mechanism of fracture, explored the idea that solids, when stressed to a point less than is required for them to fail instantaneously, will fail after a period of time. This property is known as static fatigue, and it can be observed in a wide range of materials including polycrystalline metals, polymer fibers, glass, concrete, and rocks. In a paper by Bunger and Lu (In Press), a parameter, χ, was introduced relating σ(t1) and σ(t0), the stresses required to induce failure after times t1 and t0 respectively (t0 < t1), as: [1] Most notably, the data of Kear and Bunger (2013) indicate that indirect tension (split disc), 3-point beam, and 4-point beam tests all give approximately the same value of χ for the crystalline rock they tested. Further experiments run using Coldspring Charcoal Granite by Lu et al. (2015) provide additional evidence for the configurational invariance of the χ parameter. With certain specimens, however, the delayed fracture initiation test is very long and unpredictable due to experimental variability. To reduce the amount of time per test, it was proposed that tests could be run with different loading rates to find the same parameter, χ, as could be obtained with the delayed fracture initiation tests, using a slight modification of Zhurkov's equation.
METHODS
In order to directly compare the results of the load ramp tests to the results of the constant load tests as detailed in the paper by Lu et al. (2015), the same material, frame, and test setup were used. Tests on another type of granite, Pegasus Beige, were also performed. Samples for experiments using Cold Spring Granite were prepared by cutting a 6" x 6" block into bars that were approximately 5" x 1" x 1" using a wet saw. Samples for experiments using Pegasus Beige Granite were prepared by obtaining a pre-cut slab and then cutting it with a wet saw to the appropriate width, resulting in samples that were approximately 4" x ¾" x ¾". Figure 1 below shows a sample after it failed in 3-point bending with supports 4" apart. Two types of tests were run to compare both the stress of the sample at failure and the χ value from the data: load ramp tests and constant load tests. For the load ramp tests, the pump was programmed to increase the pressure at a constant rate until the sample failed. For the constant load tests, the pump was set to hold a certain pressure and maintain this pressure until the sample failed. All data for these tests were measured using a data acquisition program and exported to Excel.
Figure 1: Test frame setup
RESULTS
Tests for the results shown in Figures 2 and 3 were performed using Pegasus Beige Granite. The results in Figure 2 were obtained from the load ramp tests, where b is the loading rate in psi/s and τ is the time to failure in seconds.

Figure 2: Load ramp tests using Pegasus Beige Granite

A theory has been proposed by Fernau and Bunger (In Preparation) to modify the theories proposed by Zhurkov (1984) in order to describe quasi-static tests for static fatigue in addition to the static tests completed by Zhurkov. From the proposed theory, the following equation can be obtained: [2] where b is the loading rate in psi/s, τ is the time to failure, k is Boltzmann's constant, T is temperature, γ is a descriptor of the molecular structure of a sample as described in Zhurkov (1984), βc is the ratio of bonds available to the bonds in the initial structure, U0 is the magnitude of the initial energy barrier, and σ0 is the stress at which the material breaks at time t0. From the relationship between bτ and ln[b], it is possible to predict the values of τ corresponding to discrete values of σ from: [3] The predicted values for the Pegasus Beige Granite using this method are shown in Figure 3 as the red plotted points and give the following values: χ(1,1000)=0.32 for the load ramp tests, and χ(1,1000)=0.35 for the constant load tests.

Figure 3: Time to failure of Pegasus Beige Granite based on constant load tests (blue points) compared to predictions based on loading ramp results (red points)

DISCUSSION
Similar tests were performed using Coldspring Charcoal Granite and compared to the results from Lu et al. (2015). From the Coldspring Charcoal Granite load ramp tests, χ(1,10000)=0.33, and from the constant load tests using 3-point and 4-point bending, χ(1,10000)=0.34-0.36. For both materials, the χ value for the load ramp tests was within 10% of the χ value for the constant load tests.

REFERENCES
1. Bunger AP, Lu G. In Press. SPE Journal. Accepted 20 March 2015.
2. Kear J, Bunger AP. 2014. Proceedings 11th International Fatigue Congress, Melbourne, Australia, 2-7 March 2014. Advanced Materials Research, 892:863-871.
3. Zhurkov, S. 1984. Kinetic concept of the strength of solids. Int. J. Fracture 26(4): 295-307.
4. Lu, G., Uwaifo, E. C., Ames, B. C., Ufondu, A., Bunger, A. P., Prioul, R., Aidagulov, G. 49th US Rock Mechanics/Geomechanics Symposium; 28 June-1 July 2015; San Francisco, California: American Rock Mechanics Association; 2015.

ACKNOWLEDGEMENTS
I would like to thank Schlumberger, the Swanson School of Engineering, and the Office of the Provost at the University of Pittsburgh for partial funding for this project.
CONSIDERING ARTIFICIAL GLACIERS: CLIMATE-ADAPTIVE DESIGN FOR WATER SCARCITY Naomi E. Anderson, Taylor R. Shippling, Carey Clouse Department of Civil & Environmental Engineering University of Pittsburgh, PA, USA Email: trs67@pitt.edu, nea22@pitt.edu, careyclouse@gmail.com INTRODUCTION As water scarcity threatens the survival of subsistence agriculture practices in many communities around the world, and as age-old irrigation methods become less reliable in the face of climate change, the design and engineering disciplines are well positioned to be of service. One site for study is Ladakh, a dry desert environment located north of the Himalayan mountain range, where centuries-old subsistence agricultural practices are now threatened by climate-induced drought conditions. Among the many solutions is the artificial glacier: a large engineered earthwork built on the land above villages, designed to direct, control, retain, and release water for agricultural use. Artificial glaciers might more accurately be called ice dams, as the man-made structures employ masonry walls to trap water, which freezes and then thaws over the arc of the year. Unlike a natural glacier, artificial glacier pools change states seasonally. However, we intentionally join the engineers, NGOs and villagers of Ladakh in using this term to describe the water harvesting system (Norphel, 2012). METHODS In this study, our team researched six different artificial glaciers, serving the Ladakhi villages of Stakmo, Phuksey, Umlat, Saboo, Igoo, and Nang. All six of the artificial glaciers studied were located above the villages they served, typically at or above 14,000 feet. Because they are only accessible by foot, we brought minimal tools and data-gathering technology, such as a handheld GPS system and 100-ft measuring tape. When ascending water drainages in search of these glaciers, we looked for diversion channels, stone walls that could serve as retention pools, ideal mountainside characteristics including northern
orientation and altitude, and signage. Once on site, we assigned GPS coordinates and an altitude to the location of each artificial glacier, documented functional components, diagrammed the overall working structure, and mapped the entire system.
ARTIFICIAL GLACIER ENGINEERING
Artificial glaciers work as a means of collecting and storing snowfield and natural glacier meltwater for use later in the year. They are large engineered systems, exploiting gravity and freezing winter temperatures to amass a seasonal stock of ice. Because artificial glaciers capture and freeze meltwater above a village site rather than allowing it to flow into the Indus River far below, they increase the annual amount of irrigation water available to a village for agricultural use. Each artificial glacier system serves either a single village or several villages that share a watershed. There are a number of conditions required for an artificial glacier to be successful at a specific site. First, the artificial glacier should be located in a north-facing, or at least shaded, valley, placed at an altitude of approximately 14,000 feet, between the higher-altitude natural glaciers and the villages they serve (Higgins, 2012), and near enough to the village so when the water melts it can immediately be used to jumpstart the spring planting season (Norphel, 2012). Further considerations include the volume of water available in the natural glacier's stream during peak flow, the timing of sunrise and sunset, and the availability of a large, unobstructed area with a twenty to thirty degree slope (Norphel, 2012; Ahmed et al., 2010). The components that make up an artificial glacier system include diversion channels, regulator gates, silting tanks and distribution chambers (such as metal pipes), and retaining pools enclosed by stone masonry walls. Typically made of concrete or masonry, diversion channels carry a portion of the
parent glacier’s meltwater away from its natural stream toward the artificial glacier. It is then routed into the retaining pools, where ice is stored until early spring. If an artificial glacier has multiple pools, the diversion channel is designed to ensure even distribution of water among pools. Regulator gates control the amount of water that enters the diversion channels at different times of the year. The regulator gates remain closed in the summer months; villages access irrigation water via the natural stream at this time. The gates are opened in the late autumn months when meltwater flow is low and the temperature dips below zero, causing freezing and ice accumulation in the pools. Retaining pools are used to contain the water, although the number and dimensions of these basins vary greatly. Each pool is placed at a higher altitude than the preceding one, in an effort to synchronize the timing of melting ice with the irrigation needs in the villages below. The upstream and downstream sides of each pool are marked by stone masonry retaining walls, while the width is marked either by earthen berms or the mountainside itself. In recent projects, metal crate wire is wrapped around the stone masonry walls for further reinforcement. (Figure 1)
However, during the course of our fieldwork and in speaking with the stakeholders, NGOs, and engineers connected to these projects, we uncovered a number of design guidelines that could be of use to future artificial glacier projects. In addition to the basic instructions outlined by Ladakhi engineers, we provide a number of recommendations. We recommend strengthening the masonry walls with reinforcing mesh to create a strong, lasting gabion wall. We suggest dispersing multiple retention walls across a landscape so that if one fails, the whole system will continue to work. Ideally, artificial glaciers should be designed for use in conjunction with zings, or water retention pools at lower elevations. The Ladakhi engineers we spoke with share the same aspirations; however, they remind us that in this context, the limiting factor will always be funding. Working with minimal capital, the designers have often sacrificed goals for strength and durability in order to complete a project at a basic level. As villagers look for ways to improve water management in these landscapes, construction materials such as enclosed pipes, concrete dams, and on/off valves could also help. Finally, a critical metric for success will inevitably be the engaged design process; we recommend better communication with, and buy-in from, villagers, as well as a more intentional technical planning effort.
REFERENCES
Figure 1: Typical artificial glacier structures including a diversion channel, regulator gate, and crate-wire-reinforced stone masonry walls.
RECOMMENDATIONS AND CONCLUSION Given the limited timeframe, technical expertise, and scope of this study, it would be irresponsible to make generalizations about the efficacy and value of an artificial glacier system.
1. Norphel, C. (2012). ‘Artificial Glacier: A High Altitude Cold Desert Water Conservation Technique,’ In Defense of Liberty Conference Proceedings. Presented at the In Defense of Liberty Conference, New Delhi, India. 2. Higgins, A. K. (2012). Artificial glaciers and iceharvesting in Ladakh, India as an adaptation to a changing climate. (Unpublished master's thesis). Yale School of Forestry, New Haven. 3. Ahmed, N., Higgins, A. & Norphel, C. (2010). Snow Water Harvesting in the Cold Desert Ladakh: An Introduction to the Artificial Glacier Project. Leh: Leh Nutrition Project. ACKNOWLEDGEMENTS This fieldwork was supported in part by funding from a Fulbright-Nehru Senior Research Fellowship, the University of Pittsburgh’s Swanson School of Engineering and Office of the Provost, and the Mascaro Center for Sustainable Innovation. We would like to thank the Leh Nutrition Project and Chewang Norphel for their support in Ladakh.
NON-DESTRUCTIVE EVALUATION OF TENNIS BALLS USING HIGHLY NON-LINEAR SOLITARY WAVES Christopher Borland and Piervincenzo Rizzo, Ph.D. Laboratory for NDE and SHM Studies, Department of Civil & Environmental Engineering University of Pittsburgh, PA, USA Email: cjb97@pitt.edu INTRODUCTION In the research presented in this paper we examine a nondestructive testing (NDT) technique based on the propagation of highly nonlinear solitary waves (HNSWs) to determine the stiffness of tennis balls. The objective is to create a handheld device that will allow players and manufacturers to determine the serviceability of tennis balls. HNSWs are compactly supported lumps of energy that can propagate in one-dimensional (1D) granular chains composed of contacting elastic particles [1]. The most common way to generate a HNSW is by impacting the first particle of the chain with a striker. When an object is placed in contact with a granular chain and it is probed with a HNSW, three waves are typically produced: the incident solitary wave (ISW) generated by the impact of the striker and then two waves generated by the reflection of the ISW at the object-chain interface. These two waves are the primary and the secondary solitary waves, respectively. Hereinafter they will be referred to as PSW and SSW. The amount of time in between the ISW and PSW or ISW and SSW is called the time of flight (TOF). Yang et al. examined the interaction of HNSWs with linear elastic material and found that certain features such as the TOF and the amplitudes of the PSW and SSW are dependent on the properties of the media adjacent to the chain [2]. They found that as the elasticity of the bounding media decreases, the TOF of the reflected wave increases. Furthermore, as a material becomes stiffer, the prominence of the SSW, which is formed because of the last bead in the chain indenting the bounding media, decreases, and the energy that would have been present in the SSW is directed towards the PSW. METHODS In this study, five brands of tennis balls were tested: Penn Regular-Duty, Penn Extra-Duty, Gamma All
Court, Prince Play+Stay Stage 2, and Gamma Quick Kids. The tennis balls each had varying levels of stiffness; the Penn Regular-Duty (PRD), Penn Extra-Duty (PED), and Gamma All Court (GAC) were regulation USTA & ITF approved, whereas the Play+Stay Stage 2 (O) and Gamma Quick Kids (G) were designed to bounce 50% and 75% of the height of a regulation ball, respectively. Three sets of experiments were conducted: the NDT method proposed in this study; the conventional bouncing test; and the quasi-static compressive test. For the NDT approach, we generated HNSWs by assembling a granular chain made of 16 identical steel particles, with diameter D = 19.05 mm, density ρ = 7920 kg/m3, modulus of elasticity E = 193 GPa and Poisson's ratio ν = 0.25. The granules were confined inside a Delrin Acetal Resin tube with outer diameter Do = 22.3 mm. An electromagnet was located above the chain in order to drive a striker. A magnetostrictive sensor was fixed in the middle of the chain in order to sense the propagation of the HNSWs. At the bottom of the chain an aluminum sheet 0.254 mm thick was glued to the tube to prevent the free fall of the particles. The apparatus was controlled by a LabVIEW program, along with an NI PXI (1042Q) and DC power supply (BK PRECISION 1672).
Figure 1: The HNSW generator probes a tennis ball. The electromagnet (top) lifts the last bead in the chain and drops it onto the rest of the chain, held by the white tube. The magnetostrictive sensor (middle) senses the HNSWs.
In order to compare the results of our NDT method with the conventional bouncing test, an oscilloscope (Waverunner 44xi) was used in conjunction with a shotgun microphone (AT8015) to find the amount of time it took for a tennis ball to bounce twice after being dropped from a height of 7 ft. Moreover, each of the tennis balls was statically compressed 0.5 in. in increments of 0.05 in. using a compression machine connected to a strain box. The 10 readings on the strain box were used in conjunction with the known deformation to determine the ball's stiffness, k.
RESULTS
Figure 2 shows the bouncing time as a function of the five kinds of balls. The bouncing test shows a clear difference between the serviceable and nonserviceable tennis balls.
Figure 2: A plot of each ball's bouncing time (O, G, GAC, PED, PRD). Balls 7-15 are considered serviceable whereas balls 1-6 are designed to ease newer players into the sport. A 5-6% difference can be observed between the serviceable and nonserviceable tennis balls.
Figure 3 shows the ratio of the amplitudes PSW/SSW as a function of the five balls’ brands. Because of the variation in the ISW amplitude, a feature which is meant to be constant, it is ideal to use the PSW/SSW ratio since both the PSW and SSW are dependent on the ISW.
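A minimal sketch of how the TOF and the PSW/SSW amplitude ratio might be extracted from a digitized sensor trace is shown below. It assumes the ISW, PSW, and SSW appear as the three most prominent peaks in time order, which is a simplification for illustration rather than the actual signal processing used in this work; the sampling rate and synthetic trace are placeholder assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def hnsw_features(signal, fs):
    """Estimate TOF (s) and PSW/SSW amplitude ratio from a sensor trace.

    Assumes the incident, primary, and secondary solitary waves are the
    three most prominent peaks and arrive in that order.
    """
    peaks, props = find_peaks(signal, prominence=0.1 * np.max(signal))
    top3 = peaks[np.argsort(props["prominences"])[-3:]]  # three strongest peaks
    isw, psw, ssw = np.sort(top3)                         # restore time order
    tof = (psw - isw) / fs
    ratio = signal[psw] / signal[ssw]
    return tof, ratio

if __name__ == "__main__":
    fs = 1e6  # sampling rate in Hz (assumed)
    t = np.arange(0, 2e-3, 1 / fs)
    # Synthetic trace with three Gaussian pulses standing in for ISW, PSW, SSW.
    trace = (1.0 * np.exp(-((t - 0.2e-3) / 2e-5) ** 2)
             + 0.6 * np.exp(-((t - 0.9e-3) / 2e-5) ** 2)
             + 0.3 * np.exp(-((t - 1.3e-3) / 2e-5) ** 2))
    tof, ratio = hnsw_features(trace, fs)
    print(f"TOF ~ {tof * 1e6:.1f} us, PSW/SSW ~ {ratio:.2f}")
```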
Figure 3: A plot of the ratio of the PSW to SSW amplitude for each ball (O, G, GAC, PED, PRD), in the same order as in Figure 2. A 12-19% difference can be observed between the serviceable 'GAC' and 'PED' balls and the nonserviceable 'O' and 'G' balls.

With the exception of 'PRD', our tests concluded that HNSWs can be used to observe a difference between serviceable and non-serviceable tennis balls. It is surprising that 'PRD' did not continue the trend, since the only difference between 'PRD' and 'PED' is the material on the outside of the rubber. Regular-duty tennis balls (PRD) use more wool than nylon fibers and utilize a tighter weave, making them more suitable for soft surfaces such as clay or grass. Extra-duty tennis balls (PED) have a higher nylon content and utilize a looser weave, making them ideal for hard surfaces. These differences should not account for a significant change in stiffness.

REFERENCES
1. Spadoni et al. Proceedings of the National Academy of Sciences 107, 7230-7234, 2010.
2. Yang et al. Physical Review E 83.4, 2011.
3. Li et al. J. Appl. Phys., 117, 2015.
4. Cross. Am. J. Phys. 70, 2002.

ACKNOWLEDGEMENTS
This research was funded by a summer research stipend from the Swanson School of Engineering. The author would like to thank Amir Nasrollahi and Wen Deng for their experience and mentoring throughout the duration of the project.
GRAPHENE-BASED ELECTRODES IN ELECTRO-FENTON PROCESS FOR TREATMENT OF SYNTHETIC INDUSTRIAL WASTEWATERS Hammaker, J., Mousset, E., Lefebvre, O. Centre for Water Research, Department of Civil and Environmental Engineering, National University of Singapore, 1 Engineering Dr. 2, Singapore 117576 Email: jjh75@pitt.edu, ceeem@nus.edu.sg, ceelop@nus.edu.sg
INTRODUCTION
Of the many electrochemical advanced oxidation process (EAOP) methods, the electro-Fenton (EF) method was of interest due to its straightforward application to existing systems, low operation costs, in situ production of H2O2, and minimal sludge production because of the constant regeneration of Fe2+ at the cathode. In the electro-Fenton reaction, the H2O2 is produced due to the electrical potential created by running a current in an undivided cell. The addition of Fe2+ acts as a catalyst and creates hydroxyl radicals (•OH) as can be seen in equation (1). These •OH non-selectively destroy the persistent organic pollutants (POPs) that conventional wastewater treatment plants cannot remove. Fe2+ + H2O2 → Fe3+ + •OH + OH−
(1)
In order to create H2O2 at sufficient levels and take full advantage of equation (1), the cathode material must be considered, since H2O2 is produced at its surface through oxygen reduction (equation 2): O2 + 2H+ + 2e- → H2O2
(2)
Mercury and carbon based cathodes have proved to be the most efficient at producing H2O2. However, due to its high toxicity, mercury is not feasible for use in water treatment. In contrast, carbon is very cheap, nontoxic, and easily obtainable (Brillas et al., 2009). Among carbon-based materials, graphene (a one-atom-thick layer of graphite) is particularly promising because it increases the surface area of the coated material (Niyogi et al., 2006) and has unique electronic properties (Molitor et al., 2011). This work explores the use of a graphene monolayer as a cathode for H2O2 production. The next step was examining the effectiveness of graphene-coated carbon brushes, using Nafion® as a binder, based on techniques used in Parvez et al. (2014).
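To put equation (2) in perspective, the short sketch below uses Faraday's law to estimate the upper bound of H2O2 that could be generated at a given applied current. The 100% current efficiency and the example current and duration are assumptions for illustration only, not measured values from this work.

```python
FARADAY = 96485.0   # C per mol of electrons
M_H2O2 = 34.01      # g/mol

def max_h2o2_grams(current_a, seconds, efficiency=1.0):
    """Upper-bound H2O2 mass from O2 + 2H+ + 2e- -> H2O2 (two electrons per molecule)."""
    moles_electrons = efficiency * current_a * seconds / FARADAY
    return (moles_electrons / 2.0) * M_H2O2

if __name__ == "__main__":
    # e.g. 0.1 A applied for one hour at an assumed 100% current efficiency
    print(f"{max_h2o2_grams(0.1, 3600):.3f} g H2O2 maximum")
```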
METHODS Fabrication of Cathode Materials Monolayer graphene coated on SiO2 was used as is, while carbon brushes made out of carbon fibers wrapped together using stainless steel were coated with a solution of nafion®, ethanol, ultrapure water, and graphene. The amount of nafion® (0% v/v, 0.025% v/v, 0.05% v/v, 0.10% v/v, 0.15% v/v, 0.20% v/v) and graphene (0 mg/mL, 0.1 mg/mL, 0.05 mg/mL, 1.0 mg/mL, 1.5 mg/mL, 2.0 mg/mL) was varied, while the amount of ethanol (5 mL) and the total volume of the dispersed graphene solution (10 mL) were held constant. The volume of ultrapure water varied in relationship to the nafion® to ensure the total volume remained consistent across coatings. All components were added together and then sonicated for 5 minutes to ensure the graphene was fully dispersed. After agitating slightly to allow the graphene to be suspended in the solution, the carbon brush electrode was immersed in the graphene solution. This was then allowed to dry overnight in an oven at 55°C, and was placed into a furnace the following day at 360°C for one hour with the intention of allowing the graphene to fully bond to the electrode. Assessing Performance of Monolayer Graphene In these tests, a potentiostat was used to apply a constant current intensity. Platinum was used as an anode (counter electrode) and monolayer graphene (working electrode) was used as a cathode. Ag/AgCl electrode was employed as a reference. Small currents were tested as these were found to give optimal cathode potential that promote H2O2 electrogeneration determined in preliminary tests. A 400 mL working volume of K2SO4 at 0.05 M and a pH 3 were used. H2O2 was monitored by a spectrophotometric method using TiCl4. Each sample mixture was placed into a UV Spectrophotometer and analyzed at 410 nm to determine the absorbance. Assessing Performance of Graphene Coated Electrode
In order to determine the effectiveness of the various coatings of the carbon fiber brush, the uncoated brush was initially tested to determine H2O2 production. This allowed a base line to be established to properly assess the coated brush. The desired graphene coated electrode was placed in the solution and hooked up to a power supply. The carbon electrode was used as the cathode, while platinum was used as the anode and both were connected to a power supply. A current of 0.1 A was used since it was the optimal current determined in a previous work. A percent increase or decrease in H2O2 production with respect to the uncoated brush could then be calculated. All other parameters were the same as in the monolayer graphene H2O2 production experiment. RESULTS The graphene coatings returned highly variable results, although all variables were carefully controlled when preparing the electrodes. Fiber brushes were chosen to be the coated carbon material because they yielded higher initial results than other carbon materials such as carbon felt. However, fiber is a non-porous material which complicates coating. Due to the instability of a graphene coated carbon brush, it is possible that the coating was somehow removed in either the drying process or during the experiment itself when the brush came in contact with the electrolyte and was exposed to a constant mixing environment. Though the replicate experiments gave results that were not very consistent, it can still be observed that only the brush coated with the highest concentration of graphene (2 mg/mL) was efficient with an increase of 5% of H2O2 production compared to the uncoated brush. DISCUSSION This work looked at the H2O2 electrogeneration at the surface of a monolayer graphene and at a graphene coated carbon fiber brush electrode in order to improve the electrode material employed in electro-Fenton process applied for wastewater treatment. Due to very low surface area, the monolayer graphene produced H2O2 at levels below the detection limit of the devices. However when graphene was coated on a carbon fiber brush material, an enhancement of H2O2 production could be still observed in optimal conditions that were determined as follows: graphene concentration of 2
mg/mL with 0.05% Nafion® used as a binder. This enhancement is assumed to be due to the increase of specific surface area and of the electric conductivity. If the research on this method was to be continued, a transmission electron microscope should be utilized in order to better assess the coating by comparing the uncoated surface of the brush, the brush immediately after coating, the brush immediately after drying, and the brush after the experiment is run. This would determine if the graphene is remaining on the brush through all stages of the electrode preparation. If it was determined that this coating method was not sufficient, another coating method such as the one detailed in Thi Xuan Huong Le et al. (2015) (electrophoretic deposition and electrochemical reduction of Graphene Oxide) could be explored further. REFERENCES 1. Brillas, E., Sirés, I., and Oturan, M. A. (2009), “Electro-Fenton Process and Related Electrochemical Technologies Based on Fenton’s Reaction Chemistry”, Chem. Rev., 109, 6570-6631 2. Thi Xuan Huong Le, Bechelany, M., Champavert, J., Cretin, M. (2015) “A highly active based graphene cathode for the electroFenton reaction”, RSC Adv., 5, 42536 3. Molitor, F., Güttinger, J., Stampfer, C., Dröscher, S., Jacobsen, A., Ihn, T., Ensslin, K. (2011), “Electronic properties of graphene nanostructures”, J. Phys.: Condens. Matter, 23, 243201 (15pp) 4. Nieto, A., Boesl, B., Agarwal, A., (2015), “Multi-scale intrinsic deformation mechanisms of 3D graphene foam”, Carbon, 85, 299-308, 5. Niyogi, S., Bekyarova, E., Itkis, M. E., McWilliams, J. L., Hamon, M. A., Haddon, R. C., (2006), “Solution Properties of Graphite and Graphene”, J. Am. Chem. Soc, 128, 7720 - 7721 6. Parvez, K., Zhong-Shuai Wu, Rongjin Li, Xianjie Liu, Graf, R., Xinliang Feng, Müllen, K., (2014), “Exfoliation of Graphite into Graphene in Aqueous Solutions of Inorganic Salts”, J. Am. Chem. Soc, 136, 6083−6091 ACKNOWLEDGEMENTS This project was made possible through funding provided by the Swanson School of Engineering and the office of the Provost.
RESPONSES OF HYBRID MASONRY STRUCTURES IN 1994 NORTHRIDGE EARTHQUAKE SIMULATED USING FINITE ELEMENT ANALYSIS PROGRAM Ni'a Calvert Computational Mechanics Laboratory, Department of Civil and Environmental Engineering Rice University, TX, USA Email: nmc48@pitt.edu, Web: http://ceve.rice.edu/
INTRODUCTION
Hybrid masonry is a relatively new structural system that consists of reinforced concrete masonry walls, steel frames, and steel connectors [3]. This system was created in order to provide sturdy, yet flexible, buildings that could be most beneficial in seismic areas. Hybrid masonry was first proposed in 2006 in order to offer a design alternative to the construction of framed buildings with masonry infill [3]. The ductility of the steel connectors along with the shear strength of the reinforced masonry panels makes way for a highly effective lateral-force-resisting system [2]. Hybrid masonry walls are classified into 3 different types depending on how the masonry is restrained to the frame. Type I hybrid masonry has gaps between the masonry panel, beams, and frame. The masonry panel makes only indirect contact with the frame through the steel connectors within these gaps. Therefore, gaps exist on both sides (allowing lateral drift) as well as on top (allowing vertical deflection) of the masonry panel. Type II walls only have gaps between the masonry panel and the columns, while the top of the panel is confined to the beam. Type III masonry has no gaps and is sealed to the columns and the beam [3]. In this study, we focus on how various hybrid masonry structures of Type I react to the 1994 Northridge Earthquake, which occurred in the highly seismic region of Los Angeles, California.
METHODS
The original data used to model the behavior of these structures was obtained through large-scale experiments on small two-story hybrid masonry structures conducted in the NEES MUST-SIM facility at the University of Illinois at Urbana-Champaign [2]. These structures were subjected to cyclic and monotonic loading in order to mimic the effects of an earthquake. This numerical data was sent to the computational mechanics lab at Rice University and was used to configure hybrid masonry models in the Finite Element Analysis Program (FEAP). Simulations were run for various structures of Type I in order to compare and further explore which designs would optimize energy dissipation.
DATA PROCESSING
Some simulations would fail to converge before all of the earthquake loading steps could be applied. These required an edit to the FEAP code that would allow the simulation to run and stop at the loading step before the one at which convergence was lost. The simulation would then be re-run. After each successful simulation, a final damage picture would be produced. The intensity of damage done by the earthquake was represented using a color scale. Blue indicated no damage while red indicated fully damaged.
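As a small illustration of this kind of color mapping (not the actual FEAP post-processing), the sketch below linearly interpolates a normalized damage index between blue (undamaged) and red (fully damaged).

```python
def damage_to_rgb(damage):
    """Map a damage index in [0, 1] to an RGB triple from blue to red."""
    d = min(max(damage, 0.0), 1.0)   # clamp to [0, 1]
    return (int(255 * d), 0, int(255 * (1.0 - d)))

if __name__ == "__main__":
    for d in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(d, damage_to_rgb(d))
```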
RESULTS
After each successful run, a Python script was used to output data files that would provide different characteristics regarding each structure's reaction to the earthquake loadings. These data files were used to extract data at specific nodes to be used for further study.
DISCUSSION The connector width seemed to have an effect on the severity of the damage done to the masonry panel. Since a weaker connector (one of a smaller width) is more ductile than a stronger connector (one of a larger width), structures with smaller connectors indicated less damage done to the masonry panel, but more deflection to the connectors. On the other hand, structures with larger connector widths would result in more damage done to the masonry panel. Ideally, it is easier and more cost-effective to replace damaged connectors than it is to replace masonry panels after a seismic event. REFERENCES
1. Asselin et al. Design of Hybrid Masonry Systems, 1150-1157, 2013. 2. Gao, Zhenjia. Computational Framework for the Analysis of Hybrid Masonry Systems Using an Improved Non-local Technique, 13, 2014. 3. National Concrete Masonry Association. Hybrid Masonry Construction, 1-8, 2010.
ACKNOWLEDGMENTS
I give special thanks to the University of Pittsburgh and Rice University for offering me such a wonderful summer research opportunity. Funding was provided by the Swanson School of Engineering and the Office of the Provost.
Figure 1: Damage photo for Type I, 6.5 in connector subject to earthquake loading. Figure 2: Type I, two-floor hybrid masonry structure
A SOFTWARE INTERFACE TO COMPLEMENT ORIGINAL HARDWARE CAPABLE OF 10-CHANNEL SIMULTANEOUS RECORDING AND ANALYSIS Alec Rosenbaum Singapore Institute for Neurotechnology (SiNAPSE), National University of Singapore, Singapore Email: alr152@pitt.edu
INTRODUCTION
Understanding nerve signals and neuro-muscular interactions is one of the largest obstacles in the modern quest for functional neuroprosthetics. The term "neuroprosthetic" describes functional robotics that take a form similar to that of natural human biology and are controlled using nerve signals sent from the brain. This control method allows for completely intuitive, sophisticated movements with a minimal learning curve. In addition, due to nerve complexity, connecting directly to a nerve allows a significantly higher degree of freedom than conventional prosthetics, which rely primarily on flexing a large muscle, of which there are few and whose controls are highly situational. Thus far, one of the major obstacles in the design and implementation of a functional neuroprosthetic capable of sophisticated motions, high degrees of freedom, and high precision is an equally capable control mechanism. The control mechanism focused on in this paper uses signals taken directly from nerves, in this case the ulnar and median nerves located in the upper extremities of all humans. This paper will focus on the work of Alec Rosenbaum throughout the summer of 2015 in the SiNAPSE lab at the National University of Singapore. His work is towards the ability to capture and analyze neural signals in real time during in-vivo studies, in particular the component of the system responsible for the recording and display of the raw signal data captured at the nerve.
METHODS
The configuration of the system is as follows. The hardware interface connects and communicates with the computer using a wired-to-wireless interface recognized as a serial device by the computer. All
data and communication with the hardware interface is performed using this virtual serial port. Utilizing the ability to send and receive data in a particular sequence, it is possible not only to toggle modes on the hardware device, but also to digitally receive and analyze signal data from the hardware-interfaced serial port. The hardware developed is capable of recording ten unique channels of neural data concurrently. However, the serial port is only capable of carrying a single channel of sequenced bits. For this reason, a number of measures were taken in order to ensure accuracy and precision. All transmissions to the hardware device are done using a specific set of commands that pack data such that each command is of variable length and each data "packet" is begun and terminated by specific signals. This allows great flexibility in setting modes, and the signals that signify the beginning and end of data packets eliminate many forms of error resulting from mistimed data. This method allows for changing modes, although it does not account for reading data from each of the neural channels. These protocols are specified in documents with proprietary material [1], and so they will not be specified in more detail, nor will the specifying document be added to the appendix. When entering a mode for reading neural signals, however, a slightly different form must be taken to ensure signals can be read at the speeds necessary for displaying real-time data. For this reason, a start-of-signal packet is sent, followed by sequenced numbers that represent values measured from each channel of neural signal.
DATA PROCESSING
All data processing is done in real-time by the software developed for this purpose. When reading digital signals, the software sees only an eight or ten-bit sequence. This number must be converted
from a number in the range of 0 to 255 or 0 to 1023 (8- or 10-bit, respectively) to a meaningful measurement. In addition to this data conversion during recording, the signal must be displayed on a side-scrolling graph, with an individual graph for each of the ten channels. Figure 1 displays an example of what the visual interface for this system looks like.
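To make the conversion step concrete, the sketch below shows one way raw counts could be mapped to a physical value before plotting. It is only an illustration: the actual full-scale range, offset, and units of the SiNAPSE hardware are not given in this report, so the full-scale value here is a placeholder.

```python
# Minimal sketch (not the lab's actual code): map a raw 8- or 10-bit sample
# from the serial stream to an approximate amplitude for plotting.
FULL_SCALE_UV = 1000.0   # assumed full-scale amplitude in microvolts (placeholder)

def counts_to_microvolts(count, n_bits=10, full_scale_uv=FULL_SCALE_UV):
    """Convert a raw sample in 0..2**n_bits - 1 to microvolts, assuming the
    signal is centered at mid-scale."""
    max_count = (1 << n_bits) - 1        # 255 for 8-bit, 1023 for 10-bit
    mid = max_count / 2.0
    return (count - mid) / mid * full_scale_uv

# Example: a 10-bit sample of 768 maps to roughly half of positive full scale.
print(counts_to_microvolts(768, n_bits=10))   # ~501.5 microvolts
```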
Figure 1: An example of the visual interface used in the recording and processing of ten-channel neural data.
RESULTS
The hardware aspect of this project is not complete, and therefore no actual testing beyond the simulated serial port could take place. In addition, the hardware design and protocol rules are still fluid in nature and have not been solidified. Because of this, no tangible result or dataset can be presented at this time. However, through continual development on this topic, a library of commands and a project setup have been established in such a way that any and all future development on this topic will be streamlined. All software written is highly modular and able to easily adapt to any proposed changes in either the interface or the specified communication protocol.
DISCUSSION
This field of research is ever evolving, and moving at a rapid pace towards highly advanced neuroprosthetics of the likes only seen in science fiction. The acquisition of one or many neural signals is the first major stepping stone in the ability to develop this technology. Much of the mechanical side of a neural prosthetic is already in place, but range of motion and precision are both limited by the controlling technology. At this point in time, development of such a controlling technology is around the corner. The work outlined in this paper is an important contribution towards the end goal of a neuroprosthetic because it allows those developing methods for interfacing with nerves to accurately and easily view the results of experiments, which can then be analyzed immediately.
Future work on this project can proceed after the completion of the associated hardware. Once the hardware is complete and the protocols are solidified, development of this software can also be completed.
REFERENCES
1. GF 0.18um Neutrino 1 Testchip Specifications
ACKNOWLEDGEMENTS
This project was completed under the direction of, and in close collaboration with, the SiNAPSE lab at the National University of Singapore. This award was funded by the Swanson School of Engineering and the Office of the Provost.
SPIKE TRAIN DISTANCE ANALYSIS OF PREFRONTAL CORTEX Stephen C. Snow1, Roger Herikstad2, Aishwarya Parthasarathy2, Camilo Libedinsky2, and Shih-Cheng Yen2,3 1 Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA 2 Singapore Institute for Neurotechnology National University of Singapore, Singapore. 3 Department of Electrical and Computer Engineering National University of Singapore, Singapore Email: scs89@pitt.edu Web: http://www.sinapseinstitute.org/ INTRODUCTION Previous studies have linked the prefrontal cortex to high-order processing, as its functionality is vital for adequate performance in tasks where working memory is required or the rules are dynamic [1]. Specifically, the dorsolateral prefrontal cortex (DLPFC) encodes spatial information in working memory through sustained delay activity [2]. Therefore, one should be able to extract spatial details during a task from the activity of individual neurons in the DLPFC. While it is generally accepted that neurons communicate through their action potentials, or spikes, much effort has been placed on understanding how the brain reads the spike trains to determine what information each neuron is conveying [3]. Certain neurons appear to exhibit rate coding, where one can decode a stimulus based only on the firing rate (i.e. number of spikes) of a neuron during a time window [4]. In light of findings that demonstrate that the timing of as few as a single spike can encode a significant portion of the information about a stimulus [5], naturally many have developed and applied techniques that quantify this temporally encoded information [6, 7]. One such method quantifies the metric distance between spike trains and has been applied to neural data from several brain regions with promising success [8]. Our work applies this method of spike-train distance to DLPFC data, which, to the best of our knowledge, has not been done before. METHODS In the experimental setup, a macaque monkey sits in front of a screen, with its head fixed. The screen displays a 3 x 3 grid, with the center square containing a fixation cross. Once the monkey has established fixation on the center, as determined by an eye tracker, one of the eight perimeter squares illuminates red for 300 ms (target). After a 1000 ms delay period, a different perimeter square illuminates green for 300 ms (distractor). Then, after another 1000 ms delay period the fixation cross disappears and the monkey must attempt to make a saccade to the location of the target square.
In order to quantify the information encoded in spike times, we employ the metric spike train distance algorithm developed by Victor and Purpura [6]. This method, based on the edit-distance algorithm [9], determines the distance between spike trains by finding the cost of transforming one train into another via insertions, deletions, and shifts. Insertions and deletions are assigned a fixed cost of one each, whereas shift costs are determined by q|∆t|, where q is a sensitivity parameter with units of seconds⁻¹ and ∆t is the time change associated with the shift. As a consequence of these costs, two spikes are considered similar and are transformed by a shift when |∆t| < 2/q, because in that case q|∆t| is less than the cost of two associated with deleting the original spike and inserting it at the new location. Aronov provides an extension to the spike train distance metric by allowing spike trains from pairs of neurons to be analyzed [10]. A new parameter k is introduced, which represents the cost of reassigning the label of a spike during a shift, i.e. transforming a spike from one neuron into a spike from a different neuron. Therefore, in this metric, shifts have a cost of q|∆t| + k when reassigning labels and q|∆t| otherwise. To quantify the information in spike times when representing the eight possible target locations, two methods are used. The first method employs a confusion matrix to find a lower bound for the information contained in each distance matrix, in bits [6]. The second method opts for a qualitative rather than a quantitative approach. By applying the t-distributed stochastic neighbor embedding (t-SNE) algorithm [11] to the distance matrix produced by either the single-unit or multi-unit distance analysis, as performed by Vargas-Irwin et al. [12], a low-dimensional representation of the higher-dimensional data can be constructed, which helps to combat the curse of dimensionality.
RESULTS
We looked to see how both the information in the system and the effectiveness of the method evolved over the duration of the experiment. To do this, we first analyzed all 58 cells and found which ones encoded at least 0.1
bits of information for at least one 300 ms time window using the single-unit method at their optimal q value. Next, we computed the pairwise information, at optimal k, for these encoding cells at each time window. Finally, we computed an ensemble information by comparing each of these encoding cells' spike trains at once, essentially taking an aggregate sum of each cell's distance matrix and calculating the information from those distances. For comparison, we included the information found using a rate code for both best pairwise and ensemble coding over time, as well as the best single-unit information (Figure 1).
Figure 1: Information over time for single-unit and pairs.
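For readers who want to reproduce the single-unit analysis, a minimal sketch of the Victor-Purpura distance described in the Methods is shown below. This is a generic implementation of the published metric [6], not the lab's analysis code; spike times are assumed to be in seconds.

```python
def victor_purpura_distance(train_a, train_b, q):
    """Victor-Purpura spike-train distance.

    train_a, train_b: sorted lists of spike times (seconds).
    q: temporal sensitivity parameter (1/seconds).
    Insertions and deletions cost 1; shifting a spike by dt costs q*|dt|.
    """
    n, m = len(train_a), len(train_b)
    # d[i][j] = cost of transforming the first i spikes of train_a
    #           into the first j spikes of train_b
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)          # delete i spikes
    for j in range(1, m + 1):
        d[0][j] = float(j)          # insert j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = q * abs(train_a[i - 1] - train_b[j - 1])
            d[i][j] = min(d[i - 1][j] + 1.0,        # delete a spike
                          d[i][j - 1] + 1.0,        # insert a spike
                          d[i - 1][j - 1] + shift)  # shift a spike
    return d[n][m]

# Example: with q = 100 s^-1, a 5 ms shift (cost 0.5) is cheaper than deleting
# and re-inserting a spike (cost 2).
print(victor_purpura_distance([0.010, 0.250], [0.015, 0.300], q=100.0))
```

The pairwise extension of Aronov [10] adds a label-reassignment cost k on top of the shift term; the same dynamic program applies with that extra cost.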
As described in another study [13], this method of quantifying information is very sensitive to the classifier used, so to construct a better visualization of the clustering performance across different groups of cells, we apply the t-SNE algorithm to our distance matrices (Figure 2). Ideally, as we include more neurons through multi-unit analysis, the locations should separate further. Additionally, we compare the Aronov method's results to both a rate code and to a different multivariate application of the Victor distance, which applies t-SNE to an N x MN distance matrix, where N is the number of trials and M is the number of neurons [12].
Figure 2: t-SNE visualization for late delay 1. Top row, from left to right: best 2 cells, best 5 cells, best 12 cells. Bottom left: rate code, 12 cells. Bottom right: SSIMS (N x MN distance matrix), 12 cells.
DISCUSSION
Although for the target period there was a considerable difference in the information encoded using the time-sensitive metric over a rate code, for the subsequent time periods this difference essentially vanished. One reason could be that one cell, g40c01, encoded a high amount of information (0.4 bits) during the target period and benefitted the most from a higher temporal precision (approximately 8 ms, an order of magnitude finer than the majority of other cells analyzed), making the rate code much less effective in this time window. However, most other cells were not this time-sensitive, and for that reason rate coding performed as well as, if not better than, the distance metric across the rest of the experiment. Across all time periods, incorporating more cells allowed for more information encoding. The t-SNE visualizations do indicate that the spike train distance metric lends itself well to our experiment: in the late delay 1 period, the 8 locations form a circle that resembles the shape of the 3 x 3 grid. It is unclear whether the failure to produce clearer clusters is due to the particular method or to the fact that only a handful of recorded cells contributed significantly (greater than 0.1 bits above shuffled information) to location discrimination. Previous experiments concerning spatial working memory suggest that more of our cells should have contributed to the information through receptive fields [2].
CONCLUSIONS
Using spike train similarity measures appears to have strong potential for understanding the DLPFC, but that potential largely depends on the metric used. Future studies might aim either to use and compare existing methods or to develop one that best suits the recorded data.
REFERENCES
1. Miller et al. Ann. Rev. Neuroscience, 24, 167-202, 2001.
2. Rainer et al. Proc. Natl. Acad. Sci., 95, 15008-13, 1998.
3. Rieke et al. Spikes, 1997.
4. Adrian, J Physiology, 62, 33-51, 1926.
5. Bialek et al. Science, 252, 1854-57, 1991.
6. Victor et al. Network, 8, 127-164, 1997.
7. van Rossum, Neural Comp., 13, 751-763, 2001.
8. Victor, Current Op. in Neurobiology, 15, 585-592, 2005.
9. Sellers, SIAM J Applied Math, 26, 787-793, 1974.
10. Aronov, J Neuroscience Methods, 124, 175-179, 2003.
11. van der Maaten et al. J Machine Learning Research, 9, 2579-605, 2008.
12. Vargas-Irwin et al. Neural Comp., 27, 1-31, 2015.
13. Chicharro et al. J Neurosci Methods, 199, 146-165, 2011.
ACKNOWLEDGEMENTS This award was funded by the Swanson School of Engineering and the Office of the Provost.
DEVELOPMENT OF AN ADAPTIVE BRAIN COMPUTER INTERFACE TO AUTOMATICALLY ADDRESS NONSTATIONARY EEG DATA Matthew Sybeldon Statistical Signal Processing Laboratory, Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: mjs196@pitt.edu INTRODUCTION Brain computer interfaces (BCI) are an emerging input modality for disabled users to access the communicative resources afforded by a computer. Electroencephalography (EEG) signals are one of several candidate signals that can be used to operate a BCI. These signals can be collected through noninvasive electrodes, making them an increasingly popular choice. However, the noninvasive nature makes the signal more prone to noise which may be nonstationary in nature. For example, noise generated by medical devices in a hospital setting may change over time. Other factors such as user fatigue can also cause changes in the statistical characteristics of the EEG signal. In turn, the performance of a classifier used to determine user intent is reduced. Frequent calibration is required to continue system usage. As such, there is motivation to develop an adaptive BCI system to reduce calibration requirements for the user. This project explores the application of an ensemble learning approach for such a purpose.
Elwell and Polikar introduced the Learn++.NSE algorithm to address the problem of incremental learning under possibly nonstationary conditions. The algorithm assesses past classifiers against incoming data sets to assign voting weights, and its authors show that such an ensemble of classifiers can outperform a single classifier [1], particularly when the nonstationarities are cyclical. Learn++.NSE utilizes boosting to emphasize classifiers that can correctly classify error-prone data points. However, Long and Servedio have demonstrated that boosting algorithms can be negatively impacted by noise in real-world systems [2]. The performance of the Learn++.NSE algorithm in a BCI context therefore warrants investigation, to determine whether it can classify EEG signals more efficiently than both a naïvely trained classifier built from the aggregate of all past data points and a classifier trained only on the most recent data. Here, efficiency is defined in terms of system accuracy and the amount of calibration data required to obtain that accuracy.
METHODS
The study consists of fifteen adults at least eighteen years of age. Participants were screened for a history of seizures so that only healthy participants were included. Users were prepped using noninvasive electrodes embedded in an EEG cap. The OZ channel was used to collect data because of the flickering visual stimuli used in the BCI system. A ground electrode was attached to the FPZ channel and a reference electrode clipped to the user's ear lobe. All electrodes were prepped with g.Tec conductivity gel. The electrodes were connected to the g.Tec gammaBox, which was in turn connected to a g.USBamp biomedical amplifier. The amplifier then communicated over USB with a MATLAB program where data from usage sessions could be analyzed.
Participants were asked to undergo three sessions, each comprising a calibration and a test phase. During the calibration phase, two checkerboards flickering at 6 and 20 Hz were displayed, and users were instructed to observe specific checkerboards for five-second intervals. The signal data was transformed into a feature vector used to train the classifiers investigated in the experiment. A linear discriminant analysis (LDA) classifier was trained and added to the Learn++.NSE ensemble. The test phase that follows is similar to the calibration phase, but at the end of each interval the ensemble attempts to classify the signal data as belonging to one of the two classes.
DATA PROCESSING EEG signal voltages were sampled at 256 Hz and transmitted to the system. A FIR bandpass filter between 2 and 45 Hz was used to remove noise in
frequency regions not of interest. Data markers sent through the computer's parallel port to the amplifier were used to extract relevant portions of the signal data. The markers were shifted to account for the phase delay induced by the FIR filter. Welch's power spectral density estimate was used to obtain the feature vectors representing the signal. The feature vectors were four-dimensional, comprising the power at the first and second harmonics of the stimulus frequencies. Comparisons between the ensemble classifier, the latest individual classifier, and a naïvely trained LDA classifier were made. The naïve classifier considers all data points in current and past sessions without special consideration between data points. The accuracy of the system was estimated by calculating the area under the curve (AUC) of the receiver operating characteristic curve. Classifier performance with regard to calibration length was assessed by utilizing all calibration data prior to the most recent calibration session. This was done to provide equal comparison between classifiers. The most recent calibration session was divided evenly into five segments. Each classifier was trained using every possible k-tuple for k < 5. The AUC on the test data was then calculated using these classifiers. The AUCs corresponding to each k-tuple were averaged together to obtain an estimate of the accuracy of the system for the corresponding calibration session length.
RESULTS
Preliminary results were obtained using three calibration sessions and one test session from the same user. The AUCs as a function of the calibration session length are displayed in Figure 1.
Figure 1 - Test AUC as a function of calibration session length
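As a rough illustration of the feature-extraction and evaluation pipeline described above, the sketch below computes band powers at the stimulus harmonics with Welch's method and scores an LDA classifier by AUC. It is a simplified stand-in, not the study's MATLAB code; the sampling rate and harmonic frequencies follow the text, while the windowing details and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

FS = 256                      # sampling rate from the text (Hz)
HARMONICS = [6, 12, 20, 40]   # first and second harmonics of the 6 and 20 Hz stimuli

def extract_features(epoch):
    """Four-dimensional feature vector: PSD at each stimulus harmonic."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)   # 1 Hz resolution (assumed window)
    return np.array([psd[np.argmin(np.abs(freqs - f))] for f in HARMONICS])

# Synthetic demo data standing in for OZ-channel epochs (5 s each).
rng = np.random.default_rng(0)
t = np.arange(5 * FS) / FS
def make_epoch(f):  # noisy sinusoid at the attended frequency
    return np.sin(2 * np.pi * f * t) + rng.normal(scale=2.0, size=t.size)

X = np.array([extract_features(make_epoch(6)) for _ in range(20)] +
             [extract_features(make_epoch(20)) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # "calibration" half
scores = clf.decision_function(X[1::2])                  # "test" half
print("AUC:", roc_auc_score(y[1::2], scores))
```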
The naïve classifier AUC was, for the most part, constant. The AUCs of the most recent and Learn++.NSE classifiers monotonically increased, with the Learn++.NSE classifier always having the greater AUC.
DISCUSSION
The early results indicate the potential of the Learn++.NSE classifier to outperform other classifiers with shorter calibration data sets. The naïve classifier remained constant in AUC because it drew upon all past calibration sessions; additional calibration data changes the calculated means and covariances less, which explains the relatively constant performance. This would indicate that a classifier that combines data in such a manner may become increasingly inflexible to nonstationarities as the data set grows, preventing its use in practical systems. The Learn++.NSE ensemble outperformed the most recently trained classifier in all cases, but more so for short calibration sessions. For extremely short calibration sessions, the ensemble trailed behind the naïve classifier. However, as the length increased, the ensemble quickly outperformed the naïve classifier, and this held true for all but the shortest calibration length. These observations combined indicate the potential for the Learn++.NSE classifier to reduce the calibration necessary to operate a steady-state visually evoked potential (SSVEP) BCI system.
REFERENCES
1. Elwell, Polikar. IEEE Transactions on Neural Networks 22, 1517-1531, 2011.
2. Long, Servedio. Machine Learning 78, 287-304, 2010.
ACKNOWLEDGEMENTS
Research funding was provided by the University of Pittsburgh Swanson School of Engineering Summer Research Internship and the Office of the Provost. Dr. Akcakaya provided guidance in the ensemble classification approach. Stephen Snow provided assistance with development of the stimuli presentation.
ERADICATING BAD BIT PATTERNS SURROUNDING CORRUPTED MEMORY CELLS IN DRAM Erin Higgins Department of Computer Engineering University of Pittsburgh, PA, USA Email: elh76@pitt.edu
INTRODUCTION
As technological demands increase, DRAM has been scaled to increase memory density. Though this is required to keep up with the demands of technology, it is causing errors. While error correction code (ECC) can correct some of these errors, it is being discovered that some of them are more persistent and, therefore, cannot be handled by ECC. These errors can also cause issues in other nearby cells [3]. The current method for dealing with these bad cells is to replace them with an extra block of good cells. Unfortunately, as the technology continues to be scaled, the number of errors is becoming greater than the number of good blocks. When this happens, the entire block has to be marked as bad and cannot be used. Studies into what is causing these issues show that certain bit patterns surrounding a cell make it more likely to misread. These "bad patterns" that need to be avoided depend on the circuit layout of the memory. For this study, previous research was used to choose the patterns "000" and "111" as the bad patterns [2]. The goal of this research is to fix these bad patterns to ensure that cells will not return false data. Early intervention with architectural solutions will increase the chances that these memory chips remain reliable.
METHODS
In order to solve the issue of data corruption, unusable cells, and bad bit patterns, three basic schemes were created and tested using Pin tools. The first scheme used compression to attempt to fix bad cells. If a line contained a bad cell, it would be compressed if possible. After it was
compressed, a bit was inserted at every other position to break up bad patterns. The bit inserted was always the inverse of the previous bit, so it was impossible to have the pattern "000" or "111". In this case, the problem would always be solved as long as the line could be compressed. Lines could be compressed if certain patterns were found within the line pre-compression, such as all zeros or a repeating value. A prewritten compression tool was used and edited for this project [1]. The next scheme involved flipping every other bit to break up patterns around bad cells. In this case, when there was a bad cell in a row, the tool would flip every other bit in order to break up the bad patterns around cells. The final scheme was flipping every third bit. For this portion of testing, the tool would go into the line and flip every third bit, starting with the first one. If that left a bad pattern behind, it would try starting from the second bit, and if that still left bad patterns, from the third. If all of these choices failed, the line was marked as bad and could not be used in the future. The three schemes were run at the same time on the PARSEC benchmarks. Each was run for an hour so that the tool had time to get to the main part of the benchmark without having to run each of the tests to completion. The tool was run on a total of 11 different benchmarks. For the results discussed in this paper, the cells that were marked bad were chosen at random and corrected. This is an ongoing study, and results will soon be available that mark bad cells based on research into the actual distribution of bad cells in DRAM.
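A minimal sketch of the third scheme (flipping every third bit, falling back to the other two offsets) is shown below. It is an illustration of the idea rather than the Pin tool used in the study, and it assumes the "bad patterns" are simply any occurrence of three identical consecutive bits.

```python
BAD_PATTERNS = ("000", "111")   # patterns assumed to cause misreads [2]

def has_bad_pattern(bits):
    """Return True if the line contains either bad pattern."""
    return any(p in bits for p in BAD_PATTERNS)

def flip_every_third(bits, offset):
    """Flip every third bit of the line, starting at the given offset."""
    out = list(bits)
    for i in range(offset, len(out), 3):
        out[i] = "1" if out[i] == "0" else "0"
    return "".join(out)

def scrub_line(bits):
    """Try offsets 0, 1, 2 in turn; return (fixed_line, offset), or
    (None, None) if every offset still leaves a bad pattern (line is bad)."""
    for offset in range(3):
        candidate = flip_every_third(bits, offset)
        if not has_bad_pattern(candidate):
            return candidate, offset
    return None, None

# Example: a line of all zeros is repaired by the first offset.
print(scrub_line("00000000"))   # -> ('10010010', 0)
```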
RESULTS As seen in figure 1, the different tools worked extremely well for fixing errors found in DRAM.
Figure 1: Results of each test with 2% of the cells being marked as bad randomly
Flipping every third bit almost always fixed the errors it was presented with and corrected about 99% of the bad cells. Flipping every other bit was also a viable option, as it fixed about 94% of errors. Compression was not the most effective option because, as seen in Figure 1, data is often uncompressible; because data is not always compressible, that method will fail frequently and unpredictably.
DISCUSSION
So far, the best option appears to be flipping every third bit. That method almost always eradicated the bad patterns and would be the best option for salvaging the greatest amount of usable space in DRAM. The other two methods did not fail badly, however. They were each able to save some space in DRAM and could also be considered viable options. Using the third-bit flip method would help ensure that DRAM remains reliable with use. The current method for dealing with these cells is simply to mark them as bad and not use them in the future. The method of flipping every third bit would save 99% of these cells and ensure that the user still has room for their data in DRAM. Flipping bits does not take up any extra space in memory and requires little tampering with the data to keep it readable for the user. A simple change like this could save a large amount of space.
This study is just the beginning of the discussion of how to fix these bad patterns and save cells in DRAM before marking them as unusable. This first test of ideas shows that, independently, each method works fairly well, but a combination of methods could fix these errors 100% of the time. If used together, compression and bit flipping could save almost all of the cells that would otherwise be marked as unusable. First, compression could be attempted. If that does not work, the tool could go back and flip every third bit, and on the very small chance that neither method worked, every other bit could be flipped. As technology scales down and these errors become more prevalent, something will have to be done to ensure that the space remains usable. Advancements will mean less if the data that people need becomes unreliable, so this problem needs to be dealt with quickly.
REFERENCES
1. G. Pekhimenko, V. Seshadri, O. Mutlu, M. Kozuch, P. Gibbons, T. Mowry. Base-Delta-Immediate Compression: Practical Data Compression for On-Chip Caches.
2. Z. Yang, S. Mourad. Crosstalk Induced Fault Analysis and Test in DRAMs.
3. Y. Kim, R. Daly, J. Kim, C. Fallin, J. H. Lee, D. Lee, C. Wilkerson, K. Lai, O. Mutlu. Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors.
ACKNOWLEDGEMENTS
Research was conducted in the Department of Computer Engineering in Benedum Hall at the University of Pittsburgh under Professor Alex Jones. This award was funded by the Swanson School of Engineering and the Office of the Provost.
DESIGN AND IMPLEMENTATION OF A PORTABLE SOCIAL ROBOT ON ANDROID WITH SPEECH RECOGNITION AND TEXT TO SPEECH Brian J. Rhindress, S.S. Ge, Fangwen Tu Social Robotics Lab, Department of Electrical and Computer Engineering National University of Singapore, Singapore Email: bjrhindress@gmail.com
INTRODUCTION
Many science fiction plots are predicated on the idea that robots will inevitably gain human characteristics. The modern equivalent of this idea is social robotics, the concept of creating machinery that interacts with humans in the human world. After all, people communicate with each other via nonprogrammatic media: touch, sound, sight, etc. At the same time, modern technology is pushing ever towards mobile platforms. Mobile computing and robotic platforms are popular because of their versatile potential applications, scalability, and ease of control. Since technology is moving in both of these directions, there is motivation to combine social and mobile robotics. All work presented hereafter builds on the previous hardware and software development of NUS Electrical Engineering Masters student Chang Poo Hee [1]. The physical robot used is a two-motor, treaded tank-style robot powered by an Ultralife rechargeable 20 V battery. An Arduino ATmega2560 microcontroller board is used to communicate commands to the motors for movement. Commands to the Arduino come via a wired Android-Arduino USB serial connection. Note that for this connection to work, an Android phone that supports USB hosting must be used; this may not come standard, since in most cases a USB connection from an Android phone is to a computer, which acts as the host. Using these hardware and software interfaces as a foundation, we seek to show a proof of concept for a social robot. To show some element of people-likeness, the design considerations for the robot are as follows: • Simple artificial intelligence enabling conversational ability • Emotional detection of human counterparts • Memory of past interactions and/or ability to learn • Navigational abilities • Modular development for future feature augmentation.
With these considerations in mind, a use case for the robot would be one in which you could: 1. Introduce yourself to the robot and tell it something about yourself. 2. Tell the robot how you are feeling and have it react accordingly. 3. Say goodbye and part ways with the robot. 4. Have a new conversation with the robot and have it remember you.
METHODS
The system design builds on the framework already provided, in that the applications use a client-server architecture for delegating duties. In this design, the user client application implements a speech recognition package for interpreting user conversation commands. This client application also contains a decision tree module that chooses what the robot will say when it is spoken to. However, the system is purposefully designed to segregate the intelligence of decision making from the action of speaking, for various reasons, chief among them maintainability and ease of upkeep. Therefore, the server implements Android's Text to Speech (TTS) package, in addition to the previous features. The following describes the finer details of these modules and instructions for a reproducible implementation. After the user speaks to the robot, a decision tree/finite state machine chooses what the robot will reply. This central logic unit is realized as a finite state machine: the robot knows its current state and what was last said, and it chooses what to say next based on the user's recognized speech. Decisions are made in one of two ways, depending on the state. In a negative search pattern, the recognized phrase is reviewed in its entirety, looking for and discarding unwanted words; for instance, in the introduction state, we look for "I", "am", etc. This is convenient for finding names, since we do not need to search a large bank of names. The other way of searching is positive logic, in which
target activation words bring the robot into a state; for instance, the word "sad" activates puppy mode.
RESULTS
In its current state, the robot is capable of having a simple conversation and entering one of three action modes. Each mode is activated by a positive logical search as described above, i.e. by listening for key target activation words.
Figure 1: Finite state machine representing conversational logic for the different robotic modes.
The first of these modes is puppy mode, in which the robot acts like a little puppy dog, changing its face to a dog, playing a puppy bark .mp3, and moving around sporadically. Puppy mode is triggered when the user alludes to feeling sad or wanting a companion in some way (trigger words: sad, puppy).
Wingman mode is a friend-making mode in which the robot introduces you to some of the other profiles stored in its database from previous interactions. This mode was motivated by the idea of a social robot that connects people and augments human-human interaction instead of replacing it (trigger words: lonely, alone, wingman). In wingman mode, the robot will first attempt to introduce you to one of its stored profiles. If it has never met anyone before, it will act as a standard wingman and speak some pickup line!
Navigation mode asks you to direct the robot to the nearest landmark to help determine where you are (trigger words: lost, navigate). This mode is somewhat experimental in that it aims to use future modular integration to resolve a location using image processing and object recognition. See the future work section for ideas on next steps for this mode.
DISCUSSION
This implementation uses an n-gram statistical model with the Sphinx package for both normal talk and navigational mode. I found the grammar file setup to be somewhat buggy, so I decided to use an n-gram model for navigational mode instead; however, this is probably slower and less accurate, so a future implementation should use a grammar language model for navigational mode. Additionally, the puppy dance currently resides on the server. There is no technical need for this, and the code should be moved to the client to obey the given architecture principles. Lastly, a larger corpus and more versatile client commands would be useful for a more realistic speech recognition model.
REFERENCES
[1] Chang, Poo Hee. "A Mobile Robot Platform Empowered with Android Smartphones." (2015). Print.
[2] http://androiddevelopers.blogspot.sg/2009/09/introduction-to-text-to-speech-in.html
[3] http://cmusphinx.sourceforge.net/wiki/tutorial
[4] http://developer.android.com/develop/index.html
[5] http://tutorialspoint.com/android/android text to speech.htm
[6] https://github.com/mik3y/usb-serial-for-android
ACKNOWLEDGEMENTS
This award was funded by the Swanson School of Engineering and the Office of the Provost.
DUQUESNE LIGHT POWER DISTRIBUTION MODEL CREATION Carl W. Morgenstern and Thomas McDermott Department of Computer and Electrical Engineering University of Pittsburgh, PA, USA Email: cwm30@pitt.edu
INTRODUCTION
Traditional power generation and distribution has hardly changed in the past 50 years. Most systems consist of a power generation plant that distributes power over electrical power lines to households and businesses. But with the falling cost and rising efficiency of solar panels, more households are beginning to generate their own power. This confounds the existing power distribution system, forcing power companies to adapt. Duquesne Light, the primary power company of Pittsburgh, has detailed schematics of its distribution system. Drawn to scale, these schematics include all components that Duquesne Light uses to distribute power safely to consumers. Unfortunately, they were drawn in the early 90s and are only useful as a rough guide. New power distribution (PD) modeling software has given PD companies the ability to see failures before they happen, and it is also capable of modeling solar panel generation at consumer homes. Given the complications of adding solar power to the grid, and the increasing availability and use of solar panels, a programmable model is necessary.
CREATING THE MODEL
The goal of this project was to create a method to convert Duquesne Light's PD schematics into OpenDSS models (OpenDSS is an open-source PD modeling system). OpenDSS uses a "buses" system: components, such as power lines or transformers, connect the buses. Values can be added to all the
components, such as the KVA rating of a transformer, or the impedance of a power line. To create an accurate OpenDSS model, the model requires the locations and values of every component (power lines, transformers, switches, etc.). These data, however, are not currently cataloged in a manner that allows straightforward modeling. The models we received from Duquesne Light were AutoCAD drawings modeling their electrical feeder system. There are seven different feeder models, each consisting of 3-8 different subsections of the drawing. Figure 1 is the AutoCAD drawing of one of the more complex subsections connected to the Oakland feeder.
Figure 1: An edited section of a Duquesne Light Schematic used to model part of Oakland.
AutoCAD is visual modeling software: a user can select a component and see general information, such as local coordinates and the name of the object. AutoCAD has a Visual Basic (VB) scripting language that allows users to interact with the drawing via script. We used VB to select components automatically and extract the relevant data based on specified attributes. Using VB, we recorded all the data available in the AutoCAD drawings into two text files. One text file held the power line data, including generic values for each power line, and the other held the transformer data (such as locations and transformer sizes). Using the labels of the components, we were able to find all the values necessary for a complete OpenDSS model. From the files generated by the VB scripts, a Python script created a text file in the format that OpenDSS can understand. Figure 2 shows the OpenDSS model of Figure 1.
Figure 2: The blue lines are connected to the power source. The pink lines are not connected and were pointers in the AutoCAD drawing. All the lines are connected but some do not have coordinates because they were generated afterward. The thickness of the line is a representation of the current.
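As an illustration of the conversion step, the sketch below turns extracted records into OpenDSS statements. The file layout and column names are invented for the example (the actual extraction files are not reproduced in this report), and only a minimal subset of OpenDSS line and transformer parameters is shown.

```python
import csv

def write_opendss(line_csv, xfmr_csv, out_path):
    """Convert extracted power-line and transformer records into an OpenDSS
    script. Column names (bus1, bus2, length_km, kva, kv_hi, kv_lo) are
    placeholders for whatever the VB extraction actually produces."""
    with open(out_path, "w") as out:
        with open(line_csv) as f:
            for i, row in enumerate(csv.DictReader(f), start=1):
                out.write(
                    f"New Line.L{i} Bus1={row['bus1']} Bus2={row['bus2']} "
                    f"Length={row['length_km']} Units=km\n")
        with open(xfmr_csv) as f:
            for i, row in enumerate(csv.DictReader(f), start=1):
                out.write(
                    f"New Transformer.T{i} Buses=[{row['bus1']}, {row['bus2']}] "
                    f"kVAs=[{row['kva']} {row['kva']}] "
                    f"kVs=[{row['kv_hi']} {row['kv_lo']}]\n")

# Example usage (with hypothetical extraction files):
# write_opendss("lines.csv", "transformers.csv", "oakland_feeder.dss")
```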
Because the AutoCAD models were drawn only as a visual reference, we had to make some manual edits in order for the VB scripts to work properly.
Figures 1 and 2 show only one section of their respective feeder model. Each section has its own local coordinate system, so all the smaller drawings must be offset to create a fully connected OpenDSS model.
THE NEXT STEP
To have a fully functioning OpenDSS model, more needs to be done. There are approximately 40 drawings, and each has to be edited so the VB script will work properly. Then the drawings have to be offset to the proper position (a script has been written to do this, but the correct values still must be entered). The most time-consuming part is adding the correct impedances to all the power lines; because not every line segment has a label, the values have to be added manually. Once the model is completed, it can be tested and modified to allow for new components, such as solar panels. With proper programming, the model can also analyze lightning strikes and power surges. The upgraded model will help Duquesne Light distribute power more safely and efficiently in the years to come.
REFERENCES
1. Ambrosius, Lee. "AutoCAD VBA: Programming with VBA." AutoCAD® Platform Customization: User Interface, AutoLISP®, VBA, and Beyond. Indianapolis, IN: John Wiley & Sons, 2015. Print.
2. Autodesk. AutoCAD 2016. http://www.autodesk.com/products/autocad
3. Python 3.4, www.python.org
4. OpenDSS, sourceforge.net/projects/electricdss/
ACKNOWLEDGEMENTS
Funding was provided by the Swanson School of Engineering and the Office of the Provost. Thomas McDermott was a bank of information and advice, and I would have been lost without him.
Enhancement of the Mobile Application "Where Am I 2" -- An Indoor Localization App Using Wireless Networks Xu Y.L. Swanson School of Engineering, Department of Electrical and Computer Engineering, University of Pittsburgh, PA, 15261, United States Email: yix35@pitt.edu
INTRODUCTION
Wireless technology for navigation has become increasingly common and capable. A familiar example is the mobile application Google Maps, which can provide location and time information almost anywhere, at any time, as long as the user is outdoors under the open sky. The underlying system is the Global Positioning System (GPS), a space-based satellite navigation system. In recent years, however, people spend more and more time indoors, and indoor environments have become more complicated, so users have begun to expect accurate and fast localization indoors as well as outdoors.
Many technologies are used for indoor localization. We use a Wi-Fi-based positioning system (WPS), since it offers the best cost-performance ratio. In this project, I integrate outdoor and indoor localization so that people do not need to switch applications when they change environments. This capability distinguishes our application from other map applications: ours shows an indoor view when the user zooms in on a building, while others do not.
DESIGN AND IMPLEMENTATION
Contents of the old version of "Where Am I": The old version of the app contains three parts. The first is the "my location" tab, which shows a table with detailed information about all the access points in the surroundings. The second tab, "outdoor," is simply a Google map that uses GPS for navigation and provides time and location information. The third, the "indoor" tab, uses WPS instead of GPS; based on the position calculated by the system, it displays the appropriate floor plan.
Unfortunately, this code was written almost three years ago. The Google API it uses is outdated, so the outdoor tab shows a blank map. The same problem occurs on the indoor side: the application always fails to fetch the map, mainly because the NUS map server is no longer maintained or updated and cannot be reached through its URL.
New design of the "Where Am I" app: The intention of this new version is to provide a comparison. The target audience is not customers but investigators; the app serves as a demo showing what our approach can do that other map apps cannot. Other localization apps only show outdoor maps, and users cannot zoom in any further to get an indoor floor map once the highest zoom level (19) is reached. Our technology can. In this app, when the outdoor map cannot be enlarged further, the view changes to indoor mode, and the localization technology changes with the view. When people are outside, they use the base map with GPS. If they want to shop inside a mall, they simply walk in and zoom in on the screen; the app shows the floor plan, and the technology switches to WPS, an indoor technology for locating people.
Implementation and Results
To implement this new design, the following steps were taken. The first was to convert all the source code from Eclipse to Android Studio. The conversion produces a set of similar errors related to third-party libraries: a map application relies on Google or ArcGIS services, and those libraries are not bundled with Android Studio, so the project must be connected to the related services to obtain them. After the conversion, the Google map had to be replaced with an ArcGIS map, since the indoor map server we use is ArcGIS-based. I did not delete all of the Google-related code; instead I created a new class, "MapActivity.class," and connected the outdoor tab to it (Appendix A is the MapActivity code). We created a new ArcGIS map, enabled the location service, and adjusted the zoom level to the most suitable size. To integrate indoor and outdoor localization, another layer must be added on top of the base map. This new layer is an indoor map of all the NUS buildings, served from "http://arcgis.amilab.org/arcgis/rest/services/CampusNetwork/CampusMapService/MapServer". Once the indoor map layer was added, the combined outdoor-and-indoor tab was finished. To complete the comparison, I also linked the MapActivity class to the original outdoor tab so that two views can be compared: one with indoor localization and one without. The figures for all three tabs show the map at different zoom levels; when people zoom in, they can see the indoor floor map.
CONCLUSION
The main point of this upgrade is to show investigators that our technology can make the map more helpful, since people can view indoor maps. Some parts of the project remain to be solved. The background service has not been finished, so the app keeps running after the user leaves it. Another problem was discovered: although the app runs successfully on my HTC test phone, it crashes on newer phones such as the Samsung S5, likely because the outdated API used in the old version does not work on the latest phones.
REFERENCES
1. A Robust Dead-Reckoning Pedestrian Tracking System with Low Cost Sensors. Yunyue Jin, Hong-Song Toh, Wee-Seng Soh, Wai-Choong Wong. 2011 IEEE International Conference on Pervasive Computing and Communications (PerCom), Seattle, March 21-25, 2011.
2. A Survey of Indoor Positioning and Object Locating Systems. Hakan Koyuncu, Shuang Hua Yang. IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 5, May 2010.
3. A Survey on Localization Techniques for Wireless Networks. Santosh Pandey, Prathima Agrawal. Journal of the Chinese Institute of Engineers, Vol. 29, No. 7, pp. 1125-1148, 2006.
ACKNOWLEDGEMENTS
This work was done at the Ambient Intelligence Laboratory at the National University of Singapore under the guidance of Professor Wong. Funding was provided by the Swanson School of Engineering and the Office of the Provost.
Analyzing the Life Cycle Assessment of Waste Treatment Scenarios in Singapore Diana Hoang and Alannah Malia Department of Industrial Engineering University of Pittsburgh, PA, USA Email: dbh14@pitt.edu
INTRODUCTION
Municipal solid waste (MSW) is generated every day, with the majority disposed of in landfills. Landfills take up a large amount of space and require an excessive amount of resources to maintain. They also raise concerns about harmful greenhouse gas emissions as the organic waste they contain breaks down [1]. In Singapore, the Semakau Landfill is the only landfill in operation, and it accepts only inorganic and inert waste [1]. This means that the waste disposed in this facility will not decompose any further and will not release any harmful emissions. The issue is that Semakau Landfill is expected to reach capacity by 2035, which is a concern for Singapore, since it does not have the space available to open another landfill facility [3]. To combat this issue, Singapore is looking at four methods of turning waste into energy: incineration, anaerobic digestion, aerobic composting, and gasification. Incineration is the most popular form of turning waste into energy because of its ability to take in large amounts of waste daily. The process burns the waste, releasing greenhouse emissions such as nitrogen oxides and sulfur oxides. These harmful emissions increase the global warming potential (GWP), which is a measure of how much energy a gas absorbs over a given period of time compared to the emissions of carbon dioxide [4]. If a gas has a significantly high GWP, it is harmful for the environment, since it causes a drastic change in the atmosphere and does not allow necessary gases to escape into space. Anaerobic digestion (AD) and aerobic composting (AC) are becoming more popular because they recycle the waste. AD takes in MSW and
converts it into biogas; biogas can then be converted into energy. Furthermore, AC converts the MSW into bio-compost which can be used as a replacement for mineral fertilizers [2]. This is good for the environment because it will reduce the production of carbon dioxide that is made from manufacturing mineral fertilizers. Furthermore, gasification turns MSW into syngas via chemical reactions. This would most likely allow for a reduction in harmful emissions, but may end up being more costly since it does not appear to be a common practice. METHODS To analyze the life cycle assessment (LCA) of these four processes, a software called iThink has been used in order to create a model. iThink is a software that is able to provide a sensitivity analysis for different scenarios. This will allow policy makers to become more informed in their decision making process as well as more aware of the possible consequences that can take place after implementing certain policies. The following assumptions were made in creating the model: • Most values used within the model represent the standards of facilities within Singapore. • The values used for gasification are estimated based off of values found for biomass and coal gasification. • The model created is assumed to theoretically represent the life cycle assessment of the four processes combined. • Transportation and fuel costs are negligible in this model. • There will be a constant increase in all values including costs, waste generated,
and other values used in the model over time.
DATA PROCESSING
The data was gathered through thorough research across various sources found in online databases. Furthermore, data was also taken from other research documents provided by research students at the National University of Singapore. The data was placed into the model so that users do not have to worry about inputting these values when working with the model.
RESULTS
The model created is user friendly and allows the user to identify which processes are to be analyzed. The user can switch the on/off button for the four processes as well as define the percentage of waste going into each of the four processes. Likewise, the model allows the user to input prices for gate fees, processing fees, and energy prices. Economic values were added to the model to analyze how feasible each scenario is; although environmental safety is important, economic feasibility also matters because there is a limit on what can be spent if profits are not being made.
DISCUSSION
When the switches are all turned on with a specified percentage of waste directed to each process, a plot such as Figure I can be produced. The global warming potential is significantly smaller than in a scenario focused only on incineration, and the profits are shown to increase over time as well. This suggests that a scenario in which all four processes are present would be much preferred to any single process by itself.
Figure I: This graph illustrates a scenario where all four processes are used; the LCA tracks bio-compost produced (blue), GWP (pink), profit/loss per day (green), and total profit over time (yellow).
Possible errors: The values found in the literature may not be consistent, which can cause this model to produce outputs that differ from other models. Furthermore, the model only includes GWP, so it may not accurately indicate overall environmental feasibility. Further research is suggested in order to overcome possible errors that can arise.
REFERENCES
1. Erwina, Lee Zi Ping. Life Cycle Cost Analysis of Alternative Household Food Waste Recycling Systems in Singapore. PhD Thesis. Singapore, 2014. Print.
2. Hsien H. Khoo, Teik Z. Lim, Reginald B.H. Tan. "Food waste conversion options in Singapore: Environmental impacts based on an." Science of the Total Environment (2010).
3. Tengyu, Xie. Life Cycle Assessment of Environmental Impacts for Food Waste Recycling Alternatives in Singapore. PhD Thesis. Singapore, 2014. Print.
4. United States Environmental Protection Agency. Understanding Global Warming Potentials. 7 5 2015. Web.
ACKNOWLEDGEMENTS This award was funded by the Swanson School of Engineering and the Office of the Provost. Further acknowledgment goes to faculty advisers Dr. Karen Bursic at the University of Pittsburgh and Dr. Adam Ng at the National University of Singapore.
SIMULATING WASTE TO ENERGY PROCESSES USING LIFE CYCLE ASSESSMENT AND DYNAMIC SYSTEMS MODELLING Alannah Malia Department of Industrial Engineering University of Pittsburgh, PA, USA Email: ajm225@pitt.edu INTRODUCTION: THE NEED FOR WASTE MANAGEMENT Handling municipal solid waste in an environmentally friendly way while still being economically favorable is a problem that affects many mega cities and countries today. Landfilling is the most often used and thought of method when disposing of waste; however, this method has many drawbacks. Landfills that contain organic waste release greenhouse gases into the atmosphere as the waste decomposes. The disposal area is unsightly and can release unpleasant odors, meaning that they cannot be placed close to population centers. Most importantly in the case of Singapore, landfills require a large amount of land space to be sacrificed for the dump. This is also a concern for other megacities: in order to avoid placing landfills near where citizens live, the waste could potentially need to be transported a great distance which is not economically desirable. In Singapore there is currently one landfill in operation. This facility only accepts inert and inorganic waste meaning that almost all municipal solid waste must be sent to an alternative facility to be processed first. In time the space in the landfill will run out; thus, it is very important that incoming garbage is reduced in mass so that the least possible amount of inert waste is sent to the landfill. Singapore currently has multiple incineration plants and is looking into and integrating other Waste to Energy systems such as anaerobic digestion, aerobic composting, and gasification. All of these different waste treatment facilities can be used to further process the municipal solid waste, creating energy and decreasing the mass of the waste to be sent to the landfill. This paper will detail the work carried out by Alannah Malia in the summer of 2015 both at the University of Pittsburgh and the National University of Singapore. This work centered on creating a
model that can simulate different disposal methods used in combination, so as to see the benefits and drawbacks of any single waste-to-energy method as well as any combination of methods.
METHODS
In order to create a complex and complete model, the software iThink was used. This dynamic systems modeling software allows users to integrate many different variables and equations and to model different combinations of Waste to Energy treatment systems. In this project, four Waste to Energy treatments were studied: incineration, gasification, anaerobic digestion, and aerobic composting. iThink utilizes stocks, flows, connectors, and converters to create a dynamic model. With proper input of equations and trends, the model is able to simulate the impact of waste disposal treatments over the course of time. With the right information, the impact can be studied and simulated many years into the future.
DATA PROCESSING
In order to obtain the information needed to create an accurate model, a great deal of research was required. Online databases as well as university libraries were used for the acquisition of input materials. Past students' research was also provided and used. The model starts with a sector that simulates waste generation in a megacity, specifically Singapore. This sector takes into account the initial size of the population as well as projected population growth. Municipal solid waste (the focus of the study) is also separated from recyclable waste. This waste, measured in tons, then flows into the Waste to Energy treatment sectors.
There are four different Waste to Energy treatment sectors: incineration, gasification, anaerobic digestion, and aerobic composting. In these sectors the waste flows through a number of converters, stocks, and flows and ends up as energy measured in kWh. Each sector also tracks the greenhouse gas output of its process and converts these volumes into global warming potential. In the case of aerobic composting, there is no energy output; instead, the amount of compost is measured. In all sectors, processed inert waste, mostly ash, is transported to the landfill sector. Some of the Waste to Energy treatments create organic, non-inert residue; the model handles this waste by filtering it back into the other treatment sectors so that it can be rendered inert. Once the inert waste enters the landfill sector, it flows into a stock where it accumulates. The Economics sector calculates how much revenue each Waste to Energy treatment generates. This number is found by adding together the money produced from selling the energy and the gate fees, and then subtracting processing and other miscellaneous costs. A sector was also created to deal with plants going over capacity. This sector, titled Number of Plants, detects when each plant is about to reach its capacity and then pulses the cost of building a new plant into the profits. This way these costs are taken into account as they are incurred, instead of all the costs being pulsed at the beginning of the simulation. Finally, a Profit Only sector adds together all the net profits from each of the Waste to Energy treatment sectors and the Landfill sector to calculate a total profit. In order to estimate the total environmental impact of the combination of treatments, the Environmental Concerns sector adds together all of the global warming potential generated by each sector, providing researchers with a comprehensive number. Many of the numbers within the model are variables that can be set by the user. This allows the numbers that change often, for instance electricity prices, to be kept up to date. Other variables, like the percentage of waste that flows to each sector, can also be set by the user in order to create different scenarios to test.
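To illustrate the stock-and-flow structure described above, the sketch below runs a drastically simplified version of such a model in plain Python. All of the numbers (waste per person, energy yields, emission factors, prices) are invented placeholders, not values from the iThink model.

```python
# Simplified stock-and-flow simulation of a waste-to-energy system.
# Every coefficient below is a placeholder, not a value from the study.
DAYS = 365
population = 5_500_000          # people
waste_per_person = 0.0015       # tons of MSW per person per day (assumed)
split = {"incineration": 0.6, "gasification": 0.1,
         "anaerobic_digestion": 0.2, "aerobic_composting": 0.1}
energy_kwh_per_ton = {"incineration": 550, "gasification": 600,
                      "anaerobic_digestion": 300, "aerobic_composting": 0}
gwp_per_ton = {"incineration": 0.9, "gasification": 0.5,
               "anaerobic_digestion": 0.2, "aerobic_composting": 0.1}
ash_fraction = {"incineration": 0.1, "gasification": 0.08,
                "anaerobic_digestion": 0.05, "aerobic_composting": 0.05}
energy_price = 0.12             # $ per kWh (assumed)
processing_cost = 40.0          # $ per ton (assumed)

landfill_stock = 0.0            # accumulated inert waste (tons)
total_gwp = 0.0                 # accumulated CO2-equivalent (tons)
total_profit = 0.0              # energy sales minus processing costs ($)

for day in range(DAYS):
    waste_today = population * waste_per_person
    for process, share in split.items():
        tons = waste_today * share
        total_gwp += tons * gwp_per_ton[process]
        landfill_stock += tons * ash_fraction[process]
        revenue = tons * energy_kwh_per_ton[process] * energy_price
        total_profit += revenue - tons * processing_cost

print(f"Landfill stock after one year: {landfill_stock:,.0f} tons")
print(f"Total GWP: {total_gwp:,.0f} tons CO2-eq, profit: ${total_profit:,.0f}")
```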
RESULTS
The research did not produce numerical results; rather, the goal of the project was to create a working and realistic dynamic systems model that can be used by other researchers and decision makers.
DISCUSSION
While the model created takes into account a great number of variables, equations, and trends, it could still benefit from further work and study. Treatments like gasification are not very well studied in terms of waste management, especially on a large scale. Further research into this topic would help make the model more accurate in predicting the costs, capacity, and environmental impacts of gasification plants. It may also be worthwhile to add a transportation sector to the model. In a place such as Singapore, transportation costs can be considered minimal and ignored, but this is not the case in all megacities. The transportation costs could be set as user-defined variables based on waste pickup routes in each city, and this number could then be incorporated into the Economics sector of the model. As policymakers and others in positions of power seek economically sound and environmentally green solutions to waste problems, they are going to need tools like the model created here in order to make educated decisions. The model allows many different factors to be considered in the analysis, including capital costs, waste distribution, energy costs, etc. It is important that a tool like this one can manipulate and examine multiple variables so that the model can represent the complex real-world problem as closely as possible. While the model in its current form still has many limitations, it should provide a valuable tool to those who wish to analyze waste and energy solutions.
ACKNOWLEDGEMENTS
This award was funded by the Swanson School of Engineering and the Office of the Provost. Special thanks go to Professor Bursic and Professor Ng for their guidance and mentorship.
BLACK SILICON FABRICATION FOR PHOTOVOLTAICS Mohamed A. Kashkoush Laboratory for Advanced Materials at Pittsburgh, Department of Industrial Engineering University of Pittsburgh, PA, USA Email: mak280@pitt.edu, Web: http://www.pitt.edu/~pleu/Research/index.html INTRODUCTION Silicon-based solar cell efficiency is limited by silicon’s high reflectivity, where typically over 30% of incident light is reflected [1]. Minimizing the amount of reflected light serves to maximize the amount of absorbed photons in the silicon semiconductor, thus increasing the quantity of excited electrons per unit of incident light. This creates a denser electric current across the cell’s p-n junction, leading to an overall increase in the cell’s power conversion efficiency. Anti-reflective coatings, such as silicon nitride, are currently the most effective method used to reduce light reflectivity. However, nanotechnology has emerged as a promising solution towards reducing reflectivity at a low cost. Fabricated arrays of repeating geometrical subwavelength nanostructures, such as columns, wires, cones and pyramids, on the surface of silicon have been shown to possess excellent light trapping abilities [2]. These nano-arrays are typically patterned by a combination of various processes such as reactive ion etching, plasma enhanced chemical vapor deposition, lithography, evaporation, etc. These patterning methods all involve pre-fabricated masks, which serve as templates for etching and deposition processes. Regardless, fabricating both the masks and the arrays involves multistep synthesis – expending valuable time and expensive resources with each step. Black silicon, named as such because of its black appearance to the naked eye, can be more accurately described by the nanostructures that define its surface. These structures vary from high aspect ratio nanoneedles to low aspect ratio nanopyramids. Similar to the light trapping effects seen in other nanoarrays, black silicon exhibits low incident light reflection (below 2% in the visible light wavelength range). This low reflectivity explains why the silicon appears black, as a near-absent amount of visible light is reflected to the human eye. In addition, black silicon has been shown to absorb light in the near infrared wavelength range (up to 1000 nm), a property that silicon nitride coated cells do not possess [3]. Finally, black silicon morphology has been shown to occur through a unique self-organizing and mask-less process, avoiding many of the production costs inherent to other incident light absorption-enhancing processes [4].
METHODS Black silicon was fabricated by Bosch-process dry etching in an Inductively Coupled Plasma Reactive Ion Etcher (ICP RIE) from Surface Technology Systems. SF6 and O2 were implemented in conjunction during the etch cycle, while C4F8 was the sole gas used during the passivation cycle. High-density plasma consisting of SF6 and O2 or C4F8 radicals, ions, and reactive neutrals was generated from an inductive coil fixed at 600 W. The platen power controlled ion directionality and acceleration, and alternated between 20 W and 0 W during etch and passivation cycles, respectively. The reactive studies were conducted on the surface of phosphorus-doped (p-type), <100> oriented, 4” round silicon wafers. Gas flow, etchant cycle time, passivation cycle time, and number of cycles were studied independently for their effect on black silicon geometrical morphology and ultimately incident light reflectivity. These ICP RIE “recipe” parameters and the ranges studied are summarized in Table 1. A camera phone was used to take pictures of every sample. Visual observations such as color and appearance or absence of a reflection on the sample gave important insight when fine-tuning the ICP RIE. For more detailed analysis of promising samples, Scanning Electron Microscopy (SEM) characterization was performed on a Philips FEI XL-30F. SEM was used to quantify certain geometrical features of the black silicon nanostructures such as depth, pitch, and aspect ratio. Finally, optical spectroscopy was used to study light reflectivity in conjunction with black silicon geometries. RESULTS & DISCUSSION Black silicon morphology was highly sensitive to changes within the studied parameters. Thus, a large variance in geometrical and optical properties was observed. For example, “naked eye” imagery showed results varying in color (light green, light brown, dark brown, light black, dark black), uniformity of color, and reflectivity. Figure 1 below displays a black, nonreflective, and highly uniform sample.
Figure 1: An example of a highly uniform black silicon sample, with no reflection apparent in the image.
While naked eye imagery provides qualitative insight regarding a sample’s reflectivity, more detailed optical spectroscopy is needed as shown in Figure 2.
Figure 2: Reflectance spectrum of a black silicon sample, with reflectance below 5% in the visible light range (380–800 nm).
Optical spectroscopy, in combination with SEM imagery as shown in Figure 3, is useful for understanding the correlation between nanostructure geometry and optical reflectivity. It was found that the optical reflectivity is highly dependent on the geometry of the subwavelength nanostructures, and has been studied for optimization of the fabrication process.
Table 1: Range of etch parameters that were studied
Cycle | Cycle Time | # of Cycles | SF6 flow rate | O2 flow rate | C4F8 flow rate
Etch | 7-9 seconds | 40-100 cycles | 130 sccm | 13 sccm | 0 sccm
Passivation | 8-16 seconds | 40-100 cycles | 0 sccm | 0 sccm | 40-110 sccm
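For illustration, the recipe space in Table 1 can be enumerated programmatically before running a sweep; the sketch below assumes arbitrary step sizes within the listed ranges and is not the set of recipes actually fabricated.

```python
# Sketch: enumerate candidate ICP RIE "recipes" over the ranges in Table 1.
# Step sizes are arbitrary illustrative choices, not the values actually run.
from itertools import product

etch_times_s        = range(7, 10)            # 7-9 s etch cycle
passivation_times_s = range(8, 17, 2)         # 8-16 s passivation cycle
cycle_counts        = range(40, 101, 20)      # 40-100 cycles
c4f8_flows_sccm     = range(40, 111, 10)      # 40-110 sccm during passivation

recipes = [
    {"etch_s": e, "passivation_s": p, "cycles": n,
     "SF6_sccm": 130, "O2_sccm": 13, "C4F8_sccm": c}
    for e, p, n, c in product(etch_times_s, passivation_times_s,
                              cycle_counts, c4f8_flows_sccm)
]
print(len(recipes), "candidate recipes, e.g.", recipes[0])
```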
Figure 3: Black silicon nanostructures as seen from a side view.
SEM imagery was also useful in understanding reactive ion etch chemistries, mainly the initiating and propagating interactions between silicon, SF6, O2, and C4F8. Understanding these interactions, and how they affect silicon’s ability to absorb incident light, will be important in future studies. REFERENCES 1. Yoo, J. “Black Silicon Layer Formation for Application in Solar Cells.” Solar Energy Materials and Solar Cells 90, no. 18–19 (2006): 3085–93. 2. “Wafer-Scale Fabrication of Plasmonic Crystals from Patterned Silicon Templates Prepared by Nanosphere Lithography.” Nano Letters. 3. Savin, H., et al. “Black Silicon Solar Cells with Interdigitated Back-Contacts Achieve 22.1% Efficiency.” Nature Nanotechnology 10, no. 7 (2015). 4. Steglich, M., et al. “The Structural and Optical Properties of Black Silicon by Inductively Coupled Plasma Reactive Ion Etching.” Journal of Applied Physics 116, no. 17 (2014). ACKNOWLEDGEMENTS Funding for this study was provided by the University of Pittsburgh Swanson School of Engineering and the Office of the Provost.
OPTIMAL DESIGN OF A PHARMACEUTICAL DISTRIBUTION NETWORK Christopher M. Jambor Department of Industrial Engineering University of Pittsburgh, PA, USA Email: cmj47@pitt.edu
INTRODUCTION With pressure to reduce spending, US hospitals are seeking ways to lower operational costs. The primary scope of this research is the analysis and optimization of pharmaceutical distribution in multihospital systems. By combining the application of classical industrial engineering principles with computer optimization and simulation, we hope to develop a tool which will aid healthcare professionals in optimally designing cost-effective health system pharmaceutical distribution networks. Efforts are directed toward the quantification of various components in the distribution network in order to draw conclusions about its optimal design. Systems are differentiated based on where in the network pharmaceuticals are stored, where and how prescription orders are filled, and how orders are delivered. To gain understanding of the research area, relevant literature was explored and supplemented by data and domain knowledge provided by Geisinger Health System staff. A review of the literature shows conflicting viewpoints and a lack of comprehensive evaluation in this area. Gray and others argue against decentralization of medication storage in unit-level cabinets because it leads to unfavorable workload and increased cost [1], while Chapuis writes that decentralized storage is favorable because of its impact on medication error reduction [2]. Lathrop presents results of a study that analyzes the effects of increasing the frequency of prescription fill rounds [3]. Lin and others analyzed the workflow implications of installing a prescription-filling robot in a central pharmacy [4]. We aim to synthesize these and other ideas in a single, comprehensive decision making tool. MODELING A common framework was used throughout the project to provide a standard basis for system evaluation. The multi-hospital health system is defined using a three-echelon inventory model with a warehouse that offers centralized pharmaceutical storage for all hospitals, pharmacy storage at each of
the hospitals and storage cabinets on each inpatient unit at each hospital. Within the hospital, medications are delivered in one of two ways. The cart fill process involves remote picking of patient prescriptions in either the hospital pharmacy or the central warehouse. Once picked, pharmacy staff load that day’s prescription orders into a medication cart and bring them to the inpatient units where they are stored and await nurse retrieval and subsequent administration. Automated dispensing cabinets (ADCs) are secure floor-stock medication storage cabinets. When stored in an ADC, medications are delivered to the ADCs from either the hospital pharmacy or central warehouse. When a prescription is ordered, a nurse picks the patient-specific order from the ADC on the patient’s unit. Considering these options, there are four main pathways that can be used for medication delivery.
Figure 1: Medication Pathways
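As a minimal sketch of the kind of pathway cost allocation the Excel model described below performs, the snippet splits a hospital's daily prescription volume across the four pathways and totals the cost; the bed count, fill rate, pathway percentages, and per-fill costs are hypothetical placeholders, not Geisinger data.

```python
# Sketch: split a hospital's daily prescription volume across the four pathways
# and total the distribution cost. All unit costs and percentages are hypothetical.

def daily_distribution_cost(beds, fills_per_bed, pathway_split, cost_per_fill):
    """pathway_split and cost_per_fill are dicts keyed by pathway 1-4."""
    assert abs(sum(pathway_split.values()) - 1.0) < 1e-9
    daily_fills = beds * fills_per_bed
    return {p: daily_fills * share * cost_per_fill[p]
            for p, share in pathway_split.items()}

split = {1: 0.10, 2: 0.15, 3: 0.45, 4: 0.30}           # hypothetical pathway mix
unit_cost = {1: 1.80, 2: 2.40, 3: 1.20, 4: 1.60}       # hypothetical $ per fill

costs = daily_distribution_cost(beds=300, fills_per_bed=10,
                                pathway_split=split, cost_per_fill=unit_cost)
print(costs, "total:", round(sum(costs.values()), 2))
```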
Pathways 1 and 2 use a central warehouse where inventory for multiple hospitals is consolidated. Inventory pooling and economies of scale become significant and avoiding the use of hospital real estate yields lower holding costs; however, pharmaceutical transport to multiple hospitals is significantly more expensive than transport from a hospital pharmacy to the units as in pathways 3 and 4. Pathways 2 and 4 utilize ADCs which offer numerous benefits in terms of patient safety and medication security. Locked cabinets prevent diversion of controlled substances and, with safeguards built in to their user interfaces, nurses are less likely to pick the wrong medication than if a pharmacy tech had picked the order amongst hundreds of others in a large pharmacy setting. However, ADCs are expensive devices and due to
the decentralized inventory, they force the storage of more product in order to maintain adequate service levels. The cart fill process in pathways 1 and 3 can be more economical because it has consolidated inventory, but is not capable of delivering urgent orders since it relies on administration schedules and requires several hours’ notice to get medication to a patient. Clearly no single pathway is always optimal. The most cost-effective system, which must address the needs of all order types, requires implementing some combination of the various pathways. Seven medication classes were defined because of proven or expected variation in delivery cost or requirements. STAT, first dose, maintenance, and PRN dose orders are classified based on delivery timing and urgency. Controlled substances, compounds, and IV meds are classified based on individual medication characteristics. An Excel model was developed to generate pharmaceutical distribution cost information based on user-specified parameters. Given the number of beds for each hospital in the system and the volume of medication delivered via each pathway to each hospital, the model provides a complete breakdown of the pharmaceutical distribution costs. Geisinger provided the total bed counts and average prescription fills per day for each of their seven hospitals. Using the prescriptions per bed and the delivery pathway percentages defined by the user, an exact prescription volume is allocated to each of the pathways, resulting in costs unique to that pathway. The model calculates the major cost components including staffing, inventory, transportation, facility, procurement, and equipment costs. Although limited in the sense that it can only evaluate specific system scenarios, the Excel model is a good tool for analyzing the impact that various aspects of the distribution have on overall distribution cost. Its development forced the determination of the key cost factors (medication pathways, medication classes) and their effects on the main cost components. It also serves as a basis for the design of three optimization models. The first is a gravity location model that determines an optimal central warehouse location based on the location of other hospitals in the system with the objective of minimizing transportation costs. A second model performs route optimization. Given vehicle capacity, a common beginning and end point, and the set of delivery locations, it determines the optimal number of delivery trips and the order of stops on each of the trips. The third and
most comprehensive model optimizes delivery pathways for each medication type. DISCUSSION By nature, an economical pharmaceutical distribution network is a challenge to implement. Large inventory levels of medication must be maintained because demand is highly variable and patient lives depend on drug availability. The result is the need for a robust distribution network which avoids stockouts. An effective balance between service quality and operation cost requires the consideration of many components, some of which we are still working to quantify. At the macro level, the comparison between filling prescriptions at a system-central warehouse and at individual hospital pharmacies involves a number of factors. Manual technician order picking at either location must be weighed against automated robotic order picking. Characteristics of the warehouse such as capacity, building and upkeep cost, equipment and staff cost, and transportation to hospitals must also be explored. At the micro level, individual hospitals must optimize their internal operations. ADC and cart fill processes should be compared in terms of their cost and effectiveness in satisfying different types of prescription orders. Moving forward, the project’s focus will be on further optimization modeling. Geisinger will provide a large prescription database which will enable enhanced cost determination. Continued linear programming and model development coupled with further analysis of operational processes and additional data will help advance the effort to create a comprehensive optimization tool. REFERENCES [1] Gray, John P., et al. "Comparison of a hybrid medication distribution system to simulated decentralized distribution models." Am J Health-Syst Pharm 70.1322 (2013). [2] Chapuis, Claire, et al. "Automated drug dispensing system reduces medication errors in an intensive care setting." Critical Care Medicine 38.12 (2010): 2275-2281. [3] Lathrop, Korby, et al. "Design, implementation, and evaluation of a thrice-daily cartfill process." American Journal of Health-System Pharmacy 71.13 (2014): 1112-1119. [4] Lin, Alex C., et al. "Effect of a robotic prescription-filling system on pharmacy staff activities and prescription-filling time." American Journal of Health-System Pharmacy 64.17 (2007): 1832-1839.
ACKNOWLEDGEMENTS The project was funded jointly by Dr. Bryan Norman, the Swanson School of Engineering, and the Office of the Provost
OXIDATION OF NICKEL-BASED SUPERALLOY 625 PREPARED BY POWDER BED BINDER JET PRINTING Shannon Biery, Amir Mostafaei, Erica Stevens and Markus Chmielus Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: shb59@pitt.edu INTRODUCTION The modern world of metallurgy employs several methods for preparation of finished metal products. Additive manufacturing (AM) involves a process of joining materials to create an object from 3D model data, usually layer-by-layer, as opposed to subtractive manufacturing methodology [1]. AM provides an opportunity to produce parts with highly complex shapes and internal features that are very expensive or impossible to produce with subtractive methods [2].
METHODS In these experiments, alloy 625 powder produced by gas atomization was used; its chemical composition is given in Table 1, and the particles are mostly spherical in shape, as seen in Figure 1.
Table 1: Chemical composition of the alloy 625 powder (wt.%)
Ni 64.04 | Cr 20.9 | Fe 2.6 | Nb 3.2 | Mo 8.4 | Al,Ti 0.02 | C 0.03 | Co 0.01 | Mn 0.39 | Si 0.31
Alloy 625 is a nickel-based superalloy showing extraordinary properties including high-temperature strength, toughness, and surface stability in high-temperature corrosive or oxidative environments [3]. Good corrosion resistance results from the formation of a protective oxide scale with very low porosity, good adherence, thermodynamic stability, and a slow growth rate [4]. The protective oxide scale of alloy 625 is attributed to the presence of chromium in the alloy, but the addition of ternary elements, such as iron, niobium and molybdenum, also influences oxidation behavior [5]. It is observed that enrichment of the Cr2O3 oxide film and NiO on the oxide layer can increase corrosion resistance. However, at higher temperatures (1100 °C), oxides of alloying components such as Nb and Ti form in the films, which is explained by the relative thermodynamic stabilities and diffusivities of alloying elements in the metal [5]. This project aims to investigate the oxidation behavior at 700 °C of powder bed binder jet printed (PB-BJP) and sintered samples that used gas atomized alloy 625 powder as feedstock. The samples and surface oxides were characterized using optical microscopy (OM), scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS).
Figure 1: SEM image of alloy 625 powder
PB-BJP is an AM method in which powder is deposited layer-by-layer and selectively joined with a binder. In this study, the binder of the printed coupons was cured at 175 °C. Densification of the green part was achieved by sintering in a Lindberg tube furnace under vacuum with two different maximum temperatures to achieve samples with different porosity. The sintering profile was as follows: heating at 5 °C/min to 600 °C, 3.2 °C/min to 1000 °C, and 2.8 °C/min to the holding temperature (1220 °C or 1260 °C, held for 4 h), then cooling at 1 °C/min to 1200 °C and 3.1 °C/min to 500 °C. The oxidation step was performed at a temperature of 700 °C for both coupons (sintered at 1220 and 1260 °C). Microscopic observations were carried out using optical microscopy (Keyence) and a JEOL JSM-6510 SEM equipped with EDS.
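As a quick check, the furnace time implied by the ramp rates above can be tallied directly; the sketch assumes a 25 °C starting temperature and the 1260 °C hold.

```python
# Sketch: total furnace time implied by the sintering profile above,
# for the 1260 °C hold. The 25 °C start temperature is an assumption.
segments = [
    (25,   600,  5.0),    # heating, °C/min
    (600,  1000, 3.2),
    (1000, 1260, 2.8),
    (1260, 1260, None),   # 4 h hold
    (1260, 1200, 1.0),    # cooling
    (1200, 500,  3.1),
]
total_min = 0.0
for start, end, rate in segments:
    total_min += 4 * 60 if rate is None else abs(end - start) / rate
print(f"~{total_min:.0f} min (~{total_min / 60:.1f} h) of furnace time")
```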
RESULTS Figures 2 and 3 show optical and scanning electron micrographs, respectively, of the PB-BJP alloy 625 samples sintered at (a) 1220 °C and (b) 1260 °C, used to observe the pore structure and porosity of the sintered samples. It is apparent that sintering at higher temperature results in lower porosity.
Figure 2: Optical micrographs of polished samples after sintering at (a) 1220 °C and (b) 1260 °C.
Figure 3: SEM images of the samples after sintering at (a) 1220 °C and (b) 1260 °C.
Figures 4 and 5 show the SEM micrographs of the porous and non-porous samples, sintered at 1220 and 1260 °C for 4 h, respectively, and oxidized at 700 °C for 12 h.
Figure 4: SEM images of the surface of the porous samples after oxidation at 700 °C.
Figure 5: SEM images of the surface of the non-porous samples after oxidation at 700 °C.
For the samples sintered at 1220 °C, the pores are large and connected, while for the sample sintered at 1260 °C, the pores are small, spherical in shape, and not connected. The higher-porosity sample therefore has a much larger overall surface area than the non-porous sample. When comparing Figs. 4 and 5, it can be seen that the oxidation behavior is very different and much more pronounced in the case of the porous coupon. This is due to the dramatically increased surface area of the porous sample (1220 °C) compared to the non-porous sample (1260 °C). Furthermore, the oxide scales on the porous surface appear in the form of grey islands with submicron crystals. EDS analysis indicates that the oxide scale is made of NiO (rod-like oxide in Figure 4d) and NiCr2O4 (grey islands) [6]. Additionally, white crystals are identified as NbC (large crystals) in the grain boundaries, due to the aging process, and as Ni3Nb (small precipitates) in the bulk grains. Oxidation resistance is higher in the non-porous sample, as evidenced by the formation of a thin spinel structure and Cr2O3 in addition to submicron precipitates in the grain boundaries and bulk grains. Thus, denser samples provide not only a more suitable microstructure but also better oxidation resistance for high-temperature applications.
DISCUSSION It is evident from the OM and SEM micrographs that the reduction in the quantity of pores and the shrinkage in pore size are related to the increased sintering temperature.
REFERENCES [1] Turker et al. Mat. Char. 59, 1728–35; [2] Simchi et al. Metal. and Mat. Trans. 37A, 2549-57; [3] Raim et al. Scripta Materialia 51, 59-63; [4] Trindade et al. Mat. & Corr. 11, 785-790; [5] Kumar et al. Oxi. of Metals 45, 221-245; [6] Wang et al. Mat. Char. 107, 283–292.
ACKNOWLEDGEMENTS Funding was provided jointly by Dr. Markus Chmielus, the Swanson School of Engineering and the Office of the Provost. This project was partially funded by the Air Force Research Laboratory under agreement number FA865012-2-7230 and by the Commonwealth of Pennsylvania, acting through the Department of Community and Economic Development, under Contract Number C000053981.
INFLUENCE OF POWDER ATOMIZATION TECHNIQUES AND SINTERING TEMPERATURE ON DENSIFICATION OF 3D PRINTED ALLOY 625 PARTS Eamonn Hughes, Amir Mostafaei and Markus Chmielus Department of Mechanical Engineering and Materials Science University of Pittsburgh, Pittsburgh, PA, USA Email: eth19@pitt.edu INTRODUCTION Additive manufacturing (AM) or 3D printing has the potential to revolutionize the manufacturing process and is expanding from more limited rapid prototyping applications to the production of finalized parts, especially those with complex shapes. Yet, the influence that AM methods have on the microstructure and properties of parts is not well understood. One AM method that is able to build metal parts is powder bed binder jet printing, whereby a machine repeatedly lays down a thin layer of powder and deposits binder according to a CAD model for as many layers as necessary to complete the part [1]. The part is then cured in an oven, after which it can be handled though it is still quite fragile. The part must then be sintered at high temperature to densify it. This stage is of particular interest because the sintering conditions such as holding time, holding temperature, and sintering atmosphere all have major effects on the final properties of the part. The methods used to atomize a metal into fine particles also affect the properties of the part since the powders have different shapes, sizes and impurities. Zhou studied the porosity behavior in gas and water atomized samples of stainless steel 420 and found that gas atomized (GA) powders have a high packing density relative to water atomized (WA) powders. He observed that GA powders were spherical in shape while WA powders were irregularly shaped. Furthermore, WA powder based parts tended to sinter at lower temperatures and in a shorter amount of time and generally to a higher final density and lower total porosity [2]. The nickel-based alloy 625, which is one of the most successfully applied superalloys in engineering applications [3], is mostly used in aeronautics, chemistry and marine applications thanks to its good corrosion resistance and high stress and strain resistance. The goal of this study is to determine the influence of sintering temperature and powder type on the densification of AM alloy 625 samples.
METHODS In this work, three different alloy 625 powders were used. Scanning electron microscopy (SEM) micrographs are shown in Fig. 1: vacuum-melted argon atomized (AA), air-melted nitrogen atomized (NA), and air-melted water atomized (WA). The composition of alloy 625 is given in Table 1.
Figure 1. SEM micrographs of the (a) AA, (b) NA and (c) WA powders.
Table 1. Chemical composition of the alloy 625 powder (wt.%)
Ni 64.04 | Cr 20.9 | Fe 2.6 | Nb 3.2 | Mo 8.4 | Al,Ti 0.02 | C 0.03 | Co 0.01 | Mn 0.39 | Si 0.31
Cylindrical coupons with the dimensions of 15 mm diameter and 7.5 mm height were printed on the ExOne M-Flex 3D printer and then cured at 175 °C for 8 hours. Samples were then sintered under vacuum in a tube furnace (Across International TF1400) with different final maximum temperatures (1220 °C, 1240 °C, 1250 °C, 1260 °C, and 1270 °C) for 4 h. The densities of unsintered and sintered samples were measured using Archimedes’ water immersion method, taking into account both the temperature of the water and the buoyancy of the air. Sintered samples were ground, polished, and then imaged using a Keyence digital optical microscope. The micrographs were analyzed using ImageJ to determine the area density. RESULTS The percent densification of the unsintered samples is summarized in Table 2.
Table 2. Percent densification of unsintered samples
WA 42% | AA 53% | NA 55%
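For reference, the air-buoyancy-corrected Archimedes calculation described in the methods can be sketched as follows; the balance readings, water density, and the 8.44 g/cm^3 theoretical density used for alloy 625 are illustrative assumptions, not measured values from this study.

```python
# Sketch of an air-buoyancy-corrected Archimedes density calculation.
# Balance readings and densities below are illustrative assumptions only.

def archimedes_density(mass_in_air_g, mass_in_water_g,
                       rho_water=0.9978,   # g/cm^3 at ~22 °C (assumed)
                       rho_air=0.0012):    # g/cm^3
    ratio = mass_in_air_g / (mass_in_air_g - mass_in_water_g)
    return ratio * (rho_water - rho_air) + rho_air

rho = archimedes_density(mass_in_air_g=10.000, mass_in_water_g=8.750)
print(f"density = {rho:.3f} g/cm^3, "
      f"densification = {100 * rho / 8.44:.1f} % of theoretical")  # 8.44 assumed
```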
The results of the densification measurements are summarized in Fig. 2. In general, WA samples have a lower porosity at lower sintering temperatures and a higher porosity at higher sintering temperatures than NA and AA powder samples (Figs. 2-4). Additionally, the porosity decreases with increasing sintering temperature, which is also clearly visible in the micrographs shown in Figs. 3 and 4.
Figure 2. Percent densification of the sintered WA (water), AA (argon), and NA (nitrogen) samples as a function of sintering temperature, measured by the water immersion method (top, 1220-1270 °C) and from optical micrographs of the center area analyzed with ImageJ (bottom, 1240-1270 °C).
Figure 3. SEM micrographs of (a) WA and (b) NA samples sintered at 1260 °C.
Figure 4. Optical micrographs of the sintered samples at 1220 °C and 1260 °C: (a, b) WA, (c, d) AA, (e, f) NA.
DISCUSSION As shown in the results, the unsintered WA samples have the lowest density by a significant margin due to the lower packing ability of their irregularly shaped powder particles. The densification results for the sintered samples broadly indicate that sample density increases with increasing sintering temperature. At a certain point the samples begin to show evidence of melting, which is very undesirable in sintering; therefore, sintering temperatures must be kept below this level. The AA and NA samples were generally free of any melting up to 1260 °C, while the WA samples showed signs of melting above 1220 °C. Another reason to avoid temperatures near the melting point of the material, or long holding times, is the occurrence of grain coarsening, which lowers strength, and of elemental segregation at the grain boundaries, which decreases toughness. The porosity results at the sintering temperature of 1220 °C for alloy 625 confirm Zhou's results on stainless steel 420 [2] that WA powders sinter faster and at lower temperatures than GA samples. Additionally, it is noticeable that the porosity does not decrease below about 5% for the WA samples, whereas the AA and NA samples reach porosities below 1%. This might be due to a larger amount of pores in the WA samples that are disconnected from the surface and stay within the sample. The difference between the water immersion and optical microscopy density measurements, specifically the drop in density at 1270 °C for the water immersion measurements, is most likely due to alumina particles that stick to the sample when it slightly melts. While water immersion averages the density of the entire sample (including low-density alumina at the surface), optical microscopy micrographs were only taken in the center of the samples. Data such as these are vital for the advancement of metal 3D printing in the commercial sectors, and further research on mechanical properties will follow this project.
REFERENCES [1] Turker et al. Mat. Char. 59, 1728–35; [2] Zhou, Y. Characterization of the porosity and pore behavior during the sintering process of 420 SS (thesis, 2014), University of Pittsburgh; [3] Özgün et al. Mat. Char. 108, 8-15.
ACKNOWLEDGEMENTS Funding was provided jointly by Dr. Markus Chmielus, the Swanson School of Engineering and the Office of the Provost. This project was partially funded by the Air Force Research Laboratory under agreement number FA865012-2-7230 and by the Commonwealth of Pennsylvania, acting through the Department of Community and Economic Development, under Contract Number C000053981.
ADDITIVE MANUFACTURING OF NI-MN-GA MAGNETIC SHAPE-MEMORY ALLOYS: THE INFLUENCE OF LINEAR ENERGY ON THE MARTENSITE PHASE TRANSFORMATION Yuval L. Krimer, Jakub Toman, and Markus Chmielus Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: ylk1@pitt.edu, chmielus@pitt.edu INTRODUCTION: Magnetic shape memory alloys (MSMA) are a class of smart materials that exhibit reversible plastic deformation in a magnetic field. The deformation occurs due to the motion of twin boundaries in a martensite phase, and does not involve a phase transformation as in conventional shape memory alloys, such as Nitinol. The MSMA Ni2MnGa is currently the most promising material. It has a large magnetic anisotropy and low twinning stress, and maintains a martensitic structure above room temperature. It is theoretically capable of magnetic field-induced strain (MFIS) of up to 10%, though the largest achieved strains are somewhat below that. MSMA are being investigated for use in very small scale actuators and pumps, where they can be used to greatly reduce complexity versus mechanical systems. Current research in MSMA involves reducing crystal defects, which inhibit deformation, and increasing both the Curie and martensite transformation temperatures, since these limit the working temperature of the alloy. Much current work also involves growing the alloy as single crystals, since grain boundaries will pin the twin boundaries and inhibit deformation. However, single crystal growth is complex, requires long periods of time, and cannot be used to make precise shapes. On the other hand, polycrystalline MSMA in bulk form show near-zero MFIS. The topic of this research is to analyze Ni-Mn-Ga MSMA manufactured by Selective Laser Sintering (SLS). SLS uses a laser beam to melt a stream of powder and deposit it on a metallic substrate. If properly set up, the process results in large, directionally solidified grains extending the length of the part. These grains, which have similar crystallographic orientation, can allow similar behavior to a single crystal.
EXPERIMENTAL METHOD: Samples were created using an Optomec LENS 450 SLS-type 3-D printer. The printer uses a 400 W fiber laser to melt the powder, and has an atmosphere-controlled working chamber. Three samples were printed on a stainless steel substrate at 200 W, 250 W, and 300 W, using a travel speed of 2.5 mm/s, with a total of 5 layers per sample in an argon gas atmosphere. The samples were called 001, 002, and 003, respectively. The samples were then cut from the substrate using a Princeton Scientific K.D. Unipress wire saw. Differential Scanning Calorimetry (DSC) measurements were performed on all samples using a TA Q10 DSC. DSC involves heating a sample and an empty sample holder at the same rate and measuring the difference in heat flow. It can be used to detect phase transformations, the Curie temperature, and other changes. DSC was used to determine which samples undergo a martensite to austenite phase transformation, and the transformation temperature. Additional measurements were performed using a Lakeshore Vibrating Sample Magnetometer (VSM). The VSM can measure changes in magnetization with increasing magnetic field or temperature. It can be used to determine magnetic properties such as saturation magnetization, or to measure the Curie temperature and the martensite transformation temperature. VSM experiments were performed by placing the sample in a continuous field of 25 mT, heating from 25 °C to 120 °C, and cooling back to 35 °C in 1 °C steps. RESULTS: DSC measurements were performed on all three samples. The results are shown in Figure 1 and indicate an exothermic peak starting at about 60 °C and ending at about 75 °C. An endothermic
peak appears at the same temperatures upon cooling. These peaks are more pronounced during the first or second DSC run.
Figure 1: DSC curves for all 3 samples.
VSM magnetization versus temperature measurements for sample 003, printed at 300 W (see Figure 2), show the magnetic moment continuously increasing with temperature until about 70 °C, then dropping to near zero at about 100 °C. The demagnetization was initially fast (just above 70 °C) and then slowed down (above 80 °C).
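One common way to read a transformation or Curie temperature off such an M(T) sweep is to locate the steepest drop in magnetization; the sketch below does this on a synthetic curve, not the measured data.

```python
# Sketch: locate the steepest drop in a magnetization-vs-temperature sweep,
# one way to read a transformation/Curie temperature off VSM data.
# The M(T) curve below is synthetic and only for illustration.
import numpy as np

T = np.arange(25, 121, 1.0)                       # °C, matching the 1 °C steps
M = 1.0 / (1.0 + np.exp((T - 95.0) / 4.0))        # fake demagnetization near 95 °C

dMdT = np.gradient(M, T)
T_transition = T[np.argmin(dMdT)]                 # temperature of steepest decrease
print(f"steepest demagnetization at ~{T_transition:.0f} °C")
```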
Figure 2: VSM results for sample 003, printed at 300 W.
DISCUSSION: The exothermic and endothermic peaks in the DSC measurements indicate a martensite to austenite phase transformation during heating, and a reverse transformation on cooling. The breadth of the first-order peaks indicates that small fractions of the sample volume are transforming at different temperatures. Due to the very small mass available, the first-order phase transformation peaks are not very pronounced, which makes a second-order transformation (demagnetization at the Curie temperature) impossible to detect using DSC. Therefore, VSM measurements were necessary to provide the Curie temperature for sample 001 (200 W). Here again, the continuous magnetization increase (austenite is more easily magnetized than martensite) indicates a wide temperature region over which the martensite phase transformation takes place, similar to the demagnetization region, which is also rather broad compared to single crystals [1]. Since our previous tests did not indicate a large composition gradient within the samples, which is something that could be expected, the broad transformation range might be caused by differently large mechanical constraints that hinder or delay the phase transformation. Furthermore, small undetected composition changes might also be a reason. Finally, there seems to be no systematic trend of the martensite phase transformation temperature with regard to SLS deposition power. Therefore, further tests are needed to determine or confirm compositional gradients within samples and to explain the broad phase transformation range.
REFERENCES:
Chmielus, Markus, “Composition, Structure and MagnetoMechanical Properties of Ni-Mn-Ga Magnetic ShapeMemory Alloys,” Dissertation, TU Berlin, 2010 M. Kök , Z. D. Yakinci, A. Aydogdu, Y. Aydogdu, “Thermal and magnetic properties of Ni51Mn28.5Ga19.5B magnetic-shape-memory alloy,” Journal of Thermal Analysis and Calorimetry, 2014 Guang-Hua Yu, Yun-Li Xu, Zhu-Hong Liu, Hong-Mei Qiu, Ze-Ya Zhu, Xiang-Ping Huang, Li-Qing Pan, “Recent progress in Heusler-type magnetic shape memory alloys,” Rare Metals, 2015
ACKNOWLEDGEMENTS I would like to thank Dr. Chmielus and members of his lab. This research would not be possible without joint funding by Dr. Markus Chmielus, the Swanson School of Engineering and the Office of the Provost.
INFLUENCE OF SPUTTER POWER AND WAFER PLASMA CLEANING ON STRESS AND PHASE FORMATION OF AS-DEPOSITED TANTALUM THIN FILMS Emma Sullivan, Amir Mostafaei, and Dr. Markus Chmielus Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: ems186@pitt.edu, amm97@pitt.edu, chmielus@pitt.edu INTRODUCTION Tantalum (Ta) thin films are used in a variety of applications, especially microelectronics and microelectromechanical systems [1]. As the need for improved microelectronics increases, so does the demand for more efficient designs of their components. Strength, reliability, and corrosion resistance at high temperatures are all important factors that make Ta a valuable material to study [1]. Tantalum thin films can take the form of either a stable bcc α phase or a metastable tetragonal β phase [2]. The phase transformation from metastable β-Ta to α-Ta and the resultant stresses from deposition and transition have been studied for decades but are still not entirely understood [1, 2]. This information is vital when designing microelectronic components with new properties or dimensions. Specifically, the parameters changed in sputtering deposition can and will change the microstructures that develop. The ability to predict how stresses will develop in new designs is central to the design process. Thus, the objective of this study is to better understand the development of stresses in tantalum thin films. The first step in obtaining such information was to run a series of systematic tests on sputter parameters and observe the resulting stresses and phases present. This study focuses on the effect of sputter power and wafer plasma cleaning on the as-deposited phases. EXPERIMENTAL METHODS Silicon wafers 76.2 mm in diameter (Silicon Valley Microelectronics Inc.) were removed directly from their double-wrapped boxes, inspected for dust particles, cleaned with ultra-high purity helium to remove any particles, and placed into the substrate holder, which has been designed so that the wafer is free to buckle or curve as a result of stress. The wafer is then placed into a load lock which is pumped down to less than 3.0e-06 Torr. Once this pressure has been reached, the wafer can be transferred to the main chamber where deposition takes place. The wafer is situated on the
top of the chamber, facing down towards the target guns. This chamber is pumped down using a getter pump (SAES) and a cryogenic pump to less than 1.0e-08 Torr, following a bakeout at temperatures around 90 °C for an hour. The thin films were deposited with an AJA International Inc. ATC Orion Sputtering System using a Ta target 50.8 mm in diameter (99.95% purity, Lesker). Some of the substrates were sputter cleaned prior to deposition at 25 W for varying times and pressures. Deposition took place over the course of 16 minutes at 8 mTorr. The working gas was ultra-high purity argon (for details, see Table 1).
Table 1: The Ta samples and their sputtering parameters.
Sample ID | Sputtering Power [W] | Sputtering Duration [min] | Sputter Cleaning Parameters
02 | 125 | 16 | N/A
07 | 200 | 16 | N/A
08 | 200 | 16 | 20 mTorr for 5 min
09 | 200 | 16 | 26 mTorr for 1.5 min
10 | 200 | 16 | 8 mTorr for 1.5 min
11 | 160 | 16 | 8 mTorr for 1.5 min
The stresses of the thin films during deposition were measured using a kSA MOS curvature measurement system with the bare substrate’s curvature as reference. A laser array is projected from below onto the substrate’s surface and reflected back. The reflected array of dots changes position and spacing as the curvature of the substrate changes. The calculated curvature difference is converted into a stress-thickness product, which can then be converted to stress using the film thickness. The thicknesses of the films were measured using a mechanical Alpha-Step IQ Profiler in the Nanoscale Fabrication and Characterization Facility. The step size was measured by dragging the tip from a region of uncoated, but sometimes shadowed, substrate to a coated region about 2 mm closer to the center of the substrate. Multiple values were taken and these numbers were averaged, yielding only an approximate thickness due to some additional
curvature found in the coating that could not be accounted for in the step size measurement. The film resistivity was measured using a Jandel four-point probe (Model RM2) connected to a Keithley 2001 multimeter for a more accurate display. The current was set to 100 μA and potential differences were measured at five positions on the wafer. For microstructural analysis, x-ray diffraction (XRD) was performed using a PANalytical XRD. RESULTS Fig. 1 displays the stress-time results for the films sputtered at 125, 160, and 200 W. Using the average thickness from those films sputtered at 200 W, a growth rate of 11.5 nm/min has been assumed.
Figure 1: The approximate stresses (in GPa) found in the films sputtered at different powers and cleaned at different pressures for the first 6 minutes of deposition, at which time the stress stabilizes.
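The curvature-to-stress conversion mentioned in the methods is typically done with Stoney's equation; the sketch below is illustrative only, and the Si(100) biaxial modulus, wafer thickness, and curvature change are assumed values rather than data from this study.

```python
# Sketch: Stoney's equation, the standard way a substrate-curvature change is
# converted to film stress. Substrate properties and the curvature change here
# are assumed illustrative values, not measurements from this study.

def stoney_stress(delta_curvature_per_m, film_thickness_m,
                  substrate_thickness_m=380e-6,            # assumed wafer thickness
                  substrate_biaxial_modulus_pa=180.5e9):   # ~Si(100), assumed
    return (substrate_biaxial_modulus_pa * substrate_thickness_m**2
            * delta_curvature_per_m) / (6.0 * film_thickness_m)

sigma = stoney_stress(delta_curvature_per_m=0.05, film_thickness_m=165e-9)
print(f"film stress ~ {sigma / 1e9:.2f} GPa")
```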
Table 2 below displays the results for the average potential differences, resistances, and thicknesses for several of the films.
Table 2: The films’ resistivities.
Sample ID | Voltage [mV] | Resistance [Ω] | Thickness [nm] | Resistivity [μΩ*cm]
02 | 0.4279 | 19.4 | 95 | 185
07 | 0.2465 | 11.2 | 165 | 184
08 | 0.2354 | 10.7 | 230 | 245
09 | 0.2747 | 12.4 | 165 | 205
10 | 0.2701 | 12.2 | 180 | 220
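The resistivity column of Table 2 follows from the thin-film sheet-resistance relation described next (Rs = 4.53 × V/I, multiplied by the film thickness); a minimal check in Python, using the 100 μA probe current quoted in the text:

```python
# Quick check of Table 2: sheet resistance Rs = 4.53 * V / I for a thin film,
# then resistivity = Rs * thickness. The 100 uA probe current is from the text.
samples = {          # sample ID: (average voltage in mV, thickness in nm)
    "02": (0.4279,  95),
    "07": (0.2465, 165),
    "08": (0.2354, 230),
    "09": (0.2747, 165),
    "10": (0.2701, 180),
}
I = 100e-6  # probe current in A
for sid, (v_mV, t_nm) in samples.items():
    rs = 4.53 * (v_mV * 1e-3) / I                  # ohms per square
    rho_uohm_cm = rs * (t_nm * 1e-7) * 1e6         # thickness in cm -> uOhm*cm
    print(sid, f"Rs = {rs:.1f} ohm, resistivity = {rho_uohm_cm:.0f} uOhm*cm")
```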
The resistivities of these films were then calculated using the sheet resistance formula for thin films: Rs = 4.53 x V/I, where V is the average measured potential difference and I is the current sent through the probe [2]. These values of resistance were then multiplied by the
films’ respective thicknesses to obtain resistivities. XRD was also used to identify the phases present. The XRD pattern displayed peaks that agreed with shifted stick patterns for β-Ta. The peaks that agree with β-Ta are suspected to include 2θ = 39.4°, 44.7°, and 52°. DISCUSSION From the calculated resistivities, the phases present in the as-deposited films can be determined. α-Ta is relatively ductile and has an expected resistivity of 15-16 μΩ*cm. β-Ta is more brittle, has more defects, and thus has a higher resistivity of 170-210 μΩ*cm [1]. Resistivity and XRD indicate that the as-deposited samples are at least mostly, if not completely, β-Ta. These results also agree with the stresses found during deposition using the kSA MOS curvature measurement system. In general, the higher the sputtering power, the higher the tensile stress found within the film. The presence of β-Ta has been linked in some cases to tensile stress in the literature [4]. Films deposited at higher powers exhibited higher tensile stresses and generally higher resistivities. All of this hints at a higher proportion of β-Ta. Further analysis is required to determine the proportions of phases present within the material. Further testing includes subjecting the thin films to higher temperatures to observe stress evolution and eventually a phase change. REFERENCES 1. Lee, S. L., et al. (2004). Texture, structure and phase transformation in sputter beta tantalum coating. Surface and Coatings Technology, 177-178, 44–51. 2. Knepper, R., & Baker, S. P. (2007). Applied Physics Letters, 90(18), 17–20. 3. "Sheet Resistance and the Calculation of Resistivity or Thickness Relative to Semiconductor Applications." Four Point Probes. May 15, 2013. http://four-pointprobes.com/sheet-resistance-and-the-calculation-ofresistivity-or-thickness-relative-to-semiconductorapplications/. 4. Clevenger, L. A., et al. (1992). Journal of Applied Physics, 72(10), 4918–4924.
ACKNOWLEDGEMENTS I would like to thank Dr. Chmielus and members of his lab including Victoria Mbakwe. This research would not be possible without joint funding by Dr. Markus Chmielus, the Swanson School of Engineering, and the Office of the Provost.
PARAMETER STUDY OF TIN-OXIDE NANOWIRE GROWTH ON FTO AND STAINLESS STEEL MESH SUBSTRATES Chuyuan Zheng, Gill-Sang Han, and Jun-Kun Lee Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: chz47@pitt.edu INTRODUCTION Tin-oxide nanowires play an important role in PEC solar cells and other applications such as supercapacitors. In this experiment, our goal is to fabricate tin-oxide nanowires and observe the samples using scanning electron microscopy (SEM). A common method of fabricating nanowires is the vapor-liquid-solid (VLS) mechanism. This method requires catalyst nanoclusters, usually gold, deposited on substrates in order to allow eutectoid-like reactions to occur; the tin oxide then grows directly underneath the gold nanoparticles, creating gold-tipped nanowires [1-3]. The VLS mechanism also requires a proper temperature and temperature gradient, so that the eutectoid-like mixture will melt, vaporize, and finally deposit on the substrates to form nanowires. Besides temperature, this mechanism can also be affected by a variety of other factors, such as oxygen flow and the amount of metal source, which in this case is tin powder. However, the quantitative correlations between nanowire length and the parameters above remain unclear; hence our experiments focus on finding empirically optimized conditions for nanowire growth. EXPERIMENTAL METHODS This series of experiments can be divided into two parts: fabrication and
microscopic observations. a) Fabrication: the growth of nanowires takes place in a tube furnace (Lindberg). A pump is connected to the glass tube to produce a vacuum down to ~1.0 mTorr. An oxygen cylinder, along with the gas regulator, is connected to the mass flow controller (MFC) and pressure gauge, and the gauge is then linked to one end of the tube. In this experiment we mainly used two types of substrates: fluorine-doped tin oxide (FTO) glass and austenite 316 stainless steel mesh. The temperatures used (two zones) are 700/500 °C, 750/550 °C and 800/600 °C, respectively. The oxygen flow settings, as indicated and monitored by the MFC and pressure gauge, are 25 mTorr, 50 mTorr, 100 mTorr and 200 mTorr. The masses of the Sn source are 400 mg and 800 mg. The Sn powder is poured into a quartz boat and the boat is placed in the glass tube. The location of the boat is right beneath the first thermocouple. Five 2 × 2 cm FTO or stainless steel mesh substrates are placed on a 4 × 10 cm glass plate and the plate is fixed ~8.5 cm to the right of the first thermocouple, as shown in Fig. 1.
Fig.1 Experiment Setup
RESULTS AND DISCUSSION The synthesized samples were observed under SEM. Fig. 2 shows SEM images of
samples under different conditions.
Fig. 2 SEM images of nanowires grown on FTO and stainless steel meshes. (a) FTO nanowires at 700/500 °C, 50 mTorr O2 and 400 mg Sn. (b) Mesh nanowires at 800/600 °C, 50 mTorr and 400 mg Sn.
By comparing samples synthesized under different sets of parameters and measuring the average lengths of nanowires on the substrates, we conclude that, among all conditions applied in the experiments, the ideal conditions for FTO are 700/500 °C and 50 mTorr. At higher temperatures tin starts to form nanoparticles instead of nanowires. For stainless steel meshes, the optimized conditions are 800/600 °C and 50 mTorr. In fact, we barely observed any changes when the amount of Sn source was changed or the oxygen flow was set below 50 mTorr. The ideal temperature we derived
matches the results of previous studies. Castillo et al. [4] conclude that the growth temperature of SnO2 is 600 °C. However, the reason for the increased reaction temperature on mesh substrates remains unknown. CONCLUSIONS SnO2 nanowires were successfully synthesized on both substrates, and SEM observations show that the ideal reaction temperature matches previous studies to a certain extent. The optimized conditions for NW growth were found, and it can be concluded that temperature has a major effect on NW growth. Further studies can focus on the difference in growth mechanism between the two substrates and the effect this difference causes in terms of NW characteristics. REFERENCES 1. Kayes et al. "Comparison of the Device Physics Principles of Planar and Radial P-n Junction Nanorod Solar Cells." Journal of Applied Physics: 114302. 2. Martinez-Gil et al. "Nano-patterned Silicon Surfaces for the Self-organised Growth of Metallic Nanostructures." Superlattices and Microstructures. 3. Klimovskaya et al. "Study of the Formation of Gold Droplet Arrays on Si Substrates by High Temperature Anneals." Nanoscale Research Letters 6.151 (2011). 4. Castillo et al. "VLS Synthesis and Characterization of SnO2 Nanowires." MRS Proceedings, 1371. ACKNOWLEDGEMENTS This experiment was funded by the Swanson School of Engineering and the Office of the Provost.
EFFECT OF VARIATIONS IN BLOOD VELOCITY WAVEFORMS ON WALL SHEAR STRESSES IN AN INTRACRANIAL ANEURYSM Isaac Wong, Michael J. Durka and Anne M. Robertson Department of Bioengineering University of Pittsburgh, PA, USA Email: isw4@pitt.edu INTRODUCTION Computational fluid dynamics (CFD) studies of the arteries require a specified blood flow waveform as an input. For studies of the intracranial arteries, the blood flow waveforms of the common and internal carotid arteries are usually of interest. Holdsworth et al. reported an archetypal common carotid artery velocity waveform of normal, young subjects [1]. Ford et al. later reported an archetypal internal carotid artery volumetric waveform of normal, young subjects [2]. These waveforms are currently used in CFD studies, but may not be representative of elderly or diseased populations. Blood velocity waveforms gathered through spectral Doppler ultrasound are already used in conjunction with color Doppler ultrasound to help diagnose cardiovascular disease in the clinical setting [3]. The velocity waveforms of patients with conditions such as aortic valve stenosis, aortic regurgitation, and hypertrophic cardiomyopathy are known to generally have different shapes [4]. The aim of this study is to identify variation in the elderly population, and to determine how this variation affects the wall shear stress distribution and intensity in an intracranial aneurysm. METHODS A total of 352 patients, with a mean age of 69 ± 15 years, were screened by clinicians at the Mayo Clinic. Risk factors for cardiovascular disease, such as presence of hypertension, history of smoking, and current body mass index (BMI) were recorded. Patient comorbidities, such as aortic regurgitation, systolic dysfunction, and atrial fibrillation were also recorded. Although the patient’s physical condition was documented, that information was not examined in this paper, and will be used in a future study. A spectral Doppler ultrasound device was then used to investigate the left and right internal carotid arteries of each patient. The technician insonated
along the artery until the clearest blood velocities were measured. Each scan consisted of between four and ten periods, of which only one to seven periods could be extracted during processing. DATA PROCESSING All the waveform images from the Mayo Clinic were qualitatively screened; images with excessive background noise, excessive aliasing, and an unclearly defined waveform boundary were excluded. The waveforms were then qualitatively placed, based on their shape, into categories identified in the literature [4]. Waveforms that did not fit existing categories and were present in relative abundance were placed into new categories. From each category, a sample waveform was chosen. The first period of each waveform was extracted, and then normalized on both the velocity and time axes. The samples were then shifted such that, for each sample, the half maximum velocity occurs at the same time point. The half maximum velocity is defined as half of the difference between the maximum and minimum velocity, added to the minimum velocity. This was done because noise in end diastole makes it difficult to accurately determine where systole begins. A model of a human intracranial aneurysm, obtained by Dr. Takahashi’s research group, was parametrically reconstructed and used as the vessel for the CFD study. The velocities of each sample period were scaled such that each would have the same flow rate through the vessel. Code provided by Dr. Cebral’s research group was used to compute the wall shear stresses. The time averaged wall shear stress distribution was computed for each sample, as well as the distribution at the time of maximum velocity and dicrotic notch.
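A minimal sketch of the normalization and half-maximum alignment described above, using a synthetic period in place of patient data:

```python
# Sketch of the waveform processing described above: normalize one extracted period
# on both axes, then shift so the half-maximum velocity falls at a common time point.
# The input arrays here are synthetic placeholders, not patient data.
import numpy as np

def normalize_and_align(t, v, common_half_max_time=0.1):
    """Normalize one waveform period and shift it so the half-maximum velocity
    (defined as (max - min)/2 + min) occurs at common_half_max_time."""
    t_n = (t - t[0]) / (t[-1] - t[0])                 # time axis to [0, 1]
    v_n = (v - v.min()) / (v.max() - v.min())         # velocity axis to [0, 1]
    crossing = np.argmax(v_n >= 0.5)                  # first upstroke crossing of half max
    t_shifted = t_n - (t_n[crossing] - common_half_max_time)
    return t_shifted, v_n

# Synthetic single period standing in for an extracted ICA waveform.
t = np.linspace(0.0, 0.9, 200)                        # s
v = 40 + 60 * np.exp(-((t - 0.2) / 0.06) ** 2)        # cm/s, crude systolic peak
t_s, v_n = normalize_and_align(t, v)
print("half-max velocity occurs at t =", round(t_s[np.argmax(v_n >= 0.5)], 3))
```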
RESULTS Qualitative inspection of the waveform shapes resulted in the identification of at least four different categories, the most common of which were: normal, bisferious, tardus parvus, and spindle. Figure 1 shows the superimposed sample periods from each category.
Figure 1: Graph of all samples (normal, bisferious, tardus parvus, and spindle), superimposed; normalized velocity versus normalized time.
Figure 2 shows the time averaged wall shear stress distributions, as well as the stresses at the time of maximum velocity and the dicrotic notch.
Figure 2: Wall shear stress distributions of each sample case. A: time averaged distribution. B: distribution at max velocity. C: distribution at dicrotic notch. The legend for each case is shown on the bottom left of each model.
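For reference, the time-averaged wall shear stress reported in panel A corresponds to (1/T)∫|WSS| dt per surface node; the sketch below illustrates the computation on synthetic data and is not the CFD code provided by Dr. Cebral's group.

```python
# Sketch: time-averaged wall shear stress (TAWSS) per surface node from a
# WSS-magnitude time history over one cardiac cycle. Synthetic data only;
# this is not the CFD post-processing code used in the study.
import numpy as np

n_nodes, n_steps, period = 5, 100, 0.9                 # period in seconds (assumed)
t = np.linspace(0.0, period, n_steps)
wss = np.abs(np.random.default_rng(0).normal(1.5, 0.5, size=(n_steps, n_nodes)))  # Pa

tawss = np.trapz(wss, t, axis=0) / period              # (1/T) * integral of |WSS| dt
print("TAWSS per node [Pa]:", np.round(tawss, 2))
```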
The time averaged, maximum velocity, and dicrotic notch wall shear stresses on the aneurysm wall are distributed similarly across all the sample cases. Relatively higher stresses can be seen in the band across the middle of the bulb and around the neck. However, absolute velocities differed during the time of maximum velocity and the dicrotic notch. At maximum velocity, the spindle waveform had the greatest absolute stress, with the other three having similar stresses. At the dicrotic notch, the bisferious
and tardus parvus waveforms had the higher absolute stresses. All samples had similar time averaged stresses. DISCUSSION Patients with various cardiovascular conditions are expected to have variation in their waveform shapes. The subjects screened were elderly and some were identified with various cardiovascular conditions, which is consistent with the variation observed in the waveforms. The waveforms from these patients were qualitatively classified into a number of categories, three of which are identified in the literature and are used to help diagnose clinical illness [4]. The last, ‘spindle’, is a category of observed waveforms that do not seem to fall within the three. Further analysis of the relationship between waveform features and the illnesses as diagnosed by clinicians may lead to the development of an archetypal waveform for a subpopulation of elderly patients, similar to what Holdsworth et al. and Ford et al. have done [1,2]. The effect of various waveforms is most apparent at specific time points of the cardiac cycle. The time averaged distribution was similar for all sample cases. The ‘spindle’ waveform induced higher stresses than the others at the time of maximum velocity, while the bisferious and tardus parvus waveforms induced higher stresses at the time of the dicrotic notch. This suggests that CFD studies interested in analyzing wall shear stresses at points of the cardiac cycle are sensitive to the choice of waveform input, and that the development of an archetypal waveform for populations of patients is important. REFERENCES 1. Holdsworth et al. Physiol. Meas. 20, 219-240, 1999 2. Ford et al. Physiol. Meas. 26, 477-488, 2005 3. Wood et al. Ultrasound Quarterly 26, 83-99, 2010 4. Madhwal et al. JACC 7, 200-203, 2014 ACKNOWLEDGEMENTS Subjects were screened at the Mayo Clinic. Funding was provided jointly by Dr. Anne Robertson, the Swanson School of Engineering, and the Office of the Provost. The CFD code was provided by Dr. Juan Cebral, and the aneurysm model data was provided by Dr. Akira Takahashi.
A NOVEL APPROACH TO POWDER-BASED 3D PRINTING: DESIGNING A COMPACT BINDER JETTING FOOD PRINTER Adedoyin Ojo Control and Mechatronics Laboratory, Department of Mechanical Engineering National University of Singapore, Singapore Email: ado15@pitt.edu ABSTRACT 3D printing covers a wide range of processes and technologies aimed at the production of parts. Also known as additive manufacturing, it utilizes a layer-by-layer additive process drastically different from the traditional production methods widely used in the manufacturing industry. This rapidly growing technology boasts a wide range of applications, including culinary arts, medicine and healthcare, with many more emerging every day. One such innovation currently being explored is the use of 3D printing technology in the creation of edible food. The main objective of this project is to design a compact powder-based 3D food printer that uses Binder Jetting technology. Unlike more traditional experiment-based research, this project steps into the realm of product design and development and aims to explore viable means to replicate and improve upon the existing, albeit few, powder-based food printing technologies. This particular project focuses on the binding process of powder-based food printing. INTRODUCTION In Binder Jetting 3-D printing, two platforms equipped with pistons are connected to each other; one platform is filled with powder, while the other is empty. To print an object, after a predetermined amount of powder is raised from the powder feed with the use of the piston, a levelling roller spreads the powder evenly across the powder bed. A binder is then selectively sprayed across the powder layer in a specified manner which corresponds to the slice of the object to be printed. After completion, the powder bed is lowered so that the roller can smooth over another layer of powder and effectively fuse the two layers together. The process repeats itself until a full object has been printed. The main advantage of this process is that supports
are not needed since the powder bed fulfils this function. In addition, a wide range of materials and colors can be used. A few drawbacks include resolution inaccuracies, low strength of finished product and the need for post-processing procedures such as curing [1]. These factors, however, are not applicable for food printing, making this printing technique a practical candidate for further exploration.
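The layer-by-layer sequence described above can be summarized as a simple loop. The sketch below is purely conceptual: the voxel grid, slice shape, and layer count are assumptions made for illustration and do not describe the printer being designed.

```matlab
% Conceptual sketch of the binder-jetting sequence (illustrative only): the
% build volume is a voxel array, and each pass of the loop mimics spreading
% one powder layer and selectively binding the voxels under that layer's slice.
nx = 50; ny = 50; nLayers = 40;                      % assumed build volume size
[X, Y] = meshgrid(1:nx, 1:ny);
sliceMask = (X - nx/2).^2 + (Y - ny/2).^2 <= 15^2;   % hypothetical slice shape
part = false(ny, nx, nLayers);                       % voxels fused by binder
for k = 1:nLayers
    % 1) feed piston raises fresh powder, 2) roller spreads it over the bed,
    % 3) binder is jetted onto the slice pattern, 4) build piston lowers.
    part(:, :, k) = sliceMask;
end
fprintf('Bound %d of %d voxels over %d layers.\n', nnz(part), numel(part), nLayers);
```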
Figure 1: Binder Jetting Printing Process METHODOLOGY In the process of determining the most suitable binding material, considerations for pH balance, corrosion and acidity were ignored because one of the primary objectives, at this point in the project, is simply to determine a binder combination that is able to glue the sugar powders in a prescribed form. Some potential combinations include water and Tylose powder, water and Gelatine, water and cornstarch, and water and alcohol. In order to test the different binder combinations, a platform is constructed in the same configuration as the powder-based 3D-printing technique. Because the new design being pursued makes use of one chamber to serve the purpose of the powder bed, the testing mechanism designs also incorporate the use of one build chamber. Design requirements include user friendliness and ease of operation. After several design iterations, a final design incorporating
Figure 2: Different design iterations and their components: (a) Chamber with holes to lock moving plank. (b) Pulley system. (c) Plank with shaft and lock mechanism in the middle
the concept of the scissor lifting mechanism commonly used in construction was chosen. This lifting system works by using folding supports in a criss-cross x pattern; as pressure is applied to the outermost part of the support, extension and elongation of the scissor mechanism is achieved and results in a vertical movement of the platform affixed to the mechanism. For this particular design, all but one edge of the scissor mechanism is fixed; the unfixed edge is attached to a wheel to allow for horizontal movement. The scissor pattern is formed with the use of four long platforms held together in the middle by a pin. Figure 4: SolidWorks rendering of prototype
Figure 3: Illustration of Scissor mechanism and bottom support pillar The design also incorporates four pillars to allow for alignment as well as to ensure that the powder platform remains completely parallel during movement. The pillars also serve to guide the top platform as it is lowered down into the bottom platform. Three ends of the scissor mechanism are rigidly attached to the top of the platform while the fourth is attached to a wheel to allow for movement. As the wheel is moved forward, the powder platform moves vertically. The bottom platform is extended out about 10 mm to allow for additional movement for the rollers on the moving edge of the scissor mechanism. This extension is connected to the bottom platform by a flexible hinge device that is retractable when the device is not in use. Finally, the entire design is surrounded by walls to fulfil the complete closure requirement for the testing device. To move the platform vertically, the free edge of the scissor mechanism with the wheel is moved. After several design modifications, this model can be used as an effective binder testing mechanism.
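For a single-stage scissor with arm length L, one pinned end, and one rolling end, the platform height follows directly from the arm geometry. The short MATLAB illustration below uses an assumed arm length, since the prototype's dimensions are not given in the text; it only illustrates how the horizontal travel of the free, wheeled end maps to vertical motion of the powder platform.

```matlab
% Illustrative scissor-lift kinematics: platform height vs. wheel position.
% Assumes a single-stage scissor with arm length L (a made-up value).
L = 100;                         % mm, assumed arm length
x = linspace(10, 95, 200);       % mm, horizontal spacing between the fixed
                                 %     end and the rolling (wheeled) end
h = sqrt(L.^2 - x.^2);           % mm, resulting platform height
plot(x, h), grid on
xlabel('Horizontal wheel position (mm)')
ylabel('Platform height (mm)')
title('Rolling the free end inward raises the powder platform')
```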
DISCUSSION AND CONCLUSION As this project is still in its initial stages, more extensive research still needs to be conducted to further develop the proposed product. After making the necessary design modifications, the next step is to fabricate a physical prototype of the testing mechanism. After this, different binder combinations can be tested with powdered sugar to select the best choice. Material properties can then be taken into account. Another area of future development is the controls for the ink-jet dispensing of the binder and for the print head movement.
ACKNOWLEDGEMENTS This award was funded by the Swanson School of Engineering and the Office of the Provost. Special thanks to the National University of Singapore and Prof. Jerry Fuh and Prof. Lu Wen Feng. REFERENCES [1] "3-D Printing Processes: The Beginner's Guide." 3DPrintingIndustry. Web. http://3dprintingindustry.com/3d-printing-basics-freebeginners-guide/processes/
OPTIMIZATION OF PROCESSING PARAMETERS OF ADDITIVELY MANUFACTURED INCONEL 625 AND INCONEL 718 Glen Ayes, Zhenyu Liu, Guofeng Wang, and Brian Gleeson Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: gla9@pitt.edu INTRODUCTION Additive manufacturing has been around since the 1980’s, but there has been a rapid expansion of applications in the past few years. The applications range from engineering and construction to military and human tissue replacement. Inconel 625 and Inconel 718 are both nickel-chromium based alloys known for their high strength and corrosion resistance, and their ability to retain high strength at high temperatures. For these reasons, they prove especially useful in aerospace applications, such as in turbine blades. Additive manufacturing is an evolving process seen as the future of manufacturing. In fact, “A third industrial revolution” from The Economist states that additive manufacturing is leading to a third industrial revolution arguing that manufacturing is, “turning away from mass manufacturing and towards much more individualistic production.” With this in mind, it is obvious that research into this topic is essential for its continued development. Processing parameters, which include laser power, laser speed, and powder feed rate influence several aspects of a sample. For instance, inapt settings may lead to adverse conditions, such as initiation of the “balling effect” [1-6] or insufficient energy input [11], consequently leading to increases in porosity and deviation from desired deposit geometry [4-6, 11]. This study aimed to optimize these 3 processing parameters to yield optimal deposit geometry of Inconel 625 and Inconel 718 samples through the application of the Taguchi method, grey relational grade analysis, and analysis of variance (ANOVA) [7-10]. METHODS AND MATERIALS In this study the Optomec LENS® 450 was used to deposit metal cubic samples utilizing the Laser Engineered Net Shaping (LENS®) method of additive manufacturing. A 400W IPG Fiber Laser along with 4 powder feeding nozzles separately
deposited Argon atomized Inconel 625 and Inconel 718 onto 1/8x3x3 in. low carbon steel substrates. Argon gas filled the work chamber to avoid oxidation during the deposition process. Laser power, laser speed, and powder feed rate, each at 3 levels, were studied via the Taguchi method to obtain optimum parameters. 9 samples of each powder were deposited based on the L9 orthogonal array shown in Table 1. Laser scan direction alternated 45° between each layer to alleviate anisotropy [5], and layer thickness was set to 0.015 in. (0.381 mm) for all samples with a total intended height of 0.42 in. (10.7 mm).
Table 1: L9 Orthogonal Array
Experiment #   Power (W)   Speed (in/min)   Feed Rate (rpm)
1              270         35               9
2              270         40               12
3              270         45               15
4              300         35               12
5              300         40               15
6              300         45               9
7              330         35               15
8              330         40               9
9              330         45               12
Each sample was cut along the vertical cross section, polished, and etched with 15 mL HCl, 10 mL HNO3, and 10 mL Acetic Acid. The cross sections were measured for unevenness along the layers within the bottom, middle, and top sections. The average value of these 3 readings was presented as the unevenness value of a sample. The wall height and middle height of the samples were measured with a Neiko digital caliper prior to cutting to obtain deviation from intended sample height (10.7 mm) and to help distinguish levels of failure between partially deposited samples. Energy area density (EAD (J/mm^2)) and energy mass density (EMD (J/g)) were both calculated in order to identify a viable processing range [11].
DATA PROCESSING The raw data for unevenness, wall height, and middle height were initially normalized based on targets of smaller-the-better, nominal-the-better (10.7 mm), and nominal-the-better (10.7 mm), respectively. Following normalization, a grey relational grade was computed from each normalized value as in [8-10]. ANOVA of the grey relational grades determined the contribution of each factor to the deposit geometry. All calculations were solved in Microsoft Excel 2010. RESULTS The largest grey relational grades for each level revealed the optimum levels for the factors of Inconel 625 (300 W, 35 in/min, 15 rpm) and Inconel 718 (330 W, 35 in/min, 12 rpm). Table 2 reports the results of ANOVA on the grey relational grades of both 625 and 718 samples. For 625, powder feed rate was found to have the largest contribution (49.6%), followed by laser power (25.3%) and laser speed (24.8%). For 718, powder feed rate (40.9%), laser power (3.52%), and laser speed (44.0%) contributions were also found. DISCUSSION As expected, a difference between the deposition of 625 and 718 samples was evident, as proven by the fact that the 625 samples produced 5 failures, while 718 only produced 1 failure. Consequently, the analysis of the 625 data was more prone to error because unevenness values for failures were set to 3 rather than a true unevenness. The reasoning behind this was that although experiment 1 for 625 was shown to be the worst deposit, it had a significantly lower unevenness than other failures (experiments 6 and 8) due to a low wall height (4.58 mm) and middle height (3.15 mm), thus producing misleading data if based solely on unevenness. Therefore, unevenness, wall height, and middle height were all considered together as grey relational grades (718 excluded middle height due to only 1 failure) with weighting factors of 0.6, 0.05, and 0.05, respectively.
Table 2: ANOVA of Grey Relational Grades
Inconel 625
Parameter   DoF   SS      MS      F      %
Power       2     0.100   0.050   98.5   25.3
Speed       2     0.098   0.049   96.5   24.8
Powder      2     0.196   0.098   193    49.6
Error       2     0.001   0.001          0.26
Total       8     0.394

Inconel 718
Parameter   DoF   SS      MS      F      %
Power       2     0.001   0.000   0.3    3.52
Speed       2     0.006   0.003   3.8    44.0
Powder      2     0.006   0.003   3.5    40.9
Error       2     0.002   0.001          11.6
Total       8     0.014
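The normalization and grey relational grade computation described under DATA PROCESSING can be sketched as follows. The response values below are made-up placeholders (only the failure unevenness of 3 and the heights quoted for experiment 1 are taken from the text), the distinguishing coefficient of 0.5 is a conventional choice rather than one stated by the authors, and the grade is divided by the sum of the weights so it stays between 0 and 1:

```matlab
% Illustrative grey relational analysis for the three geometry responses.
% Rows = the 9 Taguchi runs; columns = unevenness, wall height, middle height.
% Made-up example responses (not the authors' measurements).
raw = [3.00 4.58 3.15; 0.42 10.5 10.4; 0.55 10.6 10.3;
       0.38 10.7 10.6; 0.47 10.4 10.5; 3.00 6.20 5.90;
       0.51 10.6 10.7; 3.00 5.10 4.80; 0.44 10.5 10.6];
target = 10.7;                                    % mm, intended sample height
% Smaller-the-better for unevenness, nominal-the-better for the two heights.
n1 = (max(raw(:,1)) - raw(:,1)) ./ (max(raw(:,1)) - min(raw(:,1)));
n2 = 1 - abs(raw(:,2) - target) ./ max(abs(raw(:,2) - target));
n3 = 1 - abs(raw(:,3) - target) ./ max(abs(raw(:,3) - target));
N  = [n1 n2 n3];
delta = 1 - N;                                    % deviation sequences
zeta  = 0.5;                                      % distinguishing coefficient
xi    = (min(delta(:)) + zeta*max(delta(:))) ./ (delta + zeta*max(delta(:)));
w     = [0.6 0.05 0.05];                          % weights quoted in the text
grade = xi * w' / sum(w);                         % weighted grey relational grade
disp(grade)                                       % one grade per experiment
```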
The failures in this study can be partially attributed to and represented by an unfavorable relationship between EAD and EMD [11]. By plotting EAD vs EMD, predicting whether failures or relatively even samples will be deposited becomes possible. This processing window requires additional research for confirmation, but could certainly prove useful for further investigation on additively manufactured Inconel 625 and 718 samples so as to avoid incomplete deposits all together while improving upon complete and even samples based on the future selected areas of study. REFERENCES 1. Yadroitsev et al. Applied surface science 253.19, 8064-8069, 2007. 2. Li et al. The International Journal of Advanced Manufacturing Technology 59.9-12, 1025-1035, 2012. 3. Tolochko et al. Rapid Prototyping Journal 10.2, 78-87, 2004. 4. Helmer et al. Journal of Materials Research 29.17, 1987-1996, 2014. 5. Hanzl et al. Procedia Engineering 100, 14051413, 2015. 6. Gu et al. Applied Surface Science 255.5, 18801887, 2008. 7. Mantrala et al. Journal of Materials Research 29.17, 2021-2027, 2014. 8. Rama et al. IJERA 2.3, 192-197, 2012. 9. Lin, C. L. Materials and manufacturing processes 19.2, 209-220, 2004. 10. Chiang et al. Computers & Industrial Engineering 56.2, 648-661, 2009. 11. Zhong et al. Journal of Laser Applications 27.3, 032008, 2015. ACKNOWLEDGEMENTS I am grateful to the Swanson School of Engineering, the Office of Provost, my mentor Dr. Guofeng Wang, and Dr. Brian Gleeson for jointly funding my summer research. I am also thankful to graduate student Zhenyu Liu who provided any needed assistance to complete my summer research.
MINIATURIZED SHEAR PUNCH TESTING OF PLASTIC FLOW BEHAVIOR OF METAL AND ALLOY THIN FOIL SPECIMENS
Cyrus T. Eason
Dr. Jörg Wiezorek's Laboratory
University of Pittsburgh, PA, USA
Email: cte6@pitt.edu
INTRODUCTION Shear punch testing is a mechanical testing technique used to measure plastic flow related mechanical properties of a material available in thin sheet form. Miniaturized shear punch testing has the advantage of enabling plastic flow property measurements from small volumes of material, requiring specimens on the scale of about 3mm x 3mm x 0.2mm. This has been recognized as advantageous when testing neutron irradiated and radio-activated materials in order to minimize health hazards or for testing of unique nanocrystalline materials created through novel mechanical processing methods such as linear plane-strain machining, which are available only in small quantities [1]. In shear punch testing, measurements of the applied load with respect to the displacement of the punch tip provide data that can be converted into shear stress and normalized displacement, which in turn enables determination of the ultimate and yield shear strengths of the tested material. The shear ultimate and yield stresses obtained from shear punch test measurements are related to the ultimate and yield tensile stresses by:
σ_tensile = m · σ_shear    (1)
Here m is an empirically determined coefficient specific to the shear punch set-up used in a given test, which typically differs for the ultimate strength (m) and the yield strength correlations (n). The purpose of this research is to verify the reproducibility and reliability of a simple and cost-effective miniaturized shear punch set-up using different stainless steels (type 316 SS and 304 SS), and Nickel, for which the tensile properties are reasonably well known, prior to application for the determination of the effects of plane-strain machining
based microstructural modifications on the plastic properties. METHODS The shear punch set-up consists simply of a split die assembly and punch (fig. 1). A controlled load is applied to the head of the punch, forcing it in contact with and deforming a sample held fixed by the die assembly. To ensure shear deformation, the sample is clamped between the two halves of the die assembly. Both halves have aligned and matching channels to accept and guide the punch. Fig. 1
Utilizing two protective grip pieces, the die and punch assembly is inserted and coupled with an Instron 5500 Universal Mechanical Testing machine to apply controlled loads to the punch and measure them with a 50 kN capacity load cell, while the punch tip displacement is measured as the crosshead displacement. Tests have been performed with a constant crosshead displacement rate of 0.004 mm/s (4 µm/s). In the experiments detailed below two separate punches of cross-sectional diameters, 700 µm and 775 µm and referred to as the 'Undersized Punch' and 'Large Punch', respectively, were used to enable assessment of the effects of the gap dimensions in the receiving channel. The inner radius of the die receiving channel is 0.8 mm, producing a clearance of 100 µm for the Undersized Punch and 25 µm for the Large Punch. The lower clearance of the Large Punch produces a well-defined and distinct region of elastic loading in the load-displacement data acquired during shear punch testing. Samples tested were: 316 SS foil 203 µm ± 10 µm thick, 304 SS foil 203 µm ± 10 µm thick, and Nickel foil 127 µm ± 10 µm thick. 5mm x 5mm samples were tested originally to empirically determine the tensile-shear coefficients of each material, and then 3mm x 3mm samples were tested to verify that 3mm x 3mm samples produce equivalent results.
DATA PROCESSING By graphing the shear stress vs. the normalized displacement, a curve similar to those from conventional uniaxial tensile testing is observed, and similar mechanical properties can be obtained, e.g. the shear yield strength and shear ultimate strength. The shear stress is defined by:
shear stress = P / (2π · r_avg · t)    (2)
where P = load applied, r_avg is the average of the punch tip radius and the receiving die channel radius, and t is the thickness of the sample [4]. The normalized displacement is defined as:
normalized displacement = d / t    (3)
where d = displacement of crosshead. By dividing this by the specimen thickness, the effects of specimen thickness are removed from the test results [2]. The yield shear stress was determined with a linear offset of 1%, as it has been shown to most reliably produce shear-tensile correlations [2,3]. RESULTS The Large Punch produced reproducible results within the margin of measurement error for both the 5mm x 5mm samples and the 3mm x 3mm samples; this implies that tests of samples of materials with unknown mechanical properties can be reliably performed with the large punch.
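Equations (2) and (3), together with the 1% linear-offset rule, reduce to a few lines of analysis code. The sketch below uses the Large Punch geometry and 316 SS foil thickness quoted in METHODS but a synthetic load-displacement curve; it is an illustration of the procedure, not the analysis actually used in the study:

```matlab
% Illustrative conversion of a load-displacement record to shear stress vs.
% normalized displacement, with a 1% linear-offset yield estimate.
t       = 0.203e-3;                % m, specimen thickness (316 SS foil)
r_punch = 0.775e-3 / 2;            % m, Large Punch tip radius
r_die   = 0.8e-3;                  % m, receiving channel radius (as stated)
r_avg   = (r_punch + r_die) / 2;
d = linspace(0, 0.2e-3, 400);      % m, synthetic crosshead displacement
P = 900 * tanh(d / 0.04e-3);       % N, synthetic load curve (made up)
tau   = P ./ (2*pi*r_avg*t);       % Eq. (2): shear stress, Pa
delta = d ./ t;                    % Eq. (3): normalized displacement
% 1% offset: fit the initial elastic slope, shift it by 0.01, intersect.
k   = polyfit(delta(delta < 0.1), tau(delta < 0.1), 1);   % elastic region fit
tauOffset = polyval(k, delta - 0.01);                      % offset line
idx = find(tau <= tauOffset, 1, 'first');                  % first crossing
fprintf('Estimated shear yield stress: %.1f MPa\n', tau(idx)/1e6);
```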
Figure: Actual vs. measured tensile strength by sample number (UTS and YTS, Sets 1 and 2) for the Undersized Punch, the Large Punch, and the materials average (Large Punch).
The Undersized Punch also produced repeatable results within a margin of measurement error for the 5mm x 5mm 316 SS samples, but the large clearance between punch tip and receiving hole made obtaining the yield stress difficult for the 3mm x 3mm samples due to the sample bending instead of remaining in shear. This punch should therefore not be used for samples in the 3mm x 3mm size range and was not used to test the other materials.
REFERENCES
1. Y. Idell, G. Facco, A. Kulovits, M.R. Shankar, J.M. Wiezorek. "Strengthening of austenitic stainless steel by formation of nanocrystalline γ-phase through severe plastic deformation during two-dimensional linear plane-strain machining." Scripta Materialia 68 (2013): 667-670.
2. R.K. Guduru, K.A. Darling, R. Kishore, R.O. Scattergood, C.C. Koch, K.L. Murty. "Evaluation of mechanical properties using shear-punch testing." Material Science and Engineering 395 (2005): 307-314.
3. Toloczko, M. B., Abe, K., Hamilton, M. L., Garner, F. A., and Kurtz, R. J., "The Effect of Test Machine Compliance on the Measured Shear Punch Yield Stress as Predicted Using Finite Element Analysis," Small Specimen Test Techniques: Fourth Volume, ASTM STP 1418, M. A. Sokolov, J. D. Landes, and G. E. Lucas, Eds., ASTM International, West Conshohocken, PA, 2002.
4. Lucas, G. E., Odette, G. R., and Sheckherd, J. W., "Shear Punch and Microhardness Tests for Strength and Ductility Measurements." The Use of Small-Scale Specimens for Testing Irradiated Material, ASTM STP 888, W. R. Corwin and G. E. Lucas, Eds., American Society for Testing and Materials, Philadelphia, 1986, pp. 112-140.
ACKNOWLEDGEMENTS The above research was funded jointly by Dr. Jörg Wiezorek, the Department of Mechanical Engineering and Materials Science, the University of Pittsburgh's Swanson School of Engineering, and the University of Pittsburgh's Office of the Provost.
Table 1: Average tensile-shear coefficients
Ultimate coefficient m   0.758   0.557   0.552
Yield coefficient n      0.523   0.395   0.385
IMPLEMENTATION OF BUTTERWORTH FILTERING TO IMPROVE BEAT SELECTION FOR HEMODYNAMIC ANALYSIS
Michael P. Jacus, Timothy N. Bachman, Rebecca R. Vanderpool and Dr. Marc A. Simon
Simon Laboratory, Vascular Medical Institute
University of Pittsburgh, PA, USA
Email: mpj11@pitt.edu, Web: http://www.vmi.pitt.edu/labs/simonlab.html
INTRODUCTION
In hemodynamic analysis of right ventricular pressure (RVP) waveforms, generation of an average representative beat from multiple cardiac cycles is a common technique. Inability to align cardiac cycles due to improper cycle division and artificial phase shifts can generate inconsistent or incorrect results. This misalignment either leads to: 1) use of a single beat which may not be representative of the entire pressure waveform for analysis; or 2) exclusion of the patient sample.
This becomes problematic when dealing with ventricular assist device (VAD) patients because they are small in number and often have erratic RVP waveforms. The aim of this study is to improve the ability to perform multiple beat selection via a Butterworth (BW) filtering-based system, thus strengthening the consistency of results and integrity of the data set.
METHODS
Pre-operative RVP waveforms from right heart catheterizations of 26 patients who received left ventricular assist device (LVAD) or biventricular assist device (BiVAD) were retrospectively obtained and redigitized (Figure 1, Top). One patient was excluded due to inability to analyze their pressure waveform. Redigitization was performed by selecting waypoints along the pressure waveform and constructing a line via cubic piecewise polynomial interpolation. Post-processing was performed via the following two methods: 1) A Savitzky-Golay (SG) filtering-based system that divided each cycle at estimated End Diastolic Pressure (EDP). EDP was estimated by finding diastolic blood pressure (DBP) on the waveform and adding a set amount to simulate EDP (Figure 1, Lower Left). 2) A 3rd order low-pass BW filtering-based system which used the 2nd derivative of pressure for calculation of EDP. Then each cardiac cycle was divided 1/3 of the way between DBP and EDP (Figure 1, Lower Right).
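A minimal sketch of the BW-based processing described above is shown below. The sampling rate, the 25 Hz cutoff, and the peak-based EDP rule are illustrative assumptions standing in for the study's actual parameters:

```matlab
% Illustrative 3rd-order low-pass Butterworth filtering of an RVP trace and
% EDP candidate detection from the second derivative of pressure.
fs = 200;                               % Hz, assumed sampling rate
t  = (0:1/fs:8)';                       % s
p  = 25 + 20*sin(2*pi*1.2*t).^2 + 2*randn(size(t));   % synthetic RVP, mmHg
[b, a] = butter(3, 25/(fs/2), 'low');   % 3rd-order low-pass, assumed 25 Hz cutoff
pf  = filtfilt(b, a, p);                % zero-phase filtering
d2p = gradient(gradient(pf, 1/fs), 1/fs);   % second derivative, mmHg/s^2
% Candidate EDP points: prominent positive second-derivative peaks
% (a simple stand-in for the rule used in the study).
[~, edpIdx] = findpeaks(d2p, 'MinPeakHeight', 0.5*max(d2p));
plot(t, pf); hold on; plot(t(edpIdx), pf(edpIdx), 'ro');
xlabel('Time (s)'), ylabel('Pressure (mmHg)')
```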
Figure 1: Top panel shows RVP waveform. A red solid line highlights the cycles selected for use in the Average Representative Beat. Bottom panels show overlay of selected cycles that were divided by each method: SG on left and BW on right.
The divided cycles were overlaid to aid in the selection of beats for an average representative beat: the average of beats that were similar in phase, features, and morphology. The resulting average representative beat was analyzed and the results generated from each method were examined via Bland-Altman analysis. Additionally, a small set (n=5) of prospectively obtained pressure waveforms was analyzed to examine the effect of redigitization on the analysis process, and to use as a baseline comparison for the two filtering methods. RESULTS The BW filtering-based method exhibited improved quality of cardiac cycle division. Table 1 shows the number of beats used to create the average representative beat for each patient. Furthermore, it shows the number of times a single beat had to be used for analysis.
Table 1: Cardiac Cycles Used to Generate Average Representative Beat
                     Average # of Beats Used (n=25)   Single Beat Usage (n=25)
S-Golay              2.16                             6
BW                   4.28                             1
Difference (BW-SG)   2.12                             -5
Bland-Altman analysis confirmed that all desired outputs of the two methods lay within the limits of agreement. However, the Bland-Altman plot of dP/dt (max) exhibited a positive slope resulting in bias increasing with mean magnitude (Figure 2).
Figure 2: B-A plot of dP/dt (max) with regression line showing the increasing magnitude of the bias.
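The Bland-Altman comparison plotted in Figure 2 amounts to differences and means of paired outputs, a bias, limits of agreement, and a regression of difference on mean. The sketch below uses made-up paired dP/dt (max) values, not the study data:

```matlab
% Illustrative Bland-Altman comparison of paired outputs from two methods.
sg = [310 420 515 630 710 820 450 560];   % made-up S-Golay dP/dt(max), mmHg/s
bw = [325 445 540 660 760 905 470 600];   % made-up Butterworth values, mmHg/s
d    = bw - sg;                           % differences (BW - SG)
m    = (bw + sg) / 2;                     % pairwise means
bias = mean(d);
loa  = bias + [-2 2] * std(d);            % limits of agreement (+/- 2 SD)
pfit = polyfit(m, d, 1);                  % regression of difference on mean
scatter(m, d, 'filled'); hold on
yline(bias); yline(loa(1), '--'); yline(loa(2), '--');
plot(m, polyval(pfit, m));
xlabel('Mean (BW+SG)/2 (mmHg/s)'), ylabel('Difference BW-SG (mmHg/s)')
```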
Analysis of the prospective data retained good agreement between the results of the filtering methods, but decreased the magnitude of the bias of dP/dt (max) while eliminating the positive slope. DISCUSSION With the variable clinical data that is seen in patients with heart failure, inclusion of as many beats as possible is important for consistent analysis. The BW-based method allowed for representation by multiple beats where only single beats could previously be used. Additionally, this method allowed inclusion of patients who would have been excluded using the S-Golay-based method.
The Bland-Altman analysis was used to compare the outputs of the two methods. All of the values except for EDP, dP/dt (max), and dP/dt (min) fell within the limits of agreement. The difference in EDP was expected due to the methods in which EDP was calculated. The effects on the pressure derivative can also be attributed to the two methods: Savitzky-Golay using a smoothing filter on the signal as opposed to the Butterworth filter removing high-frequency noise. The increasing bias with higher magnitude of dP/dt (max) was an unexpected result that indicates either an underestimation via SG filtering, or an overestimation via BW filtering. Unfortunately, when analyzing the prospective data, it was found that none of the 5 samples had dP/dt (max) values above 500 mmHg/s. As a result, it could not be determined whether the BW or SG method was correctly filtering the signal. In the future, more prospective clinical data will be obtained for analysis. When comparing the redigitized data to the prospective data, it was discovered that the redigitization was acting as an initial filtering stage before the SG or BW filters were applied. It did not significantly affect the BW methods because the BW method removed all noise above a cutoff frequency. However, it was facilitating the SG method to remove more noise, which could be seen when examining the derivatives. Moreover, while Bland-Altman analysis is good for comparing the outputs of two procedures, it may not be the optimal method to determine which filtering and beat alignment method is better. Initially, it has been demonstrated that the BW method is better due to inclusion of more beats in the average representative beat and use of only a single beat for less patients. For a more sophisticated quantitative analysis metric, future work will involve comparing beat alignment using normalized cross-correlation of the average representative beats to a manually constructed average. ACKNOWLEDGEMENTS Funding provided by the University of Pittsburgh Swanson School of Engineering and the Office of the Provost. Signal processing advice was provided by Dr. Kang Kim, Nadim Farhat, Xuan Ding, and Jaesok Yu of the Multi-modality Biomedical Ultrasound Imaging Lab at University of Pittsburgh.
Quantification of Axonal Projections from Topologically-related Areas of Motor and Sensory Cortex in Transgenic Mice 1 Michael Urich , Michael Economo2, Bryan M Hooks1,2,3, Charles R Gerfen3 1 Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 2 Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 3 Laboratory of Systems Neuroscience, National Institute of Mental Health, Bethesda, MD Email: mpu2@pitt.edu INTRODUCTION The neural connections of the mammalian motor cortex are relatively unknown. Understanding these connections may explain the neural circuit response in neurodegenerative conditions and provide insight into how to treat them. We developed a standard method for quantifying neuronal projections from specific cellular components of motor cortex circuitry in the mouse brain. We labeled specific cell types of cortical pyramidal neurons using transgenic Cre-driver mice from the GENSAT project. Subsequently, these cells are infected in specific locations of motor and sensory cortex with viruses expressing three fluorescent proteins. The fluorescent proteins label the axonal projections of these neurons, signaling the strength of connections between anatomical regions. We quantify this fluorescence as a measure of connection strength between two regions. We aim to provide a better model for connectivity of cortical regions, which may help describe the neurocircuitry of the motor system in health and disease. MATERIALS AND METHODS We used custom-written MATLAB software to mark structural points of interest in the sample brain and a standardized reference brain to align in a standard coordinate space. We used affine and bspline warps to manipulate the sample image to the standardized one. The software then resamples the
sample image down to the smaller size of the standard brain, and interpolates the frames to have the same dimensions. After alignment, we then analyze fluorescence data of defined brain regions in the reference brain. We developed an algorithm in FIJI to threshold images of neuronal projections so that only pixels of a minimum brightness are present. By subtracting away background noise we are able to focus our quantifications on only locations where labeled neuronal projections are present. After thresholding in FIJI, we imported the images back into MATLAB and selected a defined anatomical region of the brain to analyze using the anatomical location of brain regions in aligned coordinate space of our reference map. With this specialized image, we run a range of tests to quantify intensity of projections from the injection site to specific brain areas and determine correlations between topographically related projections in motor and sensory areas of the mouse brain. Figure 1 demonstrates the steps we go through in isolating a "mask" of the image to further analyze. DATA PROCESSING We plotted total amount of brightness as well as mean brightness of each pixel along the anterior/posterior axis, indicating how strongly projections innervate a given area. We next determined the Pearson product-moment correlation
Figure 1. A - Standard reference brain with points of interest marked. B - Thresholded and aligned sample fluorescence image with viral injection. C - Caudoputamen cutout of reference brain illustrating general area for a single plane. D - Fluorescence channel image of caudoputamen, with intensity adjusted to show signal strength
coefficient (PPMCC) for pairwise comparisons between three different channels. Each channel in this context represents a viral injection into a different topological area in either the sensory or motor cortex. Changes in PPMCC along the anterior-posterior axis show differences in whether two inputs from distinct topographic locations in primary motor (M1) and primary somatosensory (S1) cortex to the same target are strongly correlated. We used a moving average taken every fifteen planes (out of a 528-plane image) to smooth the curve and reduce noise. RESULTS AND DISCUSSION By plotting the PPMCC as a function of the distance along the anterior-posterior axis, we were able to visually demonstrate peaks in correlation. These peaks represent where two channels, which represent viral injections into different topological regions of M1 and S1, projected to the studied region in similar ways. These results, a sample of which is provided below in Figure 2, illustrate where connectivity is high.
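The plane-by-plane correlation profile described above can be sketched as follows; the volume dimensions and the random fluorescence arrays are placeholders, while the 15-plane moving-average window follows the text:

```matlab
% Illustrative plane-by-plane Pearson correlation between two channels,
% smoothed with a 15-plane moving average along the anterior-posterior axis.
nPlanes = 528;  h = 64;  w = 64;                 % assumed volume dimensions
chA = rand(h, w, nPlanes);                       % placeholder fluorescence data
chB = rand(h, w, nPlanes);
r = zeros(nPlanes, 1);
for k = 1:nPlanes
    a = chA(:, :, k);  b = chB(:, :, k);
    c = corrcoef(a(:), b(:));                    % Pearson product-moment r
    r(k) = c(1, 2);
end
rSmooth = movmean(r, 15);                        % 15-plane moving average
plot(1:nPlanes, r, ':'); hold on; plot(1:nPlanes, rSmooth, 'LineWidth', 1.5)
xlabel('Plane along anterior-posterior axis'), ylabel('PPMCC')
```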
DISCUSSION Our algorithm successfully quantifies neuronal projections of the mouse brain in a standardized reference space. Viral injections marked regions clearly, illustrating individual cells, and allowed us to trace how the labeled axonal projections moved through the brain. We were able to measure and compare neuronal connectivity and provide a method for examining the correlation of output from topologically aligned and non-aligned discrete regions of M1 and S1. Thus, the model we have developed is able to help determine the connectivity of different regions of the mouse brain. This tool will have applications in understanding motor-related neurological disorders. ACKNOWLEDGEMENTS We are thankful for funding provided by the University of Pittsburgh's Swanson School of Engineering and Office of the Provost.
Figure 2 Pearson product-moment correlation coefficient as a function of location along anterior-posterior axis, with moving average plotted to smooth data. Some brain images showed multiple peaks, indicating multiple areas of connectivity.
MAPPING AND MODELING EEG SIGNALS BEFORE AND AFTER A CRANIOTOMY PROCEDURE
Deepa Issar, Adam C. Snyder, Matthew A. Smith
Visual Neuroscience Lab, Department of Ophthalmology
University of Pittsburgh, PA, USA
Email: dei4@pitt.edu, Web: http://www.smithlab.net
INTRODUCTION
Uncovering the mechanisms of the brain's computing ability is crucial for better diagnosis and treatment of brain disorders and the development of neural prosthetics. Electroencephalography (EEG) at the scalp is the primary method for measuring electrical correlates of brain activity in humans because it is non-invasive; however, the signals must pass through layers of skin, bone, and brain tissue, which weakens them. With non-human subjects, experimenters may use more precise but invasive methods of neural recording that penetrate the brain and require creating a small opening in the skull (craniotomy). EEG signals measured from animal subjects allow us to translate between internal brain signals observed through more invasive means in animal studies and external brain signals measured in human studies. However, the opening in the skull has the potential to alter the way electrical signals in the brain travel. We analyzed EEG signals before and after perforating the skull to determine whether our method of mapping between neural activity and EEG signals was affected.
We found that the EEG data had similar time courses and calculated source origins regardless of whether there was a hole in the skull. This minimal effect of small openings in the skull is particularly important for the emerging use of neural prosthetics, which will require craniotomy procedures to implant the devices.
METHODS
Procedures were approved by the Institutional Animal Care and Use Committee of the University of Pittsburgh, and surgeries were performed in aseptic conditions under general anesthesia. We recorded EEG from eight electrodes on the scalp of two rhesus macaque monkeys both before and after a craniotomy procedure that removed a small piece of skull over visual cortex and replaced it with a protective titanium mesh (we performed the craniotomy in order to implant a multielectrode array for a separate experiment). The subjects held their gaze fixed on a central point on a computer screen while checkerboard stimuli flashed on the right or left side of the screen. We obtained an MRI scan of each subject's head before the craniotomy procedure.
DATA PROCESSING
We combined the MRIs with published reports of tissue resistances to construct pre- and post-craniotomy computer models of the electrical properties of each subject's head, which we used to infer the sources of electrical currents during visually evoked responses. We used Matlab, Photoshop, and FieldTrip open-source code to construct these models [1]. Using Pearson's correlation coefficient, we analyzed the similarity between the time-course, scalp electrical maps, and inferred source locations from before and after the craniotomy.
For each subject, we randomly selected half of the pre-craniotomy data to calculate the source origin of the neural activity, projected that source onto the post-craniotomy model with a skull opening to predict EEG measurements, and computed the correlation between the predicted and measured post-craniotomy EEG data. In order to establish the re-test reliability of the localization method, we also projected the calculated source onto the pre-craniotomy model and calculated the correlation between the predicted data and the other half of the measured pre-craniotomy data.
RESULTS
Initial analysis of the correlation between pre- and post-craniotomy EEG waveforms showed that the time courses of the neural activity were similar, while the amplitude of the response components differed.
Figure 1 shows one subject's pre-craniotomy and post-craniotomy waveforms for two of the eight electrodes (Oz and F6).
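The split-half validation described under DATA PROCESSING boils down to correlating predicted and measured channel waveforms. The sketch below is generic: the data are random placeholders and the "prediction" is a stand-in for the FieldTrip forward-model output:

```matlab
% Illustrative split-half check: correlate EEG predicted from a source fit on
% one half of the trials with the measured average of the other half.
nTrials = 200;  nChan = 8;  nTime = 300;
data  = randn(nTrials, nChan, nTime);           % placeholder evoked EEG trials
half1 = squeeze(mean(data(1:2:end, :, :), 1));  % odd trials  -> fit the source
half2 = squeeze(mean(data(2:2:end, :, :), 1));  % even trials -> validation
predicted = half1;    % stand-in for the forward-model prediction from half1
r = zeros(nChan, 1);
for ch = 1:nChan
    c = corrcoef(predicted(ch, :), half2(ch, :));
    r(ch) = c(1, 2);                            % per-channel Pearson r
end
fprintf('Median cross-half correlation: %.2f\n', median(r));
```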
DISCUSSION Our goal is to improve the translation between field potential recordings (using EEG) and more invasive modes of recording to provide a basis for scientific generalization across these relatively isolated and distinct methodologies. As a crucial first step, we found evidence that a skull opening does not cause significant deviations in EEG signals detected at the scalp. Therefore, our method of mapping EEG data is reliable for analyzing animal subjects with and without craniotomies.
Figure 1. Pre-craniotomy waveforms (red) and post-craniotomy waveforms (blue) had similar time courses as measured by the correlation coefficient (r) but different amplitudes.
Figure 2. The most probable source locations are similar regardless of whether they were calculated using precraniotomy data (red) or post-craniotomy data (blue).
The source localization analysis showed that the correlation between the actual measurements and those predicted by the projection of the calculated source was high. Figure 2 shows the calculated sources based on the pre-craniotomy and post-craniotomy data on an MRI slice for one subject. There are two active sources (nearly symmetrically located in the left and right hemispheres) in response to a visual stimulus. These results indicate that a hole in the skull over visual cortex does not cause significant deviations in our ability to map visually evoked EEG signals.
These results are similar to those found in a study by Gindrat et al. that looked at the effect of a craniotomy on EEG mapping of somatosensory evoked potentials and found that before and after the craniotomy the source localization results had no major differences [2]. However, their study involved the replacement of the skull piece after surgery, while our subjects were fitted with a titanium mesh over the craniotomy site. Unlike Gindrat et al., we saw a reduction in the post-craniotomy waveform amplitude as pictured in Figure 1. This may be the result of current leakage through the titanium mesh. The time course and localization of signals, however, do not appear to be affected by this. Future steps for relating EEG data with other recording methods will involve comparing the predicted source location from EEG data with the activity of dually recorded single neurons using more invasive technology like microelectrode arrays. REFERENCES 1. Oostenveld et al. Comput Intell Neurosci 2011, 19, 2011. 2. Gindrat et al. Brain Struct Funct 220, 2121-2142, 2014. ACKNOWLEDGEMENTS Funding for DI was provided by the Swanson School of Engineering and the Office of the Provost. ACS was supported by an NIH fellowship (F32EY023456). MAS was supported by an NIH grant (R01EYO22928).
Contralateral Limb Differences in Knee Kinetics and Correlations to Kinematic Differences After Anterior Cruciate Ligament Reconstruction
Andrew Sivaprakasam1, James J. Irrgang2 PhD, Freddie Fu2 MD, Scott Tashman2 PhD
1 University of Pittsburgh Department of Bioengineering
2 Orthopaedic Biodynamics Laboratory, Department of Orthopaedic Surgery
University of Pittsburgh, PA, USA
Email: sivaprakasam@pitt.edu, Web: http://www.orthonet.pitt.edu/
INTRODUCTION Anterior Cruciate Ligament (ACL) reconstruction is a common procedure that is used to restore joint function in the knee after an ACL-related injury. However, patients still often have kinematic and mechanical differences in their reconstructed joint compared to their joint pre-injury. These differences are thought to cause variations in joint surface interactions that result in abnormal loading patterns, increasing the risk of premature osteoarthritis (OA) [1]. In order to further improve ACL reconstruction surgeries and understand how these variations in joint surface interactions and loading patterns occur, it is essential to properly investigate the kinetics that occur at patients' reconstructed and non-reconstructed contralateral knees. The goals of this project were to identify kinetic differences between patient-specific reconstructed and non-reconstructed knees, and relate kinetic differences to kinematic differences. METHODS Anatomic ACL reconstructions were carried out on 54 patients as part of an ongoing clinical trial. However, due to marker loss resulting in an inability to interpolate marker position in some patients, 33 patients (mean age 23±8.6 years, 33% female) had complete motion capture data for level walking trials (1.3 m/s), while 32 patients (mean age 23±8.5 years, 34% female) had complete data for downhill running trials (3 m/s, 10 degree slope) six months post-reconstruction. In order to collect kinetic data, a dual force plate instrumented treadmill was placed within an 8-camera VICON system during walking and running. Twelve trials in total were recorded for each patient, three for each knee, both for walking and running. Visual3D software made by C-Motion (Germantown, MD) was used to calculate the knee external moments in sagittal and coronal planes through the use of inverse dynamics. In order to accurately account for patient anthropometry in the model-building process, lower-limb joint and segment radii were measured. The femur and tibia segments were modeled as frusta of right cones, while the
foot and hip were modeled as right elliptical cylinders based on the acquired subject-specific measurements. Bone kinematic data was collected using a Dynamic Stereo X-ray system. Walking trials were conducted at a sampling rate of 100 Hz, while running trials were conducted at 150 Hz. Using bilateral computed tomography (CT) scans of patient bones, 3D bone models were created in Mimics (Belgium) medical imaging software and aligned to the fluoroscopy obtained by the DSX system to obtain six degree-of-freedom tibiofemoral kinematic data. DATA PROCESSING All moment data was normalized by patient mass. Peak moment values were recorded for each trial within 4% to 25% of the gait cycle using MATLAB and averaged for each knee. The moment data collected for the non-reconstructed contralateral knee for each subject was compared to that of the reconstructed knee, and the differences were analyzed using a paired t-test to observe significant kinetic differences between reconstructed and contralateral knees. Kinematic differences between patient reconstructed and contralateral knees were then calculated. The limb-to-limb differences in maximum flexion angle and minimum abduction angle were plotted against kinetic data that occurred in the respective plane. A linear trendline was then fitted to the data and Pearson's method was used to determine correlations between kinematic and kinetic data. Significance was set at p<0.05 for all statistical tests.
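Peak-moment extraction over the 4% to 25% window of the gait cycle, with body-mass normalization, can be sketched as follows. The moment trace is synthetic and the subject mass is assumed; the code only illustrates the windowed peak search described above:

```matlab
% Illustrative extraction of the peak external flexion moment in 4-25% of the
% gait cycle, normalized by body mass.
mass   = 70;                                  % kg, assumed subject mass
cycle  = 0:100;                               % percent gait cycle (101 samples)
moment = 25*sin(pi*cycle/60) + 2*randn(size(cycle));   % synthetic moment, Nm
momentNorm = moment / mass;                   % Nm/kg
win  = cycle >= 4 & cycle <= 25;              % analysis window from the text
[peakVal, iRel] = max(momentNorm(win));
winIdx = find(win);
fprintf('Peak flexion moment: %.2f Nm/kg at %d%% of the gait cycle\n', ...
        peakVal, cycle(winIdx(iRel)));
```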
moment differences were found (Figure 1). Generally, a slightly higher relationship was found between kinematics and kinetics occurring in the sagittal plane (Figure 1 A, C) than those found in the coronal plane (Figure 1 B, D).
DISCUSSION An opposing internal moment is required to balance the external moment found in a joint. Because the external flexion moment was found to be significantly lower in the reconstructed knee during both running and walking, less internal moment was required to be generated by the knee extensors, possibly straining the reconstructed ACL less. The ACL autograft was harvested from the quadriceps muscle, one of the knee extensors, which may also be a reason for the significant decrease in reconstructed knee external flexion moment. Kinetics that occur in the coronal plane are known to vary with gender, and the reason that no significant differences were found in the coronal plane moments could be due to the fact that this was a mixed-gender study.
No significant correlations were found between kinetic and tibiofemoral kinematic data in this study. However, comparing peak moment data to maximum and minimum kinematic data may not be the best means of analysis. By investigating differences and correlations that occur within a wider range of the gait cycle, and not just the peak kinetic and kinematic values, a more thorough analysis may be conducted.
REFERENCES [1] Andriacchi T., et al. Annals of Biomedical Engineering Vol. 32(3), 2004
ACKNOWLEDGEMENTS I would like to recognize the Swanson School of Engineering, the Office of the Provost, and the Orthopaedic Biodynamics Laboratory for funding this summer research project, Dr. Scott Tashman for being my mentor, Eric Thorhauer for his help in the lab, and Tom Kepple for his help with Visual3D.
Figure 1. Tibiofemoral Kinematic vs Kinetic Differences. (Difference = reconstructed-contralateral limb)
Table 1: Average Peak Kinetic Values During Level Walking Trials (* indicates p<0.05)
Variable                         Non-Reconstructed   Reconstructed   Difference      p-value
Peak Flexion Moment (Nm/kg)*     0.40 (0.27)         0.23 (0.26)     -0.17 (0.34)    0.008
Peak Adduction Moment (Nm/kg)    0.65 (0.27)         0.73 (0.20)     0.09 (0.32)     0.126

Table 2: Average Peak Kinetic Values During Downhill Running Trials (* indicates p<0.05)
Variable                         Non-Reconstructed   Reconstructed   Difference      p-value
Peak Flexion Moment (Nm/kg)*     2.57 (0.52)         1.66 (0.89)     -0.97 (1.03)    <0.001
Peak Adduction Moment (Nm/kg)    1.33 (0.56)         1.34 (0.50)     -0.042 (0.76)   0.686
OPTIMAL RECEPTIVE FIELDS FOR THE CLASSIFICATION OF CONSPECIFIC VOCALIZATIONS Patrick J. Haggerty, Srivatsun Sadagopan Departments of Bioengineering and Otolaryngology University of Pittsburgh, PA, USA Email: pjh49@pitt.edu INTRODUCTION We recognize complex sounds such as speech in a variety of acoustic contexts effortlessly, unmatched by many modern speech recognition systems. The mechanisms by which the brain performs this feat are poorly understood. In the brain, sounds are encoded by neurons along the ascending auditory pathway. These neurons are defined by their receptive fields, which act as filters selective for acoustic stimuli with varying complexity. A processing hierarchy exists within the ascending auditory pathway, allowing for the encoding of simple, single frequency stimuli at lower stages to complex spectrotemporal features at higher stages. Using animal vocalizations as a model for complex sound processing, fMRI studies in macaques and marmosets have shown that this hierarchy might culminate in a region of the brain that is preferentially activated by speciesspecific vocalizations [1, 2]. However, the receptive fields of neurons that underlie this preference for conspecific vocalizations are unknown. The goal of this project was to determine, from a theoretical perspective, receptive field characteristics that support optimal discrimination of complex sounds in real-world acoustic conditions. METHODS We used an information theoretic framework to evaluate the extent to which acoustic features derived from the vocalizations themselves could solve a vocalization categorization task. First, we generated 2000 random vocalization fragments (center frequency 100 Hz<CF<25000 Hz; bandwidth 0.1< BW<5.0; length 0.05 s<T<1 s) from a large database of marmoset vocalizations. We focused on categorizing one particular vocalization type
(twitter) from all other types (phee, trill, tsik, etc.). Fig. 1a displays the spectrogram of a twitter call with sample twitter fragments outlined in black. The 'responses' of each fragment to the training vocalizations were then computed using normalized cross-correlation as a metric. The distribution of responses of one such fragment to 500 twitters and 500 non-twitters is plotted in Fig. 1b. We quantified the categorization performance of the fragments using a mutual information metric (Equation 1).
Equation 1. Information value (merit) of a fragment.
Here, I(f(θ);C) is the mutual information of a fragment at a chosen threshold θ, F is a binary variable indicating the presence or absence of a fragment, and C is a binary variable indicating whether a given vocalization was within-class or outside-class. For each fragment, we designated an optimal threshold (θopt; dashed line in Fig. 1b), the response value at which each fragment exhibited maximal mutual information with respect to the classification task (Fig. 1c). The merit of each fragment was defined as the mutual information value at this optimal threshold. Because fragments were chosen randomly, many fragments had similar spectrotemporal characteristics, resulting in high redundancy of information. To minimize this redundancy, we implemented a minimax greedy-search algorithm to generate a set of the most self-dissimilar and most informative fragments (MIFs) that provided the maximal added pair-
wise joint mutual information. This pair-wise joint mutual information was computed using an equation similar to Equation 1 [1]. All computations were performed in MATLAB. RESULTS Using a set of only 6 MIFs (Fig. 1a), we obtained a categorization performance of ~98% with only a 2% false-alarm rate. We expected that fragments of intermediate size would carry the most merit: because they lack sufficient specificity, small fragments would be detected both within-class and outside-class, whereas large fragments would not be expected to generalize well across all within-class exemplars. In Figs. 1d and 1e, we plot the merits of the top 10 fragments in each bandwidth or length bin as a function of their bandwidths and lengths, respectively. These data were better fit by convex quadratics (using adjusted R² as a metric; p < 10^-5 for all fits) than by lines. The applied quadratic fits to both fragment length and bandwidth support the hypothesis that the best fragments for categorization were of intermediate spectrotemporal size.
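Equation 1 corresponds to the mutual information between fragment detection and class membership, maximized over the detection threshold. The sketch below implements that calculation generically; the Gaussian response distributions are placeholders rather than actual normalized cross-correlation values:

```matlab
% Illustrative fragment merit: mutual information between detection (response
% above threshold) and class label, maximized over the detection threshold.
rng(1);
respIn  = 0.5 + 0.2*randn(500, 1);         % placeholder responses to twitters
respOut = 0.3 + 0.2*randn(500, 1);         % placeholder responses to non-twitters
resp    = [respIn; respOut];
C       = [true(500, 1); false(500, 1)];   % within-class (twitter) labels
thetas  = linspace(min(resp), max(resp), 200);
I = zeros(size(thetas));
for k = 1:numel(thetas)
    F = resp > thetas(k);                  % fragment "detected" at this threshold
    mi = 0;
    for f = 0:1                            % sum over the four joint outcomes
        for c = 0:1
            pfc = mean(F == f & C == c);
            if pfc > 0
                mi = mi + pfc * log2(pfc / (mean(F == f) * mean(C == c)));
            end
        end
    end
    I(k) = mi;
end
[merit, iBest] = max(I);
fprintf('Merit = %.3f bits at threshold %.2f\n', merit, thetas(iBest));
```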
the best features for vocalization classification were of intermediate size. These findings are similar to results from visual processing: for example, Ullman et al. showed that visual features of intermediate complexity were optimal for visual tasks such as classification [3]. Our data thus suggest that features of intermediate size and complexity are optimal for classification tasks in multiple sensory domains. We are currently in the process of testing how robust model performance is to perturbations in acoustic environments. We are also using this approach to evaluate the performance of receptive fields that have been encountered in experiments. Finally, similar approaches for the categorization of human speech sounds will be explored. REFERENCES 1. Petkov CL et al. A voice region in the monkey brain. Nature Neuroscience 3: 367-74, 2008. 2. Sadagopan S, Temiz NZ, Voss HU. High-field functional magnetic resonance imaging of vocalization processing in marmosets. Nature Neuroscience 5, 2015. 3. Ullman S, Vidal-Naquet M, and Sali E. Visual features of intermediate complexity and their use in classification. Nature Neuroscience 5: 682-687, 2002. ACKNOWLEDGMENTS Funding was provided by the Swanson School of Engineering and the Office of the Provost.
Figure 1. Selection and analysis of most informative fragments.
NOVEL IN-VITRO BIOMIMETIC MINERALIZED MATRIX MODEL FOR STUDYING BREAST CANCER METASTASIS
Tatyana Yatsenko1, Akhil Patel2, Yingfei Xue2, and Shilpa Sant1,2,3
1Departments of Bioengineering and 2Pharmaceutical Sciences, and the 3McGowan Institute for Regenerative Medicine, University of Pittsburgh, PA, USA
Email: tay23@pitt.edu
INTRODUCTION Breast cancer (BrCa) is the leading cancer found in women in the United States, with over 230,000 aggressive and 62,000 nonaggressive cases diagnosed annually in women, and 1% that number in men [1]. Alarmingly, 1 in 8 women develops aggressive breast cancer within her lifetime [1]. Breast cancer metastasis to bone represents the highest number (70%) of deaths related to the cancer and is one of the most difficult diseases to treat, as the cancer contributes to bone resorption, fracture, and severe skeletal pain [1]. Currently, most cancer treatments are feeble in treating metastasized BrCa in bone, and rather target the issue of bone de-mineralization, which is mediated by osteomimetic breast cancer cells [2]. Interestingly, ~88% of bone metastases are estrogen-receptor positive (ER+) lesions [2]. It has been suggested that the microenvironment of bone encourages ER+ BrCa survival and reverse epithelial-to-mesenchymal transition (r-EMT), thus allowing metastatic ER+ BrCa to preferentially relocate to bone [2]. However, some clinical research has suggested that the route to bone invasion by BrCa begins in the primary tumor site, prior to metastasis, in lesions that contain malignant microcalcifications [3]. Microcalcifications (MCs) are a common phenomenon found in breast tissue on mammograms and help diagnose ~50% of non-palpable, asymptomatic BrCa [4]. MCs are representative of milk duct calcifications, fibrocystic changes, surgical scarring, and cancer, only the last of which is malignant. Benign MCs differ from malignant ones in composition and structure: malignant MCs have been found to contain only hydroxyapatite crystals (HA), with a lower percentage of carbonate substitution, and tend to be smaller (~200 µm in diameter) than benign MCs [5]. The role that malignant MCs play in priming BrCa for metastasis is unknown, but one clinical study has found that cells in lesions surrounding a malignant MC showed nuclear B-catenin translocation, and cytokeratin and Vimentin colocalization, characteristic of EMT, the first step in metastasis [3]. Furthermore, the cells were already expressing bone-mimicking surface markers like bone sialoprotein (BSP), osteopontin (OPN), and bone morphogenetic protein-2.
Currently, BrCa to bone metastasis is being studied through many 3D engineered models, but none have studied the microenvironmental interactions between metastatic and non-metastatic BrCa and minerals, especially in the context of MCs at the primary tumor site [6]. In fact, the role of MCs in the breast remains elusive. Rather, the focus of previous research has been to determine if and how different types of BrCa can self-mineralize in-vitro and what molecular changes the process stimulates [7]. We hypothesize that mineralization in the primary site primes ER+ breast cancer to metastasize to bone. To study this, we have developed an in-vitro biomimetic mineralized matrix model that resembles the mineral microenvironment of malignant MCs in breast tissue. METHODS All cell culture supplies were obtained from Corning, unless otherwise mentioned. All other reagents were purchased from Sigma-Aldrich. Chitosan/Gellan Gum (Cht/GG) aligned hydrogels were fabricated using a microfluidic system in our lab previously. These hydrogels mimic the native self-assembly of collagen at the nano- and micro-scales. More importantly, the hydrogels promote biomineralization when incubated in simulated body fluid (SBF), a well-described biomineralization technique [8]. For this study, 0.25 mm² scaffold cuts were sterilized with ethanol under UV light and incubated for 9 days in SBF or 2 hours in deionized water. These time points were chosen for maximum hydration and sufficient mineral deposition to occur. Calcium staining for visualizing the mineralization was completed using a 20% w/v Alizarin Red aqueous solution; 1 mL of the solution was used per scaffold, and stained for 1 min. MCF7 (ER+/PR+/HER2- nonaggressive BrCa) or MDA-MB-231 (triple negative, more aggressive line) cells were seeded on mineralized and hydrated scaffolds at passages 3-15 at a concentration of 45,000 cells/scaffold. The hydrogels were incubated at 37 °C for up to 8 days with 5% CO2 and in a culture medium composed of DMEM containing 10% fetal bovine serum and 1% penicillin/streptomycin solution. The medium was changed every 48 hours. Light microscopy images were taken every 24 hours using a Zen® system at 10x magnification. Scaffolds were fixed with 4%
paraformaldehyde (30 min) at days 2, 5, and 8, and subsequently stained for E-cadherin, an epithelial membranous marker (rabbit polyclonal), and Vimentin, a mesenchymal cytoplasmic marker (mouse monoclonal) overnight. Secondary staining (AlexaF488 donkey anti-rabbit and AlexaF594 goat anti-mouse) was completed for 1 hour, immediately followed by nuclei staining (Nuc-Blue® ReadyProbe, Life Technologies, according to manufacturer's instructions). Confocal images were obtained using an inverted confocal laser scanning microscope (Olympus Fluoview 1000) under 20X and 40X objectives. RESULTS Cht/GG scaffolds were successfully able to mineralize in SBF, and Alizarin Red staining confirmed the presence of small minerals on the surface and inside the scaffold. FTIR analysis confirmed that the minerals were crystalline and amorphous HA. Light microscopy images over a period of 8 days reveal that MCF7 cells formed significantly smaller colonies and tended to migrate and spread more in mineralized scaffolds. Large microtumors did not exist in mineralized scaffolds, while they were prevalent in hydrated ones (Figure 1).
Figure 1. Large microtumors are found in hydrated scaffolds (left) versus smaller, spread-out colonies in the mineralized scaffolds (right) on day 5 of the study. Scale bar = 200 µm.
Figure 2. Large microtumors are found in hydrated scaffolds, and clear membranous E-cadherin staining is seen at day 5. Cells in the mineralized scaffold show cytoplasmic E-cadherin staining and a clear gain of Vimentin. Some cells acquired a mesenchymal-like phenotype, indicated by the arrow. Scale bar = 50 µm.
Confocal imaging showed that MCF7 cells in the mineralized scaffolds gained Vimentin and either lost E-cadherin or experienced its translocation into the nuclei and cytoplasm (Figure 2). MDA-MB-231 cells showed no differences in colonization or staining between mineralized and hydrated scaffolds (results not shown). DISCUSSION The clear translocation of E-cadherin from the membrane to the cytoplasm and the gain of Vimentin in cells in the mineralized scaffolds point to EMT occurring during the study. The observed increase in spreading and migration and the smaller colony formation further signify that the HA minerals in our model contribute to MCF7 cells becoming more aggressive and metastatic, supporting reports that malignant mineralization leads to tumor cell necrosis and more aggressive cancers. Further studies will better characterize the epithelial-to-mesenchymal transition by staining for β-catenin and cytokeratin and performing MMP-2 and -9 quantification assays. Osteo-mimicry will be assayed by quantifying collagen-1 and alkaline phosphatase secretion, and OPN and BSP upregulation. REFERENCES 1. "Breast Cancer." American Cancer Society, 2015. Website, accessed 04/10/2015. http://www.cancer.org/cancer/breastcancer/detailedguide/breast-cancer-survival-by-stage 2. B. Wei. "Bone metastasis is strongly associated with estrogen receptor-positive/progesterone receptor-negative breast carcinomas." Human Pathology. 2008. Vol 39, 12. pp 1809-1815. 3. M. Scimeca. "Microcalcifications in breast cancer: an active phenomenon mediated by epithelial cells with mesenchymal characteristics." BMC Cancer. 2014. Vol 14, 1. 4. E. Sickles. "Breast calcifications: mammographic evaluation." Radiology. 1986. Vol 160, 2. pp 289-293. 5. R. Baker. "New relationships between breast microcalcifications and cancer." Br J Cancer. 2010. Vol 103, 7. pp 1034-1039. 6. A. Taubenberger. "In vitro microenvironments to study breast cancer bone colonization." Advanced Drug Delivery Reviews. 2014. Vol 79-80. pp 135-144.
STUDYING THE APPLICATION OF SYNTHETIC, PAPER-BASED SENSORS FOR BIOLOGICAL TESTING Robert Donahoe, Alexander Szul, Konstantin Borisov, Garrett Green, Apurva Patil International Genetically Engineered Machine, Department of Bioengineering University of Pittsburgh, PA, USA Email: rjd44@pitt.edu, Web: http://2015.igem.org/Team:Pitt INTRODUCTION In recent years, significant advances have been made in transcription/translation systems based on cell-free extracts [1, 2]. In particular, it has been observed that freeze-drying the extracts on paper allows for long-term storage and simple transportation while retaining the functionality of the extract once rehydrated [3]. This study aims to develop the ability to use these freeze-dried extracts as cheap diagnostic sensors for a variety of molecules. The first system relies on transcriptional activation of a synthetic estrogen-sensitive T7 RNA polymerase (RNAP). A second system uses a synthetic repressor that can be cleaved by specific proteases, which allows for detection of matrix metalloprotease 2 (MMP2) and MMP9, which are cancer biomarkers in urine. One major problem with the system, as examined by Pardee, was that the green fluorescent protein (GFP) produced background fluorescence that can cause a test to appear as a false positive [3]. One way around this is to use a decoy to suppress background GFP expression. T3 and T7 polymerase decoys can take the place of the actual T3 and T7 RNAP on the GFP construct; because the decoys do not drive transcription, no background GFP is produced. METHODS The study used a modified S30 protocol [1] to create extracts sensitive to the analytes in question. For the estrogen-sensitive extracts, the modified T7 RNAP was obtained as a plasmid from Cheryl Telmer at Carnegie Mellon University. The construct was grown in DH5α cells, which were then used to create the sensor extract. The extract was then incubated with a plasmid containing a T7 promoter, followed by a region encoding mRFP1, a red fluorescent
protein. The extracts were tested with varying concentrations of estrogen, which allows for the determination of a limit of detection for estrogen using the sensor extract. A similar method was used for the protease detection system, except the constructs were grown in NiCo21 (DE3) cells, which have deficiencies in several E. coli proteases. The T3 and T7 decoys were purchased from Integrated DNA Technologies, Inc. (IDT). The level of GFP production had to be measured using different concentrations of T3 and T7 decoys so that the appropriate amount of decoy polymerase could be used to eliminate background fluorescence while still allowing the system to produce measurable amounts of GFP. In addition to measuring varying amounts of T3 and T7 RNAP decoy, varying concentrations of GFP had to be measured, so that the NiCo21 extract could have an appropriate balance of GFP and T3/T7 decoy to suppress as much background fluorescence as possible while still promoting GFP fluorescence in response to the presence of estrogen or MMP2 and MMP9. RESULTS Currently, the project is focusing on perfecting the modified S30 protocol to achieve maximum transcription and translation in vitro, perfecting the concentration of GFP in the NiCo21 extract, and perfecting the amount of T3 and T7 RNAP decoy to eliminate as much background fluorescence as possible. We currently have isolated the plasmids for the estrogen-sensitive sensor, as well as positive controls for the system. In addition, we have grown the plasmids for the MMP2-sensitive repressor, and are sequencing plasmids for the
MMP9-sensitive repressor. Figure 1 shows the fluorescence fold change of the S30 extract compared to the background signal without estrogen and estrogen-driven green fluorescent protein (GFP).
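The fold-change values plotted in Figure 1 amount to dividing each fluorescence trace by the trace of the extract without estrogen. A minimal Python sketch of that calculation is shown below; the file names, CSV layout, and function names are illustrative assumptions rather than part of the team's actual analysis pipeline.

    import csv

    def read_rfu(path):
        """Read a headerless two-column CSV of (time_min, RFU) exported from the plate reader."""
        times, rfu = [], []
        with open(path) as fh:
            for row in csv.reader(fh):
                times.append(float(row[0]))
                rfu.append(float(row[1]))
        return times, rfu

    def fold_change(sample_rfu, background_rfu):
        """Point-wise fold change of a sample trace over the no-estrogen background trace."""
        return [s / b if b > 0 else float("nan")
                for s, b in zip(sample_rfu, background_rfu)]

    # Hypothetical exported traces: extract + GFP plasmid + estrogen vs. extract alone.
    _, signal = read_rfu("ert7_gfp_estrogen.csv")
    _, background = read_rfu("ert7_background.csv")
    print(fold_change(signal, background))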
[Figure: GFP relative fluorescence (RFU) versus time (min), 0-70 min, for the DH5Alpha, ERT7, ERT7 + GFP, and ERT7 + GFP + Est. extracts.]
Figure 1: The data show the fluorescence fold change of the S30 extract in the presence of estrogen-driven GFP and estrogen.
To measure varying amounts of GFP expression, samples of 100 ng of GFP and samples of 250 ng of GFP were placed into a Tecan fluorometer system and measured over a period of 150 minutes. Figure 2 shows the drastic difference in relative fluorescence units (RFU) between the 100 ng and 250 ng amounts of GFP.
Figure 3: The data shows the fluorescence fold change of T7-driven GFP in the presence of varying concentrations of T7 RNAP decoy.
DISCUSSION The S30 extract protocol needs to be modified to work for the sensor extracts, as the original protocol shows no significant differences between positive and negative controls. REFERENCES 1. Kigawa, T. Journal of Structural and Functional Genomics. 2004. Vol 5. 63-68. 2. Kim, D.M. Biotechnol. Prog. 2000. Vol 16. 385-390. 3. Pardee, K. Cell. 2014. Vol 159. 940-954.
Figure 2: The data show the change in GFP fluorescence fold change over time for different concentrations of protein.
The T3 and T7 decoys were measured using the same Tecan fluorometer used to measure the GFP expression. Figure 3 shows the GFP fluorescence produced from varying amounts of T7 decoy. The maximum amount of T7 decoy, 100 ng, silenced the GFP fluorescence. A solution without any DNA was used as a negative control to show the minimum amount of fluorescence, while the T3 decoy served as a positive control since the GFP in this extract did not require any T3 RNAP.
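Selecting the decoy amount described above is effectively a small optimization over the measured traces: for each decoy quantity, compare the background fluorescence (no analyte) to the induced signal and keep the quantity with the best induced-to-background ratio that still yields a measurable signal. A sketch of that selection step follows; the endpoint RFU numbers are invented purely for illustration.

    # Invented endpoint RFU readings for each amount of T7 decoy (ng):
    # background = extract with decoy but no analyte; induced = extract with analyte.
    measurements = {
        0:   {"background": 9000, "induced": 12000},
        25:  {"background": 4000, "induced": 10000},
        50:  {"background": 1500, "induced": 7000},
        100: {"background": 200,  "induced": 600},
    }

    def best_decoy_amount(data, min_induced=1000):
        """Pick the decoy amount maximizing induced/background while keeping a usable signal."""
        usable = {ng: m for ng, m in data.items() if m["induced"] >= min_induced}
        return max(usable, key=lambda ng: usable[ng]["induced"] / usable[ng]["background"])

    print(best_decoy_amount(measurements))  # prints 50 for these invented numbers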
ACKNOWLEDGEMENTS Estrogen and estrogen sensitive plasmids were provided by Cheryl Telmer and the Carnegie Mellon University iGEM team. Partial funding was provided by the University of Pittsburgh, Swanson School of Engineering. DNA constructs were provided by Integrated DNA Technologies, Inc. Additional DNA constructs were provided by the International Genetically Engineered Machine. Funding was provided by Pittsburgh Life Sciences Green House. Lab space was provided by Dr. Hanna Salman.
MAPPING THE EXTRACELLULAR MATRIX: AN AUTOMATED ANALYSIS OF THE STRIATAL DISTRIBUTION OF THROMBOSPONDIN USING CELLPROFILER Jessie R. Liu, Michel Modo Regenerative Imaging Laboratory, McGowan Institute for Regenerative Medicine University of Pittsburgh, PA, USA Email: jrl99@pitt.edu INTRODUCTION The extracellular matrix (ECM) is a functionally important, yet a commonly overlooked complex structure of the brain. The ECM, comprising 20% of the neural tissue volume, is responsible for tissue scaffolding as well as juxtacrine signaling. This structure is greatly disrupted in neurological diseases alongside tissue damage or loss. In order to understand this loss, it is necessary to consider the normal composition of the ECM. Here, the distribution of the ECM molecule thrombospondin is analyzed anterior to posterior in the striatum in conjunction with a customized pipeline for high throughput image analysis using CellProfiler [1]. Thrombospondin, a glycoprotein produced by astrocytes, participates in several crucial cell signaling events, including the development of synapses and cell migration [2, 3]. CellProfiler is used to quantify its co-localization with two major cell types, neurons and astrocytes, in the normal striatum. METHODS Male Sprague-Dawley rats (Taconic Labs) were perfused transcardially at 12 weeks old with 4% paraformaldehyde prior to 30% sucrose cryoprotection. Fixed brains were sectioned at 50 µm thickness on a cryostat (Leica). For immunohistochemistry, sections were washed 3x for 5 minutes each in PBS before overnight (~15 hours) incubation at 4 °C with primary antibodies, Fox3 for neurons (1:500), GFAP for astrocytes (1:3000), and Thrombospondin (1:100), diluted in PBS/0.5% Triton-X100. Following overnight incubation, primary antibodies were removed and sections were washed 3x with PBS before application of corresponding AlexaFluor secondary antibodies (1:500 in PBS) for 1 hour at room temperature. Secondary antibodies were removed and sections were washed 3x with PBS before
counterstaining with the nuclear marker Hoechst 33342 (1 µg/ml in PBS) for 5 minutes followed by a final 3 washes with PBS. Sections were then coverslipped with Vectashield mounting medium. Images were acquired using an AxioImager M2 microscope (Zeiss) (20x objective) in conjunction with Stereo Investigator software (MBF). For CellProfiler analysis, 15 images were taken per section along the anterior-posterior axis (Figure 1A) unless the size of the striatum at that distance from Bregma was too small. In CellProfiler, images were converted to grayscale and background was removed using an adaptive automatic threshold. To identify nuclear and neuronal populations, a single module was applied to identify the nuclear stains of Hoechst and Fox3 as objects. To identify astrocyte populations, a module was applied to pass nuclei objects through a binary version of the cleaned GFAP image, accepting objects that met a specified fractional overlap with the binary image; accepted objects were identified as astrocyte nuclei. To identify the populations of co-localized neurons and astrocytes, a module was applied in the same manner, with a binary version of the processed thrombospondin image serving as the mask, while previously identified neuronal and astrocyte nuclei objects were passed through the mask according to a specified fractional overlap. All module settings were tested for accuracy across 10 images and were not accepted for the final pipeline unless Pearson's r was greater than 0.9. Counts for all identified populations were collected and the percentages of neurons and astrocytes co-localizing with thrombospondin were calculated. Averages were taken per section per animal and replicates were graphed using GraphPad Prism 6.
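The CellProfiler pipeline itself is configured through module settings rather than written as code, but the counting logic it implements can be approximated in a few lines of Python with scikit-image. The sketch below is only an illustration of that logic: the Otsu threshold stands in for the pipeline's adaptive automatic threshold, the 0.5 fractional-overlap cutoff is an assumed value, and the random arrays stand in for the real Fox3 and thrombospondin channels.

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def objects_from(channel):
        """Threshold a grayscale channel and label connected objects
        (stand-in for the identify-objects step of the pipeline)."""
        return label(channel > threshold_otsu(channel))

    def count_overlapping(objects, mask, min_fraction=0.5):
        """Count labeled objects whose fractional overlap with a binary mask meets the cutoff."""
        hits = 0
        for region in regionprops(objects):
            rows, cols = region.coords[:, 0], region.coords[:, 1]
            if mask[rows, cols].mean() >= min_fraction:
                hits += 1
        return hits

    # Hypothetical stand-ins for the Fox3 (neuron) and thrombospondin channels.
    fox3 = np.random.rand(512, 512)
    tsp = np.random.rand(512, 512)
    tsp_mask = tsp > threshold_otsu(tsp)

    neurons = objects_from(fox3)
    pct = 100.0 * count_overlapping(neurons, tsp_mask) / max(int(neurons.max()), 1)
    print(f"{pct:.1f}% of neuron objects co-localize with TSP")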
RESULTS Qualitatively, thrombospondin appears evenly distributed throughout the striatum and along the anterior-posterior axis (Figure 1B). Thrombospondin presents as cytoplasmic and co-localized with both neurons and astrocytes (Figure 1C). However, it is visually clear that thrombospondin co-localizes mostly with neurons. Automated analysis through CellProfiler and two-way ANOVA testing indicated that neurons co-localize with thrombospondin significantly more (p<0.0001) than astrocytes in the striatum (Figure 1D). On average, anterior to posterior, 94% of neurons co-localize, while only 47% of astrocytes co-localize with thrombospondin. Astrocytes also demonstrate more variability in thrombospondin co-localization, which may be attributed to the increased difficulty in identifying astrocytes as compared to neurons. Despite this, neither neuron nor astrocyte co-localization with thrombospondin demonstrates a significant pattern anterior to posterior. DISCUSSION The percentage of neurons and astrocytes associating with thrombospondin indeed reflects the functional roles of thrombospondin. As thrombospondin plays a role in crucial developmental events and is produced by astrocytes [2, 3], it was expected that there would be co-localization with both neurons and astrocytes. The difference in the amount of co-localization, which was not significantly different anterior to posterior, is what is most interesting. While most, if not all, neurons co-localize with thrombospondin, the percentage of astrocytes that co-localize is dramatically lower.
This may suggest that thrombospondin plays a more functional role in neuronal events than in those of astrocytes. However, it may also be that neurons co-localizing with thrombospondin are participating in neuron-to-glia signaling. Additionally, it is possible that some of the astrocytes co-localizing with thrombospondin were only in the process of synthesizing the glycoprotein and may not regularly associate with it. More detailed immunohistochemical analysis is needed to give insight into why not all astrocytes co-localize with thrombospondin and how they may differ from their co-localizing counterparts. Further investigation of other ECM molecules by high throughput image analysis can help elucidate the composition of the normal striatum and give insight into which molecules seem most important to consider in regenerative therapies. REFERENCES 1. Lamprecht, M.R. et al. Biotechniques 42.1, 71-75, 2007. 2. Dityatev, A. et al. Trends Neurosci 33.11, 503-512, 2010. 3. Roll, L. and A. Faissner. Front Cell Neurosci 8, 219, 2014. ACKNOWLEDGEMENTS This research was partially funded by the National Institute of Neurological Disorders and Stroke (R01NS08226). Jessie R. Liu was supported by the Department of Radiology, the Swanson School of Engineering, and the Office of the Provost of the University of Pittsburgh.
Figure 1: A. Histological overview of one hemisphere (~0.00 mm from Bregma) showing approximate fields of view taken of the striatum. B. An example of an image taken with a 20x objective of the striatum, whose individual channels were processed by CellProfiler. Scale bar represents 100 µm. C. Example of an astrocyte and a neuron correctly co-localized with thrombospondin. D. Graph of the percentages of neuron and astrocyte populations that co-localize with thrombospondin (TSP), anterior to posterior. Averages were taken across images per section (noted here by distance from Bregma) per animal and error bars represent the standard deviation.
ROBOT DANCING ALONG A SONG: BEAT AND RHYTHM DETECTION IN SOCIAL ROBOTS Sean Michael Brady Department of Electrical Engineering University of Pittsburgh, PA, USA Email: smb166@pitt.edu INTRODUCTION Recently, robots have progressed to interact with humans in more lifelike ways. However, robots still lack the necessary sensory skills to appreciate artistic content, such as a painting or a song. One way in which humans appreciate music is by dancing. This project establishes a robust way to gather enough information about a song to decide how a robot should dance to it. It also explores a framework for how a robot could choose to dance so as to give the illusion of aperiodic, free-form dancing, as opposed to more traditionally seen robotic movements. This project's exploration of beat detection is based on a study of beat detection algorithms by Frederic Patin [1]. The intended application of this research is a robotic teddy bear with many different functions. This teddy bear is known as Baloo in the Social Robotics Laboratory at NUS. Prior to this research, a primitive dancing algorithm was designed by Mingming Li, which allowed the teddy bear to mime a clapping action based on breaks in the audio. This proved to work relatively well on some tracks, especially pre-examined a cappella tracks which had silence at regular intervals. METHODS There are many different languages used for signal processing. MATLAB, for example, has a library for digital signal processing, which is a good fit for real-time audio analysis. Using the Fast Fourier Transform function in MATLAB, individual frequency bands can be monitored for changes in values. Python, on the other hand, has a tool with limited documentation and functionality called aubio. This tool is also available in C++, and while Python may not be used much for signal processing, it is common enough in the scientific community that its beat detection capability was explored. Using the aubio library's demos, it can be seen that aubio already features a kind of beat detection algorithm.
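For reference, aubio's bundled tempo demo reduces to a short loop over audio frames; the sketch below follows that demo, with the file name and analysis window sizes chosen arbitrarily for illustration.

    from aubio import source, tempo

    win_s, hop_s, samplerate = 512, 256, 44100       # illustrative window sizes
    s = source("song.wav", samplerate, hop_s)        # hypothetical input file
    o = tempo("default", win_s, hop_s, s.samplerate)

    beat_times = []
    while True:
        samples, read = s()            # read the next hop of audio samples
        if o(samples):                 # true when a beat is detected in this frame
            beat_times.append(o.get_last_s())
        if read < hop_s:               # end of file
            break
    print(beat_times)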
Beatroot is a system for beat tracking and visualization of wav files. With the first edition written in 2006 and frequent updates since, Beatroot has proven to be a reliable system for tracking beats in a large number of songs. First, its algorithm seems to be fairly reliable regardless of the genre of music given to it. Second, the output files are easy to manipulate and use, as the information within them is in the format preferred for the ROS functionality: namely, beat times in seconds. The output of Beatroot gives the time at which each beat will occur, measured from the beginning of the song, so the robot can decide before a beat what it will do in terms of movement. Beatroot also seems immune to a problem that plagues aubio's beat detection, in which tracking is less accurate at the beginning of a song than later on. That is because Beatroot does not attempt to work in real time, but instead can retroactively decide what is most likely for the beat at the beginning, given times when the beat was perhaps easier to find later in the song. The program uses a tiered system to determine the necessary course of action at each time. First, the user must decide which song they would like the bear to dance to, and this is specified by giving the song's name. The chosen song will either be in the system's bank of children's songs or not, but functionality could be implemented to download a missing song and mitigate this limitation. Alternatively, a song could be played for the system and the open-source fingerprinting software from MusicBrainz could be used to determine the song title. This was tested and found to be somewhat useful for common pop songs. However, given the importance of a cappella tracks, this was deemed unnecessary for the project.
The program will then convert the chosen song from mp3 to wav (the format Beatroot uses) if necessary. This is done using a program called Sound eXchange (SoX), a cross-platform command-line utility for converting audio between formats. The next step is to analyze the song using Beatroot. After this detection, the times of each of the beats are known to the system. RHYTHM ANALYSIS The energy contained within the audio signal is assumed to carry substantial information about what kind of song is being listened to and which part of the song is currently playing. For example, in a rock song, the overall energy of the signal contained between two beats could be noticeably higher compared to the previous four or eight beats. This could be taken as an indication that the chorus has started, and this information could be used to decide on a new direction for the robot's dancing. On a shorter time frame, the power of one beat could be compared to the previous beat. If one beat is higher than the last, and this pattern repeats throughout a large enough section, the robot might reasonably assume that the song is in duple meter, with similar reasoning used to identify triple meter. This would also naturally find the strongest beat in that span, which is important information that could be used to decide how the robot chooses to dance. Using the pyo library, in conjunction with the beat times provided by Beatroot, the strength of the signal around each beat can be determined; the amount of time around the beat used to compute the strength can be decided song by song. Once the strength of each beat is determined, the meter can be extracted by looking for patterns in these values. Another useful value is derived from the interval between beats. It is determined in a similar fashion and rounded to an integer between one and ten relative to the highest value found. If the value for a beat is low enough, the robot will decline to move on that beat; if it is higher, the robot will make more rapid movements. This mirrors how a human would react to the energy of the song increasing or decreasing.
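The per-beat energy comparison described above reduces to slicing the decoded audio between consecutive Beatroot beat times, summing squared samples, and scaling the result to the one-to-ten range. The sketch below uses numpy instead of pyo, and the variable names and example beat times are assumptions made for illustration.

    import numpy as np

    def beat_energies(samples, samplerate, beat_times):
        """Sum of squared samples between consecutive beat times (in seconds)."""
        energies = []
        for t0, t1 in zip(beat_times[:-1], beat_times[1:]):
            segment = samples[int(t0 * samplerate):int(t1 * samplerate)]
            energies.append(float(np.sum(segment ** 2)))
        return energies

    def scale_one_to_ten(energies):
        """Round each beat's energy to an integer 1-10 relative to the loudest beat."""
        peak = max(energies, default=1.0) or 1.0
        return [max(1, int(round(10 * e / peak))) for e in energies]

    # Illustrative inputs: mono samples decoded from the wav file and Beatroot beat times.
    samplerate = 44100
    samples = np.random.randn(samplerate * 5)
    beat_times = [0.5, 1.0, 1.5, 2.0, 2.5]
    print(scale_one_to_ten(beat_energies(samples, samplerate, beat_times)))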
RESULTS Overall, attempts to create a beat detection algorithm from scratch proved largely unnecessary, as Beatroot determined the important times in any given song more reliably than those attempts did. The output of Beatroot is a useful tool for this application.
Figure 1: Representation of the song "Row Row Row Your Boat" visualized by the program Beatroot.
Figure 1 shows Beatroot's visualization of the times found to correlate with a uniform beat within a given song. Although the most important songs considered were a cappella, and Beatroot performs much better with instrumentation, the application still works better than any of the others tested. DISCUSSION The work represented by this paper lays the groundwork for further development of an algorithm for a robot to dance along to a song. Since beat detection is absolutely necessary for dancing, this is a pivotal step toward allowing a social robot to dance. The next step would be using the information provided by these algorithms to control the servo motors within a robot and determine movements which are natural, to give the illusion that the robot is fully enjoying the music. REFERENCES 1. F. Patin. Beat Detection Algorithms. 2003.
ACKNOWLEDGEMENTS This research was performed at NUS’ Social Robotics Lab under the supervision and with the help of Mingming Li and Dr. Sam Ge. This award was funded by the Swanson School of Engineering and the Office of the Provost.
ENHANCING HEPATOCYTE FUNCTION USING LIVER EXTRACELLULAR MATRIX DERIVED FROM VARIOUS SPECIES
Abigail E. Loneker (a,b), Denver M. Faulk (a), and Stephen F. Badylak (a,b,c)
(a) McGowan Institute for Regenerative Medicine, (b) Department of Bioengineering, (c) Department of Surgery
University of Pittsburgh, PA, USA
INTRODUCTION Whole organ engineering and other regenerative medicine approaches are being investigated as potential therapeutic options for patients with end-stage liver failure. These approaches aim to be curative by replacing the diseased liver with functional tissue constructs. A major challenge of these strategies is the loss of hepatic-specific function after parenchymal cells are removed from their native microenvironment and then seeded upon or within an engineered scaffold. The identification of a soluble or solid substrate that can maintain functionally differentiated hepatocytes during in-vitro culture would significantly advance these therapeutic strategies. Previous work has shown that hepatocytes and hepatic sinusoidal endothelial cells better maintain their phenotype and functionality when they are cultured on liver-specific extracellular matrix as compared to other substrates (1, 2). It is plausible that enzymatically digested ECM derived from the liver itself might be a useful media supplement for hepatocyte culture. Although matrix derived from rat liver might theoretically be superior to that from other species for the maintenance of rat hepatocyte function, we compared liver ECM from various species using primary rat hepatocytes to determine if there are species-specific advantages. The objective of the present study was to compare the ability of porcine, human, rat, and canine liver ECM to support the following primary rat hepatocyte (PRH)-specific functions in vitro: albumin secretion, ammonia metabolism, and normal hepatocyte morphology. Biochemical and mechanical analyses of these species of liver ECM (LECM) were also conducted to more fully characterize species-specific differences. METHODS Liver extracellular matrix was isolated from porcine, canine, human, and rat livers. Each liver tissue was subjected to the same decellularization
protocol, which consisted of comminuting the liver into pieces of approximately 0.5 cm², exposing the tissue to trypsin/EGTA, inducing cell lysis with the detergent Triton X-100, rinsing with phosphate buffered saline and deionized water, and disinfecting with peracetic acid. The resulting liver ECM from each species was lyophilized, powdered, and pepsin digested. Hydrogels from each group were formed and their rheological properties were quantified. The ability of each liver ECM to support hepatocyte phenotype in-vitro was determined. Primary rat hepatocytes (PRHs) were isolated. Cells were seeded on 6-well rat tail type I collagen coated plates and allowed to culture for one day in hepatocyte basal media. After one day, media was changed with the addition of 50 µg/mL of human LECM, porcine LECM, canine LECM, or rat LECM. Unspiked media and pepsin controls, as well as UBM and SIS groups (50 µg/mL), were included, the latter two as heterologous ECM controls. Conditioned media from each well was collected on days 2, 5, and 7 and tested for albumin production with a commercially available kit. Images were taken on days 2, 5, and 7 to compare cell morphology. PRH metabolism of ammonia to urea was measured on day 7 of culture. The media was aspirated and PRHs were incubated with 2.5 mM NH4Cl in hepatocyte basal media for two hours at 37 °C. The conditioned media was collected and the concentration of urea was measured. RESULTS Preliminary results indicate significant differences in the biochemical and biomechanical properties of the various species of liver ECM. Rheological testing indicated differences in the gelation properties of hydrogels derived from different species. Results from the initial creep test show porcine and canine LECM pre-gel to be more viscous than rat or human. The time sweep test indicated that porcine LECM had a significantly higher storage modulus (G') than all other groups, indicating greater gel stiffness. Rat LECM gelled more quickly than all
other species, while the human, canine, and porcine LECM all reached fifty percent gelation in a similar time frame (Figure 1).
Figure 1: Rheological properties of human, porcine, canine, and rat liver ECM hydrogels at a concentration of 8 mg/mL. Pre-gel viscosity (A), representative time sweep (B), time to 50% gelation (C), and maximum storage modulus (D).
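The time-to-50%-gelation values summarized in Figure 1C can be recovered from a storage-modulus time sweep by normalizing G' between its initial and plateau values and interpolating where the normalized curve crosses 0.5, a common way to reduce such sweeps. A brief Python sketch of that calculation follows; the numbers are illustrative, not measured values from this study.

    import numpy as np

    def time_to_half_gelation(time_min, g_prime):
        """Interpolate the time at which normalized G' first reaches 50% of its plateau."""
        g = np.asarray(g_prime, dtype=float)
        normalized = (g - g.min()) / (g.max() - g.min())
        return float(np.interp(0.5, normalized, time_min))

    # Illustrative time sweep: time in minutes, storage modulus G' in Pa.
    time_min = np.array([0, 5, 10, 15, 20, 30, 45, 60])
    g_prime = np.array([1, 2, 8, 40, 120, 240, 300, 310])
    print(time_to_half_gelation(time_min, g_prime))  # ~23 min for these made-up values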
PRHs treated with canine and porcine LECM maintained their phenotype better than all other groups (Figure 2). Morphology images of PRHs treated with canine and porcine LECM show increased bile production and the formation of multinucleated cells, both markers of a maintained hepatocyte phenotype. Hepatocytes treated with canine and porcine LECM showed increased albumin production throughout culture. Ammonia metabolism was not significantly different between groups. Taken together, these metrics indicate that solubilized canine and porcine LECM enhance hepatocyte function in-vitro.
Figure 2: Functional rat hepatocyte data from day 7 of the in-vitro culture with the addition of solubilized rat, human, canine, and porcine liver ECM added to the media.
DISCUSSION Pepsin-digested ECM derived from human, porcine, rat, and canine liver was shown to support rat hepatocyte function in vitro at levels significantly greater than pepsin or rat type I collagen alone. The liver ECM from each species was prepared by methods designed to maintain as much of the composition of the native liver matrix as possible. Since the ECM of each tissue and organ is produced by the resident parenchymal cells and logically represents the ideal scaffold or substrate for these cells, it is intuitive that a substrate composed of liver-derived ECM would be favorable for hepatocytes. However, species specificity has never been investigated for the purpose of hepatocyte in-vitro culture. Thus, this study represents a novel concept that could enhance in-vivo and in-vitro applications of liver tissue engineered constructs. The enhanced hepatocyte-specific functions of primary rat hepatocytes cultured with porcine and canine liver ECM, compared to primary rat hepatocytes cultured with human and rat liver ECM as well as rat tail collagen I, may be due to differences in the composition of the respective ECM. The different species of liver ECM may differ in collagen composition, growth factor content, surface ligands, and exposed matricryptic peptides. These differences in bioactive components could help explain the modulation of hepatocyte functionality. A comprehensive analysis of the biochemical composition of each liver ECM is currently being conducted. As the field of tissue engineering and regenerative medicine moves toward the replacement of more complex tissues and three-dimensional organs, it is likely that more specialized scaffolds will be needed to support multiple, functional cell phenotypes. The findings of the present study suggest that ECM derived from various species can affect the ability to support appropriate cell phenotype.
REFERENCES
1. Sellaro TL, Ravindra AK, Stolz DB, Badylak SF. "Maintenance of hepatic sinusoidal endothelial cell phenotype in vitro using organ-specific extracellular matrix scaffolds." Tissue Eng. 2007 Sep;13(9):2301-10. PMID: 17561801
2. Sellaro TL, Ranade A, Faulk DM, McCabe GP, Dorko K, Badylak SF, Strom SC. "Maintenance of human hepatocyte function in vitro by liver-derived extracellular matrix gels." Tissue Eng Part A. 2010 Mar;16(3):1075-82. doi: 10.1089/ten.TEA.2008.0587. PMID: 19845461
ACKNOWLEDGEMENTS Abigail Loneker was funded in part by the Department of Bioengineering at the University of Pittsburgh.