Swanson School of Engineering Undergraduate Summer Research Abstracts


Swanson School of Engineering Undergraduate Summer Research Program 2016


Welcome to the 2016 issue of the Swanson School of Engineering (SSOE) Summer Research Abstracts! Every year the SSOE invites all undergraduates to propose a research topic of interest to study for the summer and to identify a faculty member willing to serve as mentor and sponsor for their project. In this way, students get to work on cutting-edge research with leading scientists and engineers while spending their summer at the SSOE. The students, however, were not restricted to the Swanson School of Engineering or even the University of Pittsburgh; the whole world was open to them! As a result, eight students spent their internships in Singapore at the National University of Singapore and Nanyang Technological University. One student traveled to Israel to study at the Israel Institute of Technology, and another studied at Politecnico di Milano in Italy. Stateside, we had one student at Johns Hopkins University and another at the Dana-Farber Cancer Institute at Harvard Medical School. Multiple programs offer summer research opportunities to SSOE undergraduates, the largest being the Summer Internship Program jointly sponsored by the Swanson School and the Provost. This year the program was able to fund over 60 students, with generous support from both the SSOE and the Office of the Provost. Additional support was provided by a generous gift from the PPG Foundation for students selected as PPG Fellows. The Swanson School study abroad program assisted the students who participated in international internships. The following individual investigators also provided support: Kevin M. Bell, David M. Brienza, Bryan N. Brown, Lance A. Davidson, Richard E. Debski, Bryan M. Hooks, Karl J. Johnson, Jung-Kun Lee, Colleen A. McClung, Michel M. Modo, Ian A. Sigal, Matthew A. Smith, George D. Stetten, Jonathan Vande Geest, and Götz Veser.
As part of the requirements of the internship, students submitted poster abstracts to Science 2016 – Game Changers! in September. Nearly all of these students were selected to present posters in a special undergraduate student research session at Science 2016, with Engineering making up 70% of the undergraduate posters at the conference! The SSOE provides other opportunities in addition to the internship program. Interns and other summer students were invited to submit an abstract to be considered for expansion into a full manuscript for the third issue of Ingenium: Undergraduate Research in the Swanson School of Engineering. This provides undergraduates with the experience of writing manuscripts, and graduate students – who form the Editorial Board of Ingenium – with experience in peer review and editing. We hope you enjoy this compilation of the innovative, intellectually challenging research that our undergraduates took part in during their tenure at the SSOE. In presenting this work, we also want to acknowledge and thank the faculty mentors who made available their laboratories, their staff, and their personal time to assist the students and further spark their interest in research.

Larry J. Shuman, Senior Associate Dean for Academic Affairs
David A. Vorp, Associate Dean for Research


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title
(All mentors are faculty at the University of Pittsburgh unless otherwise noted. *Denotes abstract withheld to protect intellectual property.)

Iman L. Benbourenane | Bioengineering | Steven D. Abramowitch | Bioengineering | COMPARATIVE ANALYSIS OF PHOTOGRAMMETRY VERSUS LASER-BASED METHODS OF MEASURING THE PHYSICAL DIMENSIONS OF OBJECTS
Mara C. Palmer | Bioengineering | Bryan N. Brown | Bioengineering | ASSESSMENT OF SCHWANN CELL MIGRATION AND FUNCTIONAL RECOVERY AFTER PERIPHERAL NERVE INJURY AND TREATMENT WITH TISSUE-SPECIFIC EXTRACELLULAR MATRIX HYDROGEL
Joshua R. Tarantino | Bioengineering | Bryan N. Brown | Bioengineering | ASSESSING THE HOST INFLAMMATORY RESPONSE TO ACELLULAR LUNG SCAFFOLDS
Ziyu Xian | Bioengineering | Bryan N. Brown | Bioengineering | CHARACTERIZING THE ECM COMPOSITION AND MECHANICAL PROPERTIES OF OVARIAN TISSUE-DERIVED HYDROGELS
Bingchen Wu | Bioengineering | Xinyan Tracy Cui | Bioengineering | *NOVEL PEDOT COATING FUNCTIONALIZATION METHODS FOR BIOINTERFACING APPLICATIONS
Micah J. Feeney | Bioengineering | Lance A. Davidson | Bioengineering | *IN VITRO CARDIAC ORGANOID INDUCTION: ADVANCING A 3D “ORGAN IN A DISH” MODEL FOR BIOMECHANICAL STUDIES OF EARLY CARDIAC DEVELOPMENT
Ryan T. Black | Bioengineering | Richard E. Debski | Bioengineering | PREDICTING MECHANICAL PROPERTIES WITH QUANTITATIVE ULTRASOUND MEASURES
Samik Patel | Bioengineering | Richard E. Debski | Bioengineering | IMPACT OF SCREW LENGTH ON FIXED PROXIMAL SCAPHOID FRACTURE BIOMECHANICS: IN VITRO STUDY WITH CYCLIC LOADING AND LOAD TO FAILURE


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title

Shae A. Rosemore | Mechanical Engineering | Joseph T. Samosky and Jian Huei Choo | Bioengineering; Mechanical Engineering, National University of Singapore | SUPPORTING INFRASTRUCTURE FOR LAST MILE SOLUTIONS
Avin Khera | Bioengineering | George D. Stetten | Bioengineering | SURGICAL APPLICATIONS OF HAPTIC RENDERING DEVICES FOR MEMBRANE PUNCTURE SIMULATION
Riley S. Burton | Bioengineering | David A. Vorp | Bioengineering | COMPUTATIONAL MODELING OF WALL STRESS IN CEREBRAL AORTIC ANEURYSMS ENDOSCOPICALLY COILED WITH DIFFERENT PACKING DENSITIES
Kamiel S. Saleh | Bioengineering | David A. Vorp | Bioengineering | SVF CELL-SEEDED TEVGs AND THE REMOVAL OF THE IN VITRO DYNAMIC CULTURE PERIOD
Toby Zhu | Bioengineering | David A. Vorp and Hongliang Ren | Bioengineering; Biomedical Engineering, National University of Singapore | FABRICATION OF PATIENT-SPECIFIC INTRACRANIAL ANEURYSM MODELS FOR BURST TESTING
Katelyn F. Axman | Bioengineering | Jonathan Vande Geest | Bioengineering | VERIFYING NORMALITY OF OCULAR TISSUE THROUGH DEVELOPMENT OF A SEMI-AUTOMATED OPTIC NERVE AXON COUNTING METHOD
Jacob N. Herman | Bioengineering | Zhi T. Ang | Biomedical Engineering, National University of Singapore | MODELING AND IN-SILICO ANALYSIS OF CLINICALLY USED CORONARY ARTERY STENTS
Sarah Shaykevich | Electrical Engineering | Shy Shoham | Biomedical Engineering, Israel Institute of Technology | *CORTICAL CELL NETWORK RESPONSE TO ULTRASOUND STIMULATION
Forrest M. Salamida | Chemical Engineering and Computer Science | Eric J. Beckman | Chemical and Petroleum Engineering | PREDICTING PHASE BEHAVIOR OF ORGANIC-SALT-WATER, TWO-PHASE SYSTEMS USING THE AIOMFAC MODEL


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title

James J. McKay | Chemical Engineering | Robert M. Enick | Chemical and Petroleum Engineering | *STABILIZATION OF DRY HYDRAULIC FRACTURING INJECTION FLUIDS WITH NOVEL SURFACTANTS
James Liu | Electrical Engineering | Karl J. Johnson and Raye Yeow | Chemical and Petroleum Engineering; Robotics, National University of Singapore | 3rd GENERATION ROBOTIC SOCK WITH ANKLE FEEDBACK CONTROL
Benjamin Y. Yeh | Chemical Engineering | Karl J. Johnson and Dan Zhao | Chemical and Petroleum Engineering (Pitt); Chemical and Biomedical Engineering, Nanyang Technological University, Singapore | DESIGN OF HIGHLY EFFICIENT BIFUNCTIONAL METAL-ORGANIC FRAMEWORK CATALYSTS FOR TANDEM CATALYSIS BY SHORTENING THE REACTION PATHWAY
Abhinav Garg | Chemical Engineering | Lei Li | Chemical and Petroleum Engineering | COMPARISON AND OPTIMIZATION OF BIOMIMETIC SUPERHYDROPHOBIC SURFACES CREATED USING COMB-LIKE POLYMERS AND PERFLUORINATED CHEMICALS
Peter Tancini | Chemical Engineering | Giannis Mpourmpakis and Matteo Maestri | Chemical and Petroleum Engineering; Dipartimento di Energia, Politecnico di Milano | UNDERSTANDING THE EFFECT OF ADSORBED WATER IN ALCOHOL DEHYDRATION ON γ-Al2O3 USING MICROKINETIC MODELING
Serena W. Chang | Chemical Engineering | Jason E. Shoemaker | Chemical and Petroleum Engineering | MODELING AND OPTIMIZATION OF IMMUNE-REGULATORY SIGNALING STRATEGIES OF THE HOST RESPONSE TO INFLUENZA VIRUS INFECTION
Travis L. La Fleur | Chemical Engineering | Jason E. Shoemaker | Chemical and Petroleum Engineering | ANALYZING THE CONTROL STRATEGIES OF THE INFLUENZA VIRUS
Brett N. Amy | Chemical Engineering | Götz Veser | Chemical and Petroleum Engineering | THE ENGINEERING OF NANOSTRUCTURED INTERMETALLIC CATALYSTS FOR CO2 REDUCTION


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title

Julie L. Hartz | Chemical Engineering | Götz Veser | Chemical and Petroleum Engineering | CELL RECOVERABILITY AFTER EXPOSURE TO COMPLEX ENGINEERED NANOPARTICLES
Kenny L. To | Chemical Engineering | Götz Veser | Chemical and Petroleum Engineering | ASSESSING TOXICITY OF COMPLEX ENGINEERED NANOPARTICLES
Meghan J. Wyatt | Bioengineering | Sing Yian Chew | Chemical and Biomedical Engineering, Nanyang Technological University, Singapore | INFLUENCING DIFFERENTIATION AND GROWTH OF NEURAL PROGENITOR CELLS WITH GENE SILENCING AND LAMININ
Laura B. Fulton | Mechanical Engineering | Jeffrey J. Gray | Chemical and Biomolecular Engineering, Johns Hopkins University | IMPROVEMENT OF ROSETTA BIOCOMPUTING SOFTWARE FOR CANONICAL ANTIBODY CDR LOOP PREDICTION
Nicole E. Cimabue | Civil Engineering | Kyle J. Bibby | Civil and Environmental Engineering | PERSISTENCE OF EBOLA SURROGATE AT VARIOUS TEMPERATURES AND NEUTRAL PH
Carolyn M. Wehner | Civil Engineering | Andrew P. Bunger | Civil and Environmental Engineering | WELL PLUGGING WITH ABSORBENT CLAY: STUDYING BENTONITE PELLET DESCENT IN BOREHOLES
Brandon Contino | Electrical Engineering | David V.P. Sanchez | Civil and Environmental Engineering | AN EVALUATION OF CARBON ELECTRODES FOR ANTIFOULING IN THE ELECTRO-FENTON PROCESS
Swaroop Akkineni | Computer Engineering | Samuel J. Dickerson | Electrical and Computer Engineering | SPIWAVE: AUTOMATED SPIRAL EVALUATION FOR PARKINSONIAN PATIENTS USING WAVELETS
Michael P. Urich | Electrical Engineering | Zhi-Hong Mao | Electrical and Computer Engineering | IT'S ALL IN YOUR HEAD: BRIDGING NEUROLOGICAL SIGNALS TO THE PHYSICAL WORLD THROUGH EEG
Mark D. Littlefield | Bioengineering | Young Jae Chun | Industrial Engineering | MODELING AND EXPERIMENTAL ANALYSIS OF THE TEMPORARY, FULLY-RETRIEVABLE STENT FOR TRAUMATIC HEMORRHAGE CONTROL
Rithika D. Reddy | Industrial Engineering | Paul W. Leu | Industrial Engineering | NITROGEN DOPING CARBON NANOTUBES


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title

Terrance P. McLinden | Mechanical Engineering | Markus Chmielus | Mechanical Engineering and Materials Science | DESIGN AND TEST OF A NEW POWDER DELIVERY SYSTEM
Tyler C. Zatsick | Mechanical Engineering | Peyman Givi | Mechanical Engineering and Materials Science | NUMERICAL SOLUTIONS TO THE ONE-DIMENSIONAL WAVE EQUATION VIA FIRST UPWIND DIFFERENCE, LAX-WENDROFF, AND EULER'S BTCS IMPLICIT METHODS
Mackenzie N. Stevens | Materials Science and Engineering | Jung-Kun Lee | Mechanical Engineering and Materials Science | INCREASING THE DENSITY OF LOW TEMPERATURE SINTERED SILICON CARBIDE BY MEANS OF POLYMER IMPREGNATION AND PYROLYSIS
Vincent A. Verret | Materials Science and Engineering | Jung-Kun Lee | Mechanical Engineering and Materials Science | VISUAL STUDY OF OXIDATIVE PROPERTIES OF COPPER CORE SILVER SHELL NANOPARTICLES
Joshua Barron | Mechanical Engineering | Nitin Sharma | Mechanical Engineering and Materials Science | DESIGN OF AN ACTIVE TRUNK CONTROL AND BALANCING SYSTEM TO REDUCE FATIGUE DURING WALKING WITH AN EXOSKELETON
Amanda M. Boyer | Mechanical Engineering | Nitin Sharma | Mechanical Engineering and Materials Science | ELECTROMYOGRAPHY-BASED CONTROL FOR LOWER LIMB ASSISTIVE THERAPY
Clement N. Ekaputra | Materials Science and Engineering | Albert C. F. To | Mechanical Engineering and Materials Science | ATOMIC-SCALE TOPOLOGY OPTIMIZATION BASED ON FINITE ELEMENT METHODS
Preston C. Shieh | Mechanical Engineering | Albert C. F. To | Mechanical Engineering and Materials Science | CAPABILITIES OF TOPOLOGY OPTIMIZATION IN THE FIELD OF ADDITIVE MANUFACTURING
Gabriel K. Hinding | Mechanical Engineering | Jeffrey S. Vipperman | Mechanical Engineering and Materials Science | MODELING AMPLIFIERS IN LINEAR AEROTECH INC. STAGES
Derek A. Nichols | Mechanical Engineering | Paolo Zunino | Mechanical Engineering and Materials Science | CREATING AN OSTEOCHONDRAL BIOREACTOR FOR THE SCREENING OF TREATMENTS FOR OSTEOARTHRITIS


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title

Michelle E. Botyrius | Bioengineering | Liu Quanquan and Ren Hongliang | Medical Mechatronics Lab, National University of Singapore | *PRELIMINARY DEVELOPMENT OF A FLEXIBLE DRILL FOR ROBOTIC MINIMALLY INVASIVE TRANSORAL SURGICAL SYSTEM
Maya D. McKeown | Bioengineering | Jennifer Guerriero and Anthony Letai | Medical Oncology, Dana-Farber Cancer Institute/Harvard Medical School | ANTI-TUMOR (M1) MACROPHAGES SECRETE CYTOKINES THAT PRIME BREAST CANCER CELLS FOR APOPTOSIS
Courtney Q. N. Vu | Bioengineering | Marc A. Simon | Medicine | CHANGES IN PULMONARY ARTERIAL HEMODYNAMICS PRIOR TO LVAD IMPLANT AND THE ASSOCIATION WITH RV FAILURE
Dillon S. Thomas | Bioengineering | Bryan M. Hooks | Neurobiology | DEVELOPMENT OF TWO-PHOTON CALCIUM IMAGING METHODS FOR CIRCUIT MAPPING IN MOUSE MOTOR CORTEX
Adam L. Smoulder | Bioengineering | Sudip Nag and Shih-Cheng Yun | Singapore Institute for Neurotechnology, National University of Singapore | WIRELESS MUSCLE STIMULATION DATA TRANSMISSION FOR PERIPHERAL NERVE PROSTHESIS DEVELOPMENT
Nathan M. Myers | Bioengineering | Morgan V. Fedorchak | Ophthalmology | *TOPICAL OCULAR AND OTIC DUAL PHASE DRUG DELIVERY SYSTEM APPLICATION
Felipe Suntaxi | Bioengineering | Ian A. Sigal | Ophthalmology | QUANTITATIVE ANALYSIS OF THE EYE VASCULATURE
Shruti K. Vempati | Bioengineering | Matthew A. Smith | Ophthalmology | CATCH THE WAVE: USING PRIOR KNOWLEDGE OF ACTION POTENTIAL SHAPES TO IDENTIFY NEURONS IN CHRONIC RECORDINGS
Ziyi Zhu | Bioengineering | Matthew A. Smith | Ophthalmology | DEVELOPMENT OF COMPUTATIONAL TOOLS FOR ANALYZING 3D IN VIVO DEFORMATIONS OF MONKEY OPTIC NERVE HEAD


Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title

Rahul Ramanathan | Bioengineering | Kevin M. Bell | Orthopedic Surgery | BIOMECHANICAL CONTRIBUTIONS OF UPPER CERVICAL LIGAMENTOUS STRUCTURES IN TYPE II ODONTOID FRACTURES
Michelle E. Riffitts | Bioengineering | Kevin M. Bell | Orthopedic Surgery | SOFTWARE DESIGN AND MECHANICAL VERIFICATION OF AN IMU SYSTEM TO MONITOR CERVICAL SPINE MOVEMENT
Kalon J. Overholt | Bioengineering | Rocky S. Tuan | Orthopedic Surgery | ENGINEERING THE BONE-CARTILAGE INTERFACE
Michael R. Adams | Bioengineering | Robert A. Gaunt | Physical Medicine and Rehabilitation | PREDICTING MUSCLE FORCE OUTPUT USING EMG ACTIVITY
Christine N. Heisler | Bioengineering | Colleen A. McClung | Psychiatry | EFFECTS OF PHASE-DELAYING OPTOGENETIC STIMULATION OF THE SUPRACHIASMATIC NUCLEUS ON MOOD-RELATED BEHAVIORS
Jessie R. Liu | Bioengineering | Michel M. Modo | Radiology | MAPPING THE EXTRACELLULAR MATRIX: AN AUTOMATED COMPARISON OF THE DISTRIBUTION OF EXTRACELLULAR MATRIX MOLECULES IN THE BRAIN
Michael V. Churilla | Bioengineering | David M. Brienza | Rehabilitation Science and Technology | *EFFECT OF AN ALTERNATING PRESSURE OPERATING ROOM TABLE OVERLAY ON SACRAL SKIN BLOOD FLOW
Siddharth Balakrishnan | Bioengineering | Marina V. Kameneva | Surgery | *EFFECT OF DRP ON DISTRIBUTION OF PLATELET-SIZED PARTICLES IN MICROVESSELS: A POTENTIAL TREATMENT FOR THROMBOSIS
Soumya S. Vhasure | Bioengineering | Marina V. Kameneva | Surgery | *EFFECT OF DRAG REDUCING POLYMERS ON THE DISTRIBUTION OF LEUKOCYTE-SIZED PARTICLES IN MICROVESSEL BLOOD FLOW: A POTENTIAL METHOD TO REDUCE INFLAMMATION


COMPARATIVE ANALYSIS OF PHOTOGRAMMETRY VERSUS LASER-BASED METHODS OF MEASURING THE PHYSICAL DIMENSIONS OF OBJECTS
Iman L. Benbourenane, Deanna Easley, Maurice Kotz, Steven Abramowitch, PhD
Musculoskeletal Research Center, Department of Bioengineering, University of Pittsburgh, PA, USA
Email: ilb8@pitt.edu, Web: http://www.pitt.edu/~msrc/

INTRODUCTION
Mechanical properties of soft tissue are derived from the physical dimensions of a specimen (e.g., engineering stress is equal to force divided by unloaded cross-sectional area), but because soft tissues are deformable, accurate measurement of their physical dimensions can be challenging. Gold-standard, non-contact methods include the use of lasers, which can be costly. As a result, many investigators use either a 2-D image or vernier calipers and assume a specific shape for the specimen. Direct specimen contact, as well as assumptions about specimen shape, can lead to systematic inaccuracies. Photogrammetry presents a novel, low-cost, non-contact alternative for measuring the physical dimensions of materials in 3-D that makes no assumptions about shape. While it has been proven accurate in some applications, the approach presents challenges when applied to objects with a uniform or repeating texture, or to objects that are highly reflective; for many soft tissues, this may be an issue. As a first step in exploring the feasibility of this approach, this study compares photogrammetric measurements of the cross-sectional area of objects with known geometric shapes to those obtained with laser sensors (both bounce-back and collimated). It is hypothesized that the photogrammetric approach will provide the same level of accuracy and repeatability as laser-based measurements.

METHODS
Four machined geometries were used to test this hypothesis. The geometries were composed of acetal Delrin and were machined with known dimensions.
The shapes included a 1” diameter circle with half circle cutouts, a triangle, a 0.5” side-length square, and a 0.5” diameter circle with half circle cutouts (Figure 1).

Figure 1: Cross-sections of machined geometries with corresponding machined cross-sectional areas.

A photogrammetry rig was established consisting of a turntable, a reference object (accurate to 1 mm) for scale, a monochrome background, and a DSLR camera (EOS Rebel T3i, Canon USA, Inc.). Photosets of 10-20 photos each were imported into PhotoScan Pro (Ver. 1.2.6, Agisoft LLC, St. Petersburg, Russia), manually masked, and used to produce 3D models of each specimen. Using Paraview (Ver. 4.3.1, Kitware/Sandia National Laboratories), each 3D model was sliced to find the cross-sectional perimeter; the slice was solidified in Blender (Ver. 2.77a, Blender Foundation), and the area was measured using the Blender 3D Print Toolbox add-on (author: Campbell Barton). These measurements were then compared against those from established laser-based systems (AR-50M, Acuity® Inc., and LS3060T, Keyence Corp., Osaka, Japan). The areas of each geometry were measured five times for each method, with comparisons to machined values to assess accuracy and repeatability. Data are reported as mean ± standard deviation.

RESULTS
Table 1 below displays the averaged cross-sectional areas of each standard geometry. Except for the 1”


diameter circle with half circle cutouts, where the CCD laser system (bounce-back) performed better, the photogrammetric data yielded lower percent error, and thus higher accuracy in cross-sectional area measurements, than both laser measurements. There was, however, approximately an order of magnitude greater variability within the photogrammetric data. Yet one standard deviation above and below the averaged photogrammetry result still yields comparable, and in some cases superior, readings relative to the laser-based measurements. The collimated laser micrometer has difficulty detecting sharp edges, explaining the high error for geometries such as the triangle and the square. Additionally, it cannot detect concavities, so overestimates are expected for the 1” and 0.5” diameter circles with cutouts. The CCD laser system consistently underestimated the cross-sectional area, likely because the laser penetrates the surface slightly. No clear trends were observed in the photogrammetry data based on the shapes and sizes used in this study.

DISCUSSION
This project established the setup and conditions for calculating the cross-sectional area of objects using photogrammetry, with the intention of applying the approach to soft tissue measurements. It demonstrated that photogrammetry-based cross-sectional area measurements of machined geometries were generally on par with state-of-the-art laser-based methods. However, the repeatability was slightly inferior; thus, the hypothesis was only partially supported. The shapes for which photogrammetric data showed improved accuracy are consistent with the known shortcomings of laser-based measurement systems. Sharp edges or gaps within geometries, such as the triangle and circle with half circle cutout

geometries, tend to produce inaccurate readings for the laser micrometer systems, while surface penetration is a known issue for bounce-back laser systems. Photogrammetry is theoretically limited only by the focal length of the camera lens relative to the size of the object being photographed. The higher variability in the photogrammetric data compared to laser-based readings could be due to 1) the accuracy of the object used to scale the images, and 2) the location at which the cross-section is taken for each model. While the former is an obvious likely source of error, the latter may also have contributed. The cross-sectional area along the length of each geometry was machined to be uniform, so in theory any slice perpendicular to the length should have an identical cross-section. However, the generated 3D models are imperfect and slightly non-uniform, so it is logical that this methodology would yield relatively higher variability than laser-based readings, for which the cross-section could be measured at exactly the same location in each trial. Overall, photogrammetry-based measurements of cross-sectional area may provide a cheaper alternative to laser-based approaches. As this was a first attempt, additional refinement of the methodology could improve the repeatability of the approach. Still, soft tissues that lack texture and have a high degree of reflectivity could prove problematic. Future work will test this approach on manufactured cryogels and eventually biologic soft tissues to determine its utility for experimental studies.

ACKNOWLEDGEMENTS
The authors thank the National Science Foundation (NSF Award #1511504) for funding this research through the Swanson School of Engineering 2016 Summer Research Internship.

Table 1: Average Cross-Sectional Area for Machined Geometries (areas in mm²)

Machined Geometry | Machined Area | CCD Laser System | % Error | Laser Micrometer | % Error | Photogrammetry | % Error
1” Diameter Circle with Half Circle Cutouts | 471.1 | 462.0±0.6 | -1.9% | 501.7±0.3 | 6.5% | 454.1±5.1 | -3.6%
Triangle | 211 | 202.5±2.6 | -4.0% | 225.5±3.2 | 6.9% | 214.0±5.6 | 1.4%
0.5” Square | 163.2 | 150.7±0.1 | -7.6% | 166.2±0.9 | 1.8% | 164.0±3.9 | 0.5%
0.5” Diameter Circle with Half Circle Cutouts | 121.4 | 114.2±0.4 | -6.0% | 126.1±0.4 | 3.9% | 125.4±2.3 | 3.3%
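The area-measurement step described in METHODS (slice the 3D model, then measure the enclosed area of the resulting cross-section) can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the authors' actual Paraview/Blender pipeline: the function names and the ideal-square example are illustrative. Given the ordered 2-D vertices of a slice, the enclosed area follows from the shoelace formula, and percent error is computed against the machined reference area as in Table 1.

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon from ordered (x, y) vertices."""
    s = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0


def percent_error(measured, machined):
    """Signed percent error relative to the machined (reference) area."""
    return 100.0 * (measured - machined) / machined


# Illustrative example: an ideal 0.5" (12.7 mm) square slice.
square = [(0.0, 0.0), (12.7, 0.0), (12.7, 12.7), (0.0, 12.7)]
ideal_area = polygon_area(square)  # 12.7 * 12.7 = 161.29 mm^2
```

In practice the sliced perimeter is a dense, slightly noisy vertex loop rather than four ideal corners, which is one reason the measured values in Table 1 vary from trial to trial.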


ASSESSMENT OF SCHWANN CELL MIGRATION AND FUNCTIONAL RECOVERY AFTER PERIPHERAL NERVE INJURY AND TREATMENT WITH TISSUE-SPECIFIC EXTRACELLULAR MATRIX HYDROGEL
Mara C. Palmer, Travis A. Prest, and Bryan N. Brown
McGowan Institute for Regenerative Medicine, University of Pittsburgh Medical Center, University of Pittsburgh, PA, USA
Email: mcp53@pitt.edu, Web: www.mirm.pitt.edu

INTRODUCTION
A multitude of traumatic events, including car collisions, crush injuries, and lacerations from knife wounds, gunshots, and other sharp objects, cause peripheral nerve damage, resulting in over 900,000 peripheral nerve reconstruction surgeries of the upper extremities per year [1,2]. The more severe injuries involve partially or fully transected peripheral nerves, which leave many patients with permanent motor deficits that significantly impact quality of life [3]. Sensory nerve autografts are the current gold-standard treatment for severe peripheral nerve transections. However, autograft repair is associated with donor-site morbidity and provides only moderate recovery. Structural nerve conduits (NCs) coupled with lumen fillers are an alternative approach that avoids the donor-site morbidity of autograft repair [4]. The present study analyzes the use of a decellularized porcine peripheral nerve-specific extracellular matrix (PNS-ECM) hydrogel as an NC luminal filler to treat peripheral nerve gap defects in a rat model. As the hydrogel degrades, PNS-ECM-specific biomolecules (i.e., growth factors) are released into the local environment. It is proposed that this distribution will amplify Schwann cell (SC) migration, contributing to axonal outgrowth and positive functional outcomes and resulting in a more effective treatment for critical gap nerve defects.

METHODS
A Boyden chamber assay, modified from Agrawal et al., quantified in vitro SC migration [5].
Decellularized porcine-derived PNS-ECM (n=6), non-nerve-specific small intestine submucosa (SIS) (n=4), and non-nerve-specific urinary bladder matrix (UBM) (n=5) ECM hydrogels were utilized as comparative attractant solutions. Varying hydrogel concentrations (1000, 500, 250, and 125 µg/mL); a positive control (DMEM with 10% FBS and 1% penicillin-streptomycin); a negative control (serum-free DMEM); and a chemokinetic condition, 15 µL of 1000 µg/mL hydrogel on the cell side of the chamber (to monitor nondirectional

chemotaxis) were analyzed. DAPI staining of the microporous membrane after assay incubation (4 hrs, 37°C) identified SC nuclei. Images of the DAPI-stained membrane were taken (510 nm, 4X) using a Nuance multispectral imaging system (ThermoFisher Scientific, Caliper Life Sciences). A 15 mm gap defect of the sciatic nerve in a rat model (n=2) was used to assess in vivo SC migration. Cross-sections of the rodent nerve explant were taken at controlled time intervals (t=7d, t=28d) after surgical repair using an NC and lumen filler (PNS-ECM hydrogel, 10 mg/mL, or saline solution). Cross-sections were stained with S100 and a corresponding secondary antibody. In vivo migration was measured along the length of the gap (1, 5, 10, 15 mm) with respect to the proximal stump. Immunofluorescent stains were imaged using a Nuance multispectral imaging system (4X). A CellProfiler (Broad Institute) analysis was performed to quantify positively stained SCs in both studies. Kinematic functional outcomes will be assessed through over-ground gait analysis in a rat model (n=5) using the TSE MotoRater system and SIMI Motion software. Rodents will be pre-trained on a TSE MotoRater automated kinematic capture machine. Animals will undergo surgery to analyze the repair of a 5 mm peroneal nerve gap defect; animals will receive either an autologous nerve graft or repair using an NC coupled with PNS-ECM hydrogel. Animals will stand and then run through the TSE MotoRater system every other week after completion of the surgical procedures and will be followed out to 90 and 180 days. Functional gait analysis will be compared to a common peroneal nerve index (PNI) previously developed by collaborators of Dr. Bryan Brown.

DATA PROCESSING
CellProfiler analysis was used to evaluate both in vitro and in vivo SC migration images. Nuclei detection specifications were optimized for each image set to omit extraneous object detection. CellProfiler analysis identified and counted nuclei. GraphPad Prism was


used to perform a two-way ANOVA (α=0.05) with multiple comparisons for the in vitro SC migration data. Statistical analysis was not performed on the in vivo data due to the small sample size. Data processing of gait kinematics will be performed using SIMI Motion software. Automated frame-by-frame gait analysis will be implemented to capture post-surgical changes in locomotion. The PNI will be calculated using images acquired from the kinematic analysis.

RESULTS
In vitro migration analysis revealed increased SC movement with the use of higher concentrations (1000 and 500 µg/mL) of PNS-ECM. Figure 1 depicts the average number of SCs identified through CellProfiler analysis for the experimental and control conditions. Additionally, analysis of in vivo SC migration showed a larger presence of SCs with the use of PNS-ECM hydrogel at all time points and distances except t=7d at 5 mm. Table 1 displays the average SC migration across the length of the nerve explant at day 7 and day 28 post-surgical repair. Functional recovery testing and kinematic analysis are to be completed.

DISCUSSION
Data obtained from both the in vitro and in vivo analyses support the proposed hypothesis regarding SC migration when using PNS-ECM hydrogel for peripheral nerve repair. Though the non-nerve-specific UBM-ECM hydrogel exhibited SC migration comparable to PNS-ECM at lower concentrations, the degree to which the UBM-ECM hydrogel solution gels over the incubation period is not known, and increased gelation may contribute to uneven concentrations within the bottom well of the Boyden chamber. Furthermore, when evaluating the addition of 1000 µg/mL hydrogel on the cell side of the chamber, SCs demonstrate chemokinesis in response to the 1000 µg/mL UBM-ECM hydrogel, whereas SCs exposed to the PNS-ECM hydrogel do not. The PNS-ECM hydrogel exhibits a directed attraction for SCs across the microporous membrane, indicating that a PNS-ECM hydrogel is a more advantageous NC lumen filler for clinical use. Based on the increased propensity for SC migration when using PNS-ECM hydrogel, it is reasonable to deduce that PNS-ECM hydrogel as an NC lumen filler will yield improved functional outcomes; however, it is recommended to proceed with analyzing post-surgical functional kinematics. Moreover, the amplified SC migration with the use of PNS-ECM hydrogel demonstrates that the PNS-ECM hydrogel and its nerve-specific functional molecules are capable of producing more favorable surgical outcomes, avoiding the shortcomings of autologous nerve repair.

REFERENCES
1. Brattain, K. Magellan Medical Technology Consultants, Inc., 2013.
2. Menorca, R.M.G. Hand Clinics 29.3, 317-330, 2013.
3. Brown, B.N. Transl Res 163, 268, 2014.
4. Pabari, A. J Control Release 156.1, 2-10, 2011.
5. Agrawal, V. J Tissue Engineering 3.8, 590-600, 2010.

Figure 1: Experimental and control in vitro SC migration cell counts. Results referring to PNS-ECM use are in blue, SIS-ECM in red, and UBM-ECM in green. Error bars depict standard deviation within the group. PNS-ECM increased SC migration to a significant degree compared to 1000 µg/mL SIS-ECM (*) and UBM-ECM (**), and 500 µg/mL SIS-ECM (*), at each respective concentration.

ACKNOWLEDGEMENTS The research is funded jointly by the University of Pittsburgh’s Swanson School of Engineering, the Office of the Provost, and the Brown Lab at the McGowan Institute for Regenerative Medicine.

Table 1: Quantified SC migration across the length of a 15mm peripheral nerve gap in a rodent model.

Distance from proximal stump (mm)    1     5    10    15
PNS (t=7d)                         440   230    76   207
Saline (t=7d)                      203   322    32   121
PNS (t=28d)                        165   141    90     8
Saline (t=28d)                     101    46    23     1


ASSESSING THE HOST INFLAMMATORY RESPONSE TO ACELLULAR LUNG SCAFFOLDS Josh Tarantino, Clint Skillen, Bryan Brown McGowan Institute for Regenerative Medicine University of Pittsburgh, PA, USA Email: jrt61@pitt.edu, Web: http://www.mirm.pitt.edu/ INTRODUCTION Chronic respiratory disease is one of the leading causes of death in the United States, yet there are not enough available lungs to meet the demands of patients on the transplant list. Alternative methods are needed to address the shortage and improve long-term survival for patients suffering from respiratory disease. Repopulating decellularized lung tissues with cells from the patient is a promising approach to the development of transplantable lung tissue: it replaces the cells of the donor tissue with cells from the patient while leaving connective structures intact, allowing a new lung to be created. Ideally, this new tissue will not trigger an adverse immune response after implantation. Our experiment sought to determine how the host's immune system responds to the implantation of acellular lung scaffolds over the course of ten weeks. METHODS For this study, four different decellularized lung scaffolds were used for implantation: macaque decellularized lung, human decellularized lung, wild-type pig decellularized lung, and α-Gal epitope knockout pig decellularized lung. Intact wild-type pig lung and an agarose vehicle control were used as controls. The study consisted of three phases: a pilot study, a comprehensive study, and a reimplantation study. The pilot study consisted of a single primate model in which all of the lung scaffolds and controls were implanted subcutaneously. Punch biopsies of the implanted tissues were taken at 2 and 4 weeks, and full subcutis explants were taken at ten weeks.
These samples were then stained to identify specific cell types and characterize the immune response: Wright-Giemsa (eosinophils), Alcian blue (mast cells), and hematoxylin and eosin (neutrophils). Additional samples were labeled with

antibodies such as CD31 and CD206 to further characterize the immune response. The comprehensive study was performed in ten animals, with five groups of two animals each. Four of these groups each received a different implant: intact wild-type pig lung, macaque decellularized lung, wild-type pig decellularized lung, or α-Gal epitope knockout pig decellularized lung; the fifth group served as a sham surgery control. Full subcutis explants were taken at 1, 2, 4, and 8 weeks into the study. These explants were stained using identical procedures and stains as in the pilot study to characterize the immune response. The reimplantation study followed the same protocol as the comprehensive study. The same ten animals in the original five groups of the comprehensive study were reimplanted with the lung scaffolds they received in the comprehensive study, and their immune response was characterized again. The same staining methods and imaging techniques were performed on these samples as in the other two studies. DATA PROCESSING Images of each stain were taken for every sample. The images were taken at 4x, 10x, 20x, and 40x magnification and quantitatively analyzed using an algorithm in ImageJ. These images were used to establish cell counts for different types of immune cells to characterize the immune response. The images taken using antibody labeling were used to determine immune cell activity in a specific region of interest. Further qualitative analysis was performed on these regions of interest to characterize tissue remodeling over the course of the study. RESULTS Host cell counts were determined using Hematoxylin and Eosin staining. Most of the lung


scaffolds elicited a strong infiltration of host cells at two weeks post-implantation, declining thereafter. The exception was the intact native porcine tissue, which saw a twofold increase in cells between week two and week ten, likely reflecting a rejection-type response. Most cell counts were trending downward at ten weeks, with the exception of the macaque lung and the agarose control, both of which showed an increase in cells between week four and week ten. We are currently completing immunolabeling studies to identify the types of host inflammatory cells (e.g., neutrophil, macrophage, T-cell, B-cell) infiltrating each scaffold material over time. All materials remodeled to varying degrees over the 10-week period. Initially, lung structures could be identified within the implant site; with time, the original matrix was degraded and replaced by either adipose and connective tissue or scar tissue, depending upon the nature of the original implant.
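The threshold-and-count style of analysis described for the ImageJ pipeline can be sketched in a few lines; this is an illustrative reimplementation (threshold and minimum blob size are made-up values), not the authors' exact macro:

```python
import numpy as np
from scipy import ndimage

def count_cells(gray, threshold=0.5, min_size=5):
    """Count dark-stained nuclei in a grayscale image by
    thresholding and connected-component labeling, mirroring
    the kind of ImageJ pipeline described in the abstract."""
    mask = gray < threshold              # stained nuclei appear darker
    labels, n = ndimage.label(mask)      # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_size))  # discard speckle noise

# Synthetic example: two dark blobs on a light background
img = np.ones((20, 20))
img[2:6, 2:6] = 0.1
img[10:15, 10:15] = 0.1
print(count_cells(img))  # 2
```

Real stained sections would first be converted to a single channel (e.g. the hematoxylin channel after color deconvolution) before this step.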

Figure 1: Cell counts in lung tissue samples recorded at 2, 4, and 10 weeks. Cell counts declined by 10 weeks for each scaffold source except the native pig lung, which doubled its week-2 numbers. Both types of decellularized pig lung showed a decline in cell number from week 4 to week 10, while the agarose control and the macaque lungs showed a marked increase over the same time frame, though their counts remained below the initial week-2 counts.

DISCUSSION Data collected from the study indicates all decellularized lung scaffolds were repopulated by host cells over the course of the study. However, the kinetics of repopulation as well as the remodeling outcomes associated with each scaffold material were different. We are currently investigating the phenotype of the cells which

participate in the remodeling response to each implanted scaffold material seen in the pilot study. Further analysis will be performed in the comprehensive and reimplantation studies before definitive conclusions can be made. The results of this study will assist in the determination of an ideal scaffold host tissue source for future studies investigating the generation of engineered whole lung transplants. ACKNOWLEDGEMENTS Implants and surgery were performed at United Therapeutics Regenerative Medicine. Joint funding was provided by Dr. Bryan Brown and the McGowan Institute for Regenerative Medicine, the University of Pittsburgh Swanson School of Engineering, and the University of Pittsburgh Office of the Provost.


CHARACTERIZING THE ECM COMPOSITION AND MECHANICAL PROPERTIES OF OVARIAN TISSUE-DERIVED HYDROGELS Ziyu Xian, Michael J. Buckenmeyer and Bryan N. Brown McGowan Institute for Regenerative Medicine, Department of Bioengineering University of Pittsburgh, PA, USA Email: zix7@pitt.edu, Web: http://mirm.pitt.edu INTRODUCTION Women in cancer remission often lose their reproductive ability due to damage to ovarian tissue from radiation or chemotherapy. For example, around 33% to 76% of women with breast cancer experience amenorrhea following treatment, which may lead to infertility [1]. A potential solution to restore reproductive function is to develop a three-dimensional culture system using an ovarian tissue-derived hydrogel to grow immature follicles and obtain mature oocytes. A successful in vitro culture system requires a highly regulated environment composed of extracellular matrix (ECM) proteins such as collagen I, collagen IV, fibronectin, and laminin, along with growth factors and hormones that mimic the native ovarian microenvironment. The mechanical stiffness of the ECM is another variable that may influence signaling through local mechanotransduction pathways that initiate and regulate follicle development: immature follicles usually reside near the stiffer cortex of the ovary and migrate toward the softer medulla as they develop. Thus, it is imperative to characterize the protein composition and mechanical stiffness of the hydrogels to ensure proper follicle development. METHODS Porcine ovarian tissues were decellularized using mild detergents to remove cells from their native environment [2]. Immunohistochemistry (IHC) staining for collagen IV (ab6586) and laminin (ab11575) was performed to ensure that the proteins important for follicle maturation are maintained after the decellularization process.
Decellularized and digested ovarian ECM was used to make hydrogels with 2, 5, and 10 mg/ml ECM concentrations, using NaOH and PBS to neutralize the pH and balance the salt concentrations.

Turbidimetric gelation kinetics (TGK) assays were performed using a plate reader (BioTek Synergy HTX Multi-Mode Reader) to optimize the pepsin concentration for ovarian ECM digestion. Stock digests of 20 mg/ml, digested with 0.5, 1, 1.5, 2, or 3 mg/ml pepsin for 48 hours, were used to make 2, 5, and 10 mg/ml hydrogels. A stock of 10 mg/ml ECM was digested with 1 mg/ml pepsin and used to make 2 and 5 mg/ml hydrogels to assess differences in gelation between stock concentrations. The change in absorbance at 405 nm at 37 °C was measured over a 1-hour period. A rheological time sweep was used to assess hydrogel stability and to determine the elasticity (G' - storage modulus) and viscosity (G'' - loss modulus) of the hydrogels over a 40-minute period using a parallel plate rheometer (AR2000). Hydrogels were tested at 37 °C with 1 rad/s frequency and 5% strain. An ANOVA was conducted for each time point to assess any significant differences between the 2, 5, and 10 mg/ml hydrogels for both G' and G''. A strain sweep was used to identify the amount of strain the hydrogels could withstand, to ensure that the hydrogels would not break when exposed to forces applied by the cells. Strains from 0.01 to 100% were tested at 1 rad/s frequency and 37 °C. Scanning electron microscopy (SEM) (JEOL JSM 6335F) images of the hydrogel ultrastructure for the 2, 5, and 10 mg/ml hydrogels were taken at 6kX, 8kX, and 10kX magnifications to visualize the fiber crosslinks and porosity. DATA PROCESSING Time sweep data were averaged and standard deviations were calculated. Data from the first 3 minutes of the strain sweep were removed before averaging due to incomplete gelation of the hydrogels. Normalized data from the turbidimetric gelation kinetics assay were used to calculate the average.
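Turbidimetric curves are conventionally normalized so each curve runs from 0 (initial absorbance) to 1 (plateau), from which summary metrics such as the time to half-maximal turbidity can be read off. A minimal sketch of that normalization, using a made-up sigmoidal absorbance trace:

```python
import numpy as np

def normalize_turbidity(absorbance):
    """Scale a turbidimetric gelation curve to [0, 1]:
    NA(t) = (A(t) - A_min) / (A_max - A_min)."""
    a = np.asarray(absorbance, dtype=float)
    return (a - a.min()) / (a.max() - a.min())

def half_time(t, na):
    """First time point at which normalized absorbance >= 0.5."""
    return t[np.argmax(na >= 0.5)]

t = np.arange(0, 60, 5)                      # minutes
a = 0.2 + 0.6 / (1 + np.exp(-(t - 20) / 4))  # illustrative gelation curve
na = normalize_turbidity(a)
print(half_time(t, na))
```

Normalizing this way lets gels made from different stocks and pepsin concentrations be compared on a common scale, as in Fig. 2.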


RESULTS IHC staining showed that collagen IV and laminin are maintained after decellularization (Fig. 1).

The strain sweep showed that the hydrogels displayed a constant G' and G'' until approximately 10% strain. The 2 mg/ml hydrogel exhibited fewer fiber crosslinks and higher porosity compared to the 5 and 10 mg/ml hydrogels (Fig. 4).

Figure 1: (A) Native and (B) decellularized ovarian tissues stained for collagen IV. (C) Native and (D) decellularized ovarian tissues stained for laminin.

Hydrogels digested with 0.5 mg/ml pepsin were unstable, while those digested with 1 and 1.5 mg/ml pepsin were stable at all ECM concentrations. At the 2 mg/ml ECM concentration, hydrogels digested with 2 or 3 mg/ml pepsin were unstable. No difference in stability was found between the 10 and 20 mg/ml stock concentrations (Fig. 2).

Figure 2: Normalized absorbance for hydrogels digested with varying concentrations of pepsin, and comparison of hydrogel stability of 10 and 20 mg/ml stock concentrations digested with 1 mg/ml pepsin.

Gelation occurred within the first 10 minutes for all ECM concentrations, with a time-invariant G’ and G’’ after gelation (Fig. 3). Significant differences in stiffness were found (p<0.0001) between the 2, 5, and 10 mg/ml hydrogels at all time points.
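For reference, a parallel-plate time sweep reports G' and G'' from the stress amplitude, strain amplitude, and phase lag between them. A small sketch of that standard relationship (the numeric values below are illustrative, not measurements from this study):

```python
import math

def dynamic_moduli(stress_amp, strain_amp, delta):
    """Storage and loss moduli from an oscillatory shear test:
    G'  = (sigma0 / gamma0) * cos(delta)
    G'' = (sigma0 / gamma0) * sin(delta), with delta in radians."""
    ratio = stress_amp / strain_amp
    return ratio * math.cos(delta), ratio * math.sin(delta)

# Illustrative values: 5% strain amplitude, 10 Pa stress amplitude,
# small phase lag typical of a predominantly elastic gel
g_storage, g_loss = dynamic_moduli(10.0, 0.05, math.radians(10))
print(g_storage > g_loss)  # True whenever delta < 45 degrees
```

A gel with G' > G'' (small phase lag) is dominated by elastic rather than viscous behavior, which is the regime described for the stabilized hydrogels here.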

Figure 3: Time sweep depicting the change in G' and G'' of 2, 5, and 10 mg/ml hydrogels over a 40-minute period.

Figure 4: SEM images of (A) 2, (B) 5, and (C) 10 mg/ml hydrogels at 8kX magnification.

DISCUSSION IHC staining confirmed that the proteins essential for cellular interactions with the ECM, collagen IV and laminin, are present in the hydrogels to facilitate proper follicle development. The TGK assay showed that digestion with 1 and 1.5 mg/ml pepsin formed the most stable hydrogels. Ovarian hydrogels at all ECM concentrations demonstrated a time-invariant stiffness after gelation, which ensures that a stable environment is provided for proper follicle maturation over time. Hydrogels with higher ECM concentrations are stiffer than those with lower ECM concentrations, which is supported by the SEM images: the 10 mg/ml hydrogel exhibits more fiber crosslinks, and therefore higher resistance to stretching, compared to hydrogels with lower ECM concentrations. The average cell traction force is approximately 900 pN [3], which will produce a strain much lower than 10% on the hydrogels, showing that the hydrogels exhibit sufficient stiffness to withstand forces applied by the cells. Future studies will focus on using crosslinking reagents to further increase hydrogel stiffness while maintaining a constant ovarian ECM concentration. REFERENCES 1. Hulvat et al. Current Treatment Options in Oncology 10, 308-307, 2009. 2. Brown et al. Tissue Engineering 17, 411-421, 2011. 3. Mousavi et al. Physical Biology 11, 026002, 2014. ACKNOWLEDGEMENTS Funding was provided by the University of Pittsburgh Swanson School of Engineering, Dr. Bryan Brown, and the Office of the Provost.


PREDICTING MECHANICAL PROPERTIES WITH QUANTITATIVE ULTRASOUND MEASURES Ryan T. Black, Gerald A. Ferrer, Masahito Yoshida, Volker Musahl, and Richard E. Debski Orthopaedic Robotics Laboratory, Department of Bioengineering and Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA Email: rtb18@pitt.edu Web: http://www.engineering.pitt.edu/labs/orl/ INTRODUCTION Ultrasound is a common imaging modality used to evaluate musculoskeletal tissues non-invasively and dynamically [1]. Tissue geometry can be assessed objectively, but determining tissue quality is largely subjective. As a result, clinical ultrasonography examinations have low repeatability between examiners [1]. Previous studies have proposed the use of quantitative ultrasound measures (QUS) to objectively evaluate tissue quality using ultrasound imaging [1,4,5]. In addition, studies have shown that better tendon quality is correlated with QUS measures, including higher values of echogenicity and variance and lower values of skewness and kurtosis [4]. However, the relationship between these QUS measures and the mechanical properties of musculoskeletal tissues, which are representative of tissue quality, is poorly understood. Therefore, the objective of this study is to relate mechanical properties of human tendon with QUS measures in order to quantify tendon quality. It was hypothesized that high values of echogenicity and variance and low values of skewness and kurtosis are related to greater mechanical properties, representing better tissue quality. METHODS Five fresh-frozen cadaveric long head of biceps tendons (LHBT) were harvested from intact shoulder specimens (59.8 ± 9.6 years old). The LHBT was then dissected into a "dog-bone" shape approximately half the width of the tendon to ensure failure in the midsubstance [2,3].
Two rubber band markers were superglued to the tendon at the ends of the "dog-bone" region to provide a point of reference and ensure repeatability between ultrasound images (Fig. 1). Using a laser scanner (NextEngine Desktop 3D Scanner, Santa Monica, CA, USA), the cross-sectional area of the "dog-bone" region of the tendon was measured [2]. The LHBT was then secured to the materials testing machine (Instron Model 5965,

Norwood, MA, USA) for uniaxial tensile testing using soft tissue clamps. A layer of skin was wrapped around the tendon to allow for ultrasound imaging during mechanical testing. For mechanical testing, a 1N preload was applied to the LHBT, followed by preconditioning from 1-10N for 20 cycles at 10 mm/min. After preconditioning, the LHBT was loaded from 1-30N for 50 cycles at 10 mm/min. The max load of a loading set was then increased by 20N until tendon failure occurred (i.e., 1-30N, 1-50N, etc.). Following each loading set, three ultrasound images (GE LOGIQ S8, Fairfield, CT, USA) were taken at the max load of that loading set by a single experienced ultrasound examiner, using a rig designed to ensure repeatable placement of the ultrasound probe. DATA PROCESSING For each ultrasound image, a region of interest (ROI) was selected as a box centered between the rubber band markers (Fig. 1). QUS measures (skewness, kurtosis, variance, echogenicity) were calculated from the grayscale distribution of the ROI [1,4,5]. QUS measures from each of the three ultrasound images taken for a loading set were adjusted to account for baseline QUS measures of surrounding, unloaded fat tissue. Mechanical properties, including toe region tangent modulus and linear region tangent modulus, were determined using the last cycle of the stress-strain curve for each loading set, while creep was determined using the last and initial cycles of the stress-strain curve for each loading set. A backwards multiple linear regression was performed to test the influence of QUS measures on mechanical properties of the tendon. Significance was set at p < 0.05.
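The four QUS measures are first-order statistics of the ROI grayscale distribution, so they can be computed directly. A sketch of that computation (this mirrors the description in the text, not the authors' exact pipeline; the ROI below is synthetic):

```python
import numpy as np
from scipy import stats

def qus_measures(roi):
    """Compute the four QUS measures from the grayscale values of an
    ultrasound region of interest: echogenicity (mean gray level),
    variance, skewness, and kurtosis of the distribution."""
    g = np.asarray(roi, dtype=float).ravel()
    return {
        "echogenicity": g.mean(),
        "variance": g.var(),
        "skewness": stats.skew(g),
        "kurtosis": stats.kurtosis(g),  # excess kurtosis (0 for a normal distribution)
    }

rng = np.random.default_rng(0)
roi = rng.normal(loc=120, scale=15, size=(64, 64))  # synthetic ROI gray levels
m = qus_measures(roi)
print(sorted(m))
```

Subtracting the same statistics computed over unloaded fat, as the methods describe, would then give the baseline-adjusted values used in the regression.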


Figure 1: Typical ultrasound images of the LHBT. The ROI is defined by the yellow box in each image. A) Ultrasound image at 15 MPa, within the toe region of the stress-strain curve. B) Ultrasound image at 28 MPa, within the linear region of the stress-strain curve.

RESULTS A significant multiple linear regression model was found for each mechanical property determined in this study, involving the interaction of at least two QUS measures as predictors (Table 1). Both creep and linear region tangent modulus were predicted by the interaction between kurtosis and variance, while toe region tangent modulus was predicted by the interaction between skewness, kurtosis, and variance. The sign of the beta coefficient indicates the type of relationship between the variables (i.e., positive indicates a direct relationship, negative an inverse relationship). For example, creep was found to be inversely proportional to kurtosis and variance (Creep = (-6.21)(Kurtosis) + (-0.05)(Variance) + 26.98); thus, decreasing kurtosis and variance were related to increasing creep. There were no other significant multiple linear regression models between the mechanical properties (Creep, Toe Region Tangent Modulus, Linear Region Tangent Modulus) and the QUS measures (Skewness, Kurtosis, Variance, Echogenicity).

Table 1: Summary of multiple linear regression analysis performed between QUS measures and mechanical properties.
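The reported creep model can be evaluated directly from its fitted coefficients; a worked example (the kurtosis and variance inputs here are invented for illustration, not measured values):

```python
def predict_creep(kurtosis, variance):
    """Creep predicted from the regression reported in the results:
    Creep = -6.21 * Kurtosis - 0.05 * Variance + 26.98."""
    return -6.21 * kurtosis - 0.05 * variance + 26.98

# Illustrative inputs: -6.21*2 - 0.05*50 + 26.98 = 12.06
print(round(predict_creep(2.0, 50.0), 2))  # 12.06
```

The negative coefficients make the inverse relationships concrete: raising either kurtosis or variance lowers the predicted creep.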

DISCUSSION Previous studies have shown that different measures of tissue quality individually correlate with single QUS measures [4]. The results of this study show that the interaction of multiple QUS measures can predict mechanical properties, which are representative of tissue quality. Both creep and linear region tangent modulus were shown to be inversely related to kurtosis and variance, while toe region tangent modulus was shown to be directly related to skewness and inversely related to kurtosis and variance. These results do not support our initial hypothesis; however, the interaction between multiple QUS measures as predictors of mechanical properties influences the relationships observed between the QUS measures and mechanical properties. In the future, other musculoskeletal tissues, including rotator cuff tendons, will be assessed using QUS measures to determine tissue quality. These QUS measures will provide clinicians with a repeatable tool to predict mechanical properties of musculoskeletal tissue using ultrasonography, providing insight into tissue quality and microstructure. SIGNIFICANCE Clinicians will be able to use QUS measures to better assess the mechanical properties of musculoskeletal tissue, representative of tissue quality, with ultrasonography, a common, inexpensive, and dynamic imaging modality, to improve treatments. REFERENCES 1. Collinger J et al. Academic Radiology. 2009; 16:1424-1432. 2. McGough R et al. Knee Surg Sports Traumatol Arthrosc. 1996; 3:226-229. 3. Kolz C et al. Clinical Biomechanics. 2015; 30:940-945. 4. Collinger J et al. Am. J. Phys. Med. Rehabil. 2010; 89:390-400. 5. Collinger J et al. PM R. 2010; 2:920-925. ACKNOWLEDGEMENTS Support from the University of Pittsburgh Department of Bioengineering, Swanson School of Engineering, the Department of Orthopaedic Surgery, and NSF Fellowship Grant No. 1247842 is gratefully acknowledged.


Impact of Screw Length on Fixed Proximal Scaphoid Fracture Biomechanics: In Vitro Study with Cyclic Loading and Load to Failure Samik Patel, Juan Giugale MD, Nathan Tiedeken MD, Richard E. Debski PhD, Robert Kauffman MD, John Fowler MD Orthopaedic Robotics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: sdp34@pitt.edu, Web: http://orl.bioe.pitt.edu/ Introduction: Internal fixation used as treatment for scaphoid fractures has increased with advances in surgical techniques [1]. Proximal pole scaphoid fractures present a clinical challenge due to their lower success rate when cast immobilization and internal fixation are utilized [2]. Screws maximizing length have been shown to impact wrist motion for fixed scaphoid waist fractures [3]. Additionally, screws with greater diameters and varying geometry have been shown to impact stiffness and ultimate load at the fracture site for fixed scaphoid waist fractures. Currently, the effect of screw length on the structural properties of fixed proximal scaphoid fractures is not well understood, even though fixation of proximal scaphoid fractures has a high nonunion rate. Therefore, the objective of this study is to determine the bending stiffness and ultimate load of fixed proximal scaphoid fractures for screws of various lengths. Methods: Fifteen fresh-frozen cadaveric scaphoids (57.6±10.3 years of age) underwent an oblique osteotomy with a 0.52mm blade saw to simulate a 7mm proximal oblique fracture with respect to the long axis. Each scaphoid was randomly assigned for fixation with one of 3 possible screw lengths (n=5) of a 2.5mm diameter central threadless screw (Stryker, Kalamazoo, MI, USA): 10mm, 18mm, or 24mm. The distal pole of each scaphoid was potted in epoxy putty (Bondo, St. Paul, MN, USA) with the scaphoid long axis perpendicular to the horizontal plane. The scaphoid was then oriented at 45° to simulate a clinical dorsal-to-volar bending load (Figure 1).
Each specimen was cyclically loaded for 1000 cycles with an 800Nmm bending moment, where the applied load (40.0N-66.7N) depended on the moment arm between the potting and a plunger driven by a materials testing machine (Instron, Eden Prairie, MN). Stiffness was calculated at the 1000th

cycle, and cyclic failure was defined as either plunger extension greater than 2.5mm or a proximal pole crack in the construct [4]. Each specimen was loaded to failure after cyclic loading. Failure was indicated by loss of fracture reduction or a proximal crack in the construct as a result of loading (Figure 2); this was defined as a distinct decrease in the load-displacement curve. One-way analysis of variance (ANOVA) tests were performed to evaluate differences in stiffness and load to failure. Significance was set at p<0.05. Results: No significant difference in long axis length between the randomized groups of scaphoids was found. Additionally, no significant difference in stiffness at the 1000th cycle between screw lengths was found (Figure 3). All specimens with 18mm and 24mm screw fixation withstood cyclic loading; however, 1 specimen fixed with a 10mm screw failed during cyclic loading. On average, proximal fractures fixed with a 10mm screw withstood 845 ± 346 cycles. Load to failure was significantly (p<.05) affected by the screw length utilized for fixation. A significant difference (p<.05) in load to failure between a 10mm screw and a 24mm screw was found; however, no significant difference (p=.606) occurred in load to failure between an 18mm and a 24mm screw (Figure 3). Discussion: This study examined the effect of screw length on bending stiffness during cyclic loading and on load to failure. The results of this study show that a screw that maximizes length (24mm) within a specimen withstands significantly greater load to failure than a screw that is centered (10mm) with respect to the fracture site. The 10mm screw gains less purchase in the bone on either side of the fracture compared to the 24mm screw. However, there is no statistically significant


difference in load to failure between an 18mm screw, which does not maximize its length within the specimen, and a 24mm screw; this could be because the 18mm screw is more centered with respect to the fracture site than the 24mm screw. Our data contradict a previous study that contends that maximizing screw length significantly optimizes wrist biomechanics and fracture healing [3].

Significance: The results of this study will provide surgeons with useful information to help determine an optimal screw length for the fixation of proximal scaphoid fractures using central threadless screws. Fixation utilizing an 18mm screw compared to a 24mm screw minimizes the risk of injury to the distal radius articulation during surgery.

Acknowledgements: Summer support from the Swanson School of Engineering is gratefully acknowledged.

References: [1] Rhemrev SJ et al. Int J Emerg Med. 2011; 4:4. [2] Trumble M et al. J Am Soc Surg Hand. 2001; 155-171. [3] Dodds SD et al. J Hand Surg Am. 2006; 405-413. [4] McCallister WV et al. J Bone Joint Surg Am. 2003; 72-77.

A

B

Figure 2A: Displays failure by proximal crack. Figure 2B: Displays failure by loss of fracture reduction.

Figure 1: Experimental setup with the distal scaphoid potted in the epoxy putty. Bending load was applied with a plunger attached to a materials testing machine. The scaphoid was oriented at a 45° angle.
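The applied load in this setup follows directly from the fixed 800 Nmm bending moment and the moment arm between the potting and the plunger. A quick check that the load range quoted in the methods corresponds to plausible arm lengths (the 12-20 mm arms are inferred here, not reported by the authors):

```python
def applied_load(moment_nmm, arm_mm):
    """Load required to apply a fixed bending moment through a
    lever arm: F = M / d (units: N = Nmm / mm)."""
    return moment_nmm / arm_mm

# An 800 Nmm moment through 20 mm and 12 mm arms spans the
# 40.0-66.7 N load range reported in the methods.
print(round(applied_load(800, 20.0), 1))  # 40.0
print(round(applied_load(800, 12.0), 1))  # 66.7
```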

Figure 3: Stiffness at the 1000th cycle during cyclic loading with an 800 Nmm bending moment, and load to failure withstood, by screw length utilized for fixation. (mean ± SD, *p<.05)


Supporting Infrastructure for Last Mile Solutions Rosemore S. and Zastrow S. Swanson School of Engineering, University of Pittsburgh College of Engineering, University of Wisconsin, Madison

Introduction As urban populations increase in the dense city-state of Singapore, the demand for fast, simple, and affordable means of transportation is continually rising. This demand led to our team's assignment of the design project "Personal Mobility for the Singaporean-Built Environment." Our task was to contribute to innovation regarding personal mobility devices (PMDs) and their use within Singapore's multi-modal transport system. First and last mile transportation refers to the initial and final distances travelled to or from public transportation. The challenge posed to the team was to research a better way of incorporating electric scooters (e-scooters) or other PMDs in Singapore. E-scooters are rising in popularity because they provide fast transport and rely only on electricity, sparing people from spending a large amount of time walking or from physically exerting themselves in Singapore's hot climate. While e-scooters have become more popular, many users face difficulty combining their use with public transportation, and shortcomings of the current infrastructure limit further adoption. Our team was given eight weeks in Singapore and four weeks in Pittsburgh to narrow the scope to a feasible design challenge, brainstorm and complete design iterations, prototype those iterations, and demonstrate our final design. We were given some initial findings and were also asked to perform separate research

into how to best address the issues of e-scooter users. After surveying current news articles, contacting local users, and researching areas around Singapore, we concluded that by providing a secure storage area for e-scooters, users would be able to use their devices more freely throughout the city. Methods The team began by narrowing down key elements of public transportation and convenience. The first three weeks of research consisted of personal electric scooter trials, land surveying, and contacting local e-scooter companies for input. The supervising professor recommended several location types for the team to explore and gather information, such as residential areas and tourist spots. Using this information, the team was able to come up with several initial hypotheses for PMD usage in Singapore. The team also considered the legality of certain approaches; with so many citizens in such a small area, Singapore is rigorous about enforcing rules and safety. PMD rules are currently being reevaluated by a city council to modernize them and better represent the citizens. Once the team had an understanding of the current laws and usage of PMDs, a survey of local users and non-users was conducted to gauge interest in services and features. With this feedback, the team was able to begin prototyping, focusing on a public storage option for e-scooter users.


Prototyping Phase The initial ideas focused on a long-lasting design requiring minimal maintenance and space. The original designs were narrowed down using a design matrix, and a folding-locker concept was selected. The team worked with 1:5 scale models of the largest and average scooters on the market. The materials used were foam core, laser-cut acrylic, and 3D-printed model doors. The final prototype is shown in Figure 1. The user rolls their scooter into the folding locker, where they can fold it, rotate the door up, and lock it; alternatively, they can leave the scooter unfolded and lock it to the locker, which benefits scooters with non-folding seats. The initial design was reduced in size and the hinging mechanism was refined. Other options considered included an electronic lock and rental service, and a second level to increase space efficiency. Discussion

The final design focuses on simplicity and easy access for the user while remaining secure. The future design intent is to move to a full-scale model, which would be made of sheet metal with a door possibly made of high-density polyethylene (HDPE). The hinging mechanism would be improved to a full four-bar action. Surveys would also be distributed to collect more information on interest in features of the system. This product focuses on Singaporean usage, and further exploration could be made to adapt it to other cities and countries. Overall, the system meets the needs of the users to better incorporate PMD usage into their schedules.

Figure 1: Final prototype, including scale-model scooter.

References Active Mobility Advisory Panel Recommends Rules and Code of Conduct for Safe Sharing of Paths. Rep. Land Transport Authority, 17 Mar. 2016. Web. June-July 2016. Lee, Amanda, and Laura Philomin. "The Big Read: As More People Add Zip, Congestion Moves from Roads to the Pavement." Today. N.p., 13 Nov. 2015. Web. June-July 2016. Siong, Olivia. "Allow Bicycles, Personal Mobility Devices on Footpaths, but with Speed Limits: Advisory Panel." Channel NewsAsia. N.p., 17 Mar. 2016. Web. June-July 2016.

Acknowledgements Research and prototyping conducted with the Design-Centric Programme of the National University of Singapore. Funding and advising provided by the Swanson School of Engineering, University of Pittsburgh.


SURGICAL APPLICATIONS OF HAPTIC RENDERING DEVICES FOR MEMBRANE PUNCTURE SIMULATION Avin Khera, Randy Lee, George Stetten Visualization and Image Analysis Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: avk12@pitt.edu, Web: http://vialab.org

INTRODUCTION
During microsurgical procedures, surgeons must rely on visual feedback when forces are below the threshold of touch. Current haptic platforms that model tissue interaction, such as the Geomagic Touch and the Butterfly Haptics Magnetic Levitation Haptic Device (MLHD), are expensive, slow, limited in force output, or subject to inertial effects due to their heavy actuator platforms. Our Haptic Renderer was developed to address these limitations [1]. It uses a woofer loudspeaker actuator with force and displacement sensors to simulate tissue interaction. We describe here Proportional-Integral-Derivative (PID) control of the device to improve its performance in rendering tissues and to assist the development of future haptic surgical tools.

METHODS
Our actuator is an 80 W subwoofer speaker (Faital PRO 5FE120) with a 5 inch (12.7 cm) diameter and a cone mass of 11 grams. It is capable of 9.5 mm displacement from rest and can move steadily against forces up to 10 N [1]. Speakers have the advantage of direct electromechanical transduction, with force related simply to the voltage across the speaker coil. For sensing purposes, a stereolithographic scaffold with a Honeywell FS03 Force Sensor was placed on top of the speaker cone. An optical IR transceiver (Vishay TCRT5000L) was mounted and used to measure speaker cone position. The total resulting mass of the cone including the sensor was 23.6 grams. The renderer is controlled by an Analog Devices ADuC7026

microprocessor. A Wixel USB module (Pololu; Las Vegas, NV) enabled communication between the "master" microprocessor and a "slave" computer for data logging and mode selection. A Proportional-Integral-Derivative (PID) feedback controller was used to regulate speaker current based on force and displacement measurements [2]. The error (e) was determined by subtracting the sensor voltage at some set point from the current sensor voltage. Proportional control multiplies a proportional gain (Kp) by the error, while the integral (Ki) and derivative (Kd) terms accumulate previous errors and predict future errors with their respective gains [2]. Two primary modes of PID control were developed: (1) Virtual Wall (VW) mode, in which a set-point was chosen based on the desired displacement, with the error used to resist displacement from the set-point with maximum force, and (2) Zero Stiffness (ZS) mode, in which speaker voltage was used to minimize the force detected by the sensor and thereby simulate open air. In each case, PID control was used to determine the output voltage based on the error, as shown in Eq. 1:

V_out(t) = Kp * e(t) + Ki * ∫[0,t] e(t') dt' + Kd * de(t)/dt

Eq. 1: PID controller for output voltage

Open-loop Ziegler-Nichols tuning was performed to empirically determine the PID gains from the ultimate gain (Ku), the value of Kp at which the system began oscillating, and the period (Tu) of oscillation for each mode. Anti-windup control was implemented by scaling down the integral component by the difference between the calculated controller output and the maximum actuator output [2]. A slew-rate limiter and a low-pass filter were added to compensate for high-frequency signal noise accentuated by the derivative gain term.

RESULTS
Tyreus-Luyben PID controller settings were adopted over Ziegler-Nichols due to their greater stability with the system. The final calculated Ku and Tu are shown in Table 1. PID control in Virtual Wall mode produced forces opposing the user, resulting in almost zero displacement, as seen in Figure 2. Similarly, the new rendition of Zero Stiffness mode actively removed the tension at all positions of the speaker cone, as seen in Figure 3.

Table 1: Calculated Ku and Tu values

       Virtual Wall   Zero Stiffness
Ku     1.50           7.348
Tu     0.025          0.002
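The tuning and anti-windup scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' firmware: the standard Tyreus-Luyben rules (Kp = Ku/2.2, Ti = 2.2 Tu, Td = Tu/6.3) convert the measured Ku and Tu from Table 1 into gains, and the update loop clamps the output to the actuator limit while bleeding off the excess integral term.

```python
# Hedged sketch of a discrete PID update with Tyreus-Luyben tuning and
# back-calculation anti-windup. Names and structure are illustrative.

def tyreus_luyben(ku, tu):
    """PID gains from ultimate gain Ku and oscillation period Tu."""
    kp = ku / 2.2
    ki = kp / (2.2 * tu)   # integral gain = Kp / Ti
    kd = kp * tu / 6.3     # derivative gain = Kp * Td
    return kp, ki, kd

class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit      # maximum actuator output (e.g. volts)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # Derivative acts on raw error here; the device additionally uses a
        # low-pass filter and slew-rate limiter to tame derivative noise.
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = (self.kp * error + self.ki * self.integral
               + self.kd * derivative)
        # Anti-windup: clamp the output and remove the excess integral
        if abs(out) > self.out_limit:
            clipped = max(-self.out_limit, min(self.out_limit, out))
            self.integral -= (out - clipped) / self.ki
            out = clipped
        return out

# Virtual Wall mode gains from Table 1 (Ku = 1.50, Tu = 0.025 s)
kp, ki, kd = tyreus_luyben(1.50, 0.025)
```

The back-calculation step keeps the integrator from accumulating error while the actuator is saturated at the ends of its displacement range, which is what eliminates the "stickiness" described in the Discussion.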

Figure 2: Actuator in virtual wall mode receiving large push/pull forces, but undergoing almost zero displacement (red, middle)

Figure 3: Actuator in zero stiffness mode undergoing rapid changes in displacement and detecting no forces (blue, top)

DISCUSSION
Dynamic control of the actuator was achieved with PID, successfully rendering both a very stiff membrane in VW mode and an empty-space simulation in ZS mode. In both modes, PID control of the speaker actuator minimized errors with lower latency than simpler P or PI methods. The addition of derivative control predicted future errors and balanced the error averaging performed by the integral term. Prior to anti-windup control, the actuator suffered from repeated integration of previous errors; anti-windup eliminated the "stickiness" related to actuator saturation at the ends of the displacement range.

REFERENCES
1. Khera et al. One-Dimensional Haptic Rendering Using Audio Speaker with Displacement Determined by Inductance. Machines, 2016, 4, 9; doi:10.3390/machines4010009
2. Astrom, K. J., & Murray, R. M. (2008). PID Control. In Feedback Systems: An Introduction for Scientists and Engineers (pp. 293-312). Princeton, NJ: Princeton University Press.

ACKNOWLEDGEMENTS
NIH R01EY021641, NSF IIS-1518630, Research to Prevent Blindness, Dr. George Stetten, and the University of Pittsburgh Swanson School of Engineering Bioengineering Department REU


COMPUTATIONAL MODELING OF WALL STRESS IN CEREBRAL AORTIC ANEURYSMS ENDOVASCULARLY COILED WITH DIFFERENT PACKING DENSITIES Riley S. Burton1, Joseph E. Pichamuthu2,3,4, Justin S. Weinbaum2,3, Brian T. Jankowitz6 and David A. Vorp2,3,4,5 1. Department of Mechanical Engineering, University of Pittsburgh, Pittsburgh, PA 2. Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 3. McGowan Institute for Regenerative Medicine, Pittsburgh, PA 4. Center for Vascular Remodeling and Regeneration, University of Pittsburgh, Pittsburgh, PA 5. Department of Surgery, University of Pittsburgh, Pittsburgh, PA 6. Department of Neurosurgery, University of Pittsburgh School of Medicine, Pittsburgh, PA Email: rsb61@pitt.edu, Web: http://engineering.pitt.edu/vorplab/

INTRODUCTION
Unruptured cerebral aortic aneurysms (CAA) are prevalent in up to 6.6% of the general population [1]. CAAs are abnormal focal dilations of intracranial arteries resulting from medial degeneration of the vessel walls. While most CAAs exhibit mild symptoms at most, risk factors such as hypertension, smoking, and large aneurysm size (>2 cm) pose a serious threat of sudden rupture and subsequent subarachnoid hemorrhage. CAA rupture is a biomechanical phenomenon that occurs when local wall stress exceeds the wall's maximum strength due to hemodynamic factors [1]. Though primarily treated through surgical clipping, endovascular coiling serves as a minimally invasive alternative [2]. In this procedure, CAAs are filled with detachable coils to induce blood clotting, thereby restricting blood flow in the aneurysm. Optimal coil packing densities (CPD), generally above 20-25%, are targeted to ensure complete filling and prevent coil compaction [3]. The goal of this study is to investigate spatial variation in aneurysmal wall stress for different CPDs and to evaluate the effectiveness of coiling in simple versus branched CAAs.

METHODS
Virtual 3D geometries of CAAs were constructed from Digital Subtraction Angiography scans of patients (n=7) under observation using an approved IRB (#PRO13080334). First, scans were imported into Mimics (Materialise, Plymouth, MI), where lumen boundaries of CAAs and their parent vessels were profiled using a pixel thresholding algorithm. After isolation, the boundaries were rendered into coarse 3D volumes modeling the CAAs. The models were exported as point clouds for further processing in Geomagic (3D Systems, Rock Hill,

SC), where they were smoothed, corrected for rendering flaws, and patched. Surface geometries of the models were then exported in the form of non-uniform rational B-splines. A coil thrombus mass (CTM) was created for each CAA in SolidWorks (Dassault Systèmes, Waltham, MA). CTMs were modeled as solids filling the dome region of the aneurysms. CAA surface models and CTM solid models were then meshed into quadrilateral shell elements and tetrahedral solid elements, respectively, in the commercial finite element analysis software Abaqus (Dassault Systèmes, Waltham, MA). Each CAA wall was treated as a homogeneous, nonlinear, isotropic, hyper-elastic, and incompressible material with a uniform thickness of 0.36 mm and modeled as reported in the literature with the strain energy function W_wall [1]:

W_wall = C1(I1 - 3) + C2(I2 - 3) + C3(I1 - 3)(I2 - 3)   (Eq. 1)

where C1, C2, and C3 are material parameters [59.8, 16.8, 5710 kPa] characteristic of CAA wall properties, and I1 and I2 are strain invariants. CTMs were modeled as distinct homogeneous, nonlinear, isotropic, hyper-elastic, and compressible materials. Four fillings were created for each patient: one exhibiting the mechanical properties of a clot made entirely of solidified blood, and three clots made with different coil packing densities (10%, 20%, and 30%). The strain energy function used for each material was derived using the equation fitting built into Abaqus. A reduced fourth-order polynomial strain energy function W_CTM was created for each filling from uniaxial compression test data:

W_CTM = Σ (i = 1 to 4) Ci (I1 - 3)^i   (Eq. 2)

where Ci (defined in Table 1) is a set of parameters characteristic of CTM material properties and I1 is a strain invariant.

Table 1: Filling material parameters (coefficients in kPa)

Filling       C1        C2           C3            C4
Blood Clot    92.14     23.88 E+3    -80.41 E+4    11.03 E+6
CPD10         405.97    12.32 E+3    -35.26 E+4    44.11 E+5
CPD20         336.03    83.85 E+3    -28.13 E+5    38.53 E+6
CPD30         702.10    10.05 E+4    -32.04 E+5    42.74 E+6
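The two strain energy functions above (Eq. 1 for the wall, Eq. 2 for the CTM fillings with the Table 1 coefficients) can be evaluated with a short sketch; this is illustrative only, not the Abaqus material implementation, and the invariant values passed in are arbitrary examples.

```python
# Sketch: evaluate W_wall (Eq. 1) and the reduced 4th-order polynomial
# W_CTM (Eq. 2) using the coefficients reported in the abstract.

WALL_C = [59.8, 16.8, 5710.0]  # C1, C2, C3 for the CAA wall, in kPa

COEFFS_KPA = {  # C1..C4 for each filling, from Table 1 (kPa)
    "Blood Clot": [92.14, 23.88e3, -80.41e4, 11.03e6],
    "CPD10":      [405.97, 12.32e3, -35.26e4, 44.11e5],
    "CPD20":      [336.03, 83.85e3, -28.13e5, 38.53e6],
    "CPD30":      [702.10, 10.05e4, -32.04e5, 42.74e6],
}

def w_wall(i1, i2):
    """Wall strain energy (kPa) from strain invariants I1, I2 (Eq. 1)."""
    c1, c2, c3 = WALL_C
    return c1 * (i1 - 3.0) + c2 * (i2 - 3.0) + c3 * (i1 - 3.0) * (i2 - 3.0)

def w_ctm(i1, coeffs):
    """CTM strain energy (kPa): sum_i C_i * (I1 - 3)^i for i = 1..4 (Eq. 2)."""
    x = i1 - 3.0
    return sum(c * x ** (i + 1) for i, c in enumerate(coeffs))
```

In the undeformed state (I1 = I2 = 3) both functions are zero, as a strain energy function must be; the stiffer CPD fillings accumulate energy faster with deformation, consistent with the stress reductions reported in the Results.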

DATA PROCESSING
The measure of wall stress used was the von Mises stress, as shown in Figure 1. Mean wall stress (MWS) was defined as the average von Mises stress acting on the aneurysm wall. MWS was computed for each model along with relative reductions for each CTM-treated model. Appropriate statistical significance t-tests and an analysis of variance were performed in MATLAB (MathWorks, Natick, MA) to evaluate significant differences. All tests were performed at a 95% confidence level.

Figure 1: von Mises stresses for unfilled (No Fill) and treated (Blood Clot, CPD10, CPD20, CPD30) models; color scale 0.0001-1000 kPa
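The wall-stress measure used in the Data Processing step can be sketched as follows. This is a generic illustration of the von Mises formula and the element-average MWS, not the MATLAB post-processing the authors used; the helper names are assumptions.

```python
import math

# Sketch: von Mises stress from Cauchy stress components, and mean wall
# stress (MWS) as the average over a model's shell elements.

def von_mises(sx, sy, sz, txy=0.0, tyz=0.0, tzx=0.0):
    """von Mises equivalent stress from normal and shear components."""
    return math.sqrt(
        0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
        + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)
    )

def mean_wall_stress(element_stresses):
    """MWS: average von Mises stress over the aneurysm wall elements."""
    return sum(element_stresses) / len(element_stresses)
```

A quick sanity check: under uniaxial tension the von Mises stress equals the applied stress, and under pure shear it equals sqrt(3) times the shear stress.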

RESULTS
The MWS for CTM-filled CAAs was significantly lower than that of unfilled models (p < 0.05), with average reductions of 96.4%, 97.0%, 97.2%, and 97.4% for the four fillings, as shown in Figure 2. MWS reductions between filled CAAs were also significantly different (p < 0.05), with larger reductions occurring in the presence of coils and at larger coil packing densities. Significant reductions in MWS were also found for each subject tested individually. MWS reductions in simple CAAs were significantly higher than those in branched models (p < 0.05).

Figure 2: Mean wall stresses (kPa, log scale) for unfilled (No Fill) and treated (Blood Clot, CPD10, CPD20, CPD30) models in each of the seven subjects, grouped by simple and branched geometry

DISCUSSION
The results from this study indicate that a CTM reduces CAA wall stress, suggesting that it could act as a cushion, absorbing hemodynamic forces and reducing the risk of rupture. Additionally, CTMs reduce wall stress more effectively at larger coil packing densities, perhaps as a result of higher stiffness. Although statistically significant, the differences between the packing densities tested do not imply a large clinical effect. The reduction in resultant wall stress with increasing CPD is negligible in comparison to the difference observed after initial filling of the CAAs, signifying that the level of CAA occlusion overshadows CTM strength in its contribution to stress reduction. These results align well with the findings of Sadato et al. [3], which suggest that residual volume after embolization of CAAs is the foremost consideration for preventing recanalization. Since tight coil packing is already targeted during embolization, further investigation of coil compaction is needed. Finally, CTM treatment is less effective for branched CAA formations than for simple CAAs. The lessened stress reduction in branched CAAs could be attributed to complex attachment geometries and larger contact areas between blood flow and the CTM.

REFERENCES
1. Costalat et al. Journal of Biomechanics 44.15, 2685-2691, 2011.
2. Debrun et al. Surgical Neurology 53.2, 150-156, 2000.
3. Sadato et al. PLoS ONE 11.5, 2016.

ACKNOWLEDGEMENTS
Patient images were obtained from the Department of Neurosurgery, University of Pittsburgh Medical Center. Funding was provided by the Swanson School of Engineering and the Office of the Provost.


SVF CELL-SEEDED TEVGs AND THE REMOVAL OF THE IN VITRO DYNAMIC CULTURE PERIOD Kamiel Saleh1, Darren Haskett2,5, Lauren Kokai3,5, Justin Weinbaum1,5, Antonio D’Amore1,2,5, William Wagner1,2,5,6, J. Peter Rubin3,5, David Vorp1,2,4,5,6 University of Pittsburgh, Departments of Bioengineering1, Surgery2, Plastic Surgery3, and Cardiothoracic Surgery4, McGowan Institute for Regenerative Medicine5, Center for Vascular Remodeling and Regeneration6

Email: kas367@pitt.edu

INTRODUCTION
Tissue engineered vascular grafts (TEVGs) seeded with autologous cells offer a treatment alternative to current coronary artery bypass procedures [1-4]. An attractive source of cells for TEVGs is the stromal vascular fraction (SVF) [5]. This cell population comes directly from donor adipose tissue and is composed of several phenotypes, including multipotent mesenchymal and endothelial progenitor cells [6]. In the past, the Vorp laboratory has dynamically cultured SVF cell-seeded scaffolds in vitro for 48 hours to ensure adequate cell binding to the scaffold before implantation (unpublished data). However, this 48-hour time period is impractical in a clinical environment. The goal of this study was to assess the necessity of this culture period. For this, we evaluated 1) the phenotypic differences between constructs with and without 48 hours of dynamic culture and 2) the remodeling of implanted SVF cell-seeded TEVGs without a 48-hour dynamic culture period.

METHODS
SVF cells were collected from young, healthy adult (<45 years, non-diabetic) adipose tissue and seeded onto poly(ester urethane)urea bilayered scaffolds using a rotational vacuum seeding device [3]. Seeded constructs were incubated in culture media (37 °C) for approximately 4 hours to allow for initial cell adhesion. Scaffolds were then either immediately fixed, or dynamically cultured for 48 hours and then fixed, for phenotypic comparison. Cells were analyzed for CD90, CD31, and CD34, markers of MSCs, endothelial cells, and endothelial progenitor cells, respectively, using immunofluorescent chemistry (IFC).

Furthermore, some SVF-seeded constructs were implanted on the day of SVF harvest as aortic interpositional grafts in our Lewis rat model and explanted after 8 weeks. The explanted, remodeled TEVGs were then stained using IFC for von Willebrand factor (vWF), alpha-smooth muscle actin (αSMA), and human CD90.

DATA PROCESSING
Images of stained constructs were analyzed by separately counting cell nuclei and cells staining positively for the given cell marker. The ratio of marker-positive cells to total cell nuclei was computed and averaged among data from each patient. A paired Student's t-test was then used to evaluate statistical differences between the immediate and 48-hour cultured constructs. Additionally, explanted TEVGs were analyzed for remodeling by noting the presence of vWF, αSMA, and CD90 stains.

RESULTS
Phenotypic analysis: The percentage of cells staining positively for CD90, CD31, and CD34 is shown in Table 1 below.

Table 1: Average cell population (n = 5 patients)

A paired t-test indicated no statistically significant difference between the CD90, CD31, or CD34 staining of the initial and 48-hour constructs.
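The comparison described in Data Processing can be sketched as follows. This is an illustrative reconstruction using only the standard library; the fractions below are made-up numbers, not the study's data.

```python
import math

# Sketch: fraction of marker-positive cells per patient, compared between
# the immediately fixed and 48-hour cultured constructs via paired t-test.

def positive_fraction(marker_positive, total_nuclei):
    """Ratio of cells staining positively to total cell nuclei."""
    return marker_positive / total_nuclei

def paired_t(a, b):
    """Paired Student's t statistic for two equal-length samples."""
    d = [x - y for x, y in zip(a, b)]          # per-patient differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical CD90-positive fractions for n = 5 patients
immediate = [0.42, 0.35, 0.51, 0.40, 0.38]
cultured_48h = [0.44, 0.33, 0.49, 0.43, 0.36]
t_stat = paired_t(immediate, cultured_48h)  # small |t| -> no difference
```

Pairing within each patient is what makes the test appropriate here: it removes the patient-to-patient variability that the Discussion notes is large relative to the means.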


Remodeling in vivo: TEVGs incubated for only 4 hours stained positively for human CD90 when explanted after a 5-day period and negatively for human CD90 when explanted after 8 weeks (Figure 1).

Figure 1: Human CD90 marker stain (red) of explanted TEVGs after 5 days (left) and 8 weeks (right). Cell nuclei stained with DAPI (blue). Image scale defined by 100 µm scale bar (bottom right).

Additionally, TEVGs tested positively for vWF and αSMA after an 8-week period, as is shown in Figure 2 below.

Figure 2: vWF (green) staining of the endothelial layer (left) and αSMA (red) staining (right) indicate the presence of TEVG remodeling. Image scale defined by 100 µm scale bar (bottom right).

DISCUSSION
The data in Table 1 revealed no phenotypic difference between constructs seeded with SVF at the two different lengths of pre-incubation. Although the standard deviations seem large in comparison to the means, these values are merely a result of biological sampling and patient-to-patient variability. This is confirmed by the paired t-test, which showed that no significant difference existed between constructs within the same patient group. Human CD90 was used to note the presence of human SVF cells in explanted TEVGs. Explanted TEVGs stained positive for CD90 after 5 days of implantation, but negative after 8 weeks (Figure 1). This suggests that human SVF cells were no longer present after an 8-week period of implantation. Additionally, positive staining for vWF and αSMA indicated the formation of endothelial and smooth muscle tissue after an 8-week period (Figure 2). These implantation and phenotypic comparison results indicate that a 48-hour dynamic culture period is unnecessary for successful remodeling of a TEVG and can be removed from the procedure to enhance clinical feasibility. Other cell phenotypes in the SVF have yet to be characterized for their presence in seeded scaffolds. It is pertinent to characterize all cell types that might be involved in TEVG remodeling. Hence, future work may involve staining for CD4, CD20, and CD68 to characterize the involvement of T cells, B cells, and macrophages, respectively. Furthermore, TEVG remodeling has only proven successful using SVF cells from healthy adult adipose tissue in Lewis rat models. It is unclear whether TEVG remodeling would occur in diabetic recipients, as MSCs from diabetic patients have been shown to have a diminished potential for promotion of fibrinolysis [7]. Since functional TEVG technology is desirable within the high-risk cardiovascular disease population, documenting TEVGs seeded with SVF cells from diabetes mellitus patients presents an additional direction for future study.

REFERENCES
1. Ye et al. J Biomed Mater Res A 100A, 3251-258, 2012.
2. Nieponice et al. Biomaterials 29, 825-33, 2008.
3. Soletti et al. Acta Biomater 6, 110-22, 2010.
4. He et al. Biomaterials 31, 8235-244, 2010.
5. Minteer et al. Clin Plast Surg 42, 169-79, 2015.
6. Cytom Part A 77A, 406, 2010.
7. Krawiec et al. Tissue Eng Part A 22, 765-75, 2016.

ACKNOWLEDGEMENTS
The authors would like to acknowledge the sources of support: an NIH grant to Dr. David Vorp (NIH R21 HL130784) as well as the Swanson School of Engineering and the Office of the Provost.


FABRICATION OF PATIENT-SPECIFIC INTRACRANIAL ANEURYSM MODELS FOR BURST TESTING Toby Zhu1, Joseph Pichamuthu1, 2, 5, Hritwick Banerjee6, Justin Weinbaum1, 2, Hongliang Ren6, David Vorp1, 2, 3, 4, 5 1 Department of Bioengineering, 2McGowan Institute for Regenerative Medicine, 3Department of Surgery, 4Department of Cardiothoracic Surgery, 5Center for Vascular Remodeling and Regeneration, University of Pittsburgh, Pittsburgh, PA, USA 6 Advanced Robotics Center, Department of Biomedical Engineering, National University of Singapore, Singapore Email: toz6@pitt.edu, Web: http://www.engineering.pitt.edu/vorplab/

INTRODUCTION
A cerebral or intracranial aneurysm (ICA) is a local dilation of an artery in the brain due to locally weakened blood vessel walls. This creates a balloon-shaped bulge in the thin artery wall that can rupture, and the ensuing subarachnoid hemorrhage can cause a stroke, coma, or even death. It is therefore of interest to understand how ICAs grow and eventually rupture in order to develop earlier diagnosis or treatment techniques. Current imaging technologies include computed tomography and magnetic resonance imaging, which can be used to generate three-dimensional computer-assisted design models. However, these 3D models only provide the shape of the ICA and do not reveal any dynamic deformation or strain variation information under pulsatile flow. It is also difficult to utilize these imaging techniques in hemodynamic and biomechanical analysis to study the growth and eventual rupture of the ICA. Currently, computational fluid dynamics (CFD) simulations are used in aneurysm and vasculature analysis to provide stress information [1]. It would be useful to create a physical experimental model in order to validate the CFD simulation data, which few studies have done to date [2]. Even fewer studies have measured the total wall stress in a physical aneurysm at the time of bursting.
In this study, we introduce a method to fabricate an in vitro aneurysm model reconstructed from patient-specific geometry data.

METHODS
3D Printing Approach
This method involved 3D printing two halves of an aneurysm and then combining them into a whole piece. A digital CAD file of an aneurysm was modified using Geomagic and cut in half along the parent artery to produce two non-symmetrical pieces. For this method, TangoBlackPlus material and an Objet260 Connex printer were selected. This material has a tensile strength of 1.3-1.8 MPa [3] (human vasculature: 1-3 MPa [2]).
Injection Molding Approach
A mold consisting of an outer shell and an inner portion was 3D printed with a LulzBot TAZ 5 printer using polylactic acid. The outer shell was designed in Geomagic and consisted of an aneurysm split into two pieces via a plane to allow for assembly and disassembly of the mold. One of the halves also had a small hole to allow for injection of the molding material. The inner portion consisted of a smaller version of the original aneurysm, offset inwards by 500 µm. As a result, the mold had empty space between the inner portion and the outer shell that the molding material could fill to create a membranous model. These printed pieces were scaled up 3X from the original CAD model for ease of fabrication given the relatively low resolution of the TAZ printer. Two molding materials were tested: Dragon Skin 10, with a tensile strength of 3.3 MPa, viscosity of 23,000 cps, and elongation at break of 1000% [4], and EcoFlex 0030, with a tensile strength of 1.4 MPa, viscosity of 3,000 cps, and elongation at break of 900% [5].


RESULTS
3D Printing Approach
Printing and gluing two halves of the aneurysm did not prove to be an effective approach. The fabricated models are meant to undergo burst testing, and models constructed this way would be weakest where the two halves are glued together. This would result in premature bursting or other artifactual results.
Injection Molding Approach
Injection molding with Dragon Skin 10 did yield a whole aneurysm piece. However, the process produced numerous air bubbles in the resulting model due to the material's high viscosity. Since there is little space in the mold for the injected material to flow, forcing a high-viscosity liquid through causes the formation of many air bubbles. Attempting to perform a burst test on this model, with its uneven surface, would not mimic the true bursting of the aneurysm, as the wall would be thinner in some places and thicker in others. EcoFlex 0030 has a much lower viscosity, and injection molding with this material resulted in a much smoother model, which can be seen in Figure 1.

the mold and not tear. However, for a burst test this would be an undesirable property, as the model may not rupture analogously to an actual aneurysm. A lower elongation at break would inherently yield a better and more accurate experiment, but the model might not be separable from the mold without tearing. Thus, materials with different elongations at break should be tested to find the material with the lowest value that still holds its shape using the method described. A completely different fabrication method using a dissolvable core could also be considered, in which a layer of silicone molding material would be sprayed or painted onto the outer surface of the core. After the silicone has set, the core would be dissolved, leaving behind a membranous structure. The core could be 3D printed using starch, molded out of wax, or machined from a low melting point metal alloy. Starch can be submerged in a tank of water and dissolved, wax can be melted by applying gentle heat or dissolved in toluene, and certain alloys will melt in hot water. The molding material would then need to be carefully chosen to accommodate the specific treatment needed to remove the inner core.

REFERENCES
1. Kojima M. MHS, 2010 International Symposium on, 2010; 384-389.
2. C. Shi. Int J Med Robotics Comput Assist Surg 2013; 9: 213-222.
3. Stratasys, Ltd. PolyJet Materials Data Sheet. 2014, 2015.
4. Smooth-On, Inc. Dragon Skin 10 MEDIUM Product Information.
5. Smooth-On, Inc. Ecoflex Series Super-Soft, Addition Cure Silicone Rubbers.

Figure 1: Physical model of patient-specific aneurysm constructed using injection molding with EcoFlex 0030.

DISCUSSION EcoFlex 0030 was found to be the best material for this application, but there are still some improvements that could be made on this design. This material has an elongation at break of 900%, which makes it elastic enough to be stretched over

ACKNOWLEDGEMENTS I would like to thank Professor Hongliang Ren, Dr. David Vorp, Dr. Justin Weinbaum, and Joseph Pichamuthu for mentoring me, Hritwick Banjeree for helping me with the soft fabrication, the SERIUS program for giving me the opportunity to go to Singapore, and the Swanson School of Engineering, the Office of the Provost, and the Study Abroad Scholarship for funding my research.


VERIFYING NORMALITY OF OCULAR TISSUE THROUGH DEVELOPMENT OF A SEMI-AUTOMATED OPTIC NERVE AXON COUNTING METHOD Katelyn Axman1, Addison Kaufmann2, Sundaresh Ram2, Jonathan Vande Geest1. Soft Tissue Biomechanics Laboratory, Department of Bioengineering University of Pittsburgh1, PA, USA; University of Arizona2, AZ, USA Email: kfa7@pitt.edu, Web: http://stbl.arizona.edu

INTRODUCTION
Glaucoma is the second leading cause of irreversible blindness around the world, affecting more than 200,000 people every year. It has been shown that primary open angle glaucoma disproportionately affects those of African Descent (AD) and Hispanic Ethnicity (HE) over those of European Descent (ED) [1,2]. Though this phenomenon may be due in part to a disparity in socioeconomic factors, it is reasonable to assume that certain biomechanical and morphological differences between the eyes of AD, HE, and ED donors may play an important role. Because axons of retinal ganglion cells that traverse the optic nerve become considerably degraded and are eventually lost in patients with glaucoma [3], the axon count of optic nerve samples is one method to confirm the normality of donor ocular tissues. The goal of this work is to identify baseline axon counts for optic nerves from donors of AD, HE, and ED race/ethnicity. The information gathered in this work will be useful for ensuring that the biomechanical and structural endpoints being measured in separate studies in our laboratory are solely due to racial and/or ethnic differences. These data will also be useful as a control group for future experiments investigating how axon counts may differ in glaucomatous samples as a function of racial/ethnic background.

MATERIALS AND METHODS
Our laboratory received optic nerves from Midwest Eye Bank (MWEB) (n=4), San Diego Eye Bank (SDEB) (n=2), and Banner Health (BH) (n=1), containing nerves from both the left (OS) (n=5) and right (OD) (n=2) eyes, from AD (n=2), HE (n=2) and ED (n=3) donors.
Within 4 hours from death, the optic nerves were cut and fixed in Poly/LEM at their respective eye banks, at which point they were sent to our laboratory. Once received, the nerves were transferred to vials containing 2.5% glutaraldehyde and stored for 24 hours. Finally, they were transferred to PBS and stored at 4 °C

until they were sent out for processing. During processing, optic nerve samples were cut cross-sectionally, stained with OsO4, and embedded onto slides that were visualized on a Nikon Eclipse 90i microscope.

Figure 1: Microscope image of full optic nerve cross section from AD sample MWEB 36 OS. Dark sections indicate bundles of axons, separated by the lighter connective tissue.

Each cross section was individually imaged at 60x magnification using Nikon NIS-Elements software; 15% overlap as well as autofocus capabilities were used for each image, allowing for successful and detailed image montaging.

DATA ANALYSIS
Each montaged image, saved as a .tif file, was then read one at a time by a MATLAB program written to perform a semi-automated axon count across 100% of the given optic nerve cross section. The user was first prompted to identify the bounds of the optic nerve cross section in order to crop the image. Ten small sections of axons throughout the entire image were randomly selected and presented one at


a time, and the user was asked to manually count the axons in each of those sections. These manual counts were used to extrapolate across the entire cross-sectional area in order to give an estimated axon count for the whole optic nerve. Each image was run through the program four times, and an average and standard deviation of the axon counts were calculated.

Figure 1: Microscope image of a full optic nerve cross section from AD sample MWEB 36 OS. Dark sections indicate bundles of axons, separated by the lighter connective tissue.

Figure 2: A small magnified section of axons taken from an image of an optic nerve cross section, after image contrast had been enhanced.

RESULTS
Of the seven samples tested, mean semi-automated axon counts ranged from 741,045 to 1,031,924. Calculated standard deviations indicated a minimum axon count of 601,240 for sample MWEB 55 OS and a maximum axon count of 1,351,184 for sample MWEB 36 OS. A t-test was performed, and no significant difference between axon counts for samples of ED, AD, and HE was found.

Table 1: The optic nerves tested from AD, ED, and HE backgrounds and the calculated axon counts with standard deviations for each sample.

DISCUSSION
All samples tested were within the normal range for human eye axon counts, indicating healthy, non-glaucomatous eyes [4]. Our lab will continue to perform axon counts on eyes of different races as a means of verifying normality, so that other parts of these undiseased eyes may be studied and compared between different racial backgrounds. We also intend to develop a fully automated code for axon counting in order to improve count accuracy.

REFERENCES
[1] Tielsch, J.M. The Baltimore Eye Survey. JAMA, 1991. 266(3): p. 369-74.
[2] Sommer, A. The Baltimore Eye Survey. Arch Ophthalmol, 1991. 109(8): p. 1090-5.
[3] Quigley, H.A. Am J Ophthalmol. 95:673-691.
[4] Frederick, S. Ophthalmol, 1989, 96(9): p. 1325-1328.

ACKNOWLEDGEMENTS
The author would like to thank Dr. Jonathan Vande Geest, the SSOE Bioengineering Department, and the Office of the Provost at the University of Pittsburgh for contributing to the funding that facilitated this research. Addison Kaufmann and Sundaresh Ram, an undergraduate and a graduate student at the University of Arizona, wrote much of the code used in this project.


MODELING AND IN-SILICO ANALYSIS OF CLINICALLY USED CORONARY ARTERY STENTS
Jacob N. Herman and Zhi T. Ang
Biofluids Mechanics Laboratory, Department of Biomedical Engineering
National University of Singapore, Singapore
Email: jah225@pitt.edu, Web: http://www.bioeng.nus.edu.sg/biofluid_lab/index.html

INTRODUCTION
The coronary arteries maintain heart functionality by providing the nutrients and oxygen found in blood to the muscles of the heart. Coronary artery disease (CAD) is characterized by the growth of atherosclerotic lesions over time. The aging population is most susceptible to CAD because of the longer time period over which the atherosclerotic lesion can grow. These lesions have drastic and potentially detrimental effects on the natural hemodynamics of coronary arteries. While the current gold standard of treatment for CAD is coronary artery bypass graft surgery (CABG), older patients are often unable to undergo open-heart surgery due to the high mortality rate [1]. Alternatively, stenting provides a minimally invasive, percutaneous coronary intervention to treat CAD in a patient unable to undergo CABG. While stenting is not yet as effective as CABG, many research groups study stent expansion and the resulting changes in hemodynamics. Stent expansion, or deployment, establishes the stent position within the artery and is a precursor to determining long-term stent efficacy. Analyzing stent expansion stress patterns can reduce complications and aid in finding improved expansion methods [3]. Specifically, uneven distribution of stresses can result in a dogboning effect, further damaging the stent and artery. Stent expansion analysis provides initial evidence of stress distribution and failure points, and a basis for comparison to normal hemodynamics. However, there are no current in-vivo methods to analyze the stresses placed on stents.
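The dogboning effect mentioned above is commonly quantified as the relative over-expansion of the stent ends versus its center. The abstract does not define a metric, so the formula below is the conventional one from the stent literature, with hypothetical names:

```python
def dogboning_ratio(end_diameter_mm, central_diameter_mm):
    """Relative over-expansion of the stent ends compared to its
    central region: positive values indicate flared ends (dogboning),
    zero indicates uniform expansion."""
    return (end_diameter_mm - central_diameter_mm) / central_diameter_mm
```

For example, a stent expanded to 3.3 mm at its ends but only 3.0 mm at its center would have a dogboning ratio of about 0.1.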
Computer modeling and simulation provide a means to evaluate the effectiveness of a coronary artery stent without the use of in-vivo testing [2]. Solidworks and COMSOL Multiphysics provide platforms to model and simulate, respectively, the expansion of a stent within a coronary artery.

METHODS
The modeling and simulations were accomplished with Solidworks and COMSOL Multiphysics. The stents modeled were selected based upon the Food and Drug Administration's (FDA) approval status as well as relevant literature demonstrating clinical efficacy. The Nexstent (Endotex) and Xact Stent (Abbott Vascular) were modeled and simulated based on the stated criteria. The Solidworks platform was used to model the stent geometry, which was obtained from public-access FDA information as well as manufacturer details. Stent modeling involved copying and rotating a single symmetrical piece of the stent in a circular pattern until a cylindrical pattern was reached and the overall geometry was achieved. Solidworks was also used to perform a failure test to ensure stable geometry, which is required for exportation to COMSOL Multiphysics. Once imported into COMSOL Multiphysics, the stent was analyzed for symmetry so that only one segment of the stent would need to be simulated, reducing computation time. The stent material was set to stainless steel, as this has been the industry standard, and for simplicity of computation and analysis. COMSOL Multiphysics was then used to simulate and analyze the stresses on the stent during expansion.

RESULTS
Initially, both stents failed within Solidworks, indicative of unstable geometry. The stents were edited until a stable geometry was reached. These edits involved slight adjustments to the repeat-unit geometry such that no overlaps or gaps occurred between the repeating units within the stent. Both the Nexstent and the Xact Stent were found to have moderately even stress distributions. The stresses were uniform across the stents' struts; however, they were higher than the stresses at the nodes.

DISCUSSION
The higher stresses across the struts imply that they are more likely than the nodes to fail under high-stress conditions, such as expansion, as well as to fatigue sooner. A slightly uneven distribution between the struts and nodes is characteristic; however, as the disparity between the stresses increases, the stent becomes proportionally more prone to damage. In both the Nexstent and the Xact Stent, the difference in stress between the struts and nodes was not found to be large enough to merit concern. Additionally, the overall stress distribution showed slightly higher stresses towards the edges of the stent than in the central region. When compared to literature values for the hemodynamics of a successful coronary artery stent, higher stress points towards the stent edges can result in dogboning, further altering the flow patterns. This uneven stent bending compromises the stent's effectiveness and can even worsen the original CAD.

CONCLUSION
Computer simulation provides a safe means to evaluate coronary artery stents. The higher stresses at the analyzed stent struts as compared to the nodes are expected but should be minimized to avoid fatigue during long-term stent use. Additionally, the expansion of these stents can give rise to dogboning, potentially exacerbating the CAD. However, the uneven stress distribution is not pronounced enough to be considered hazardous. In further studies, additional stents should be analyzed, and the analysis should be expanded to compare hemodynamics between a healthy patient and a stented patient. The current analysis sets the foundation for further exploration of the effects of stenting within coronary arteries.

REFERENCES
[1] Libby P. and Theroux P. (2005). Pathophysiology of Coronary Artery Disease. Circulation. DOI: 10.1161/CIRCULATIONAHA.105.537878
[2] Dehlaghi V., J. Mater. Process. Technol., 2008, Vol. 197, pp. 174-181.
[3] Beier S., et al. (2016). Hemodynamics in Idealized Stented Coronary Arteries: Important Stent Design Considerations. Annals of Biomedical Engineering. DOI: 10.1007/s10439-015-1387-3

ACKNOWLEDGEMENTS
The Summer Engineering Research Internship for U.S. Students (SERIUS) was set up and facilitated by the Swanson School of Engineering Study Abroad Office and the National University of Singapore External Relations Office. Funding for this program was provided by the Office of the Provost, the Study Abroad Scholarship, and the OCC Office at the University of Pittsburgh.


Predicting Phase Behavior of Organic–Salt–Water, Two-Phase Systems Using the AIOMFAC Model
Forrest Salamida, Giannfranco Rodriguez and Dr. Eric Beckman
Beckman Lab, Department of Chemical Engineering
University of Pittsburgh, PA, USA
Email: fms11@pitt.edu, Web: www.engineering.pitt.edu/Departments/Chemical-Petroleum/

INTRODUCTION
The current methods by which the desalination of water is achieved are energy inefficient. The two most common methods of desalination are distillation and reverse osmosis (RO); in RO facilities, energy consumption accounts for nearly 50% of operational costs (Seawater Desal…, 2011). Yet distillation is no better: only 15% of bottled water operators use distillation, compared to the nearly 40% who use RO for its lower costs (Kucera, 2005). These conditions have led researchers on a quest for more efficient alternatives. The goal of this project is to determine the feasibility of using reactive chemical extraction to desalinate water. This new method of desalination would use less energy than currently required and increase the availability of clean and safe drinking water. For it to work, an organic compound with the following qualities must be designed:
1) Reversibly reacts with water under reasonable conditions (temperature and pressure)
2) Impedes the transfer of salt across the water-organic phase barrier
The scope of this paper is to detail the computer model built to predict the phase behavior of organic-salt-water mixtures. It will act as a tool in the search for organic compounds that repel salt. The advent of computer modeling has expedited the design phase of chemical process development by allowing a user to run multiple iterations of a program while varying the chemical structure.
This shifts the focus of research efforts for viable candidates from broad molecular structures to specific numbers and types of functional groups by improving a molecule incrementally.

Figure 1: The functional groups that were parametrized in the AIOMFAC model (Zuend, 2008)

METHODS
The Universal Quasichemical Functional-group Activity Coefficients (UNIFAC) model was developed in 1975 by Fredenslund to explain the phase behavior of organic compounds (Fredenslund, 1975). By combining thermodynamics with empirical measurements of multicomponent organic systems, Fredenslund and his team were able to determine the effect of individual functional groups on the overall chemical behavior of a molecule introduced into a mixture. This model is still used in the oil and gas industry to predict the behavior of the multiple products produced during the cracking of crude oil. In 2008, Zuend expanded the UNIFAC model by developing the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) model (Zuend, 2008). The AIOMFAC model describes tropospheric aerosol behavior. In studying atmospheric composition, Zuend et al. successfully modeled inorganic component behavior by determining how mixtures behave when multiple salts are introduced. They also improved the accuracy of the parameters describing functional groups not commonly found in crude oil, such as alcohols.


Although developed to study atmospheric composition, the AIOMFAC model can be applied to a myriad of organic-inorganic-water multiphase systems, allowing for the study of salt-water-organic two-phase equilibria.

With the use of the AIOMFAC model, a program was built to explore the settling of mixtures containing water, sodium, chloride, and an organic molecule designed by a user. The user also sets the temperature of the system and the mole fraction composition of the initial mixture's components.

RESULTS
An algorithm was developed to perform the Rachford-Rice flash calculations for liquid-liquid equilibrium (LLE). The zeros of the resulting real-valued function are approximated by the Newton-Raphson method in a single variable. The algorithm determines how the system separates into phases at equilibrium, represented mathematically as the mole fraction compositions of all mixture components, i, in each phase in which they are present, when the system is at equilibrium.

Ki = γi2 / γi1 [1]
xi2 = zi / (1 + φ{Ki – 1}) [2]
xi1 = Ki · xi2 [3]
(Zuend, 2008)

Equilibrium conditions are determined by calculating the activity coefficient for each mixture component. When the activity of each component is the same in both phases, or matches within a tolerance level set by the user, the solution is said to be at equilibrium. The relationships between the activity coefficients, γ, the mole fraction compositions, x, and the partition coefficients, K, are represented by Equations 1-3 above, where zi is the overall mole fraction of component i and φ is the fraction of the mixture in phase 1. Plotting these mole fraction compositions over a range of temperatures gives an LLE curve.

DISCUSSION
Researchers will use the program as a tool to develop viable reaction pathways for the reactive chemical extraction of water. This will be accomplished by modeling the settling behavior of the products and reactants of a proposed pathway. The program will determine the effectiveness of a pathway by recording the percentage of salt that is repelled by the designated organic molecule while the organic molecule reacts with water. The salt is physically separated from the system by remaining in a small amount of unreacted water, at which point the system's conditions can be modified to reverse the reaction, leaving fresh water.

Figure 2: Description of the desired outcome in the chemical extraction of water. Note that the salt stays contained in the water phase, while the water is free to pass through the phase barrier.
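The Rachford-Rice step described in the Results can be sketched as follows. This is a minimal illustration with fixed K-values; in the actual program the activity coefficients, and hence each Ki, would be recomputed from AIOMFAC at each composition, and the function name here is hypothetical.

```python
def lle_flash(z, K, tol=1e-10, max_iter=100):
    """Solve the Rachford-Rice equation
        f(phi) = sum_i z_i*(K_i - 1)/(1 + phi*(K_i - 1)) = 0
    for the phase fraction phi by Newton-Raphson, then recover the
    equilibrium mole fractions of the two liquid phases."""
    phi = 0.5  # initial guess for the fraction of material in phase 1
    for _ in range(max_iter):
        f = sum(zi * (Ki - 1) / (1 + phi * (Ki - 1)) for zi, Ki in zip(z, K))
        # f is monotonically decreasing in phi, so Newton steps are stable
        df = -sum(zi * (Ki - 1) ** 2 / (1 + phi * (Ki - 1)) ** 2
                  for zi, Ki in zip(z, K))
        step = f / df
        phi -= step
        if abs(step) < tol:
            break
    x2 = [zi / (1 + phi * (Ki - 1)) for zi, Ki in zip(z, K)]  # Eq. 2
    x1 = [Ki * x for Ki, x in zip(K, x2)]                      # Eq. 3
    return phi, x1, x2
```

For a binary mixture with z = [0.6, 0.4] and K = [2.0, 0.5], the solver returns phi = 0.8, with both phase compositions summing to one.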

CONCLUSION
The program requires validation before reliable data can be generated. This will be accomplished by comparing the program's generated equilibrium condition data with existing experimentally measured data for simple chemical systems such as octanone and water. Producing accurate results will allow the project to move to its final step: determining potential chemical structures for the organic molecule required to perform the extraction. The generation of LLE curves for a multitude of compounds is essential in narrowing down the list of potential chemicals.

REFERENCES
1. "Seawater Desalination Power Consumption." Watereuse.org. WateReuse Association, Nov. 2011. Web. 25 Aug. 2016.
2. Kucera, Bruce. "Water Distillation." Water Quality Products. Water Quality Products, 26 Sept. 2005. Web. 25 Aug. 2016.
3. Fredenslund, A., Jones, R. L., and Prausnitz, J. M.: Group-Contribution Estimation of Activity Coefficients in Nonideal Liquid Mixtures, AIChE J., 21, 1086-1099, 1975.
4. Zuend, A., et al. (2008), A thermodynamic model of mixed organic-inorganic aerosols to predict activity coefficients, Atmos. Chem. Phys., 8, 4559-4593.

ACKNOWLEDGMENTS This research was funded by a gift from the PPG Foundation.


3rd GENERATION ROBOTIC SOCK WITH ANKLE FEEDBACK CONTROL
James Y. Liu
Department of Electrical Engineering
University of Pittsburgh, PA, USA
Email: jyl19@pitt.edu

INTRODUCTION
Every year, approximately 15 million patients suffer a stroke, and almost two thirds suffer subsequent hemiparesis (weakness in one half of the body) [1-2]. In stroke patients, a major area of concern is deep vein thrombosis (DVT), defined as the formation of venous thrombi in the deep veins of the calves. This can lead to pulmonary embolism (PE), a life-threatening condition in which a venous thrombus becomes dislodged and blocks a pulmonary artery. To help stroke patients recover, a series of ankle exercises to promote blood flow has been designed. However, these exercises can be extremely difficult to perform, especially when patients are bedridden or paralyzed, a common side effect. We are tasked with fabricating an inflatable calf sleeve with feedback control – part of an ankle prosthesis that performs dorsiflexion/plantarflexion and inversion/eversion. Existing devices that perform similar functions involve a rigid mechanical component, which only allows one degree of freedom and causes discomfort. In addition, there is no strong correlation between the use of such devices and the reduction of DVT risk. Many hospitals resort to manual calf stimulation via massage by a healthcare professional, an inefficient and costly tactic. To alleviate this issue, the team focused on designing a lower-limb prosthesis that performs these functions without the assistance of a professional caretaker, saving time and money. In addition, this device can be worn on the calf for the duration of a patient's hospitalization and is compatible with other devices that monitor heart rate, blood pressure, etc.

METHODS
The first model is fabricated using a dual-sided cloth-plastic blend.
The benefit of this material is its cost and the ease with which multiple air-tight pockets can be created by simply applying heat. The initial sketch is drawn over the cloth-plastic material (Figure 1).

Figure 1: First prototype.

This shape is selected from experimentation in order to create a sleeve that tightly conforms to the curvature of a human calf. The triple air-pocket design is selected to allow better control of the inflation and a gradual increase in pressure from the bottom up. This has the effect of moving blood towards central circulation and discourages the formation of any thrombi. In addition, the Velcro strips are chosen for ease of attachment/detachment and are aligned so that the sleeve attaches smoothly upon wrapping around the calf. The second prototype is based on the Tripulse Calf Sleeve by Flowtron. Specifically, the Tripulse is a calf sleeve designed to aid bedridden patients diagnosed with DVT and PE; it is assumed that the majority of these patients are suffering from stroke. The Tripulse has a unique design, as shown below, featuring an entirely Velcro exterior and three-pronged attachments. However, it is recommended for only a single use for sanitary reasons. Thus, a second prototype is fabricated to be worn outside of a compression sock that can be bought from any sports company. It also includes


the three-prong attachment system with Velcro covering half of the exterior. The aim is to retain all the strengths of the Tripulse while eliminating the weakness of single use. This design allows the sleeve to fit all patients snugly while tapering at the ankle. It also allows for smooth inflation and deflation with increasing pressure from the bottom of the tibia to the kneecap (Figure 2).
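The bottom-up inflation sequence described above can be sketched as a command schedule. This is an illustrative sketch only: the actual prototype is driven by pneumatic actuators, and the function name and pressure values below are hypothetical, not the measured prototype pressures.

```python
def compression_cycle(pressures_mmHg=(40, 45, 50)):
    """One bottom-up compression cycle for a three-pocket sleeve:
    inflate from the ankle pocket (1) to the knee pocket (3) with a
    rising pressure gradient, then deflate in reverse order.
    Returns the command sequence a valve driver would execute."""
    commands = [("inflate", pocket, p)
                for pocket, p in enumerate(pressures_mmHg, start=1)]
    commands += [("deflate", pocket)
                 for pocket in range(len(pressures_mmHg), 0, -1)]
    return commands
```

Repeating this cycle at a fixed interval would reproduce the sequential squeeze that moves blood towards central circulation.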

Figure 2: Interior of the second prototype with the air flow system outlined.

After testing and gathering data, a final model is fabricated with denim as an outer layer and plastic pockets inside. The denim outer is coated with Velcro for ease of attachment and adjustability. In addition, the calf sleeve is paired with a compression stocking bought from Under Armour to provide additional compression and to soak up sweat (Figure 3).

Figure 3: The compression stocking works well with the calf sleeve while leaving room for the dual actuators responsible for dorsiflexion.

TESTING
The second prototype underwent extensive testing, starting with collecting human feedback. Lab members report that the sleeve is tight and exerts more than enough pressure on the calf. However, the sleeve is found not to be breathable and causes sweating. Next, the average pressure of the prototype is measured and found to be approximately 62 mmHg with a standard deviation of about 11 mmHg – above the original specification of 40 mmHg. The prototype is then tested on a model calf with replicated blood vessels. Specifically, the plastic leg is wrapped in water-filled long balloons. The sleeve is then strapped on and inflated. In a series of 20 trials, the water flowed upwards 5 in. on average in 4 out of the 5 balloons. Last but not least, the actuator control box is examined and found to be too large to attach directly to the prosthesis; it will have to be placed on the patient's bed (Figure 4).

Figure 4: Map of the actuator circuitry.

DISCUSSION
In essence, a calf sleeve is designed to improve the efficiency of therapy for patients with DVT. The design is based on the Tripulse system from Flowtron, with multiple improvements and testing to support the effectiveness of the device. This calf sleeve is compatible with numerous other prostheses and utilizes a series of three actuators to inflate and deflate the air pockets in cycles. It could potentially be advanced with a sturdier material such as nylon and a more ergonomic control system.

REFERENCES
[1] Gresham GE, Phillips TF, Wolf PA. Epidemiologic profile of long-term stroke disability: the Framingham study. Arch Phys Med Rehabil, vol. 61, pp. 487-91, 1979.

[2] World Stroke Organisation (2012, March 1). [Online]. Available: http://www.world-stroke.org.

ACKNOWLEDGEMENTS
This program would not have been possible without the support and funding of the SERIUS program by the University of Pittsburgh Swanson School of Engineering.


Design of Highly Efficient Bifunctional Metal-Organic Framework Catalysts for Tandem Catalysis by Shortening the Reaction Pathway
Benjamin Yeh
Department of Chemical Engineering
University of Pittsburgh, PA, USA
Email: byy5@pitt.edu

INTRODUCTION
Tandem reactions are consecutive organic reactions that proceed through a reactive intermediate. Research into tandem reactions is important because, by their nature, no intermediates are isolated, offering a huge economic and environmental advantage. Most often, tandem reactions are one-pot reactions, reducing the need for solvent and the extra space required for reactors. Metal-organic frameworks (MOFs) are highly porous materials containing metal ions and organic ligands, and they can serve as catalysts. Multifunctional MOFs have shown promising results in carrying out tandem reactions, as they can be functionalized antagonistically, with the acidic and basic moieties in different locations. Research has shown that UiO-66 can host many different moieties, which vary its catalytic capabilities. The purpose of this research is to use UiO-66 MOFs with different acidic and basic functionalities, synthesized using the MHT method, for tandem reactions. This project explores how the ratio of acidic to basic sites affects the progress of the deacetalization-Knoevenagel reaction using the toluene and water solvent system discussed in the literature and a proposed safer solvent system of water and ethanol at a lower temperature. The reaction progress is measured with 1H nuclear magnetic resonance (NMR) spectroscopy to show how much of the initial reactant was converted to the final product.

METHODS
The UiO-66 was synthesized using the modulated hydrothermal (MHT) synthesis method, which is generally safer and more environmentally friendly than traditional methods of making UiO-66 MOFs [1]. First, 1.8 g of Zr(NO3)4 was dissolved in 20 mL of water and 30 mL of acetic acid. To vary the acidity and basicity of the UiO-66 MOF, sodium sulfoterephthalate (SSBDC) and aminoterephthalate (ATC) were used as ligands and varied by molar ratio (5 mmol total). Figure 1 shows a schematic of how the organic ligands are linked to the inorganic metal ion. The mixture was refluxed at 90°C for 24 hours. The product was then washed every 24 hours for 96 hours: with water for the first two 24-hour rotations and with ethanol for the last two. Finally, the sample was dried in a vacuum oven at 120°C for 24 hours to yield the dried product.
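The ligand molar ratios can be translated into weighed masses as below. This is an illustrative calculation only: it assumes SSBDC is the monosodium salt of 2-sulfoterephthalic acid and ATC is 2-aminoterephthalic acid, since the abstract does not state the exact ligand forms.

```python
M_SSBDC = 268.18  # g/mol, monosodium 2-sulfoterephthalate (assumed form)
M_ATC = 181.15    # g/mol, 2-aminoterephthalic acid (assumed form)

def ligand_masses(ssbdc_fraction, total_mmol=5.0):
    """Masses (g) of the acidic (SSBDC) and basic (ATC) ligands for a
    given SSBDC molar fraction, keeping the combined ligand amount at
    5 mmol as in the synthesis."""
    n_ssbdc = total_mmol * ssbdc_fraction / 1000.0        # mol
    n_atc = total_mmol * (1.0 - ssbdc_fraction) / 1000.0  # mol
    return n_ssbdc * M_SSBDC, n_atc * M_ATC
```

For UiO-66-SO3H-50 (a 50:50 ratio), `ligand_masses(0.5)` gives roughly 0.67 g of SSBDC and 0.45 g of ATC.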

Figure 1: Simplified synthesis scheme of UiO-66 MOFs using MHT method

The tandem reaction used to test the efficiency of the MOF catalysts is shown below in Figure 2. Two solvent systems were initially tested using only the UiO-66-SO3H-50 MOF: water/toluene and water/ethanol (2.5 mL of each solvent, 5 mL per system). Water and ethanol were later used as the primary solvent for the remaining tandem reactions. Benzaldehyde dimethyl acetal (BADA, 30 μL) and malonitrile (30 mg) were added to the solvent system. The MOF catalyst (10 mg) was added to each system, which was then heated for 24 hours. Initially, the tests were run at 60°C with stirring at 800 RPM. At set time intervals, a 1H NMR sample was extracted and analyzed using a Bruker 600 MHz nuclear magnetic resonance (NMR) spectrometer.

Figure 2: Tandem deacetalization−Knoevenagel reaction scheme to test bifunctional catalyst properties

DATA PROCESSING
Using MestReNova, the relative peak areas of the tertiary hydrogen (~5.5 ppm) on BADA, the aldehyde hydrogen (~9.9 ppm) on benzaldehyde, and the alkenyl hydrogen (~8.0 ppm) on benzalmalonitrile were measured and compared to calculate the conversion of the substrate to the intermediate and final product. The conversion of the overall tandem reaction is defined as

conversion = (B + C) / (A + B + C)

where A, B, and C are the relative peak areas of BADA, benzaldehyde, and benzalmalonitrile, respectively.

RESULTS
1H NMR tests with the UiO-66-SO3H-50 MOF in the toluene/water system and the ethanol/water system were performed. The kinetic curves are recorded below in Figure 3.
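The conversion defined above follows directly from the three integrated peaks; a minimal sketch (function and argument names are illustrative):

```python
def tandem_conversion(a_bada, b_benzaldehyde, c_benzalmalonitrile):
    """Overall conversion of the tandem reaction from the relative
    1H NMR peak areas A (BADA), B (benzaldehyde), and C
    (benzalmalonitrile): (B + C) / (A + B + C)."""
    return (b_benzaldehyde + c_benzalmalonitrile) / (
        a_bada + b_benzaldehyde + c_benzalmalonitrile)
```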

Figure 3: Kinetic curves of the toluene/water and ethanol/water solvent systems for the UiO-66-SO3H-50 MOF.

Due to time constraints, the rest of the 1H NMR samples were analyzed with a less sensitive method. Consequently, none of those 1H NMR spectra reported a peak for the intermediate, benzaldehyde; it was difficult to quantitatively report the relative peak heights of the samples, and the overall conversion could not be calculated. However, some general conclusions could be made about the bifunctional UiO-66 MOFs and are reported below in Table 1.

DISCUSSION
From Figure 3, it is seen that the conversion of BADA to benzalmalonitrile occurs almost instantly in the ethanol/water system, and complete conversion occurs much faster than in the toluene/water system. This can be explained by the solvent system: water, a reagent for the tandem reaction, is immiscible with toluene, whereas water is miscible with ethanol, allowing it to react more favorably with BADA in a homogeneous solution. Furthermore, in comparison with the initial study where toluene/water was used, ethanol is more environmentally friendly and safer than toluene, making the water and ethanol solvent system better suited for industrial use.

For the NUS-6 MOF, the alkenyl hydrogen peak for benzalmalonitrile is not present in the 1H NMR spectra. This makes sense because the tandem reaction requires both an acid and a base catalyst; NUS-6 is not bifunctional and contains only acidic moieties, catalyzing only the conversion of BADA to benzaldehyde. The UiO-66-SO3H-75 did not show the hydrogen peak for benzalmalonitrile until about 24 hours, and even then the peak was minimal. This can be explained by the large sulfonic acid groups, which sterically hinder the substrate from reacting to form the final product. UiO-66-NH2, which has only basic ligands, showed the presence of the final product. Although there is no Brønsted-Lowry acid to catalyze the first step, the UiO-66-NH2 may have contained acid as an impurity, because its synthesis involved acetic acid. This is why the hydrogen peak for the product is relatively prominent: the intermediate was able to form and the reaction could proceed. Lastly, it is important to note that the bifunctional UiO-66-SO3H-50 MOF can catalyze the reaction faster than a physical mixture of UiO-66-NH2 and NUS-6. According to the 1H NMR spectra, the peak for benzalmalonitrile was prominent just 10 minutes after UiO-66-SO3H-50 was used as the catalyst; in comparison, the peak appeared only after 60 minutes when the mixture of MOFs was used. When the catalyst system is a mixture of two MOFs, the substrate has to travel between MOFs that can be hundreds of nanometers apart, lengthening the time for BADA to be converted into benzalmalonitrile. Further work needs to be done with other tandem reactions and bifunctional UiO-66 MOFs to analyze the efficiency of this catalytic material. Computational work will be done to calculate the acidity and basicity of the MOFs as well as the distance between acidic and basic moieties.

REFERENCES [1] Hu et al. “A modulated hydrothermal (MHT) approach for the facile synthesis of the UiO-66 type MOFs.” (2015 May 1). Inorg. Chem. (online article) DOI: 10.1021/acs.inorgchem.5b00435 ACKNOWLEDGEMENTS I would like to thank Hu Zhigang and Dr. Dan Zhao at the National University of Singapore for mentoring me on my first experimental project as well as Jingyun Ye and Dr. Karl Johnson for supporting me at the University of Pittsburgh. I would also like to thank the Swanson School of Engineering and the Office of the Provost for giving me this opportunity and funding me on this project to allow me to do research in Singapore.


COMPARISON AND OPTIMIZATION OF BIOMIMETIC SUPERHYDROPHOBIC SURFACES CREATED USING COMB-LIKE POLYMERS AND PERFLUORINATED CHEMICALS
Abhinav Garg, Giselle Baillargeon, Andrew Kozbial, and Dr. Lei Li
Li Lab, Department of Chemical and Petroleum Engineering, Swanson School of Engineering, University of Pittsburgh, PA, USA
Email: abg26@pitt.edu

INTRODUCTION
Poly- and perfluoroalkyl substances (PFASs) are used in industrial processes and in many consumer products, but have recently been found to cause serious health problems while polluting the environment with highly stable synthetic compounds. In response, the EPA has begun mandating the phase-out of long-chain PFASs. Additionally, the Madrid Statement was written to encourage the development of suitable replacements for PFASs [1].

Comb-like polymers (CLPs) consist of a carbon backbone with combs containing only one or two fluorocarbons, resulting in a more benign compound compared to PFASs [1]. One major feature of PFASs is their hydrophobicity. CLPs and ZDOL (a common PFAS) exhibit similar water contact angles (WCAs) of ca. 90° and ca. 115°, respectively, after annealing on a smooth, UV-cleaned SiO2 substrate.

METHODS
The lotus leaf effect of hierarchically nano- and micro-rough, superhydrophobic surfaces was mimicked in this research study by utilizing inert micro particles such as talc powder and Al2O3. SiO2 wafers were used as the substrate and were pre-treated in a UV/ozone cleaner for 10 minutes to remove hydrocarbon surface contaminants. Micro particle suspensions were created using Vertrel-XF or deionized (DI) water and talc or Al2O3 micro particles. These suspensions were drop-cast onto the SiO2 wafer using a glass pipette and allowed to dry completely in a gravity oven set to 150 °C.

The micro-rough surface was then coated with a nanometer-thick polymer film using the standard dip-coating method. The sample was dipped into the desired polymer solution until the entire surface was submerged, then removed and placed into a glass petri dish. In order to attain proper polymer adsorption and orientation, the coated substrate was annealed at 150 °C for the length of time required by the polymer in question (30 min for the CLP-656 and ZDOL polymers). Water contact angle (WCA) and hexadecane contact angle (HCA) measurements were taken using a VCA Optima XE instrument. Static contact angles were obtained using 2 μL liquid droplets. In cases where a 2 μL droplet did not leave the needle dispenser tip due to the degree of hydrophobicity of the surface, a 6 μL droplet was used in its place. The condition of superhydrophobicity is met when the WCA is greater than 150° [2].

RESULTS
After significant preliminary testing, the procedure for rough-surface fabrication and testing was self-developed and consisted of the following: UV treatment of the SiO2 substrate (10 minutes), micro particle drop-casting, oven drying (10 minutes), dip-coating in 6.4 g/L polymer solutions, oven annealing (using the optimal heating time per polymer), and contact angle testing (WCA and HCA). Varying micro particle suspension concentrations were used for rough-surface testing. The WCA and HCA results of these studies are given in Figures 1-3.

Figure 1: Contact angles of ZDOL-4000 and CLP-656 with varying concentrations of Al2O3/DI water suspensions on a UV-treated SiO2 substrate.

Figure 2: Contact angles of ZDOL-4000 and CLP-656 with varying concentrations of Al2O3/Vertrel-XF suspensions on a UV-treated SiO2 substrate.

Figure 3: Contact angles of ZDOL-4000 and CLP-656 with varying concentrations of Talc/Vertrel-XF suspensions on a UV-treated SiO2 substrate. (*) indicates data averaged across multiple trials.

DISCUSSION
When compared to polymers on a smooth surface, WCA generally increased and HCA either remained constant or decreased to zero when the polymers were adsorbed onto a micro-rough surface. The Al2O3/water suspension yielded a uniform and homogeneous rough surface across all concentrations, exhibiting constant particle coverage at all suspension concentrations above 10 g/L [3]. Though the surface roughness greatly increased the WCA of both polymers when compared to a smooth surface, neither ZDOL nor CLP-656 achieved superhydrophobicity. In general, the WCAs of the samples created using Al2O3/Vertrel-XF were lower than those achieved with the Al2O3/water suspension. The low surface

coverage and non-uniformity of particle coverage led to inconsistent contact angle trends. Despite the lack of a clear trend, the presence of particles still increased WCA of the polymer films above that possible on a smooth surface, providing evidence of the lotus effect [2]. The Talc/Vertrel-XF suspension yielded the greatest WCA and lowest HCA for both CLP-656 and ZDOL polymers. This particle suspension showed up to 100% surface coverage with increasing suspension concentration which correlated with the increase of WCAs seen in Figure 3. The talc micro particles appeared to be non-uniform and “mountainous” compared to the highly uniform rough surface of Al2O3/water. This non-uniform topography likely offered both micro- and nano-scale roughness, which, in conjunction with complete surface coverage, enabled the surface to achieve superhydrophobicity. It is important to note that the contact angles exhibited by CLP-656 were similar to those of ZDOL. This critical observation proves that CLPs may perform just as well, if not better, than PFPEs under rough surface conditions [3]. REFERENCES [1] Martin, Jonathan W.a, Scott A. Mabury, Keith R. Solomon, and Derek C. G. Muir. "Bioconcentration and Tissue Distribution of Perfluorinated Acids in Rainbow Trout (Oncorhynchus Mykiss)."Environmental Toxicology and Chemistry Environ Toxicol Chem 22.1 (2003): 196-204. Web. [2] Bhushan, Bharat, Michael Nosonovsky, and Yong Chae Jung. "Lotus Effect: RoughnessInduced Superhydrophobic Surfaces." Nanotribology and Nanomechanics (n.d.): 9951072. Web. [3] Garg, Abhinav. “Comb-Like-Polymers as a Replacement to Perfluorinated Chemicals.” Department of Chemical & Petroleum Engineering. Swanson School of Engineering. University of Pittsburgh. Pittsburgh, PA. (2016). ACKNOWLEDGEMENTS I would like to thank Ms. Giselle Baillargeon for her hard work with control group testing. Additional thanks to Dr. Andrew Kozbial and Dr. 
Lei Li for the research opportunity and guidance throughout the project. A special thanks to PPG industries for accepting me for their research fellowship and helping to fund my summer 2016 research


UNDERSTANDING THE EFFECT OF ADSORBED WATER IN ALCOHOL DEHYDRATION ON γ-Al2O3 USING MICROKINETIC MODELING Peter D. Tancini1, Giannis Mpourmpakis1 and Matteo Maestri2 1Department of Chemical Engineering, University of Pittsburgh, Pittsburgh, PA 15260, United States 2Laboratory of Catalysis and Catalytic Processes, Dipartimento di Energia, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy Emails: pdt10@pitt.edu, gmpourmp@pitt.edu, matteo.maestri@polimi.it

INTRODUCTION Alcohol dehydration has become a topic of great importance due to its ability to create fuel and chemical feedstocks. Lewis acid-catalyzed alcohol dehydration on γ-Al2O3 has become a focal point due to the low calculated activation barriers on the surface1. Kostetskyy et al. showed that dehydration takes place on the Lewis acid sites of γ-Al2O3 through a concerted E2-type mechanism1. Despite the low dehydration barriers on the surface, work by Jenness et al. shows that the binding energies for water and alcohol are very similar on the tri- and penta-coordinated Al sites2. Because of these similar binding energies, water could compete with alcohol for adsorption to these catalytic sites and poison the catalyst. Surface acidity has been shown to decrease with increasing coordination number, implying that the tri-coordinated acid site is most favorable for dehydration and thus the focus of our model2. In order to test the effects of water coverage on the tri-coordinated site, we created a steady-state packed bed reactor (PBR) microkinetic model with DFT-calculated reaction barriers. METHODS All reaction barriers were calculated using the B3LYP hybrid functional and 6-311G* basis set as implemented in the Gaussian 09 software. The reaction pathway was created by first performing a scan across the reaction coordinate1. Energy maxima found along the reaction coordinates were fully relaxed and then optimized to the corresponding saddle point to pinpoint the true transition state1. The same optimizations were performed for reaction intermediates1. All intermediate and transition states, along with ground-state reactants at infinite separation, were fully optimized and verified by vibrational frequency and Intrinsic Reaction Coordinate calculations, respectively1. Four

alcohols were investigated: ethanol, propanol, isopropanol, and t-butanol. The microkinetic model was simulated using a one-dimensional heterogeneous, isothermal, and isobaric model for a PBR. Created using MATLAB R2015b, the microkinetic model accounts for transport limitations, but these were assumed to have little effect on catalytic surface coverage and thus set to a minimum. Residence time was taken to be the independent time variable, and inlet flow rates were expressed as a function of residence time, pressure, and temperature. Reactor specifics were taken from a reactor modeled by Maestri et al. for steam and dry reforming of methane on a rhodium catalyst3. The reactor was assumed to be annular and 2.2 cm in length. In addition, the γ-Al2O3 surface area per unit reactor volume (aAl2O3) was taken to be 600 cm-1. Governing equations and reactor specifics can be found in Table 1. Surface kinetics were modeled by creating rate equations with the barriers calculated using DFT. All sticking coefficients/pre-exponentials were set to fixed values so that the DFT-calculated barriers would be the only computational factor contributing to the surface chemistry. Pre-exponentials do differ between desorption coefficients and elementary reaction step coefficients, 1e+9 and 1e+13 respectively, consistent with the CHEMKIN modeling scheme3. RESULTS Overall, the microkinetic model is accurate and flexible. Initial conversion and gas phase flux results are consistent with the expected trend of conversion increasing with alcohol substitution (primary < secondary < tertiary)2. Figure 1 shows the gas phase conversion (a conversion of 1 is 100%) with respect to residence time of the reactant alcohols at 773 K.
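The rate-constant construction described above can be sketched as follows. This is a minimal illustration: the barrier values and step names below are hypothetical placeholders, and only the fixed pre-exponentials (1e+9 for desorption, 1e+13 for elementary surface steps) come from the text.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_constant(barrier_kj_mol, T, desorption=False):
    """Arrhenius rate constant from a DFT-calculated barrier.

    Pre-exponentials are fixed (1e+9 for desorption, 1e+13 for
    elementary surface steps) so that the DFT barrier is the only
    computed quantity entering the surface chemistry.
    """
    A = 1e9 if desorption else 1e13
    return A * math.exp(-barrier_kj_mol * 1000.0 / (R * T))

T = 773.0  # K, the temperature used in the model
# Illustrative barriers only (kJ/mol); the real values come from DFT.
k_dehydration = rate_constant(120.0, T)
k_desorption = rate_constant(80.0, T, desorption=True)
print(f"k_dehydration = {k_dehydration:.3e}")
print(f"k_desorption  = {k_desorption:.3e}")
```

These rate constants would then enter the species and site balances of the PBR model as functions of residence time.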


Figure 1: Gas phase conversion results for the PBR model. The increasing trend between conversion and alcohol substitution supports model accuracy.

In addition to consistent gas phase findings, the model gave an in-depth look at surface coverage. As expected, due to their relative binding energies, water does compete with alcohol for active sites on the catalyst. Figure 2 relates the negative log of surface coverage to residence time. Values of infinity imply 0% coverage, whereas values approaching 0 imply nearly 100% coverage.

Figure 2: Negative log of aluminum active site coverage with respect to residence time at 773 K. The surface changes from initially 100% alcohol to nearly 100% water.

Three compounds compete for spots on the active aluminum site of the catalyst: hydroxide, alcohol, and water. Initially, the catalyst is covered entirely by alcohol, but as the reaction progresses the water formed by alcohol dehydration does not desorb from the catalyst and blocks active sites for further alcohol dehydration. Alcohols are still able to adsorb, as shown by the continued increase in gas phase conversion, but their rate of adsorption is greatly decreased due to the lower number of available active sites.

DISCUSSION Increasing alcohol substitution increases catalytic binding energy and decreases alcohol dehydration barriers. Reaction path analysis shows that the dominant path for adsorbed alcohol is to desorb from the catalyst. In fact, our model predicts that alcohol binding energy plays a larger role in the rate of alcohol dehydration than the dehydration barrier. With this in mind, future work on the model will look at different active sites. Although the tri-coordinated aluminum site bonds strongest to the alcohol, it is possible it also bonds too strongly to water. Further work on catalytic sites that have a larger discrepancy in binding energies between water and alcohol may yield a faster conversion or a less populated surface.

Table 1: Governing Equations and Reactor Specifics3

REFERENCES 1. P. Kostetskyy, G. Mpourmpakis, Catal. Sci. Technol., 2015, 5, 4547-4555. 2. G. R. Jenness, M. A. Christiansen, et al., J. Phys. Chem., 2014, 118, 12899-12907. 3. M. Maestri, D. G. Vlachos, et al., J. Catal., 2008, 259, 211-222. ACKNOWLEDGEMENTS We would like to thank Dr. Giannis Mpourmpakis, the Swanson School of Engineering, and the Office of the Provost for their generous financial support.


Modeling and Optimization of Immune-Regulatory Signaling Strategies of the Host Response to Influenza Virus Infection Serena W. Chang and Jason E. Shoemaker Department of Chemical Engineering University of Pittsburgh, PA, USA Email: sec107@pitt.edu

INTRODUCTION Viruses, such as the influenza virus and HIV, affect a significant number of people each year. Seasonal influenza alone accounts for an estimated 40,000 deaths and $87.1 billion in influenza-related costs per year in the United States1. The immune response to the presence of such viruses is the production of interferon by the infected host cell. Interferon then invokes virus replication suppression as well as local inflammation, both of which must be carefully managed. The ability to predict immune responses to infection may reveal novel immune-targeted therapies. Immunomodulatory therapies are of particular interest in that they are generally not strain specific, meaning the therapies could broadly protect high-risk groups.2

The mathematical model used was based on an existing host cell response model by Qiao et al. which utilized 8 ordinary differential equations and 19 parameters. Figure 1 below shows the generic pathway characterized by the model.

Figure 1: Simplified network of host cell response to interferon pretreatment and virus introduction.

METHODS To address the limitations of the prior model, adjustments were made to the original equations. Specifically:
• The 3-hour pretreatment of interferon prior to virus introduction was eliminated.
• A modified equation following Hill kinetics replaced the step-wise time-based function in the phosphorylated IRF7 protein equation.
• Interferon α and β were no longer distinguished separately.
• A new virus equation following Hill kinetics was introduced, and the other equations were adjusted accordingly based on the pathway.
The current model uses 7 ordinary differential equations and 21 parameters. Optimization of the Hill equation values and degradation terms was conducted to attempt to reach steady state.

The simulation and optimizations were run in MATLAB using the ODE solver ode15s and the optimization function fmincon. Further simulation modeling was performed in Simulink.

DATA PROCESSING Concentration data generated from the simulation in MATLAB were normalized with respect to the per-species maximum during the optimization process. Optimization was initially conducted by tuning the Hill equation parameters of the modified phosphorylated IRF7 equation against the normalized data from the original model. Parameters of interest were identified by perturbing individual parameters within 10 percent of their original values and gauging the deviation.
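As an illustration of the Hill-kinetics structure described above, the sketch below integrates a toy two-species system: a decaying viral driver induces an interferon-like species with Hill-type production and first-order degradation. The species, parameter values, and the fixed-step integrator are illustrative assumptions; the actual model couples 7 ODEs and was solved with MATLAB's stiff solver ode15s.

```python
def hill(x, vmax, K, n):
    """Hill-type activation term: vmax * x^n / (K^n + x^n)."""
    return vmax * x**n / (K**n + x**n)

def rhs(t, y):
    """Toy system (illustrative only): V is a decaying viral driver,
    M an interferon-like species with Hill production and
    first-order degradation."""
    V, M = y
    dV = -0.1 * V
    dM = hill(V, 1.0, 0.5, 2.0) - 0.8 * M
    return [dV, dM]

def rk4(f, y0, t0, t1, dt):
    """Classic fixed-step Runge-Kutta integrator (stand-in for ode15s)."""
    t, y = t0, list(y0)
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

V_end, M_end = rk4(rhs, [1.0, 0.0], 0.0, 50.0, 0.01)
print(V_end, M_end)
```

With a degradation term present, the interferon-like species rises and then relaxes back toward zero as the driver decays, rather than growing without bound; reaching this kind of steady behavior is what the degradation-term optimization aims for.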

RESULTS Optimization of the IRF7 Hill equation and degradation terms yielded results similar to the original model shown in Figure 2.


DISCUSSION Optimization of the original equations with just the phosphorylated IRF7 Hill equation substitution showed that the system would not reach steady state without further modifications. Interferon concentrations continued to increase past the 10-hour time period despite the addition of degradation factors to the equations.

Figure 2: Simulation of IFN-pretreated cell response to virus infection. IFN-β pretreatment consisted of time -3 to 0 hours and virus infection occurred at 0 hours. Species traced were phosphorylated STAT1 (STAT), SOCS1 mRNA (SOCSm), IRF7 mRNA (IRF7m), phosphorylated IRF7 (IRF7p), interferon α and β mRNA (IAm and IBm), and interferon α and β protein outside the cytoplasm (IAenv and IBenv).

Figure 3 below displays the new model with all of the adjustments as mentioned.

Figure 3: Simulation of cell response to virus infection. Species traced were phosphorylated STAT1 (STAT), SOCS1 mRNA (SOCSm), IRF7 mRNA (IRF7m), phosphorylated IRF7 (IRF7p), interferon mRNA (IFNm), interferon protein outside the cytoplasm (IFNenv), and viral protein concentrations.

Introduction of the virus equation, similar to the work of Fribourg et al., gives the model more control to lower interferon concentrations over time, as would happen in reality. However, unrealistic negative concentration values in the new model indicate that further modification is necessary to address issues in both model feasibility and stability. The introduction of a feedback loop may bring the model to steady state. REFERENCES 1. Shindo, N. World Health Organization. (2010). Global seasonal influenza disease burden and implementation of WHA 56.19 [PowerPoint slides]. Retrieved from http://www.who.int/immunization/sage/Influenza_4_WHA56.19_N_Shindo_SAGE_April_2010.pdf. 2. Peiris J.S., Cheung C.Y., Leung C.Y., Nicholls J.M. "Innate immune responses to influenza A H5N1: friend or foe?" Trends Immunol. 2009 Dec;30(12):574-84. 3. Qiao L. et al. "Immune response modeling of interferon beta-pretreated influenza virus-infected human dendritic cells." Biophys J. 2010;98:505-514. 4. Fribourg et al. "Model of influenza A virus infection: dynamics of viral antagonism and innate immune response." J Theor Biol. 2014 Jun 21;351:47-57. ACKNOWLEDGEMENTS Funding provided by the Swanson School of Engineering and the Office of the Provost.


Analyzing the Control Strategies of the Influenza Virus Travis La Fleur and Jason E. Shoemaker Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: tll38@pitt.edu, Web: http://shoemakerlab.pitt.edu INTRODUCTION Many modern medicines offer treatments for virus infections, but to significantly improve our methods of treatment, a systematic understanding of how viruses behave after entering a cell is needed. One virus of particular interest is the influenza virus, more commonly referred to as the "flu". The primary goal of this project was to gain further insight into the control concepts that govern influenza replication and to increase the accuracy of an existing mathematical model of this process. More specifically, the aim was to identify which parameters and proteins are most essential for influenza virus replication. Heldt et al. mathematically modeled the intracellular dynamics of influenza, investigating virus entry, replication, nuclear export, and viral release. They focused primarily on the effect of viral proteins on the regulation of virus replication [1]. Their model showed protein synthesis rates to be unhindered up to 24 hours post-infection, even as virus release begins (4 hpi). They observed a similar trend among vRNA, mRNA, and cRNA levels during the infection process, which agrees with data obtained experimentally by Kawakami et al. [2]. Although protein synthesis rates are unhindered, protein levels within the infected cell begin to taper off at certain values after viral release. A sensitivity analysis helps to identify important input variables that, when adjusted in small increments, produce large uncertainties in the output. For any given model, "noise" is expected, so highly sensitive input variables should be refined to lower their sensitivity. Therefore, a sensitivity analysis on a model that can predict experimental data (such as Heldt et al.'s) will be helpful in increasing its accuracy.

METHODS The intracellular dynamics of influenza were modeled by coding 32 differential equations and solving them simultaneously using MATLAB R2015b. Once the results matched those from Heldt et al., two sensitivity analyses were used to determine which proteins influence viral replication the most. Both analyses perturb certain input parameters and show how the perturbations affect the selected outputs. The first analysis was a local sensitivity analysis. Using the SimBiology add-on for MATLAB, a visual representation of the replication process was created, the reactions and their rate laws were coded into the model, and the analysis was performed. The inputs selected were 5 parameters that govern viral entry, and the associated output was the number of virions entering the nucleus. The second analysis performed was a global sensitivity analysis (GSA). Unlike the local sensitivity analysis, a GSA accounts for the effects of the input values throughout the entire parameter space rather than in one local area. To do this, the SAFE (Sensitivity Analysis for Everybody) toolbox for MATLAB was used [3]. This toolbox contains various methods for performing the analysis; the one used for this project was the Elementary Effects Test. Coding the influenza model into this toolbox allowed for one input and one output at a time. Using 4 of the same inputs as the local sensitivity analysis, sensitivities of these parameters with respect to protein production were collected on a global scale. RESULTS The local sensitivity analysis was performed on parameters that dictate viral entry to determine which produced the largest uncertainty. Figure 1 shows sensitivity readings for these parameters with virions entering the nucleus as the output. Table 1 below describes the parameters used in these analyses.


Parameter   Description
ken         Endocytosis
kfus        Fusion with endosomes
Ffus        Fraction of fusion-competent virions
kattlo      Attachment for low-affinity binding sites
katthi      Attachment for high-affinity binding sites
Pnep        Nuclear export protein
Vpnuc       Virions in the nucleus

Table 1: All parameters used in the sensitivity analyses.
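The Elementary Effects Test used in the global analysis can be sketched in a simplified one-at-a-time form. The toy model and all names below are illustrative placeholders, not the 32-equation influenza model, and the SAFE toolbox's actual sampling strategy is more elaborate than this version.

```python
import random

def elementary_effects(model, lo, hi, n_traj=100, delta=0.1, seed=0):
    """One-at-a-time estimate of Morris elementary effects.

    For each random base point, each parameter (scaled to [0, 1]) is
    perturbed by `delta` in turn; the mean absolute effect per
    parameter ranks its global importance.
    """
    rng = random.Random(seed)
    k = len(lo)
    totals = [0.0] * k
    for _ in range(n_traj):
        # random base point in the unit hypercube, leaving room for +delta
        u = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        x = [lo[i] + u[i] * (hi[i] - lo[i]) for i in range(k)]
        f0 = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] = lo[i] + (u[i] + delta) * (hi[i] - lo[i])
            totals[i] += abs(model(xp) - f0) / delta
    return [t / n_traj for t in totals]

def toy(p):
    # Toy stand-in for the viral-entry model: output depends strongly
    # on the first ("ken"-like) parameter and weakly on the second.
    return 10.0 * p[0] + 0.1 * p[1]

mu = elementary_effects(toy, lo=[0.0, 0.0], hi=[1.0, 1.0])
print(mu)
```

For this linear toy model the mean elementary effects recover the coefficients scaled by the parameter ranges, so the first parameter correctly ranks as the most sensitive.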

Figure 1: Local sensitivity plots of viral entry. The x-axis labels the input values, and the y-axis represents their associated sensitivities.

To determine the sensitivity with respect to the proteins, the global sensitivity analysis was performed with the same inputs (except kfus), but here one protein was used as the output value. Figure 2 shows the Elementary Effects Test performed on the nuclear export protein (Pnep). The measure of elementary effects quantifies the relative importance of each input parameter.

DISCUSSION Comparing the global and local sensitivity analyses, 'ken' is the most sensitive parameter in both viral entry and protein synthesis. It can be concluded, then, that 'ken' should be refined to lower its sensitivity, thereby improving the accuracy of the model. After improving these parameters, additional analyses should be performed to identify which protein species have the highest sensitivities with respect to the virions released from the cell. In further research, known host cell responses should be added to this model to improve its accuracy. One known feedback that can be added is the immune system response to the viral attack on the host cell. The typical immune system response to the presence of a virus is to stimulate T and B cells, which in turn leads to the production of antibodies [4]. Adding the immune system response to the mathematical model will allow a more accurate depiction of how quickly influenza replication can occur in the presence of active antibodies. With this addition, the understanding of influenza behavior after entering a cell will be more complete, and the model will better represent the influenza replication process. REFERENCES 1. Heldt et al. J. Virol., 86, 7806-7817, 2012. 2. Kawakami et al. J. Virol. Methods, 173, 1-6, 2011. 3. Pianosi et al. Environ. Modelling & Software, 70, 80-85, 2015. 4. Ludwig et al. Trends in Molecular Medicine, 9, 46-52, 2003. ACKNOWLEDGEMENTS Dr. Jason Shoemaker, the Swanson School of Engineering, the Office of the Provost, and PPG jointly funded this project.

Figure 2: Mean of Elementary Effects plotted on the y-axis, with number of model evaluations on the x-axis. The dotted lines represent the evaluation using bootstrapping, and the solid lines are without bootstrapping.


THE ENGINEERING OF NANOSTRUCTURED INTERMETALLIC CATALYSTS FOR CO2 REDUCTION Brett Amy, Yahui Yang, and Götz Veser University of Pittsburgh Center for Energy, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: bna14@pitt.edu INTRODUCTION A major contributor to the global warming crisis is the release of anthropogenic carbon dioxide [1]. It is therefore imperative to discover and implement methods of eliminating industrial CO2 waste. One such active field of study is the use of chemical catalysts to facilitate the high-energy reduction of CO2 to methanol, CH3OH [2]. Methanol functions well as a fuel additive, and the CO2 waste is effectively recycled. The current industry standard for methanol synthesis is a Cu/ZnO catalyst. This catalyst is functional, yet there are deficiencies which could be addressed by alternative materials. The reaction conditions for reduction using Cu/ZnO are typically ~200-300°C and approximately 50 bar, and the amorphous nature of the catalyst leaves little room for control of the active sites. One promising candidate for the reduction of CO2 to CH3OH is a copper-zirconium intermetallic alloy [3]. Indeed, preliminary computational models of Cu-Zr intermetallic nanoparticles conducted by collaborators in our department have shown positive results for CO2 adsorption/desorption onto the catalyst surface, a key step in the hydrogenation reaction. Creating a defined nanostructure consisting of Cu-Zr active sites would allow a closer look into the mechanisms of reaction. METHODS In order to verify and characterize these observations, an experimental synthesis was devised. The target product was a well-defined intermetallic catalytic Cu-Zr nanoparticle with ~5nm diameter and 5:1 molar Cu:Zr ratio, evenly dispersed on the surface of spherical silica nanoparticles (~250nm diameter). The first step of the synthesis was to create the silica nanoparticles. This was accomplished using the

well-established Stöber method [4]. Tetraethyl orthosilicate (TEOS) formed SiO2 precursors in a basic aqueous environment via hydrolytic and condensation reactions. Control of the concentration of TEOS and base (typically ammonium hydroxide) as well as reaction time allowed for fine control of the diameter of the resulting silica nanoparticle. The synthesis of the silica nanoparticles was allowed to proceed for one hour. After this time, the material was centrifuged, washed, calcined and rehydroxylated to improve catalytic activity. When the silica nanoparticles had dried, copper nanoparticles were deposited onto the surface. This was accomplished via addition of copper chloride to an aqueous suspension of the silica nanoparticles. Following thorough mixing and dissolution of the two species, the solution was titrated to a pH of 10 using ammonium hydroxide. This precipitated the copper onto the surface of the silica. The material was again centrifuged, washed, and calcined. An image of the material can be seen in Figure 1. The copper catalyst was reduced under H2 to remove oxides before the zirconium partial replacement was performed. Then, a round-bottom flask containing tetraethylene glycol (TEG), the Cu/SiO2 nanoparticles and dilute zirconium chloride (in a molar ratio of 5:1 Cu:Zr) was heated (typically to 80°C, though other temperatures were tested) and stirred under inert atmosphere for 5-20 minutes. This allowed for the exchange of ions between the salt and the metallic surface. The product was then cooled, centrifuged and washed, and then annealed under argon after drying. The product was characterized using energy-dispersive x-ray spectroscopy (EDX), x-ray diffraction (XRD), and pulse chemisorption/temperature programmed desorption (TPD) analysis.


RESULTS Cu/SiO2 nanoparticles were successfully synthesized, as seen in Figure 1. XRD patterns (Figure 2) show the presence of a sharp peak for metallic copper and smaller accompanying peaks indicating CuO. A pulse chemisorption/TPD analysis was run on the Cu/SiO2 material to observe the behavior of the control sample. No strong adsorption or desorption of CO2 was observed. This observation is corroborated by literature, which reports that CO2 does not adsorb on pure copper at temperatures above 100 K [5].

Figure 1: TEM image of the Cu/SiO2 sample.

The initial XRD results showed a promising level of replacement of copper by zirconium. In Figure 2, five different spectra are displayed for a sample synthesized at 80°C; samples were drawn from the synthesis broth in 5 min increments. Longer timeframes resulted in the gradual reduction of peak intensity, as well as increased preference for cuprous oxides over their metallic counterparts. Further

Figure 2: XRD spectra of Cu-Zr intermetallic catalyst samples. Only compounds listed in the legend are observed in these samples.

analysis of samples created under similar conditions found additional distinct XRD peaks matching those of ZrO2. EDX analysis conducted on Cu-Zr samples found a lower Zr:Cu ratio than expected if all possible replacement had occurred. DISCUSSION There is an apparent issue with the metallic replacement step of the synthesis. The ZrO2 observed in some XRD spectra is believed to result from an irreversible process which occurs during the synthesis. Literature analysis and the low pH of the ZrCl4/TEG solution (~1) support the hypothesis that the salt is highly reactive and produces HCl, which hinders the synthesis. The low amount of Zr seen in EDX analysis further suggests a troubled synthesis. It is probable that most zirconium exists in the TEG solution as a partially oxidized salt, zirconyl chloride. Finally, there is observable degradation of the copper nanoparticles after as little as 20 minutes when heated in TEG alone (i.e., without ZrCl4). Future work will involve identifying a more stable zirconium salt to use instead of ZrCl4 and tailoring the reaction conditions to cause as little damage to the nanoparticles as possible. Alternatives to the current 'wet chemical' replacement method may also be considered. Pulse chemisorption and TPD analysis of a well-defined Zr-Cu/SiO2 sample will then provide key insight into the viability of the intermetallic catalyst. REFERENCES 1. Smith, Joel B., and Dennis A. Tirpak. "The potential effects of global climate change…" 1989. 2. Studt, Felix, et al. "CO and CO2 hydrogenation to methanol calculated using the BEEF-vdW functional." Catalysis Letters 143.1 (2013): 71-73. 3. Szummer, A., et al. "Hydrogenation enhancing catalytic activity of Cu-Zr amorphous alloys." J. Phys.: Condens. Matter 14, 2002. 4. Ibrahim, Ismail A.M., et al. "Preparation of spherical silica nanoparticles: Stober silica." Journal of American Science, 2010;6(11). 5. M.J. Sandoval, et al. "TPD Studies of the Interactions of H2, CO, and CO2 with Cu/SiO2." J. of Catalysis, V. 144, Issue 1, Pages 227-237, 1993. ACKNOWLEDGEMENTS Thanks to the Swanson School of Engineering for providing funding, as well as to Dr. Veser's research group for their diligent work.


CELL RECOVERABILITY AFTER EXPOSURE TO COMPLEX ENGINEERED NANOPARTICLES Julie Hartz, Sharlee Mahoney, Thomas Richardson, Ipsita Banerjee, and Götz Veser Department of Chemical Engineering University of Pittsburgh, PA, USA Email: jlh233@pitt.edu, Web: http://www.pitt.edu/~gveser/www/index.html INTRODUCTION Over the past decade, the use of nanoparticles (NPs) and complex engineered nanomaterials (CENs) in both consumer products and industrial applications has been exponentially increasing. They show great potential in many sustainable applications such as improving solar panel efficiency [1]. NPs are desirable due to their unique physical and chemical properties which differ from their bulk forms. For example, in its bulk form, elemental gold is yellow and chemically inert while NP gold is red and chemically reactive with favorable catalytic properties [2]. However, despite their significant promise, NPs could have unforeseen, detrimental health effects on humans and the environment due to their unique properties. Current environmental and health regulations regarding NPs treat them as identical to their bulk substances. These insufficient regulations motivate our lab to study the potential toxicity of CENs.

Previous studies in our lab investigated the effect of Ni/SiO2 CENs on cell metabolism because they are widely used catalysts composed of a toxic metal (Ni) and a nontoxic support (SiO2). We compared the toxic effects of CENs to those of NiCl2, a metal salt that produces free Ni2+ ions in solution. Both materials caused metabolism to decrease with increasing exposure time. However, when the exposure was removed, we observed an interesting difference in the long-term toxic effects of the two materials: the metabolism of cells exposed to free Ni2+ ions returned to full functionality, while that of cells exposed to CENs did not recover. These results motivated us to further investigate cell recoverability after CEN exposure.

For the present study, in order to determine why cells could not recover, we examined CEN uptake patterns. It was important to determine whether cells were taking up the CENs, and if so, whether the cells have the ability to expel them. To do so, cells were cultured on Transwell inserts that allowed us to expose them to fluorescently tagged CENs for a given time and then relocate them to a NP-free environment, taking samples during and after exposure.

METHODS Non-hollow Ni on SiO2 ("nhNi@SiO2") CENs were synthesized using a one-pot reverse microemulsion developed and reported previously by the Veser lab [3]. The CENs are composed of Ni NPs embedded in an amorphous, porous SiO2 support. These CENs were fluorescently tagged by stirring 0.20 g of nhNi@SiO2 in a solution of 10 mL DI water, 10 mL 1 M HCl, 50 µL (3-aminopropyl)trimethoxysilane, and 50 μL Alexa Fluor 488 NHS Ester for 2 hours, and subsequently washing three times with DI water.

Mouse-derived NIH 3T3 cells were used to evaluate uptake and recoverability after exposure to fluorescently tagged nhNi@SiO2. Cells were cultured in Dulbecco's Modified Eagle Medium (DMEM, Life Technologies) supplemented with 10% fetal bovine serum (FBS, Atlanta Biologicals) and 1% penicillin-streptomycin (P/S, Life Technologies), referred to as 3T3 media. The cells were cultured at 37°C in a 5% CO2 environment. Cells were passaged at 70% confluency (every 2-4 days). They were seeded at 400,000 cells/well on three 24 mm diameter Transwell Permeable Supports (Corning) that had been previously soaked in 3T3 media for 15 minutes. The Transwell inserts were then placed in a 6-well plate; 1.5 mL of 3T3 media was added into the Transwell, while 2.6 mL was added to the well itself.

To sterilize the CENs for the nhNi@SiO2 media solutions, they were exposed to UV light for one hour. Subsequently, the NPs were sonicated for 15 minutes at a concentration of 50 mg Ni/L in 3T3 media supplemented with 20 mM HEPES buffer.


After sonication, the media solution was added to both the well and the Transwell. The control was exposed to NP-free media. To analyze CEN uptake, cells were trypsinized, collected, centrifuged, washed, and resuspended in PBS. An Accuri C6 flow cytometer was used to quantify the fluorescent intensity of the samples. After 24 hours, the control cells and one Transwell of exposed cells were collected and analyzed. The remaining Transwell of exposed cells was transferred to a well with fresh, CEN-free media and incubated for another 24 hours, after which the cells were collected and analyzed. DATA PROCESSING At least 10,000 events were collected per sample during flow cytometry. Events were gated using the side scatter to eliminate dead cells and dust, and the forward scatter to discount doublets. The absolute fluorescent intensity values were collected and normalized by dividing by the respective reading from the control group. RESULTS

Figure 1. Normalized fluorescent intensity of cells not exposed to CENs (left), cells exposed for 24 hours with no recovery time (middle), and cells exposed for 24 hours followed by 24 hours of recovery time (right). All “exposed” cells were exposed to a solution of 50 mg Ni/L fluorescently tagged nhNi@SiO2. To determine the cells’ ability to expel the CENs during a recovery period, fluorescent intensity was measured on a single-cell basis after both a 24-hour exposure and a subsequent 24-hour recovery period. After 24 hours of exposure to

the 50 mg Ni/L solution of nhNi@SiO2, the normalized fluorescent intensity increased by ~23% compared to the control. The cells that were exposed for 24 hours and subsequently given another 24 hours of no exposure displayed a similar, but slightly decreased, fluorescent intensity. DISCUSSION Since the cells were exposed to fluorescently tagged CENs, the increase in cellular fluorescent intensity after 24 hours of exposure indicates that the cells took up the CENs. However, the fluorescent intensity remained fairly constant between the cells that were exposed for 24 hours and those that had a subsequent recovery period, implying that the cells are incapable of expelling CENs once they are engulfed. In a previous study in our lab, we showed that cells exposed to NiCl2 salt exhibited a decrease in cell metabolism, but once the exposure was removed, metabolism returned to its full capacity because the cells could expel the toxic Ni2+ ions via diffusion. With the latest Transwell results revealing that cells cannot effectively expel CENs, we hypothesize that the Ni NPs on the CENs will act according to a Trojan horse mechanism, in which they will continue to release toxic Ni2+ ions within the cell, making recovery impossible. REFERENCES 1. Nakayama, K. et al. “Plasmonic nanoparticle enhanced light absorption in GaAs solar cells.” Applied Physics Letters 93, 2008. 2. Love, S. A. et al. “Assessing Nanoparticle Toxicity.” Annual Review of Analytical Chemistry 5.1, 181-205, 2012. 3. Mahoney, S. et al. “The Developmental Toxicity of Complex Silica-Embedded Nickel Nanoparticles Is Determined by Their Physicochemical Properties.” PLOS ONE 11(3): e0152010, 2016. ACKNOWLEDGEMENTS I would like to show my immense gratitude to PPG and SSOE for supporting this research. My thanks are also extended to my research mentors Dr. Götz Veser and Dr. 
Ipsita Banerjee, as well as Thomas Richardson, Jason Ferree, and Sharlee Mahoney for their guidance, patience, and wisdom.


ASSESSING TOXICITY OF COMPLEX ENGINEERED NANOPARTICLES Kenny To, Sharlee Mahoney, Ipsita Banerjee and Götz Veser Department of Chemical Engineering University of Pittsburgh, PA, USA Email: klt60@pitt.edu, Web: http://www.engineering.pitt.edu/Departments/Chemical-Petroleum/ INTRODUCTION Recently, the emergence of new nanoparticle technology has radically increased human exposure to nanoparticles (NPs). Products in which NPs are found include clothing, cosmetics, and beer bottles. These products incorporate NPs because of their unique active properties, which differ from the bulk properties. Although society stands to greatly benefit from NP technology, it is imperative to examine the health effects and overall safety of this technology. Since there is currently no standard for NP regulation, research needs to be conducted to learn more about the long-term toxic effects of contact with NPs. One type of NP is the amorphous silicon dioxide (SiO2) NP, which is widely accepted as nontoxic and is popularly used in catalysis and drug delivery applications. Another common type is the metallic NP, which is popular for its catalytic properties; the main toxicity concern is that ions can dissolve from the particle. By integrating a metallic NP, more specifically nickel (Ni), onto an amorphous SiO2 support, complex engineered nanoparticles (CENs) can be synthesized in different configurations and their toxic properties observed in varied environments. This project hypothesizes that embedding Ni NPs inside a hollow, porous SiO2 shell (hNi@SiO2, Figure 1) will mitigate the toxicity of nickel nanomaterials by increasing the thickness of the nontoxic shell.

Figure 1: Visual of hNi@SiO2, with varying shell thicknesses METHODS The study consisted of synthesizing hNi@SiO2 with varying SiO2 shell thicknesses, which were then analyzed

using thorough physicochemical characterization techniques. To create these CENs, a reverse-microemulsion-mediated sol-gel process was used. After the complex engineered nanoparticles were synthesized, transmission electron microscopy (TEM) was used to determine the shell thickness and CEN size and, more importantly, to verify that the synthesis was a success. A synthesis is considered successful if the nanoparticles are not clumped together and there is no rupture of the nontoxic silica shell. Three different variations of hNi@SiO2, Figure 2, were synthesized and characterized.

Figure 2: TEM images of 8, 11.5, and 15 nm shell hNi@SiO2, respectively One of the first assays used after nanoparticles are successfully synthesized is the settling assay. UV/Vis spectrophotometry is used to determine the concentration of particles remaining in suspension in media; a decrease in absorbance therefore indicates that particles have settled to the bottom of the well. To begin this experiment, 200 mg Ni/L of the CEN was dispersed in media. The absorbance of the three CENs was examined for 4 hours total, with data recorded every 5, 10, 20, and 30 minutes during each respective hour of examination. Rapid settling is predicted to result in a higher effective CEN concentration at the bottom of the well. Another assay used is the dissolution assay. Dissolution of metal ions from metal NPs is an important mechanism that determines toxicity. Therefore, the dissolution of Ni ions from the CEN into media directly correlates to toxicity [1]. This assay was done by first dispersing 100 mg Ni/L



RESULTS The settling analysis results, Figure 3, showed that by the end of the 4-hour examination, the thicker the SiO2 shell was, the higher the concentration of the CEN was at the bottom.
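The staggered sampling schedule used for the settling assay (readings every 5, 10, 20, and 30 minutes during hours 1-4, respectively) can be generated programmatically; a minimal sketch:

```python
# Build the list of absorbance sampling times (in minutes) for the
# 4-hour settling assay: hour 1 every 5 min, hour 2 every 10 min,
# hour 3 every 20 min, hour 4 every 30 min.
intervals = {1: 5, 2: 10, 3: 20, 4: 30}  # hour -> minutes between readings

timepoints = []
for hour, step in intervals.items():
    start, end = (hour - 1) * 60, hour * 60
    t = start + step
    while t <= end:
        timepoints.append(t)
        t += step

print(timepoints[:5], timepoints[-1])  # [5, 10, 15, 20, 25] 240
```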


In order to test the CENs in an in vitro model, 3T3 fibroblast cells are used. These cells are used over other cell types because they are inexpensive, robust, and proliferate quickly. Our nanomaterial cell experiments begin by plating the cells into a 24-well plate and allowing them to incubate for 24 hours. When the cells are done proliferating, the nanoparticles are sterilized for 1 hour under UV light, added to media at the highest desired concentration (300 mg Ni/L), sonicated, and then diluted to different concentrations. The cells are then exposed to these CEN solutions for 24 hours. The MTS assay, which analyzes metabolic activity, can now be used. Following NP exposure, a chemical called MTS is added to the cells [1]. Healthy cells should produce formazan and therefore a color change. The metabolism of the cells is analyzed by measuring the color of the solution at different CEN concentrations and normalizing to the control (0 mg Ni/L).
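The MTS normalization amounts to dividing each absorbance by the 0 mg Ni/L control; a sketch with made-up absorbance values (not the experimental data):

```python
# Normalize MTS absorbance (formazan color) to the 0 mg Ni/L control.
# Absorbance values below are illustrative only.
concentrations = [0, 50, 100, 200, 300]       # mg Ni/L
absorbance = [0.80, 0.72, 0.60, 0.48, 0.40]   # arbitrary example readings

control = absorbance[concentrations.index(0)]
normalized_metabolism = [a / control for a in absorbance]
print([round(x, 2) for x in normalized_metabolism])  # [1.0, 0.9, 0.75, 0.6, 0.5]
```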

showed that the 11.5 and 15 nm shell had a similar trend.


Figure 4: MTS analysis results From the dissolution analysis, fewer nickel ions from the hNi@SiO2 dispersed into solution as the shell thickness increased.

of hNi@SiO2 into media. At specific time points (1, 4, 6, and 24 hours), the CEN was separated from solution using centrifugal filtration and the concentration of nickel ions in solution was then analyzed.


Figure 5: Dissolution analysis results DISCUSSION Since the MTS analysis is the most important test for toxicity, the results agree with the hypothesis that as shell thickness increases, the level of toxicity decreases. The dissolution assay serves as an explanation for the MTS assay, since the concentration of ions directly correlates to toxicity. That is why the 11.5 and 15 nm shell hNi@SiO2 show very similar trends. The settling analysis also showed that the thicker-shelled CENs settle faster (which, by itself, should make them more toxic), which confirms that toxicity is governed by ion dissolution rather than settling.

Figure 3: Settling analysis results

REFERENCES [1] Love, S. A.; Haynes, C. L. “Assessing Nanoparticle Toxicity.” Annual Review of Analytical Chemistry 5, 181-205, 2012.

The MTS assay showed that the 8 nm thick shell hNi@SiO2 reduced the metabolism of the 3T3 cells more than the thicker shell nanoparticles as the concentration of CEN increased. The results also

ACKNOWLEDGEMENTS Thanks to members of the Veser and Banerjee labs, the Swanson School of Engineering, and the Office of the Provost.


INFLUENCING DIFFERENTIATION AND GROWTH OF NEURAL PROGENITOR CELLS WITH GENE SILENCING AND LAMININ Meghan J. Wyatt1, William Ong2, Wai Hon Chooi2, Sing Yian Chew2 1 Department of Bioengineering, University of Pittsburgh, PA, USA 2 Chew Lab, School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore Email: mjw117@pitt.edu, Web: http://www.ntu.edu.sg/home/sychew/ INTRODUCTION Damage to the central nervous system (CNS) caused by injury or illness can be debilitating. One novel potential treatment for CNS injury is cell replacement therapy using neurons derived from induced pluripotent stem cells (iPSCs). In order to use iPSCs as an effective treatment, researchers must be able to control the differentiation of iPSCs into neuronal lineages. Stem cell fate commitment is driven by a variety of factors, both physical and chemical. The ECM molecule laminin is known to be of particular importance to the proliferation of neural stem cells [1]. In this study we seek the optimal concentration of laminin necessary for cell seeding, as determined by cell number. In addition to laminin, culturing cells on 3D electrospun scaffolds may provide appropriate topographical cues for differentiation. Another challenge in directing stem cell fate commitment is the presence of repressive pathways that guard against lineage commitment. A previous study demonstrated the efficacy of small interfering RNA (siRNA) against RE-1 silencing transcription factor (REST) for enhancing neuronal fate commitment [2]. One known method of modulating neuron development is the use of retinoic acid (RA) and purmorphamine (PMN) [3]. Therefore we chose to examine the effects of REST siRNA treatment on existing protocols for differentiating NPCs into motor neuron progenitor cells (MNPs). 
METHODS Laminin Concentration Study The first portion of the study was concerned with determining the amount of laminin coating needed for seeding iPSC-derived NPCs. To induce differentiation into a neural lineage, iPSCs were

cultured in a specific NPC medium that contained the following components:
• N2: 100x (0.5 ml)
• B27: 50x (1.0 ml)
• Glutamax: 100x (0.5 ml)
• P&S: 1%/100x (0.5 ml)
• BSA: 50000x (0.001 ml)
• hLIF: 1000x (0.05 ml)
• SB431542: 1000x (0.05 ml)
• y-27632: 1000x (0.05 ml)
• CHIR99021: 1000x (0.05 ml)
• DMEM/F12: 23.65 ml
• Neurobasal: 23.65 ml
Cells were regularly passaged in a 1:3 ratio at full confluence. Excess cells were stored in CryoStor CS10 according to manufacturer protocol. Thawing of cells for further experiments was done according to manufacturer protocol. Aligned nanofibrous PCL scaffolds were electrospun onto aluminum foil, then cut to fit 24-well plates. Scaffolds were pre-wet in 70% EtOH and washed in DI water before coating with 0.5 mg/ml of polydopamine for 4 hours at room temperature, followed by poly-DL-ornithine coating for one hour at 37°C. Coated scaffolds were washed 3 times and lyophilized overnight. Next, scaffolds were UV sterilized prior to coating with laminin for 2 hours at room temperature. Six different weights of laminin were tested: 10 μg, 5 μg, 2.5 μg, 1.25 μg, 0.625 μg, and 0 μg. Each laminin-coated scaffold was seeded with 50,000 iPSC-NPCs and cultured in motor neuron progenitor medium (N2B27 + retinoic acid (RA) + purmorphamine + rock-inhibitor). After 3 days, cells were counted and stained with a Live/Dead Assay kit.
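The six laminin masses tested form a two-fold serial dilution from 10 μg plus an uncoated control, which can be generated as:

```python
# Two-fold serial dilution of laminin mass from 10 ug, plus a 0 ug
# (uncoated) control, matching the six conditions tested.
top_mass_ug = 10.0
n_dilutions = 5

masses = [top_mass_ug / 2**i for i in range(n_dilutions)] + [0.0]
print(masses)  # [10.0, 5.0, 2.5, 1.25, 0.625, 0.0]
```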


MNP Study Scaffolds and coverslips were prepared for culture of iPSC-NPCs. Cells were seeded onto either aligned electrospun scaffolds or 2D coverslips. Four differentiation protocols were used for 2D culture and two were used for scaffolds. For 2D culture, Group 1 received culture medium with PMN+RA, Group 2 received PMN only, Group 3 received RA for 4 days then PMN+RA for 10 days, and Group 4 received normal medium for 4 days then PMN+RA for 10 days. Scaffold Group 1 received medium with PMN+RA and Group 2 received RA for 4 days then PMN+RA for 10 days. Groups were subdivided and treated with one of: REST siRNA, scrambled siRNA (scr-siRNA), or no additional differentiation cues (no treatment, n.t.). After 14 days, cells were fixed and immunostained for neuronal lineage markers: TUJ1, MAP2, Nestin, and Olig2. RESULTS As seen in Figure 1, cell number was higher among the 10 μg, 5 μg, 2.5 μg, and 1.25 μg groups.

Figure 1: Plot of cell number vs laminin concentration. Error bars represent SEM.

Figure 2 below displays the results of staining for Nestin and Olig2 on the 2D culture groups. All of

the 2D culture groups stained positive for Nestin and negative for Olig2. Stains for TUJ1 and MAP2 were also negative for all of the 2D culture groups. Scaffold Group 2-Rest and Group 2-Plain stained positive for Nestin. All scaffold Group 2 subgroups stained positive for Olig2 and negative for TUJ1. Group 1-Plain and Group 1-Rest stained positive for MAP2. DISCUSSION Too low a laminin mass is detrimental to cells, as seen by the lower cell numbers in the 0.625 μg and 0 μg groups. In the 2D culture experiments on the differentiation of NPCs to MNPs, positive staining for Nestin, an NPC marker, in all groups indicates a lack of differentiation. Negative stains for MNP markers TUJ1, Olig2, and MAP2 highlight that there are no MNPs present in the culture. In MNP scaffold experiments, the presence of MAP2 and Olig2 suggests that some of the NPCs have differentiated into MNPs. Increased MNP marker signal in REST siRNA+scaffold subgroups relative to control+scaffold subgroups indicated that REST siRNA enhances iPSC-NPC differentiation. Together, these results suggest that a combination of topographical cues and silencing of REST is most effective in inducing differentiation of NPCs. Since many cells were negative for MNP markers, we also consider the possibility that cells not expressing the chosen markers have already passed through the MNP stage and fully differentiated into neurons. In future studies we will examine this possibility by testing cells for neuronal markers such as Islet1. REFERENCES 1. Hall et al. BMC Neurosci 9, 71, 2008. 2. Lowa et al. Biomaterials 34, 3581-3590, 2013. 3. Sances et al. Nat Neurosci 19, 542-553, 2016. ACKNOWLEDGEMENTS This work was funded by the Swanson School of Engineering, the University of Pittsburgh Office of the Provost, the University of Pittsburgh Study Abroad Scholarship, and the Nanyang Technological University Summer Research Internship Program.

Figure 2: Immunofluorescent staining for neuronal markers Nestin and Olig2. Nestin is shown in red, DAPI in blue, Olig2 in green.


IMPROVEMENT OF ROSETTA BIOCOMPUTING SOFTWARE FOR CANONICAL ANTIBODY CDR LOOP PREDICTION Laura Beth Fulton Mechanical Engineering, University of Pittsburgh SSOE Summer Scholar Johns Hopkins University, Department of Chemical and Molecular Bioengineering Program in Molecular Biophysics, Gray Laboratory, Baltimore, MD Email: laurabeth@pitt.edu, Web: laurabeth.xyz INTRODUCTION Computational modeling of protein structures and protein-protein interactions is an increasingly important method for molecular biophysics research as well as for applied research in drug design. Experimental protein structure determination is often labor intensive, time consuming, and costly, involving techniques such as X-ray crystallography and NMR [1, 2]. At Johns Hopkins University, the Gray laboratory is developing computational tools for antibody structure prediction and antibody docking as part of the Rosetta Commons biocomputing software suite to tackle real world problems [3, 4]. Antibodies, with their high affinity and specificity to target antigens, are of particular interest as potential therapeutics for the prevention and treatment of infectious diseases. They are Y-shaped glycoproteins that consist of a constant region and a fragment variable (Fv). Antibody specificity is located in the Fv, which contains the antigen-binding site comprised of six complementarity-determining region (CDR) loops [1, 2]. The six CDRs consist of variable regions of light (VL) and heavy (VH) chains. CDR loops (L1, L2, L3, H1, H2, H3) are immunoglobulin (Ig) hypervariable domains responsible for antigen recognition and specific antibody (Ab) binding [1]. In order to create a homology model of an antibody variable domain, Rosetta biocomputing software splits the antibody sequence into heavy and light chain framework regions and six CDR loops. 
For each of these regions, a template is picked from a set of antibody crystal structures based on sequence similarity using a BLAST search [5]. The six CDR loops are then grafted onto the framework regions. Previous research conducted in the Gray laboratory indicates that template selection using BLAST does not always lead to the lowest possible energy as assessed by the Rosetta score function. Alternative templates that were manually selected led to substantially lower energies in some cases.

“A New Clustering of Antibody CDR Loop Conformations,” published in the Journal of Molecular Biology, demonstrates clustering of conformations of the five non-H3 CDR loops [1]. For each of the clusters, one representative median structure is identified [1]. Research in the Gray laboratory has suggested that the selection of these median conformations as templates for the non-H3 CDR loops can yield lower-energy structures than selection of a template based on sequence similarity. This research focuses on improving the accuracy of CDR loop prediction and contributing to the ongoing goal of improving protein structure prediction. To improve selection, median protein CDR loop conformations were incorporated into the antibody prediction algorithm, a Python script was written that parses PDBs and their clusters, and C++ code was added for proline filtering of the CDR loops. This research will contribute to an improved prediction of antigen-binding sites, which is highly relevant for antibody docking applications and design strategies based on homology models. METHODS Part 1: CDR cluster identification was obtained from “A New Clustering of Antibody CDR Loop Conformations.” Information for structures and their median PDB IDs was recorded in a hash table. As part of cluster identification, the positioning of prolines within clusters was noted and rules drafted for proline filtering. Proline filtering occurred for H1 and L3 clusters. For the H1 cluster sequence, filtering is possible only when the sequence length is 13 and there is a proline (P) at position 9 of the sequence. For L3, sequence length is checked, and if the length is 8, 9, or 10

filtering is possible. For lengths 8 or 9, a proline must be present at position 6 for filtering, and for length 10, prolines must be present at positions 6 and 7 for filtering to occur. Part 2: A Python script was written to parse the Rosetta antibody database. The script identified



the PDBs and corresponding heavy (H1-H3) and light (L1-L3) chains. This information was organized and piped to a file for later use. Cases where chain information was missing for PDBs were also noted, and this data and related error messages were piped to a CDR failures file for debugging.
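A minimal sketch of this parsing step (the real script targets the Rosetta antibody database; the record format and names below are hypothetical illustrations):

```python
# Sketch: read "pdb_id chain_id" records, collect CDR chain info per PDB,
# and flag entries with missing chains for a failures log.
EXPECTED_CDRS = {"H1", "H2", "H3", "L1", "L2", "L3"}

def parse_antibody_db(lines):
    pdbs, failures = {}, []
    for line in lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed records
        pdb_id, cdr = parts
        pdbs.setdefault(pdb_id, set()).add(cdr)
    for pdb_id, cdrs in pdbs.items():
        missing = EXPECTED_CDRS - cdrs
        if missing:
            failures.append((pdb_id, sorted(missing)))
    return pdbs, failures

records = ["1abc H1", "1abc H2", "1abc H3", "1abc L1", "1abc L2", "1abc L3",
           "2xyz H1", "2xyz L1"]
pdbs, failures = parse_antibody_db(records)
print(failures)  # 2xyz is missing several chains
```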

Figure 1: Flow of parsing for PDBs Part 3: To improve protein prediction for the CDR loops, proline filtering was added. C++ code was written to filter protein sequences for H1 and L3 clusters based on sequence length and proline positioning. For example, based on the CDR cluster identification rules obtained in Part 1, an L3 sequence of length 8 with a proline at position 6 would be filtered out, whereas an L3 sequence of length 8 lacking a proline at position 6 would remain unfiltered.
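The proline-filtering rules from Part 1 can be expressed compactly. This Python sketch mirrors the logic (the actual implementation is C++ inside Rosetta); positions are 1-indexed as in the text, and the example sequences are made up:

```python
# Proline-filtering rules (1-indexed positions, per the rules in Part 1):
#   H1: filter if length == 13 and P at position 9
#   L3: filter if length 8 or 9 with P at position 6,
#       or length 10 with P at positions 6 and 7
def filter_by_proline(loop, sequence):
    n = len(sequence)
    if loop == "H1":
        return n == 13 and sequence[8] == "P"
    if loop == "L3":
        if n in (8, 9):
            return sequence[5] == "P"
        if n == 10:
            return sequence[5] == "P" and sequence[6] == "P"
    return False

print(filter_by_proline("L3", "QQYNSYPT"))  # length 8, no P at position 6 -> False
print(filter_by_proline("L3", "QQYNSPYT"))  # length 8, P at position 6 -> True
```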

RESULTS AND DISCUSSION This research to improve prediction of antibody CDR looping contributes to the goal of the Rosetta Commons biocomputing software to better understand and improve protein structure prediction methods to solve practical problems. Using documented rules helped structure how filtering occurs, improving antibody template selection. Parsing the Rosetta antibody database was a key step in identifying PDBs and obtaining chain information. By identifying CDR clusters, it was then possible to write filtering rules involving median cluster IDs and prolines to improve the accuracy of antibody CDR loop prediction. Understanding antibody looping will help researchers better target diseases caused by misfolded proteins (type II diabetes, Alzheimer’s, Parkinson’s, Huntington’s, sickle cell disease) and develop genetic and therapeutic medical cures. REFERENCES 1. North, B., Lehmann, A., Dunbrack, R.L., “A New Clustering of Antibody CDR Loop Conformations.” Journal of Molecular Biology 406.2, 228-256, 2011. 2. Polonelli, L., et al. “Antibody Complementarity Determining Regions (CDRs) Can Display Differential Antimicrobial, Antiviral and Antitumor Activities.” PLoS ONE 3(6), 2008. 3. Weitzner, B. D., Kuroda, D., Marze, N., Xu, J. and Gray, J. J., “Blind prediction performance of RosettaAntibody 3.0: Grafting, relaxation, kinematic loop modeling, and full CDR optimization.” Proteins 82, 1611-1623, 2014. 4. Sircar, A., Kim, E.T., Gray, J. J., “RosettaAntibody: antibody variable region homology modeling server.” Nucl. Acids Res. Suppl 2., 474-479, 2009. 5. Definition: BLAST. U.S. National Library of Medicine, NCBI National Center for Biotechnology Information. ACKNOWLEDGEMENTS Funding for this research was provided by the University of Pittsburgh Swanson School of Engineering and the Office of the Provost. Thank you to Dr. 
Jeffrey Gray, Principal Investigator at Johns Hopkins University, for hosting this research experience, and thanks to my summer research mentor, graduate student Jeliazko Jeliazkov.

Figure 2: Filtering by Prolines



PERSISTENCE OF EBOLA SURROGATE AT VARIOUS TEMPERATURES AND NEUTRAL PH Nicole E. Cimabue Environmental Laboratory, Department of Civil and Environmental Engineering University of Pittsburgh, PA, USA Email: nec34@pitt.edu INTRODUCTION In 2014 the World Health Organization (WHO) declared an outbreak of the Ebola virus “a public health emergency of international concern” [1]. The outbreak, which began in West Africa that year, had ultimately caused the infection of 28,652 people and the death of 11,325 by August 2016 [2]. Symptoms include headaches, vomiting, anorexia, diarrhea, and unexplained bleeding, and infection can lead to death. It is currently unknown how long the virus persists in human waste and wastewater. The primary aim of this research is to contribute to the understanding of the persistence of Ebola virus in water environments using a surrogate virus. The surrogate bacteriophage ϕ6 (phi6) is used because it does not require Biological Safety Level 4 access. Information on the persistence of Ebola virus in fluids at different temperatures and pH levels is currently lacking. Conducting research on the survivability of the surrogate virus ϕ6, which is similar to the Ebola virus, will provide an understanding of the disease in various water environments.

four different temperature conditions (4°C, 25°C, 37°C, 45°C). The same procedure was followed for the virus at pH 7; however, HEPES buffer was added to ensure no change in pH. This solution was kept at 25°C. A solution of the host bacteria, Pseudomonas syringae, and LB was incubated at 25°C overnight on a shaking table the day before an experiment. At each sampling time, three aliquots of the relevant temperature condition were serially diluted with 0.9 mL of Dulbecco's Modified Eagle Medium (DMEM). Each dilution was plated in triplicate, with each plaque assay containing 2.5 mL of 0.6% soft agar, 0.2 mL host bacteria, and 0.1 mL of test dilution overlaid onto a 1.5% LB plate. A negative control was run at each time point by plating 0.1 mL DMEM in place of the test dilution. All plates were allowed to dry and were inverted in an incubator at 25°C overnight. The plaque forming units (pfu) were counted the day after each experiment to determine the persistence of the virus. An example of a plaque assay is shown in Figure 1.
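Plaque counts convert to a titer by dividing by the plated volume and the dilution factor; a minimal sketch with hypothetical counts (not the experimental data):

```python
# Back-calculate titer from a plaque count:
#   pfu/mL = plaques / (plated volume in mL * dilution factor)
# All numbers below are hypothetical.
plaques_counted = 47        # plaques on one plate
plated_volume_ml = 0.1      # 0.1 mL of test dilution per plate
dilution_factor = 1e-6      # e.g. the sixth 10-fold serial dilution

titer_pfu_per_ml = plaques_counted / (plated_volume_ml * dilution_factor)
print(f"{titer_pfu_per_ml:.2e} pfu/mL")  # 4.70e+08 pfu/mL
```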

This study examined the persistence of the Ebola surrogate phi6 over time in distilled water at different temperatures. METHODS The phi6 virus stock was spiked into a testing medium and kept at -80°C at an initial concentration of 2.36 × 10^10 pfu/mL. Prior to each experiment, a stock solution of 20 mL cell culture media and 23.5 μL frozen phi6 was measured into a flask. 0.5 mL of the stock solution was pipetted into multiple aliquots that were kept in the dark at

Figure 1. Plaque assay


Figure 2. Linear regression models for the persistence of phi6 in DI water at different temperatures. RESULTS The persistence of phi6 at each temperature is summarized in Figure 2. All temperatures, excluding 4°C, show significant decay. Previous research on the virus's persistence in deionized water is not available; however, the survivability of phi6 in wastewater and sterilized wastewater at 25°C suggests that the virus is reduced by 90% in 7 and 53 days, respectively [3]. This information served as an estimated average reduction rate for these experiments.
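The 90% reduction times above follow from a first-order (log-linear) decay model, log10(N/N0) = -k t, in which the time to 90% reduction (T90) is 1/k; a sketch with illustrative numbers, not the measured rates:

```python
import math

# First-order decay: log10(N/N0) = -k * t  =>  T90 = 1/k.
# Titers and times below are illustrative, not experimental values.
def decay_rate(n0, n_t, t_days):
    """log10-decay rate k (log10 units/day) from an initial and later titer."""
    return -math.log10(n_t / n0) / t_days

k = decay_rate(n0=1e8, n_t=1e6, t_days=4.0)  # a 2-log drop over 4 days
t90 = 1.0 / k
print(k, t90)  # k = 0.5 log10/day, so T90 = 2 days
```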

colder temperatures (4°C), while the virus died most rapidly, in less than one day, in the highest-temperature sample of 45°C.

At room temperature (25°C), phi6 is reduced by 90% in 1-2 days. This was faster than expected; compared to other enveloped viruses, phi6 deactivates relatively quickly. The avian influenza virus experiences the same reduction in at least 16 days on average [4], and the swine coronavirus in 11 days [5]. However, HIV is reduced in an average of <2 days, relatively similar to phi6 [6].

REFERENCES 1. World Health Organization. WHO statement on the meeting of the International Health Regulations Emergency Committee regarding the 2014 Ebola outbreak in West Africa. (http://www.who.int/mediacentre/news/statements/2014/ebola-20140808/en/) 2. CDC. 2014 Ebola Outbreak in West Africa. (https://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/) 3. Ye et al. Environ. Sci. Technol. 50, 5077-5085, 2016. 4. Nazir et al. Avian Diseases 54, 720-724, 2010. 5. Casanova et al. Water Res. 43, 1893-8, 2009. 6. Casson et al. Water Environment Research 64, 213-215, 1992.

DISCUSSION These experiments indicated a correlation between the persistence of phi6 and temperature. It was found that phi6 persists longer (28 days) in

ACKNOWLEDGEMENTS Thank you to the Swanson School of Engineering and the Office of the Provost for their funding of this research.


WELL PLUGGING WITH ABSORBENT CLAY: STUDYING BENTONITE PELLET DESCENT IN BOREHOLES Carolyn Wehner Department of Civil & Environmental Engineering University of Pittsburgh, PA, USA Email: cmw134@pitt.edu INTRODUCTION The benefits of nuclear energy are numerous; however, with these benefits come the challenges of disposing of the high-level nuclear waste produced in generating such energy. The US Department of Energy (DOE) has suggested a geological method of Deep Borehole (DBH) disposal that would involve drilling a borehole to a depth of 5,000 meters. Canisters containing the high-level waste would occupy the bottom 2,000 meters, while the top 3,000 meters would be sealed with alternating materials, including a key material, bentonite clay [1]. Bentonite is an ideal sealant because it is easily deployed, self-healing, and a low-permeability clay. The current industrial bentonite placement in boreholes consists of dropping compacted pellets into the borehole; these pellets hydrate and swell as they fall downward in the water-filled borehole. However, the bentonite can expand too rapidly, leading to plugging at a shallower depth than intended; this rapid expansion and plugging phenomenon is called bridging. Whether the pellets reach the desired depth hinges greatly on the pellet's descent velocity and its swelling rate, which determines its potential to bridge too early. Prior field studies completed by the petroleum industry suggest that bentonite pellets can only be reliably placed at a depth of 1,000 meters or less. Reliable use in deeper wells is prevented by the occurrence of bridging before the plug reaches the desired depth [2]. The goal of this research is to (1) study the phenomenon of bridging at depths greater than 1,000 meters

and (2) test whether bridging behaviors depend on grain size. METHODS A testing system that simulates a borehole was constructed, with the borehole represented by a cylindrical tube placed vertically. The apparatus is filled with distilled water, and a plug at the bottom keeps the water inside. A level is used to ensure an accurate depiction of a borehole. The borehole simulant is positioned with a light source behind the tube to effectively see the particle that will be dropped into the water-filled apparatus. A measuring stick is placed alongside the borehole simulation to track distance, as shown in Figures 1a and 1b. Various sized pellets are tested; therefore, an individual pellet's dimensions are taken, after which it is dropped into the apparatus for testing. Two cameras are used to record the process of the pellet falling down the column; one camera observes the top of the testing column and the second observes the bottom.

Figure 1a: Full-scale drawing of apparatus

Figure 1b: Close-up view of testing apparatus


DATA PROCESSING The velocity of each descending pellet is obtained from video images by utilization of MATLAB's Image Processing Toolbox. With multiple experiments and various pellet sizes, velocities are observed and compared to the particle velocity predicted by Stokes' settling law. RESULTS AND DISCUSSION Results indicate that granular size does have an effect on the falling velocities and bridging of bentonite pellets in boreholes, as demonstrated by Figure 2. However, the overall trend is counter to what is expected from Stokes' Law. According to Stokes' Law, the balance between the drag force, gravitational force, and buoyancy force leads to a predicted velocity v = (ρ_s − ρ_w)gD²/(18η), where v is the velocity of the particle, ρ_s is the density of the soil particle, ρ_w is the density of the water, g is gravitational acceleration, η is the viscosity of the water, and D is the diameter of the soil particle. Consequently, it was expected that smaller particles would have a lower velocity than larger particles, and the descent velocity for millimeter-scale particles would be on the order of tens of meters per second. However, slow velocities and the opposite dependence on particle size are shown in these experiments. This observation indicates a large additional drag force induced by interaction between the particle and the wall of the tube (analog borehole). Also, the slow descent velocity evidences drag due to turbulent fluid around the pellet, which is also not accounted for in Stokes' Law. This is an important

demonstration of these additional drag forces because the tendency in industry is to select larger pellets, approaching the size of the borehole, in order to promote more rapid descent. The results presented here indicate this approach is potentially counterproductive and a method based on a more comprehensive model of settling in the presence of drag force induced by interaction between the pellet and the borehole should therefore be pursued. For smaller particles predicted velocity will eventually decrease. Observation of these smaller particles requires modification of the resolution of our images. ACKNOWLEDGEMENTS This summer research fellowship award was funded by the University of Pittsburgh’s Swanson School of Engineering and the Office of the Provost. The research was mentored by Andrew Bunger. REFERENCES [1] Arnold B.W., P. Vaughn, R. MacKinnon, J. Tillman, D. Nielson, P. Brady, W. Halsey, and S. Altman. 2012. Research, development, and demonstration roadmap for deep borehole disposal. Technical Report SAND2012-8527P, Sandia National Laboratories. [2] Clark, J., and B. Salsbury. 2003. Well abandonment using highly compressed sodium bentonite- an Australian case study. In SPE/EPA/DOE Exploration and Production Environmental Conference, San Antonio, Texas, USA, 10-12 March. SPE 80592.
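As a rough sanity check on the Stokes prediction discussed above, the settling law can be evaluated directly; this is a minimal sketch, and the pellet density of 2000 kg/m³ is an assumed illustrative value, not a measured property of the pellets tested.

```python
# Stokes' settling velocity: v = (rho_s - rho_w) * g * D**2 / (18 * eta)
def stokes_velocity(d_m, rho_s=2000.0, rho_w=1000.0, g=9.81, eta=1.0e-3):
    """Predicted terminal velocity (m/s) for a sphere of diameter d_m (m)."""
    return (rho_s - rho_w) * g * d_m**2 / (18.0 * eta)

# A 10 mm pellet is predicted to fall at tens of m/s, while a 1 mm grain
# falls well under 1 m/s -- illustrating the strong D**2 size dependence.
print(stokes_velocity(0.010))  # ~54.5 m/s
print(stokes_velocity(0.001))  # ~0.545 m/s
```

The quadratic dependence on diameter is why the observed slow descent of large pellets is such a strong signal that wall and turbulence drag dominate in the borehole geometry.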

Figure 2: bentonite pellet behaviors: pellet velocity vs particle and borehole diameter ratio, experimental data shown in blue


AN EVALUATION OF CARBON ELECTRODES FOR ANTI-FOULING IN THE ELECTRO-FENTON PROCESS Brandon Contino, Josh Hammaker, and David Sanchez Department of Civil and Environmental Engineering University of Pittsburgh, PA, USA Email: bmc95@pitt.edu

INTRODUCTION As global industrial output has increased over the last 100 years, conventional wastewater treatment plants have not evolved to handle the influx of new synthetic pollutants. One class of these pollutants, persistent organic pollutants (POPs), is difficult to remove and requires advanced methods. The electrochemical advanced oxidation process (EAOP) has shown strong promise in treating POPs. The EAOP electro-Fenton process is one of the most effective, in which the hydroxyl radical (known for its ability to treat POPs) is created through a reaction between Fe2+ and H2O2 (1). Because of this process, H2O2 has strong promise as an anti-fouling agent, and its production via reactions between water and electrodes leads to potential new anti-fouling coatings.

Fe2+ + H2O2 → Fe3+ + •OH + OH−

(1)

In this paper, the production of H2O2 is analyzed, specifically through the use of carbon-based electrodes. Carbon was chosen because of its high surface area, nontoxicity, and low cost [1]. A comparison of the effectiveness of several carbon electrodes is conducted in a single-chamber electrolytic cell.

METHODS Measuring H2O2 Concentrations The concentration of H2O2 in a solution is very difficult to measure. Techniques range from reflectometry, permanganometry, iodometry, and cerium oxidation with a ferrous orthophenanthroline indicator to spectrophotometric measurements [1]. In this paper, a variation of the spectrophotometric measurement of [2] at 410 nm with a TiCl4 indicator was utilized. The exact composition of the solution analyzed spectrophotometrically is as follows (in the order added): 1860 µL of deionized water, 600 µL of the solution to be measured, 240 µL of TiCl4, and 300 µL of 95% H2SO4. The absorbance of the solution at 410 nm was then used to determine H2O2 concentration. An H2O2 calibration curve for the TiCl4 solution was created in order to correlate absorbance measurements with H2O2 concentrations. During experiments, a baseline sample was taken before voltage was applied to the cell. Samples were then taken at 5, 10, 15, 20, 40, 60, 90, 120, 150, 180, and 195 minutes and measured for H2O2 concentration. The set times were determined and calibrated by previous work in the lab.

Electrolytic Cell Construction The cathode was a 10x10x0.37 mm piece of carbon paper, the anode was a 55x30x10 mm piece of graphite, and the electrolyte solution was 655 mL of 0.05 M K2SO4 with a pH of 3. The electrolytic cell was operated galvanostatically at the following currents (mA): 15, 100, 125, 150, 175, 200, and 300, at a temperature of 35 °C. This temperature was chosen based on [3], and the current variances were determined in previous work.

RESULTS Overall, the all-carbon electrolytic cell was successful in creating H2O2 at the mM level. The results of the trials can be seen in Figure 1, where the competing effects of H2O2 production and decomposition appear as slight oscillations around the overall logarithmic trend. The results show that as current increases, H2O2 production does as well, until it peaks around 1.52 mM at 200 mA. Most notable is the cell's ability to produce 0.7 mM of H2O2 at only 15 mA, which shows the greatest efficiency of all the tests. The H2O2 produced per watt for the varying currents is shown in Figure 2.
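The absorbance-to-concentration step amounts to inverting a linear (Beer-Lambert) calibration line. A minimal sketch follows; the calibration points are illustrative placeholders, not the lab's actual calibration data.

```python
import numpy as np

# Hypothetical calibration data: absorbance at 410 nm vs. known H2O2 (mM).
abs_410 = np.array([0.00, 0.11, 0.22, 0.44, 0.88])
conc_mM = np.array([0.00, 0.25, 0.50, 1.00, 2.00])

# Fit absorbance = slope * concentration (Beer-Lambert is linear in conc.).
slope = np.polyfit(conc_mM, abs_410, 1)[0]

def h2o2_concentration(absorbance):
    """Invert the calibration line to estimate H2O2 concentration in mM."""
    return absorbance / slope

print(round(h2o2_concentration(0.33), 2))  # ~0.75 mM on this mock curve
```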


DISCUSSION

The peak in H2O2 generation around 200 mA is due to the properties of the carbon paper and the increased flow of electrons into the reaction in (2):

O2 + 2H+ + 2e− → H2O2    (2)


Figure 1: Production of H2O2 (mM) for each of the applied currents. 200 mA produces the most H2O2, with a peak value of 1.52 mM.


Figure 2: H2O2 production per watt. There is a clear 14 mM/W decrease from 15 mA to 100 mA, but only a 0.6 mM/W decrease from 100 mA to 300 mA.


The 200 mA current allowed for an optimal amount of H2O2 generation without creating too much H2O2 decay, as seen in (3):

2H2O2 → O2(g) + 2H2O    (3)

The decreasing production efficiency (H2O2 mM/Watt) despite the corresponding increases in current on the carbon paper is likely due to a mass transport limitation and/or a saturation of production sites. This is supported by the increases in voltage required to operate the cell at the corresponding currents. Although a linear relationship exists between voltage and current over the tested range (2.8 V - 11.8 V), such voltage increases limit cell operation; ideal operating voltages lie around 0.5 V - 1.6 V [1]. With optimal current settings, the carbon paper was identified as the most effective carbon electrode tested for H2O2 generation; however, when compared at 100 mA, the carbon paper was second to the carbon brush. This result is likely because of the larger surface area of the carbon brush, which allows more reaction sites for H2O2 generation. Further testing could involve using larger sheets of carbon paper to increase the surface area.
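The production-per-watt comparison reduces to dividing peak concentration by electrical power (P = V × I). A minimal sketch follows; the voltage values are assumed interpolations within the reported 2.8-11.8 V operating range, not measured current-voltage pairings.

```python
# Efficiency = peak H2O2 (mM) / electrical power (W), with power = V * I.
# Voltages here are illustrative values within the reported 2.8-11.8 V range.
trials = [
    (0.015, 2.8, 0.70),   # (current A, voltage V, peak H2O2 mM)
    (0.200, 8.5, 1.52),
]

for current, voltage, h2o2_mM in trials:
    power_W = current * voltage          # electrical power drawn by the cell
    print(f"{current * 1000:.0f} mA: {h2o2_mM / power_W:.1f} mM/W")
```

Even with assumed voltages, the sketch reproduces the qualitative result: the low-current trial is over an order of magnitude more efficient per watt than the high-current one.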

Figure 3: H2O2 production of the varying cathodes (carbon paper, graphene-coated felt, graphene-coated brush, and carbon brush) at 100 mA.

Results from tests performed using graphene coated felt, graphene coated carbon brushes, and carbon brushes can be seen in Figure 3. 100mA was discovered to be the optimal current for the aforementioned electrodes, and as such the 100mA carbon paper results are overlaid in Figure 3.

REFERENCES 1. Brillas, E., I. Sirés, and M.A. Oturan, Chemical Reviews, 2009. 109(12): p. 6570-6631. 2. Wolfe, W.C., Analytical Chemistry, 1962. 34(10): p. 1328-1330. 3. Boye, B., M.M. Dieng, and E. Brillas, Environmental Science & Technology, 2002. 36(13): p. 3030-3035. ACKNOWLEDGEMENTS We would like to thank PPG for funding this research, and the Swanson School of Engineering for the use of its facilities.


SpiWave: Automated Spiral Evaluation for Parkinsonian Patients Using Wavelets Swaroop Akkineni1, Thomas Wozny2, Ahmad Alhourani2, Michael McDowell2, Mark Richardson2, Samuel J. Dickerson1 1 Swanson School of Engineering, Department of Electrical and Computer Engineering, University of Pittsburgh, USA 2 Department of Neurological Surgery, University of Pittsburgh Medical Center, USA Email: swa7@pitt.edu, Web: http://www.engineering.pitt.edu/Departments/Electrical-Computer/

INTRODUCTION As per the Parkinson's Disease Foundation, there are approximately 10 million people worldwide who are currently diagnosed with Parkinson's [1]. The current standard for classifying the severity of an individual's disease, the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS), relies on a variety of tests, ranging from auditory verbal exams to physical and movement tasks that require the examiner to manually measure and calculate each individual exam score [2]. The manner in which these measurements are made often varies from one practitioner to the next, is not repeatable, and is error prone. As a result, critical medical decisions concerning the care of movement disorder patients are often based on subjective information. In this work we present a Point-of-Care solution to this problem: a mobile platform that automates portions of the MDS-UPDRS tests. Specifically, an automated assessment extracts information about the severity of patient tremors from hand-drawn traces. In our platform, patient trace data is collected using a custom smart device software application (Figure 1). Patients, at various stages of treatment, perform the task of tracing a spiral on the mobile device using a stylus. Drawing a spiral is useful in assessing motor symptoms because it requires a motion that forces patients to use a variety of muscles across two spatial dimensions and is a motor task that is significantly impacted by tremors. After testing is complete, the data is analyzed using the Discrete Wavelet Transform and the severity of the patient's symptoms is assessed.

Figure 1) The mobile application used by patients. Patients trace the spiral, starting from the outside working inwards.

METHODS The participants in this study were six patients, all with severe motor symptoms due to Parkinson's disease, undergoing Deep Brain Stimulation (DBS), a surgical therapy used to treat the most severe cases [3]. While the difference in symptoms before and after DBS treatment is visually obvious, our platform provides the clinician with a quantitative metric to track the degree to which patients have improved during the procedure.

The Discrete Wavelet Transform (DWT) allows both spatial and frequency domain information to be extracted from an image; specifically, the DWT decomposes the patient trace into high frequency "details" and a low frequency approximation of the global behavior [4]. The transform is implemented by passing the traced spiral image through a series of high-pass and low-pass filters. After filtering, the images are split into low and high frequency components. Those components are down-sampled along their indexed columns, sent through another round of high/low pass filters, and then finally down-sampled along the image's rows. The result of the decomposition is four coefficient matrices: one approximate coefficient matrix cA and three detail coefficient matrices cH, cV, and cD (horizontal, vertical, and diagonal, respectively). Multiple images are then reconstructed from each coefficient matrix using the inverse wavelet transform. This decomposition can be repeated at multiple levels on the individual reconstructed components. Figure 2 shows an example of a patient trace decomposed into two levels.

Figure 2) Image shows the horizontal, vertical and diagonal components of the patient's trace after wavelet decomposition. Top left quadrant shows the low frequency approximation along with the results from a second level decomposition.

In our methodology the distribution of signal energy, evaluated as the square of the pixel intensity magnitude, among the reconstructed images is used to measure and assess the severity of the patient's tremor. Our methods were first validated using a set of "synthetic tremors": drawn spirals that have oscillations at known frequencies and amplitudes. After validation with synthetic data, we examined data collected from actual patients undergoing treatment. The synthetic dataset included 19 generated spirals: three separate sets of spirals with amplitudes of 2.54 cm, 1.27 cm, and 0.64 cm, plus a "tremor-less spiral" traced with as little variation as possible. At each amplitude, spirals with oscillation frequencies ranging from 1 Hz to 6 Hz were included. These frequencies and amplitudes were selected to correspond with the clinically accepted ranges on the MDS-UPDRS.

RESULTS Figure 3 shows the distribution of energy in each of the decomposition components versus tremor amplitude (at f = 6 Hz) and Figure 4 shows the same distribution versus tremor frequency (at A = 1.27 cm).

Figure 3) Evaluation of signal energy versus tremor amplitude (0 cm - 2.54 cm) using the synthetic data set

Figure 4) Evaluation of signal energy versus tremor frequency (0 Hz - 6 Hz) using the synthetic data set

The results show that as a subject's tremor increases in either amplitude or frequency, the energy in the spiral components monotonically decreases, providing a consistent measure of tremor. It should be noted that the "perfect spiral" consistently has the greatest energy among all the test cases. This is because a perfectly drawn, tremor-less spiral requires a greater amount of high frequency detail (horizontal, vertical and diagonal stylus strokes) to create, and as oscillations are introduced, that detail is lost. The method was next validated with the patients diagnosed with Parkinson's disease. Figure 5 shows the energy present in just the diagonal image component as a patient progresses through the surgical treatment stages.

Figure 5) Evaluation of signal energy in the diagonal image component versus surgery stage for a test subject undergoing DBS treatment for Parkinson's disease

When comparing the energies of the respective images, a trend similar to the results of the synthetic tests is observed: the energy of the spiral components greatly increases as the tremor is reduced and the patient regains the ability to trace in greater detail. During the preoperative phase, the patient's symptoms are at their worst and the measured energy is at its lowest (E = 1.95 × 10^5). Once the patient has the electrode in place, but before the DBS therapy is activated, a slight improvement can be seen (E = 3.73 × 10^5). This improvement is due to the microlesion effect: neurons are briefly stunned as the electrode is initially placed into the brain, causing a temporary effect similar to the stimulation. Once the patient's electrode is turned on, a demonstrative improvement in their ability to trace out a spiral can be observed (E = 7.31 × 10^5). After the electrode is deactivated, the patient returns to a state similar to that of initial electrode insertion (E = 3.41 × 10^5).

CONCLUSION The results from the synthetic and collected patient data show that our methodology can successfully be used as an automated, Point-of-Care platform for assessing and monitoring Parkinson's disease tremor symptoms. The wavelet transform is promising in this context for its ability to quantify tremors, provide clinicians with an objective measure, and be integrated into a low-power mobile device. For future work, the next step is to determine which other features can be extracted from patient data and used to incorporate additional MDS-UPDRS exams into the software. As Parkinson's disease has more symptoms than just tremors that can be used for evaluation purposes, the addition of tests would allow for a more complete assessment of the patient's overall state and disease progression.
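The decomposition and energy measure used in this methodology can be sketched with a one-level orthonormal Haar transform; this is a minimal NumPy sketch, and the platform's actual wavelet choice and filter implementation are not specified here.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D orthonormal Haar DWT -> (cA, cH, cV, cD)."""
    # Filter and down-sample along columns, then along rows.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # column low-pass
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # column high-pass
    cA = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)     # approximation
    cH = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)     # horizontal detail
    cV = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)     # vertical detail
    cD = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)     # diagonal detail
    return cA, cH, cV, cD

def energy(x):
    """Signal energy: sum of squared intensity magnitudes."""
    return float(np.sum(x**2))

rng = np.random.default_rng(0)
trace = rng.random((64, 64))          # stand-in for a rasterized spiral trace
bands = haar_dwt2(trace)

# An orthonormal Haar transform conserves total energy across the sub-bands,
# so comparing per-band energies is a well-defined severity measure.
print(sum(energy(b) for b in bands), energy(trace))
```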
To conclude, the prototype presented in this work has the potential to be an instrumental tool in the treatment of Parkinson’s Disease patients. REFERENCES 1. Parkinson’s Disease Foundation (online). Available at: www.pdf.org/. Accessed August 26, 2016 2. Goetz et al. Movement Disorders 23, 2129–70, 2008. 3. National Institute of Neurological Disease (online). Available at www.nih.gov/. Accessed August 15 2006. 4. Gonzalez et al. Digital Image Processing. 483-532, 2008. ACKNOWLEDGEMENTS Subjects were screened at UPMC Presbyterian Hospital. Funding was provided by the Swanson School of Engineering and by the Office of the Provost


It's All in Your Head - Bridging Neurological Signals to the Physical World through EEG Michael Urich1, Ker-Jiun Wang2, Bo Luang3 1 Department of Electrical and Computer Engineering, 2 Department of Bioengineering, 3 UPMC Department of Neurological Surgery University of Pittsburgh, PA, USA Email: mpu2@pitt.edu

INTRODUCTION Recently, more and more of our daily communication has become digital, in the form of email, video calling, and the like. This has been tremendously powerful in connecting people across geographic restrictions, but operating a device like a phone or computer still requires fine motor control. As such, users suffering from paraplegia or the loss of a limb are prevented from communicating normally. There have been some custom-made solutions, such as Stephen Hawking's cheek-mounted sensor, but these are tremendously expensive and require complex surgery and advanced training, making them inaccessible to most patients. A simpler noninvasive device would be revolutionary and could connect thousands of people, helping them live a more normal life. This summer, I worked with graduate students Ker-Jiun Wang and Bo Luang, under the supervision of Dr. Zhi-Hong Mao, on developing a system for detecting saccades (eye movements) and blinks through wearable electroencephalogram (EEG) electrodes. A saccade sensor does not require extensive individual patient training compared to other brain-machine interface (BMI) devices, especially in motor control, and is fairly intuitive to use, as well as noninvasive. My role was to process a recorded EEG signal, which is a composite indicator of activity in many brain areas, and break it down into the basic directional eye movement. We hope to further this development to control modern consumer electronic devices, which would allow paraplegics or people with similar physical inhibitions to communicate naturally using stock devices like smartphones.
Our lab group hoped to construct a device that can read EEG signals and recognize when the eye is moving, and in what direction it is moving. By

doing this, we could move a computer mouse cursor so that it tracks where the user is looking. By combining this with an on-screen keyboard, our device would allow a patient who has lost fine motor control to use a computer normally. METHODS Our lab group needed to understand what a typical EEG signal would look like when the eye moves, and how the signal would differ for different directions of motion. We conducted several studies in which the user would wear EEG electrodes and follow a moving ball on a screen either left and right or up and down, changing direction once per second. We found in our early analysis that natural blinking during motion would affect the signal's waveform, so we included timed blinking tests without motion to isolate and subtract this artifact. We also tested several methods of electrode placement to determine which location would best isolate the eye saccade. We started with the standardized 10-20 system, which is the most popular placement for general use, covering the entire scalp. This helped us to narrow down the areas of the scalp which gave the strongest signal. From this, we settled on mounting electrodes on the scalp slightly above the ear, with a reference placed on the earlobe. This resulted in a fairly noise-free signal, as is discussed later. Recording was performed using the OpenBCI 32-bit 8-channel board. It provides a simple method for experimenting, along with wireless communication between the recording computer and the headset to eliminate USB noise. We also chose an open-source stack wherever possible, for simpler integration later on. DATA PROCESSING For signal processing, we used the popular open-source MATLAB library EEGLAB, which has some built-in capability for feature extraction using


Independent Component Analysis (ICA) [2]. With our initial 10-20 datasets, we band-pass-filtered from 1 to 45 Hz, as EEG signals are limited to that range and anything higher would be electrical noise. Additional cleaning required removal of the DC offset, which is caused by a noisy power supply and can be removed by subtracting the mean value of each waveform. The resulting signal was plotted, which made it easy to see a spike in some channels at exactly the time that the saccade took place. This helped us narrow down a region of the scalp that would give a good signal for further testing without requiring human identification. This data processing pipeline was retained as we revised our electrode placement. We performed an Independent Component Analysis (ICA) to detect the saccade events in the data. An ICA is a multidimensional filter that is commonly used in EEG processing [1]. EEGLAB has built-in functionality to train a multidimensional filter to reject eye blinks, and we applied this to our recording of eye movements that contained blinks. RESULTS Once we obtained recordings from the strongest area, the scalp above the ear, we were able to detect the voltage peaks from an ICA decomposition, as shown below in Figure 1. Thanks to our optimized electrode placement and filtering, we are able to make eye saccades the dominant event of the recording. The voltage peaks

illustrate where the eye made a movement to the left or right. DISCUSSION Our lab group was successful in developing a method for recording eye movements, as well as software for automatically detecting them. Our program is able to identify when an eye saccade or blink has taken place. Further work this fall semester will include distinguishing between directions of movement, and then networking these detected movements to a consumer device. Our work will be open-sourced and available for other researchers to use in their EEG eye movement studies. REFERENCES 1. Ker-Jiun Wang. Awakening the Force: Decoding Human Intention Through the Coupling of EEG Signal and Saccade Movement to Control Wearable Devices, 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016). 2. Onton and Makeig. Information-based modeling of event-related brain dynamics. Elsevier Progress in Brain Research (2006). ACKNOWLEDGEMENTS Funding for this summer's research was provided by the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh.

Figure 1: A plot within EEGLAB of the processed signal, with voltage peaks highlighted to indicate an event has been detected. The periodicity of the detected events matches the one-second change of motion during recording.
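The band-pass and DC-removal preprocessing described above can be sketched with a simple FFT mask; this is a minimal NumPy sketch, the 250 Hz sampling rate is an assumed value, and EEGLAB's actual filters are more sophisticated than a hard spectral cutoff.

```python
import numpy as np

def bandpass_fft(signal, fs, lo=1.0, hi=45.0):
    """Zero spectral content outside [lo, hi] Hz (removes the DC offset too)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 250                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
# Synthetic channel: DC offset + in-band 10 Hz activity + 60 Hz mains noise.
raw = 5.0 + np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = bandpass_fft(raw, fs)

print(round(float(np.mean(clean)), 3))     # DC removed -> ~0.0
```

After filtering, only the 10 Hz component survives, which is why saccade-related deflections stand out so clearly in the cleaned channels.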


MODELING AND EXPERIMENTAL ANALYSIS OF THE TEMPORARY, FULLY-RETRIEVABLE STENT FOR TRAUMATIC HEMORRHAGE CONTROL Mark Littlefield, Yanfei Chen, Bryan W. Tillman, Sung Kwon Cho, and Youngjae Chun Medical Device Manufacturing Laboratory, Departments of Industrial and Bioengineering University of Pittsburgh, PA, USA Email: marklittlefield@pitt.edu, Web: http://www.pitt.edu/~yjchun/home.html

INTRODUCTION Temporary stenting before surgery has proven effective in the treatment of colonic obstruction, pulmonary embolism, tracheoesophageal fistulas, and other conditions [1, 2, 3]. This research attempts to apply this practice to hemorrhages caused by traumatic injury to the torso, e.g. gunshot, deep stab wound, or shrapnel from an explosive device. While manual compression is a mainstay of hemorrhage control for the neck or extremities, this approach is not effective for hemorrhagic injuries of the torso because their location deep within the rib cage or abdomen renders them inaccessible to compression [4]. As a result, the associated mortality rate is quite large, approaching 80% [5, 6]. This device would help curtail the high mortality rate among these victims by keeping the patient alive between the time the injury is suffered and the time attention from the proper medical professionals is received [5, 6]. A novel, temporary, fully-retrievable nitinol stent for the control of internal hemorrhages has been developed by our research group and is currently going through design optimization. The aim of this project is to find the optimal design of the stent using Finite Element Analysis (FEA) and experimental methods.

METHODS For the computational modeling work, several different designs were modeled with SolidWorks (Dassault Systemes, 2015), beginning with a 2D representation of the stent followed by several different 3D models including 5-, 6-, and 10-pedal designs with 2, 3, and 4 flower patterns. The geometries were imported into ANSYS Mechanical (Static Structural, version 16.1) to simulate the crimping process by applying a -1 mm radial displacement to the interior cylindrical faces of the stent geometry. The force reaction probe tool was then used to acquire the radial force. An additional FEA strategy was implemented for the 3D crimping simulations by applying a uniform pressure radially inward on the stent and measuring the directional deformation in the radial direction.

The experimental methods used a Zenith Flex AAA Endovascular Stent Graft (Cook Medical), Futek LSB200 force probes, and Sensit measurement software (version 2.3); a 1 mm parallel plate pinch was applied to the stent and the radial force was recorded. This approximation was used in the evaluation of the results acquired from the FEA simulations.

DATA PROCESSING Radial force data from the 2D analysis and the 3D displacement method were collected and recorded in Excel (Microsoft, 2013). Plots of radial force versus the different design parameters were fitted with trendlines to observe the nature of their correlation. Radial force data from the 3D pressure method were imported into Minitab Express statistical analysis software (version 1.4). Linear regression analyses were conducted to validate a linear relationship between pressure and displacement and to compare the slopes of different models as an indicator of stiffness behavior.

RESULTS



From the 2D analysis, relationships were established between the radial force and the design parameters. Wire thickness was varied from 100 to 500 microns, and radial force increased from 0.00075 to 0.33 N according to the fitted relation y = 4.29e-8·x^3.7848 (r² = 0.999). Curvature length was varied up to 12 mm, and radial force exhibited decay from 0.017 down to 0.005 N according to the fitted relation y = 8.4388·x^-2.971 (r² = 0.996). Pedal width was varied from 2 to 8 mm and radial force generally decreased, but a definitive relationship could not be established as the values hovered around 0.008 N.

The results of the 3D crimping simulation in ANSYS using the displacement method showed that the influence of the number of pedals on the radial force is negligible, but radial force increased exponentially as the number of flower patterns increased. The force obtained through the parallel plate experiment was 0.32 N, which is consistent with the current literature and validates the computational modeling results.

DISCUSSION Computational modeling is an important part of the development of stent devices. Building and testing physical prototypes is expensive, time consuming, and in some cases does not produce sufficient feedback. The FDA's guidelines for stent devices require FEA considerations but also experimental methods to validate the simulations. Thus, this combination of virtual and tangible work is practical for the development of a new device. The trends established from the computational modeling and experimental tests aid in the optimization of the skeletal structure of the stent and could be used for other applications such as vascular occluders, filters, and other nitinol-based endovascular devices. It is recommended that future studies include modeling of the retrieval process to analyze the mechanical behavior and ensure the safety of the proposed device. It would also be beneficial to include the additional layer of material associated with the increasingly popular covered stents when conducting future computational modeling work.

ACKNOWLEDGEMENTS Special thanks to the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh for funding this project.

REFERENCES [1] Van Hooft, J. E. (2007, July 3). Colonic stenting as bridge to surgery versus emergency surgery for management of acute left-sided malignant colonic obstruction: A multicenter randomized trial (Stent-in 2 study). BMC Surgery, 7(12). doi:10.1186/1471-2482-7-12 [2] Eleftheriadis, E., & Kotzampassi, K. (2005, May 3). Temporary stenting of acquired benign tracheoesophageal fistulas in critically ill ventilated patients. Surgical Endoscopy, 19, 811-815. doi:10.1007/s00464-004-9137-x [3] Schmitz-Rode, T. (2006, November 4). Temporary Pulmonary Stent Placement as Emergency Treatment of Pulmonary Embolism. Journal of the American College of Cardiology, 48(4), 812-816. doi:10.1016/j.jacc.2006.04.079 [4] White JM, Cannon JW, Stannard A, Markov NP, Spencer JR, Rasmussen TE. Endovascular balloon occlusion of the aorta is superior to resuscitative thoracotomy with aortic clamping in a porcine model of hemorrhagic shock. Surgery. 2011;150:400-409. [5] Kelly JF, Ritenour AE, McLaughlin DF, Bagg KA, Apodaca AN, Mallak CT, Pearse L, Lawnick MM, Champion HR, Wade CE, Holcomb JB. Injury severity and causes of death from operation iraqi freedom and operation enduring freedom: 2003-2004 versus 2006. The Journal of Trauma. 2008;64:S21-26; discussion S26-27. [6] Demetriades D, Theodorou D, Murray J, Asensio JA, Cornwell EE, 3rd, Velmahos G, Belzberg H, Berne TV. Mortality and prognostic factors in penetrating injuries of the aorta. The Journal of Trauma. 1996;40:761-763.


NITROGEN DOPING CARBON NANOTUBES

Rithika Reddy Laboratory for Advanced Materials at Pittsburgh, Department of Industrial Engineering University of Pittsburgh, PA, USA Email: rir21@pitt.edu, Web: http://www.pitt.edu/~pleu/Research/index.html

INTRODUCTION Carbon nanotubes are widely used because of their broad range of applications. With chemical modification, their electrochemical and physicochemical properties can be altered to advantage. Nanotubes can have varying properties depending on the diameter and chirality of their structure. Doping of carbon nanotubes can also alter electronic, vibrational, chemical, and mechanical properties; this is done by introducing non-carbon atoms, molecules, or compounds in small concentrations. Although there are various methods of doping, the method chosen here was substitutional doping. Nitrogen is used as the doping agent since it increases the number of electronic states, depending on the location of the electrons. There are two different C-N bonds that can occur: the first is a three-coordinated N atom in the sp2 network, and the second is a pyridine type. A high nitrogen concentration, usually 2-5%, introduces defects into the tubular structure and lowers the mechanical strength. [1]

METHODS To begin, a silicon wafer is prepared by sputtering a layer of alumina and then evaporating a layer of iron. The alumina serves as a growth layer on which the carbon nanotubes grow, while the iron is used to increase catalytic activity, which can include enhanced electron transfer and higher surface roughness. The combination of both layers results in an increased electric current. [3] After prepping the substrate, carbon nanotubes are grown in a tube furnace. To begin, 200 sccm of argon is flowed through the tube while the temperature is increased to 750 °C. Once at 750 °C, 100 sccm of argon, 200 sccm of H2, and 100 sccm of C2H4 are flowed through the tube for 10 minutes. Once this is complete, 100 sccm of argon is flowed through until the furnace has cooled. Argon is used before and after the growth period in order to keep the sample clean.
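The three-stage growth recipe above can be summarized programmatically; this is a minimal sketch of only the parameters stated in the text (gas purities, ramp rates, and tube dimensions are not specified).

```python
# CVD growth recipe for carbon nanotubes, as described above.
# Each step: (description, gas flows in sccm, duration note).
recipe = [
    ("ramp to 750 C",   {"Ar": 200},                           "during heat-up"),
    ("growth at 750 C", {"Ar": 100, "H2": 200, "C2H4": 100},   "10 minutes"),
    ("cool-down",       {"Ar": 100},                           "until cool"),
]

for name, flows, duration in recipe:
    total = sum(flows.values())   # total gas flow for this step (sccm)
    print(f"{name}: {flows} -> {total} sccm total, {duration}")
```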

Plasma-enhanced chemical vapor deposition (PECVD) is a technique used to produce large areas of N-doped carbon nanotubes. The method uses ammonia at a high temperature to infiltrate the carbon nanotubes and create defects, with the intent of replacing certain carbon bonds with nitrogen to form C-N bonds [1]. The main purpose of nitrogen-doping carbon nanotubes here is to use the final result for water disinfection and antimicrobial purposes. This method is attractive because it avoids the harmful byproducts of the chemical disinfectants generally used, and the constant demand for water treatment and recycling systems requires new and evolving technologies [2].

Figure 1: Carbon nanotubes seen from a side view.

Once carbon nanotubes are grown, various plasma treatment recipes are tested to carry out nitrogen doping. Recipes were created based on the recommended pressure of 500 mTorr and an ammonia flow rate of 15 sccm. Temperature was the tested variable; since the process time is inversely proportional to the power, the power was set to 10 watts and the process time to 900 seconds. Other than the temperature, the power can be traded off inversely against the process time [4].
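The inverse power/time trade-off described above amounts to holding the total plasma dose fixed; a minimal sketch, assuming dose is simply power times time (our simplification, not a claim from ref. [4]):

```python
# Constant-dose view of the plasma recipe: the chosen operating point is
# 10 W for 900 s, so the dose held fixed across recipes is P * t.
DOSE_J = 10.0 * 900.0  # 9000 J (treating power x time as the invariant)

def process_time(power_w):
    """Process time [s] that delivers the same dose at a different power."""
    return DOSE_J / power_w
```

Under this assumption, doubling the power to 20 W would halve the process time to 450 s.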

Figure 3: Comparison of nitrogen doped carbon nanotubes and undoped carbon nanotubes.

Figure 2: Varying plasma treatment recipes used

RESULTS & DISCUSSION Plasma treatment results were obtained using attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR). The spectra contain bands at various wavelengths that correspond to particular chemical structures and bonds; certain wavelengths correlate to atoms, compounds, or molecules containing nitrogen [5]. Table 1: Wavelengths corresponding to chemical structures containing nitrogen.

Using ATR-FTIR, a possible presence of NH2, N-H, and C-N was found in the nitrogen-doped carbon nanotubes. In contrast, the undoped carbon nanotubes showed only wavelengths corresponding to C-H and NH2. The C-N bond is the signature that would indicate the carbon nanotubes have been successfully doped.

Based on these results, it can be concluded that a C-N bond is present, which is the primary indicator of nitrogen doping. This suggests that the plasma treatment process works as a post-treatment method of nitrogen doping. REFERENCES 1. Terrones, M., Filho, A. G., & Rao, A. M. (2007). Doped Carbon Nanotubes: Synthesis, Characterization and Applications. Topics in Applied Physics Carbon Nanotubes, 531-566. doi:10.1007/978-3-540-72865-8_17 2. Li, Q., Mahendra, S., Lyon, D. Y., Brunet, L., Liga, M. V., Li, D., & Alvarez, P. J. (2008). Antimicrobial nanomaterials for water disinfection and microbial control: Potential applications and implications. Water Research, 42(18), 4591-4602. doi:10.1016/j.watres.2008.08.015 3. Su, Chih-Chung, and Shuo-Hung Chang. "Effective Growth of Vertically Aligned Carbon Nanotube Turfs on Flexible Al Foil." Materials Letters 65.17-18 (2011): 2700-702. 4. Capasso, A., Salamandra, L., Chou, A., Carlo, A. D., & Motta, N. (2014). Multi-wall carbon nanotube coating of fluorine-doped tin oxide as an electrode surface modifier for polymer solar cells. Solar Energy Materials and Solar Cells, 122, 297-302. doi:10.1016/j.solmat.2013.10.022 5. Infrared Spectroscopy. (n.d.). https://www2.chemistry.msu.edu/faculty/reusch/virttxtjml/Spectrpy/InfraRed/infrared.htm ACKNOWLEDGEMENTS Funding for this study was provided jointly by Dr. Paul Leu, the University of Pittsburgh Swanson School of Engineering and the Office of the Provost.


DESIGN AND TEST OF A NEW POWDER DELIVERY SYSTEM Terry McLinden, Advisor: Dr. Markus Chmielus Material Science and Mechanical Engineering Laboratory, Department of Mechanical Engineering and Material Science University of Pittsburgh, PA, USA Email: tpm30@pitt.edu, Chmielus@pitt.edu Introduction Additive manufacturing (or 3D printing) is the process by which products are created by the addition of material rather than the more conventional subtraction of material [1]. The user creates the part with the help of a computer-aided design program. Additive manufacturing allows for more complex external and internal geometries, enables the rapid production of prototypes, and cuts down assembly time [2]. There are many different types of 3D printing, most of which are used to make parts and prototypes out of a single material or alloy. Recently, however, there has been a focus on multi-material additive manufacturing, which allows products to be created from multiple materials. Parts printed this way can have improved mechanical properties and provide diverse functions by varying the material between layers [3]. The goal of this project is to develop a new powder delivery system that allows the user to create binary/composite additively manufactured samples from two or more metallic powders. Product Design A powder delivery system consisting of a capsule-like device to hold the powder and a push mechanism to dispense it was designed. Components included the capsule, a holder for the capsule, and the pushing mechanism. The capsules were adapted from 25.4 mm tall rubber end caps with diameters of 12.2 mm and 8.2 mm. The 12.2 mm diameter end cap had a 7 mm slit

cut into it, while the 8.2 mm end cap had a 4.5 mm slit. When the rubber end cap experienced a push force parallel to the top of the slit, the slit opened; when the force was removed, the slit closed. The holder (31.75 mm x 19.25 mm) was 3D printed out of PLA and was designed to allow the slit to open when a force is applied to one side. Cut into each of the 4 sides are half circles where the capsules are held, as shown in Figure 1. The cut into the holder is slightly deeper than the radius of the capsule so that the capsules are held securely. A rod placed down the center attaches to a motor and allows the holder to rotate so that 4 different capsules can be used.

Figure 1: Capsules holder and piston system

The pushing mechanism was designed using an air compressor hooked up to a piston. The piston and air compressor system was controlled using a Robotxt controller. This allowed the piston to be operated automatically and allowed the user to control the amount of time the piston was extended. The system was attached to a Fischertechnik robotic arm, which allows the whole system to move so that the powder may be strategically placed in the build box.


Methods To test the powder dispensing characteristics, a push test was set up, varying the time the piston was extended. The times tested were 0.5, 1, 1.5, and 2 s for the two different end-cap diameters. The powder was dispensed into a plastic weigh boat of known mass and then weighed on an OHAUS digital analytic scale (0.1 mg accuracy). For each time step and end cap the process was repeated 20 times. Each previous weight was subtracted from the current one, giving the amount dispensed per push. Results Table 1 shows the amount of powder dispensed for the 8.2 mm capsule with a 4.5 mm slit. The average amount of powder dispensed per push increased with extension time up to two seconds, at which point the average decreased slightly, from 44.8 to 39.7 mg.

Table 1: 8.2 mm Diameter with 4.5 mm Slit
Time piston extended [s]:            0.5    1      1.5    2
Average powder dispensed [mg]:       13.1   38.3   44.8   39.7
Standard deviation [mg]:             7.0    29.2   27.5   24.7
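The per-push amounts described in Methods come from differencing consecutive cumulative scale readings; a minimal sketch with hypothetical readings (the real data are the 20-push runs summarized in the tables):

```python
import numpy as np

# Cumulative weigh-boat readings [mg] after each push (hypothetical values)
readings = np.array([0.0, 13.5, 25.9, 40.1, 52.8])

# Amount dispensed per push = current reading minus the previous one
per_push = np.diff(readings)

mean_mg = per_push.mean()
std_mg = per_push.std(ddof=1)  # sample standard deviation, as in the tables
```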

Table 2 shows that when the slit diameter is increased, the overall average amount of powder dispensed per push drastically increases. The larger-diameter capsule also follows the same trend as the smaller one: as extension time increases, so does the average amount dispensed per push, until 2 s is reached, where it dips slightly again.

Table 2: 12.2 mm Diameter with 7 mm Slit
Time piston extended [s]:            0.5     1       1.5     2
Average powder dispensed [mg]:       156.1   721.2   763.0   733.0
Standard deviation [mg]:             36.1    282.8   240.8   264.0

Discussion Figure 2 shows that as the outside diameter and slit diameter increased, so did the average amount dispensed. The results also showed an increase in average dispensed amount until the 2 s mark for both slit diameters. This may be caused by the piston being time regulated rather than displacement regulated, meaning that the piston pushed the slit too far, causing it to close off. The other interesting observation is the large jump in dispensed weight between 0.5 and 1 s. This increase is likely caused by the slit not having a chance to open to an optimal area when the piston is extended for only 0.5 s. Figure 2: Amount Dispensed vs Time Piston Extended

The standard deviations for the tests were extremely high, possibly because: (1) the piston was not displacement regulated (every push opened the slit to a different area), or (2) the mount attaching the holder to the robotic arm flexed under the force of the push. Both of these problems could be addressed. The first could be solved with a push system that regulates the displacement and force of the push, making it more accurate and allowing a wider range of dispensing amounts. The second would be to make a stiffer mount (e.g., out of metal instead of plastic pieces). If both of these issues are fixed, the push-capsule powder delivery system could be a way to effectively deliver multi-material powder in additive manufacturing. References [1] J.-P. Kruth, T. Nakagawa, M.C. Leu; Progress in Additive Manufacturing and Rapid Prototyping, 525-537. 1998 [2] What is additive manufacturing? Retrieved from: http://additivemanufacturing.com/basics/ [3] S. Chianrabutran, B.G. Mellor, S. Yang; A Dry Powder Material Delivery Device for Multiple Material Additive Manufacturing, 36-46. 2014 Acknowledgments The PPG Foundation for providing the grant, Professor Chmielus for being a great mentor, and all of the students in the Chmielus lab.


Numerical Solutions to the One-Dimensional Wave Equation via First Upwind Difference, Lax-Wendroff, and Euler's BTCS Implicit Methods Ty C. Zatsick Mechanical Engineering and Materials Science University of Pittsburgh, PA, 15261 Email: tcz2@pitt.edu

INTRODUCTION The one-dimensional wave equation is a partial differential equation which enables the description of waves. Waves are applicable in multiple physics-based disciplines; the research completed in summer 2016 focused on fluid dynamics. The one-dimensional wave equation is represented by Equation 1 below:

∂U(x,t)/∂t + a ∂U(x,t)/∂x = 0    (1)

where the velocity U is a function of time t and the spatial variable x, and a is the speed of sound. The one-dimensional wave equation will be solved using three different numerical methods: First Upwind Difference, Lax-Wendroff, and Euler's Backward Time Centered Space (BTCS) Implicit.

NUMERICAL SCHEME Before applying these three methods it is first necessary to apply an initial condition. In this analysis a step initial condition was applied, as shown in Fig. 1 below. In addition to the initial condition it was necessary to define the following: x ∈ [0, 70] m, t ∈ [0, 0.15] s, and a = 200 m/s. Both spatial and time domains were discretized, with Δx = h = 1.0 m and Δt = k = 0.005, 0.0025, and 0.00125 s.

With the previous definitions the CFL number c = ak/h can be defined for each time step:

c = 1.00 when k = 0.005 s
c = 0.50 when k = 0.0025 s
c = 0.25 when k = 0.00125 s

We denote the function U(x_i, t_n) = U_i^n for visual simplification.

The first method is the First Upwind Difference and is represented by Equation 2 below:

(U_i^{n+1} − U_i^n)/k + a (U_i^n − U_{i−1}^n)/h = 0    (2)

The second method is the Lax-Wendroff and is represented by Equation 3 below:

U_i^{n+1} = U_i^n − (c/2)(U_{i+1}^n − U_{i−1}^n) + (c²/2)(U_{i+1}^n − 2U_i^n + U_{i−1}^n)    (3)

The third method is Euler's BTCS Implicit and is represented by Equation 4 below:

(c/2) U_{i−1}^{n+1} − U_i^{n+1} − (c/2) U_{i+1}^{n+1} = −U_i^n    (4)

RESULTS Equations 2, 3, and 4 were solved for U_i^{n+1} and iterated such that plot data could be extracted at 0.00, 0.05, 0.10, and 0.15 seconds.

Figure 1: Initial wave distribution at t = 0.0 s.
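The three update rules above can be sketched in a short script. This is an illustration, not the author's code: the exact step profile is not given in the abstract, so the step below is an assumed shape, and boundary values are simply held fixed.

```python
import numpy as np

def upwind(u0, c, steps):
    """First Upwind Difference (Eq. 2): U_i <- U_i - c (U_i - U_{i-1})."""
    u = u0.copy()
    for _ in range(steps):
        u[1:] -= c * (u[1:] - u[:-1])  # left (inflow) boundary held fixed
    return u

def lax_wendroff(u0, c, steps):
    """Lax-Wendroff (Eq. 3), second order in space and time."""
    u = u0.copy()
    for _ in range(steps):
        un = u.copy()
        u[1:-1] = (un[1:-1] - 0.5 * c * (un[2:] - un[:-2])
                   + 0.5 * c**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    return u

def btcs(u0, c, steps):
    """Euler BTCS Implicit (Eq. 4): one tridiagonal solve per time step."""
    n = len(u0)
    A = np.eye(n)
    for i in range(1, n - 1):  # Eq. 4 multiplied by -1 on both sides
        A[i, i - 1] = -0.5 * c
        A[i, i + 1] = 0.5 * c
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, u)  # boundary rows are identity (fixed values)
    return u

# Illustrative step initial condition on x in [0, 70] m with h = 1 m
x = np.arange(0.0, 71.0)
u0 = np.where(x < 10.0, 1.0, 0.0)

# With c = 1 the explicit schemes advect the step exactly one cell per step
u_up = upwind(u0, 1.0, steps=10)
u_lw = lax_wendroff(u0, 1.0, steps=10)
```

At c = 1 both explicit schemes reduce to an exact one-cell shift per step, which is why no dispersion error appears in the c = 1 plots; smaller c introduces the dispersion error discussed below.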


Plots for the First Upwind Difference, Lax-Wendroff, and Euler's BTCS Implicit methods were generated for c = 1.00, 0.50, and 0.25.

Figure 2: Wave Propagation for First Upwind Difference Method, Left to Right: c = 1.00, c = 0.50, c = 0.25.

Figure 3: Wave Propagation for Lax-Wendroff Method, Left to Right: c = 1.00, c = 0.50, c = 0.25.

Figure 4: Wave Propagation for Euler’s BTCS Implicit Method, Left to Right: c = 1.00, c = 0.50, c = 0.25.

DISCUSSION The First Upwind Difference Method provides first order accuracy for the spatial and time variables. From observing Fig. 2 we notice no dispersion error for a “c” value of 1. Decreasing the “c” value leads to larger amounts of dispersion error. The Lax-Wendroff Method provides second order accuracy for the spatial and time variables. From observing Fig. 3 we notice no dispersion error for a “c” value of 1. Decreasing the “c” value leads to larger amounts of dispersion error. Euler’s BTCS Implicit Method provides second order accuracy for the spatial variable, and first order accuracy for our time variable. From observing Fig. 4 there are small amounts of dispersion error for all three “c” values, with error increasing as the “c” value decreases.

To conclude, dispersion error is lowest for "c" values closest to 1; as the "c" value decreases, a larger amount of dispersion error is observed. This shows that a smaller time step does not necessarily lead to a more accurate result, and that the CFL number is a strong indicator of accuracy when solving the one-dimensional wave equation via numerical methods. ACKNOWLEDGEMENTS Research funding was provided by the Swanson School of Engineering and the Office of the Provost. REFERENCES 1. Hoffmann, Klaus A., and Steve T. Chiang. Computational Fluid Dynamics. Wichita, Kan.: Engineering Education System, 2000.


INCREASING THE DENSITY OF LOW TEMPERATURE SINTERED SILICON CARBIDE BY MEANS OF POLYMER IMPREGNATION AND PYROLYSIS Mackenzie N. Stevens and Jung-Kun Lee Department of Material Science and Engineering University of Pittsburgh, PA, USA Email: mas579@pitt.edu INTRODUCTION Silicon carbide powders have high temperature compatibility and chemical stability, preferable properties for applications such as solid nuclear fuel cladding. The goal of the current work is to produce dense SiC from powder without heating to its melting point, instead using low temperature sintering followed by a few cycles of polymer infiltration and pyrolysis (PIP). SiC densities are compared across differing binding agents and amounts, particle sizes, and viscosities of the polymer precursor AHPCS SMP-10 (allylhydridopolycarbosilane). Polyvinylpyrrolidone (PVP) and polyethylene glycol (PEG) were used as binding agents. In an attempt to increase density at low sintering temperatures, Reed used 3-wt% PEG to bind SiC powders prior to sintering; his work explored cost-effective production of chemically stable agents capable of cladding nuclear fuel technologies [1]. The current work expands this idea, introducing several new variables. Three binder combinations were compared: 3-wt% PEG; 3-wt% PEG with 3-wt% PVP; and 1.5-wt% PEG with 1.5-wt% PVP. High green density materials will generate those with higher sintered density, greater mechanical strength, and decreased flaw content. Pore size, shape, and distribution affect density and the effectiveness of polymer infiltration [2]. Cold-isostatic pressing (CIP) is compared to uniaxial pressing to reveal this effect. The temperature at which crosslinking begins, the flash point of polycarbosilanes such as SMP-10, is 89 °C [3]. Heating above the flash point increases viscosity, which reduces the amount of polymer that escapes the pores. Sintered pellets are submerged in polymeric precursor.
After sufficient impregnation of the pores, pellets are transferred to the furnace for pyrolysis and crystallization. The liquid polymer shrinks in volume upon pyrolysis, so multiple PIP cycles are required to fill the pores completely [4].

Pure SiC has a density of 3.2 g/cm3 [5]. Density proves to be a function of particle size, pressing technique, binder content, and SMP-10 viscosity; the optimum combination of these variables therefore generates the greatest density. METHODS Two sizes of silicon carbide powders were used: Sigma Aldrich 400-mesh (large) and Saint-Gobain Sika Sintex-15 (small). Samples composed of 400-mesh powder are compared to samples with a 3:1 ratio (large:small) of particles to determine which particle combination produces the greatest density. Powders were mixed, ground, sieved, and then pressed to 1000 lbs using a mold and die in a Carver uniaxial press. Some samples were additionally pressed in a cold-isostatic press (CIP) to 60,000 psi with a 25 min dwell time. Samples were first sintered at 1500 °C in a Webb 109 furnace to vaporize the binders and strengthen the samples to withstand PIP without collapsing. After sintering, samples were suspended in Starfire AHPCS SMP-10 in an evacuated flask until no visible air bubbles escaped the pellet. SMP-10 of two viscosities was used, 100 cP and 500 cP. Pyrolysis at 1200 °C was completed in the Webb 109. After pyrolysis, samples returned to the polymer for infiltration, then back to the furnace, repeating this process 4 times. RESULTS The densest SiC sample was nearly 80% dense after 4 cycles of PIP (100 cP); it was pressed isostatically and composed of 3-wt% PEG with 3:1 large-to-small particles. The 400-mesh SiC 3-wt% PEG sample, uniaxially pressed and immersed in 500 cP SMP-10, was the least dense at 71% density. Green densities of uniaxially pressed samples ranged from 1.65 to 1.88 g/cm3, while samples pressed with CIP had green densities of 1.85-2.29 g/cm3. CIP samples produced larger densities than uniaxially pressed samples, but uniaxially pressed samples had greater density


change. Density increased as the number of PIP cycles increased, as shown in Figs. 1 and 2. All samples show the greatest density increase during the first two PIP cycles, with 3-wt% PEG 3-wt% PVP showing the greatest change. In the more viscous polymer, after 4 infiltrations the 3-wt% PEG 3-wt% PVP samples (6-wt% total binder) had better density than samples with 3-wt% binders. Samples with 3-wt% total binder produce greater density when infiltrated in the low-viscosity polymer.
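The percent-dense values quoted here compare measured bulk density against the theoretical density of pure SiC; a quick check, using an approximate 2.55 g/cm³ bulk value read off Fig. 2:

```python
THEORETICAL_SIC = 3.2  # g/cm^3, density of pure SiC [5]

def relative_density(bulk_g_cm3):
    """Fraction of theoretical density achieved by a sample."""
    return bulk_g_cm3 / THEORETICAL_SIC

# The densest CIP 3:1 sample plateaus near 2.55 g/cm^3 after 4 PIP cycles,
# consistent with the "nearly 80% dense" figure reported in the Results.
print(f"{relative_density(2.55):.1%}")  # 79.7%
```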

Figure 1: Uniaxially pressed 400-mesh SiC density (g/cm³) as a function of infiltration number using 100 cP and 500 cP SMP-10.


REFERENCES 1. Reed, R. Low Temperature Sintering of Silicon Carbide through a Liquid Polymer Precursor University of Pittsburgh, 2014. 2. Lee and Rainforth Ceramic Microstructures: Property Control by Processing 1994 3. Colombo et al. Polymer Derived Ceramics: From Nano-Structure to Applications 2010. 4. Sarma et al. J Nuclear Materials 352, 324333, 2006. 5. Reed, J. S. Principles of Ceramic Processing 1995.


CONCLUSIONS Higher green density provided higher sintered densities, as expected; however, the total change in density after four cycles of PIP was greater for uniaxially pressed samples. Pore channels seem to be a significant factor in improving the density of SiC. Larger pore channels allow better infiltration and greater density change during PIP. Smaller pore channels hinder density change, but may improve initial density. Less pressure permits larger pore channels, as seen in uniaxial pressing, while isostatic pressing forces the powder to compact in all directions, minimizing pore channels and vacancies. Uniaxially pressed samples seemed to produce greater densities with the viscous polymer precursor, since the thicker polymer could seep into the larger pores. The samples pressed with CIP produced the opposite result: low-viscosity SMP-10 (~100 cP) generated higher density than the highly viscous polymer (~500 cP). Isostatic pressing provides better green and final densities than uniaxial pressing of SiC. Mixtures of small and large particle sizes prove best for increasing density. The 3:1 large-to-small samples had the best final density, likely resulting from small particles fitting in the interstitial sites. Density can be increased using low temperature sintering and PIP, but the theoretical density of 3.2 g/cm3 was not reached.

Figure 2: Cold isostatic pressed 3:1 SiC (large to small) density (g/cm³) as a function of infiltration number in 100 cP and 500 cP SMP-10.

ACKNOWLEDGMENTS This work was made possible thanks to the donations of PPG. Use of the CIP was courtesy of Robert Morris University with the help of Dr. Ergin Erdem and Dr. Arif Sirinterlikci.


VISUAL STUDY OF OXIDATIVE PROPERTIES OF COPPER CORE SILVER SHELL NANOPARTICLES Vincent Antoine Verret Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: viv14@pitt.edu, Web: http://mems.pitt.edu/ INTRODUCTION Metallic nanoparticle synthesis is an important subject in various applications, including coatings and printed electronics. For such applications, silver nanoparticles are commonly used for their good conductivity and resistance to oxidation. Silver, however, is quite expensive due to its use in many different fields. For this reason researchers have sought a metal that, when used for nanoparticle synthesis, would be cheap yet have comparable utility. Copper is one such metal, being more abundant than silver with comparable conductivity. However, copper oxidizes more readily than silver, preventing it from being sintered in ambient conditions and increasing the cost of effective use of copper nanoparticles. This study seeks to design a fabrication method for a copper-based nanoparticle that alleviates oxidation problems by encasing the particles in a protective silver shell. Several studies have attempted similar approaches, including coating the particles in a reducing agent that is worn away during the sintering process while protecting the particles from oxidation [1]. The use of a silver shell is an attempt to alleviate the oxidation problem while being simple to do and giving reasonable conductivities. The fabrication process used in this work employs a galvanic reduction of silver onto an already prepared copper seed, as shown in Figure 1 [2, 3].

Figure 1: Schematic of the fabrication of Cu core Ag shell nanoparticles using galvanic reduction

METHODS A copper seed was first created by dissolving copper(II) formate hydrate in ethylene glycol (EG) with polyvinylpyrrolidone (PVP) in the following quantities: 45 mg (HCO2)2Cu · xH2O, 30 ml EG, and 3 g PVP. The EG serves as the reducing agent for the initial reduction of copper formate into metallic nanoparticles, while the PVP acts as a surfactant to prevent agglomeration of the copper into large shapes. Using a microwave reactor, the sample is heated to 175 °C over 20 minutes and held at that temperature for 1 minute, after which the vessel is removed from the microwave, the sample is poured into a three-neck vessel, and the vessel is sealed and flushed under a nitrogen atmosphere. The vessel is heated to 175 °C in an oil bath while a magnetic stir bar stirs the mixture. Using a glass pipette, a solution of 11 mg of silver nitrate in 2 ml EG is added to the solution and the vessel is quickly resealed. The mixture is allowed to react for 5 minutes before it is removed from heat and allowed to cool to room temperature, at which point the nitrogen flow is cut off. Using a centrifuge, the 32 ml solution is washed in two parts using 50 ml centrifuge tubes to remove the PVP. First, approximately 10 ml of the new copper core silver shell suspension is mixed with 40 ml of deionized water and centrifuged at 7900 RPM for 20 minutes. After the cycle is complete, the top of the centrifuged mixture is removed, leaving the tube mostly empty except for nanoparticles stuck to the bottom, and the tube is refilled to 30 ml with ethanol. The mixture is sonicated to suspend the particles and centrifuged again at the same conditions. After the second cycle, the mixture is poured out the same way, and 3 ml of ethanol is added. The mixture is then sonicated again to suspend it for deposition on 8 mm by 8 mm glass substrates. This is done in two 30 μl droppings, allowed to dry between each. For comparison purposes, control batches of silver nanoparticles and copper nanoparticles were prepared and deposited the same


way. To test the protection from oxidation at high temperatures, the samples were sintered for 30 minutes on a hot plate at various temperatures between 100 °C and 200 °C, the range at which previous studies indicate that protection from a silver shell may break down. DATA PROCESSING Before each sintering test, the color of each sample was examined and recorded using a camera; similar pictures were taken at the end of each cycle. In addition, quick tests for conductivity were made using a multimeter. RESULTS Overall, the copper core silver shell films showed minor to nonexistent changes in color, except at temperatures above 175 °C, at which the color changes were more noticeable. At the lower temperatures, the color changes between the copper nanoparticles and the copper core silver shell particles were insignificant.

DISCUSSION The breakdown of the protection at higher temperatures is especially noticeable. Color changes in the copper from red to blue-green indicated oxidation and a breakdown of the silver shell. This may indicate a need for better-controlled shell thickness, to allow for more deformation in the shell before the copper is exposed, since silver sinters at a lower temperature than copper. REFERENCES 1. Hague et al. Electron Mater Lett 7, 195-199, 2011 2. Lee et al. Nanotechnology 26, 455601, 2015 3. Miyakawa et al. Nanoscale 6, 8720-8725, 2014 ACKNOWLEDGEMENTS This research was funded by Dr. Jung-Kun Lee at the University of Pittsburgh Swanson School of Engineering. Studies were performed at the MEMS department's Materials Micro-Characterization Laboratory.

Table 1: Comparison of pre-sintering color (top rows) to post-sintering color (bottom rows) for Cu NP, Ag NP, and Cu@Ag NP samples at 100 °C, 125 °C, 150 °C, 175 °C, and 200 °C.


DESIGN OF AN ACTIVE TRUNK CONTROL AND BALANCING SYSTEM TO REDUCE FATIGUE DURING WALKING WITH AN EXOSKELETON Joshua Barron Neuromuscular Control and Robotics Laboratory, Department of Mechanical Engineering University of Pittsburgh, PA, USA Email: jtb81@pitt.edu, Web: http://www.engineering.pitt.edu/Labs/SHARMA/ INTRODUCTION Spinal cord injuries cause approximately 5,100 people to be diagnosed with paraplegia each year in the United States [1]. One potential way to help restore walking in patients with paraplegia is the use of powered and semi-powered orthoses. Powered exoskeletons are a viable type of powered orthosis, and much advancement has been made on the subject in recent years [2, 3, 4]. The use of an orthosis restricts the human leg to three rotational degrees of freedom: one each at the hip, the knee, and the ankle [2]. Therefore, powered exoskeletons often utilize four to six actuators to provide gait assistance, depending on the severity of the patient's disability. It was hypothesized that a gyroscope-based balancing device could be used to help patients achieve balance while using an orthosis; such a device could generate enough torque to balance any upper-body disturbance. An exoskeleton is currently under development by Dr. Sharma, consisting of two motors at the hip and two wrap spring clutches with actuators at the knee. Combined with a control system and gait detection, this device has the potential to help paraplegic patients regain the ability to walk and free them from the restrictions of a wheelchair. However, because there is no support beyond the lower limbs, test subjects must resort to using a walker to remain upright and maintain balance while using the exoskeleton. This causes fatigue in the subject and prevents the use of the upper limbs for other activities of daily living.
The implementation of a gyroscopic trunk control system would provide stability during use and allow the subject to perform actions with their upper limbs. APPROACH A gyroscopic balancing device relies on one or two rotating flywheels to produce a reactive torque by restricting gyroscopic precession. This torque can be applied about the base of the human trunk, providing balance assistance in a given plane of motion.

Therefore, the effectiveness and efficiency of the device are determined by the sizing of the flywheels. For this reason, the flywheels were designed first, and a housing was created to fit the required flywheel size. Simulations were then performed to evaluate the ability of the system and to implement PID control with the device. DESIGN Gyroscopes utilize two forms of rotational motion to generate torque. The flywheels spin about their main axis, usually at a constant high velocity, while a gimbal motor pivots the gyroscope about an axis perpendicular to the main axis, generating a torque whose direction obeys the right-hand rule. As the flywheel pivots about the gimbal axis, its torque develops sine and cosine components, generating torque about two different axes. In this case, torque is only required about one axis to stabilize the subject, and excess out-of-plane torque could impede stabilization or even cause harm to the subject. To avoid out-of-plane torque, a design implementing two flywheels was selected. The flywheels rotate and gimbal in opposite directions, which causes out-of-plane torques to cancel and assistive torques to add. A model of this configuration was programmed in MATLAB and used to optimize the sizing of the flywheels based on the amount of torque required to stabilize a 50 kg, 0.5-meter tall trunk from a maximum initial offset of 20 degrees. The selected flywheels had a radius of 7 cm and a height of 4.0 cm, each weighing 4.83 kg. Due to testbed limitations and cost, a 20% scale model was also designed to match the full scale model as closely as possible. The scaled flywheels had a radius of 5.37 cm and a height of 3.07 cm, each weighing 2.18 kg. SIMULATION Simulations were performed in Simulink to study how quickly and effectively the system would respond to varying initial offset angles. The


dynamics of the trunk were modeled as a simple pendulum, and torque was applied about the pivot axis, with zero degrees representing the vertical position. The two focus metrics within the model were the recovery angle of the trunk and the torque production of the gyroscope. A PID controller was implemented to control the reaction torque produced by the gyroscopes based on gimbal velocity, and the system recovered to 90% of steady state in 0.95 seconds, with a maximum overshoot of 26.78%. This can be seen in Fig. 1, represented in blue. After designing the scaled model to match the full system, a rise time of 0.88 seconds was achieved from the same maximum angle of 20 degrees. The slightly quicker response time was expected, as the inertia of the system scales with a high power of size and was therefore greatly reduced when the model was scaled. This also caused a higher overshoot, 48.07% at maximum offset. These results are shown in orange in Fig. 1.
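The pendulum-plus-PID loop described above can be sketched with simple Euler integration. The trunk mass, height, and 20 degree release are from the abstract; the inertia model, controller gains, and time step are illustrative assumptions, not the Simulink values:

```python
import math

# Trunk from the abstract: 50 kg, 0.5 m, released from a 20 degree offset
m, L, g = 50.0, 0.5, 9.81
lc = L / 2                    # assumed center-of-mass height
I = m * L**2 / 3              # slender-rod inertia about the pivot (assumed)

# Illustrative PID gains on trunk angle (the abstract's gains are not given)
Kp, Ki, Kd = 900.0, 50.0, 200.0
dt, T = 0.001, 3.0

theta = math.radians(20.0)    # initial offset from vertical
omega, integ = 0.0, 0.0
for _ in range(int(T / dt)):
    err = -theta              # drive the trunk back to vertical (theta = 0)
    integ += err * dt
    tau = Kp * err + Ki * integ + Kd * (-omega)       # PID reaction torque
    alpha = (m * g * lc * math.sin(theta) + tau) / I  # gravity destabilizes
    omega += alpha * dt
    theta += omega * dt
```

Under these assumed gains the trunk settles back to within a fraction of a degree of vertical within a few seconds; the abstract's reported rise time and overshoot come from its own tuned controller.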

Figure 3: Torque plot for 20% scale model

DISCUSSION
From the simulations, it can be seen that this device could steady both a full scale model and a smaller model from the desired maximum offset angle. These devices generate enough torque for upper body balance, thus validating the previous hypothesis. However, since torque generation is directly related to the weight and inertia of the flywheel, the full scale device quickly became heavy and cumbersome, making it impractical for actual human use. Therefore, it was decided that this method of balance assistance is not the correct choice for a full scale exoskeleton: it would be too heavy for human use, and it could not hold an offset angle other than vertical due to an asymptote in reaction torque at a gimbal angle of 90 degrees. Other options to provide assistance, such as high-torque motors placed at the hip, will be explored and tested in the future.

In order to ensure that the models reacted similarly to offset angles, other aspects of the system had to be tuned. The key characteristic that affected how the system reacted was the torque produced by the gyroscopes, and its relation to the required torque to stabilize the trunk. The required torque was calculated using a static model at a certain offset angle Θ, assuming that all accelerations are small enough to ignore. This assumption was made because on a human test subject, accelerations would be minimized to prevent the possibility of injury. Fig. 2 shows the torque from the gyroscope vs. the required torque of the full scale system, while Fig. 3 represents the same comparison for the scaled model.
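The comparison between gyroscope torque and the static requirement can be sketched numerically. The flywheel mass and radius are taken from the design above; the spin speed, gimbal rate, and uniform-rod trunk model are assumed values for illustration:

```python
import math

# Dual counter-rotating flywheel torque vs. the static requirement.
# Flywheel mass/radius follow the full-scale design; spin speed and
# gimbal rate are assumed values for illustration.
m_fw, r = 4.83, 0.07                  # flywheel mass (kg) and radius (m)
I_fw = 0.5 * m_fw * r**2              # disc inertia about the spin axis

omega_spin = 2 * math.pi * 5000 / 60  # assumed 5000 rpm spin speed (rad/s)
omega_gimbal = 4.0                    # assumed gimbal rate (rad/s)

def assist_torque(gamma):
    """In-plane torque of the pair at gimbal angle gamma (rad).

    Each wheel produces I * w_spin * w_gimbal; the cosine components lie
    in the stabilization plane and add, while the sine components cancel."""
    return 2 * I_fw * omega_spin * omega_gimbal * math.cos(gamma)

# Static torque to hold a 50 kg, 0.5 m trunk (uniform rod) at 20 degrees
m_t, L, g = 50.0, 9.81 and None or 0.5, 9.81
m_t, L, g = 50.0, 0.5, 9.81
tau_req = m_t * g * (L / 2) * math.sin(math.radians(20))
```

Note that `assist_torque` vanishes at a gimbal angle of 90 degrees, the asymptote in reaction torque mentioned in the discussion below.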

REFERENCES
[1] Alibeji N, Kirsch N, and Sharma N. A Muscle Synergy-Inspired Adaptive Control Scheme for a Hybrid Walking Neuroprosthesis. Front. Bioeng. Biotechnol. 3:203, 2015.
[2] Dollar AM and Herr H. Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art. IEEE Transactions on Robotics 24(1):144-158, 2008.
[3] Pons JL, Moreno JC, Brunetti FJ, and Rocon E. Lower-Limb Wearable Exoskeleton. Rehabilitation Robotics, 2007.
[4] Sanz-Merodio D, Cestari M, Arevalo JC, and Garcia E. A Lower-Limb Exoskeleton for Gait Assistance in Quadriplegia. 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2012.

Figure 2: Torque plot for full scale model

ACKNOWLEDGEMENTS
I would like to thank Dr. Nitin Sharma, the Swanson School of Engineering, and the Office of the Provost for sponsoring this project. I would also like to thank Dr. Sharma, Naji Alibeji, and Dr. Kirsch for their guidance and support throughout my internship.

Figure 1: Offset angle of full and scale model


ELECTROMYOGRAPHY BASED CONTROL FOR LOWER LIMB ASSISTIVE THERAPY
Amanda Boyer
Neuromuscular Control and Robotics Laboratory, Department of Mechanical Engineering
University of Pittsburgh, PA, USA
Email: amb310@pitt.edu, Web: http://www.engineering.pitt.edu/Labs/SHARMA/

INTRODUCTION
Robotics-based neurorehabilitation is a promising technology. Recent research has shown that robotic therapy is as effective as traditional therapy [1]. To help patients regain leg mobility, this project aims to create a control system based on the electromyography (EMG) signal, the electrical signal a muscle emits as it contracts [2]. This abstract discusses the first step toward EMG-based intent estimation. The objective is to create a mathematical model that predicts the position of the leg based on the EMG signal from the quadriceps muscles. Three mathematical models were tested: 1) a sigmoid artificial neural network, 2) a musculoskeletal model, and 3) a combined musculoskeletal-neural network model. Artificial neural networks have previously been used with EMG signals to identify hand motions from forearm signals and for hand force estimation [3, 4].

METHODS
In this study, one adult participant's knee joint angle was estimated from the EMG signals. The participant was seated in a leg extension machine, and the joint angle was recorded using an encoder on the joint of the leg extension machine. After the skin was cleaned with rubbing alcohol, a reference electrode was placed on the knee and an EMG sensor was placed on the skin above the vastus medialis. The participant was instructed to move their leg in random patterns.

DATA PROCESSING
The signal was filtered with a 20 to 450 Hz bandpass filter, and a gain of 1000 was applied with a Delsys amplifier. The data were recorded at a frequency of 1000 Hz, then digitally rectified and smoothed with a moving-average root-mean-square filter, after which the square root of the signal was taken [5].
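The signal-processing chain described above (bandpass filter, rectification, moving RMS) can be sketched as follows; the fourth-order Butterworth filter and the 100 ms window length are assumed details not specified in the abstract:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_emg(raw, fs=1000, band=(20, 450), win_ms=100):
    """Bandpass, rectify, and moving-RMS an EMG record.

    The 20-450 Hz band and 1000 Hz sampling rate follow the abstract;
    the filter order and 100 ms RMS window are assumed details."""
    nyq = fs / 2
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, raw)     # zero-phase bandpass filter
    rectified = np.abs(filtered)       # full-wave rectification
    n = int(win_ms * fs / 1000)
    kernel = np.ones(n) / n
    # moving RMS: square root of the moving mean of the squared signal
    return np.sqrt(np.convolve(rectified**2, kernel, mode="same"))
```

The resulting envelope, rather than the raw oscillatory signal, is what the models below take as input.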
ARTIFICIAL NEURAL NETWORK

The first mathematical model used was a sigmoid artificial neural network with a simulation of the leg extension machine. The filtered EMG signal and the square of the signal were input into the neural network. The neural network (one hidden layer with 30 nodes) was trained, using the error, to output the joint angle of the leg extension machine. The model was then adjusted and used in real-time experiments. This model has three inputs: the EMG signal, the predicted joint angle from the neural network, and the filtered derivative of the predicted joint angle [3]. The block diagram of this system is shown in Figure 1.

Figure 1: Block diagram of three input neural network
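A toy version of this estimator illustrates the idea of a one-hidden-layer sigmoid network with 30 nodes trained online from its prediction error. The synthetic EMG-to-angle mapping, normalized angle range, and learning rate here are assumptions for demonstration only:

```python
import numpy as np

# One-hidden-layer sigmoid network (30 nodes) trained online from its
# prediction error. Inputs stand in for (EMG, previous predicted angle,
# angle derivative); angles are normalized to [0, 1] of a 0-70 deg range.
# The target mapping and learning rate are assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (30, 3)); b1 = np.zeros(30)
W2 = rng.normal(0.0, 0.5, 30);      b2 = 0.0
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, target):
    """One prediction plus one online gradient update."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)
    y = float(W2 @ h + b2)                 # predicted (normalized) angle
    err = y - target
    dz = err * W2 * h * (1.0 - h)          # backprop through hidden layer
    W2 = W2 - lr * err * h
    b2 = b2 - lr * err
    W1 = W1 - lr * np.outer(dz, x)
    b1 = b1 - lr * dz
    return y

errs = []
for _ in range(2000):
    x = rng.random(3)                      # synthetic input features
    target = 0.3 + 0.4 * x[0]              # stand-in EMG-to-angle map
    y = step(x, target)
    errs.append((y - target) ** 2)
```

On this synthetic mapping the squared error shrinks as training proceeds; the abstract's finding is that on real EMG data such online learning was too slow to be practical.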

MUSCULOSKELETAL MODEL
The second model used was the musculoskeletal model. Using estimated parameters taken from a participant, the dynamics of the leg were simulated based on the position and activation voltages (EMG signal). The output was then mapped to the range of the leg extension machine, 0-70 degrees.

RESULTS
The neural network simulations were run for 30,000 s with a real-time processor. After the network had learned for 5 min, the simulation was run again with the same weights, and after 5 s the learning was turned off. The neural network then output the predicted joint angle without the assistance of the error calculation.


Figure 5 below shows the musculoskeletal model's results.

Figure 2: EMG signal and position graph from neural network simulations. The red is the simulated position and the blue is the neural network's output.

The root-mean-square error (RMSE) before the learning was turned off was 3.94°, and the RMSE after was 5.26°. These simulation results were promising for the experiments. In the experiments, however, the online learning did not appear capable of learning in a feasible period of time, either with the EMG inputs alone or with the EMG, angle, and velocity inputs.

Figure 3: EMG signal and position graph from neural network experiments. The red is the simulated position and the blue is the neural network's output.

Figure 4 is an example of one of the experiments. The learning was turned off at 5 s in this example, and the model did not continue to replicate the movements of the leg. The experimental model had an RMSE of 6.40° during the learning phase and 10.14° during the non-learning phase. The musculoskeletal model showed slightly better results, but with major limitations: it can only follow fast movements, with an RMSE of 9.08°. When it attempts to predict slower movements, the RMSE increases to 21.85°. The RMSE for the whole data set was 14.87°.

Figure 4: EMG signal and position graph from neural network experiments. The red is the simulated position and the blue is the neural network's output.

For the combined musculoskeletal-neural network model, the results were unstable, indicating an adjustment to the technique must be made.

DISCUSSION
The results from these methods for estimating human intention via EMG signals were not reliable enough to be used in experiments for assistive therapy, and further research is needed to find a more reliable method. Some ideas include using batch processing to teach the neural network the EMG-joint angle relationship, or using another method, such as support vector machine learning, to adjust the musculoskeletal model to fit all situations. Once the position can be accurately estimated using the EMG signal, a motorized system can be created to make a robotic physical therapy machine for the lower limb.

REFERENCES
[1] Hay. "The Rise of Robotics for Physical Therapy." WSJ Technology, 2016.
[2] Delsys.com. "Technical Note 101: EMG Sensor Placement."
[3] Szpala et al. Human Movement 12, 57-64, 2011.
[4] Mobasser F et al. 2005 IEEE Conf, pp. 825-830.
[5] Gopura RARC et al. Electrodiagnosis in New Frontiers of Clinical Research, 237-268, 2013.

ACKNOWLEDGEMENTS
Thanks to Dr. Sharma, the Swanson School of Engineering, and the Office of the Provost for funding this project.


ATOMIC-SCALE TOPOLOGY OPTIMIZATION BASED ON FINITE ELEMENT METHODS
Clement N. Ekaputra and Albert C. To
Department of Materials Science and Engineering
University of Pittsburgh, PA, USA
Email: cne2@pitt.edu

INTRODUCTION
Topology optimization is a useful method for finding the strongest material configuration for a given design space, load, and support condition. While large-scale, or continuum-scale, topology optimization methods such as the finite element method exist and generate configurations used in real-world situations, similar methods are not yet mature at atomic scales. It also cannot be assumed that ideal structures at the continuum scale are also ideal at the atomic scale, since interactions between elements in a finite element model differ from atomic interactions. New models must therefore be developed, or existing continuum-scale models modified. The primary aim of this project is to extend an existing finite-element topology optimization program to the atomic scale. The atomic-scale optimization program is based on the 88-line MATLAB finite element topology optimization program by Andreassen et al. [1]. This program optimizes and displays the configuration for minimum compliance, given a design space and amount of material. Changes made to the finite-element program to account for atomic-scale interactions are explored in the next section.

METHODS
The original program by Andreassen et al. uses a heuristic updating scheme to optimize the finite element configuration. With each iteration, the element densities are modified. Since the stiffness matrix depends on the element densities, the compliance changes. The element densities are modified so that the compliance decreases; therefore, the final iteration will be the ideal configuration, within a specified error margin. This heuristic updating scheme and the problem of minimum compliance remain intact in the atomic-scale optimization program.
In addition, the finite-element program assumes square elements with four nodes each. To remain consistent, the atomic-scale program assumes atoms are arranged in a square lattice structure, and artificially created square "elements" with atoms as the nodes are used. Modifications were made to the finite element program to account for differences between atomic- and continuum-scale interactions. At the atomic scale, much more of the observed space is empty, in contrast to the continuum scale, where the observed space is occupied by the material. Thus, in the atomic optimization program, it is assumed that interactions between atoms act as linear springs. The strength of this interaction is inversely proportional to the distance between the two atoms. The atomic-scale program also accounts for the greater range of interaction. In contrast to the finite element program, where only adjacent nodes affect each other, atoms may affect each other at larger distances. To account for this, the square elements used in the atomic program may possess more than four atoms. For example, if the maximum interaction distance is two, meaning that atoms affect each other up to the second nearest neighbor, elements containing nine atoms are used. These elements may also overlap in order to account for all possible interactions. Element stiffness matrices are created for each element and entered into the global stiffness matrix to begin the heuristic updating process.
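The spring-network stiffness assembly described above can be sketched directly: each pair of atoms within the cutoff is joined by an axial spring whose constant falls off inversely with distance, and each spring contributes a truss-like block to the global stiffness matrix. The lattice spacing and stiffness scale below are illustrative:

```python
import numpy as np
from itertools import product

def atomic_stiffness(nx, ny, max_dist=2, k0=1.0, a=1.0):
    """Global stiffness of a square lattice of atoms joined by linear
    springs whose constants fall off inversely with separation.

    max_dist is the interaction cutoff in lattice units; k0 and the
    lattice spacing a are illustrative scale factors."""
    atoms = [(i * a, j * a) for j in range(ny) for i in range(nx)]
    n = len(atoms)
    K = np.zeros((2 * n, 2 * n))
    for p in range(n):
        for q in range(p + 1, n):
            dx = atoms[q][0] - atoms[p][0]
            dy = atoms[q][1] - atoms[p][1]
            d = np.hypot(dx, dy)
            if d > max_dist * a + 1e-9:
                continue
            k = k0 / d                      # stiffness ~ 1/distance
            c, s = dx / d, dy / d
            # axial-spring block, assembled like a 2D truss element
            ke = k * np.array([[c * c, c * s], [c * s, s * s]])
            for (i, si), (j, sj) in product(((p, 1), (q, -1)), repeat=2):
                K[2*i:2*i+2, 2*j:2*j+2] += si * sj * ke
    return K

K = atomic_stiffness(3, 3, max_dist=2)  # nine-atom "element" example
```

As a check, the assembled matrix is symmetric and produces zero force under a rigid-body translation, as any valid stiffness matrix must.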

DATA PROCESSING
Both the atomic and finite element topology optimization programs were run in MATLAB. The half MBB beam was used for all tests (see Figure 1 below), with a 150 by 50 element or atom design space. Configurations were obtained for the original finite element program, as well as for the atomic program at maximum atomic interaction distances of one, two, and three atoms apart. Finally, compliances for all configurations were calculated using the atomic stiffness matrix for consistency.

Figure 4: Atomic configuration, atomic interaction up to two atoms apart.

Figure 1: Half MBB beam. The grey box represents the design space, the arrow the force, and the black triangles the boundary conditions.

Figure 5: Atomic configuration, atomic interaction up to three atoms apart.

RESULTS
Overall, the atoms in the atomic program are more clustered than the elements in the finite element program. The finite element configuration resembles a truss, whereas the atomic configuration is a single beam. Additionally, the atomic configurations vary minimally with maximum interaction distance, but the compliances decrease. This is to be expected: the more forces act on a single atom, the more it is restricted from moving, and the lower the compliance of the overall configuration. Figures 2-5 below display the finite element and atomic configurations and compliances.

DISCUSSION Although it is known there are physical differences in interactions between the continuum scale and the atomic scale, it can only be speculated that these differences cause the atomic configuration to be more clustered than the finite element configuration. At the continuum scale, as mentioned earlier, the space is solid and filled with the material, whereas the interactions at the atomic scale are similar to a set of springs, with hardly any material in between. This may explain the results, but still, it is unclear whether or not these results actually display the ideal atomic configurations.

Figure 2: Finite element configuration.

The atomic topology optimization program discussed here needs future modifications. One issue with the current program is that it groups atoms into artificial elements and modifies the density in the space between the atoms. A more natural way to optimize the structure would be to modify the density of each atom itself, rather than that of the artificial elements. Such revisions to the program are recommended before drawing a conclusion on whether or not the configurations are actually the physical ideal.

Figure 3: Atomic configuration, atomic interaction up to one atom apart.

REFERENCES
[1] Andreassen et al. Structural and Multidisciplinary Optimization 43, 1-16, 2010.

ACKNOWLEDGEMENTS
This project was supervised by Dr. Albert To at the Swanson School of Engineering at the University of Pittsburgh. Funding provided by the PPG Foundation.


CAPABILITIES OF TOPOLOGY OPTIMIZATION IN THE FIELD OF ADDITIVE MANUFACTURING
Preston Shieh
ANSYS Additive Manufacturing Lab, Department of Mechanical Engineering
University of Pittsburgh, PA
Email: pcs23@pitt.edu

INTRODUCTION
The recent advancement of additive manufacturing capabilities has opened the door for unique designs, making imagination truly the limit. One field that has been greatly impacted by this expansion is topology optimization, a process in which the mass in a model is strategically redistributed to reduce weight while maintaining strength. Mass in areas of low local strain is removed and relocated to areas of high local strain, resulting in structures with less mass that can still safely support a given load [1]. The resulting optimized structures are organic and unique. Before additive manufacturing, these lightweight, topology-optimized structures were impossible to machine using conventional methods; now they can be produced easily. This type of weight reduction is key in the aerospace field, where launch costs run upwards of $1,000 per pound to orbit [2]. Topology-optimized, additively manufactured parts can save aerospace companies a substantial amount of money by reducing weight in the various mechanical components of their systems.

METHODS
A bottle opener was designed to analyze the applications of topology optimization in reducing the mass of a structure under load. Not only must the geometry of the bottle opener be precise, but the part is subjected to compressive and tensile stresses similar to those in aerospace components. The bottle opener was designed in the shape of a panther, with the tail serving as the bottle opening mechanism. Autodesk AutoCAD was used to design the tail as a simple lever-operated bottle opening device. Once the rough geometry was

sketched out, the tail model was exported in IGES format for analysis in ANSYS Workbench 17.0. The tail was modeled as a static structural model and analyzed for the equivalent von Mises strain. Stress concentrations were then identified. Using AutoCAD, features such as filleted corners and thicker cross-sections were incorporated into the design to minimize strain in areas of concentrated stress. The redesigned model was then exported and analyzed again in Workbench to reevaluate the altered geometry. This process of analysis, redesign, and re-analysis was iterated until the tail had an acceptable factor of safety. To improve material distribution and topology, the model was then imported into GENESIS TGAM (Topology Optimization for ANSYS Mechanical). Using the same boundary conditions that were used to minimize the strain, TGAM analyzed the model and generated a topology-optimized prototype of the tail. This model could be tweaked using various features such as initial mass fraction, minimum and maximum member sizes, frozen topology regions, and, most importantly, the isosurface value.
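The role of the isosurface value can be illustrated with a simple thresholding sketch: elements whose optimized density exceeds the isovalue are retained. The density field below is synthetic, and the real extension additionally constructs a smooth bounding surface through the elements:

```python
import numpy as np

def threshold_topology(density, isovalue):
    """Retain elements whose optimized density exceeds the isosurface
    value; returns the element mask and the kept volume fraction.

    This reproduces only the volume-selection idea; the actual TGAM
    extension also builds a smooth surface through the elements."""
    mask = density >= isovalue
    return mask, float(mask.mean())

# Synthetic 50 x 150 element density field for illustration
rng = np.random.default_rng(1)
density = rng.random((50, 150))
mask, kept = threshold_topology(density, 0.7)
```

Raising the isovalue keeps fewer elements and so removes more volume, which is why each optimized structure below is characterized by its isosurface value.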

Figure 1: (Left) Original tail after finite-element analysis. (Right) Topology-Optimized, cross-sectional face frozen tail after TGAM analysis.

Each structure was classified by a specific isosurface value, which governed the shape and volume reduction of the optimized part. For a given isosurface value (0-1), the extension created a surface through the elements with matching values and enclosed the elements whose density value was greater than the given value. This collection of enclosed surfaces then formed the topology-optimized model of the original part. Six isosurface values provided relevant optimized structures. Two groups of data were created, each containing the same six isosurface values. In the first group, cross-sectional faces of the tail (orthogonal to the XY-plane) were topologically frozen. In the second group, profile faces of the tail (parallel to the XY-plane) were topologically frozen. During the process of redistributing material, the algorithm would not move material in the frozen regions. A total of twelve structures were then exported in IGES format for additional finite-element analysis in Workbench 17.0. The von Mises strain was recorded, as were the number of elements and the remaining volume percentage.

DATA PROCESSING
The von Mises strain of the original tail design was recorded to provide a baseline for comparison with the optimized structures. The equivalent von Mises strain was then recorded for each optimized tail structure. Metrics such as remaining volume percentage were recorded to quantify the amount of material removed by the TGAM software. Additionally, elemental metrics such as skewness, element sizing, and relevancy were documented to assess the accuracy of the strain results.

RESULTS
Overall, the finite-element analysis results for the first group were inconclusive. Strains differed by large magnitudes. While it was expected that strain would marginally increase as more material was removed, it appeared that as more mass was removed, von Mises strain decreased in the model. Additionally, at an isosurface value of 0.181, the strain jumps dramatically. The finite-element results for the second group, where the profile faces were frozen, were much better: as more material was removed, internal strain increased.

Figure 2: Graphs of both groups' data points. The Y-axis is the von Mises strain (10^8) and the X-axis is the % volume removed. Group 1 is on the left, Group 2 on the right.

DISCUSSION
The lack of any correlation or pattern in the first tested group can be attributed to several possible causes. During the process of exporting the optimized CAD model to an IGES file, the software could have improperly rendered surfaces between imports and exports. Errors such as these can lead to intersecting surfaces and gaps between surfaces, which would affect the results of finite-element analysis. The second group displayed results consistent with expectation: the von Mises strain increased in small increments as more material was removed. Even though strain in the structure increased marginally, mass was considerably reduced. Given more time, mechanical tests could have been performed to compare the measured strain to the strain predicted by the finite-element analysis software.

NOTE
I should have used ANSYS to solve for von Mises stress instead of strain, which would have produced results more directly comparable to the yield stress.

REFERENCES
[1] Christiansen et al. Combined shape and topology optimization of 3D structures, 25-27.
[2] SpaceX announces launch date for the world's most powerful rocket, 2011.

ACKNOWLEDGEMENTS
I would like to thank Dr. To, the Swanson School of Engineering, and the Office of the Provost for allowing me this unique learning opportunity. Additionally, I would like to thank Dr. To's graduate students Lin Cheng and Jason Oskin for the help they provided along the way.


MODELING AMPLIFIERS IN LINEAR AEROTECH INC. STAGES
Gabriel Hinding
Department of Mechanical Engineering, Swanson School of Engineering, University of Pittsburgh, PA, USA

INTRODUCTION
Precision motion is essential in a wide variety of applications, including materials science, physics, and even biology [1, 2, 3, 4, 5]. Improvements can be made by redesigning the hardware, but it is significantly more cost-effective to upgrade the software instead. Regardless of the approach, it is advantageous for both vendor and consumer to have a better understanding of the equipment. A consumer who thoroughly understands the demands of their project can select equipment without overpaying for unnecessarily high quality. A manufacturer can ensure they are getting the best performance possible out of their products, identify gaps or overlap in their product lines, and better serve the customer. This project focuses on linear stages, which are just one kind of precision motion equipment; however, they are widely used in manufacturing and in applications like semiconductor production, hard drive development, and precision machining. The overarching goal of the project is to build a framework for modeling linear stages. Specifically, three Aerotech Inc. stages (ABL1500, ALS130H, and PRO190LM) are examined with the intention of eventually developing a systematic way for manufacturers to virtually design and test any linear stage against explicit precision requirements. To do this, it is necessary to investigate various aspects of system dynamics, evaluating the increase in model accuracy achieved with each step of increased model complexity. Often in the literature, a single aspect of a model is examined at a time; less often is the system modeled as a whole, as done by Villegas, Hecker, Peña, Vicente, and Flores [6]. Similarly, this project started with the simplest, autoregressive moving-average (ARMA) model.
More complex parts will be added, and the tradeoff between model complexity and accuracy will be quantified. The difference between this project and previous work is the intention to establish a methodical way to evaluate the expected accuracy of any linear stage, not just a model for a particular one. To summarize,

the end goal is to provide manufacturers with a systematic way to determine how many modeling steps they can take, how accurate each one is, and what the simplest order required is for the level of accuracy they desire.

METHODS
Over the course of the summer, an NDrive ML amplifier was modeled using ARMA, ARMAX, and various other black box models [7]. Black box modeling examines only the inputs and outputs of a system, without any previous knowledge or examination of its inner workings. It differs from grey box modeling, which uses some physical understanding of the system to choose a model structure before identifying parameters [8]. By using a black box model, it is possible to focus strictly on the relationship between the input and output of the system. Thus, part of the investigation was to determine a suitable input signal, one that excites the correct parameters. It was critical to have wide-ranging frequency content (from 1 to 2000 Hz) in order to determine which frequencies had the biggest impact on the behavior of the amplifier. Although only three stages are examined, a successful model framework could be used for any linear stage. For the analysis, the input and output data for an amplifier were collected, and model parameters were identified that quantify the input-output relationship. The stage behavior was recorded using an Aerotech Inc. program called A3200 Motion Composer (5.04.003), and the input and output currents of the amplifiers in each individual stage were recorded. The data used to create the models were frequency domain data, and the models were validated using time domain step data.

DATA PROCESSING
The current data were input into MATLAB's System Identification tool, and the quality of each model was compared. A variety of models were tested, including transfer function, state space, polynomial, spectral, and correlation models. A good model was one that


had a high fit percent for every recorded step size in the time domain while also not being unreasonably complex.

RESULTS
During the initial stage of the project, it became clear that transfer function models were the best way to describe the data. It was possible to get a fit percent upwards of 97% with a third order transfer function, and the models did not improve significantly as the order of the model increased. State space models were slightly (~1%) better but required a minimum order of 10. It was determined that the best transfer function model had three zeros and two poles. After determining the best model using MATLAB system identification, the data were next run through a unit gain model and a unit gain model with a time delay. The fit percents of these two models, especially the time delay model, were significantly better than the fit of the transfer function model.

         Transfer Function Model | Unit Model with Time Delay
Step 1   97.8%                   | 99.2%
Step 2   97.7%                   | 98.9%
Step 3   95.8%                   | 94.9%
Step 4    8.0%                   | 22.8%

Table 1: Fit percentages of time domain current data from the ABL1500 stage
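The fit percentages reported here follow the normalized-RMSE "fit" metric used by MATLAB's System Identification Toolbox, which can be sketched as below. The step-response data in the example are synthetic:

```python
import numpy as np

def fit_percent(y, y_hat):
    """Normalized-RMSE 'fit' score (as in MATLAB System Identification):
    100% means a perfect match to the validation data."""
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat)
                    / np.linalg.norm(y - np.mean(y)))

# Toy validation: an ideal step response vs. a model output carrying a
# small synthetic error
t = np.linspace(0.0, 1.0, 200)
y = np.where(t > 0.1, 1.0, 0.0)        # recorded step data (synthetic)
y_hat = y + 0.01 * np.sin(40 * t)      # model output with small error
score = fit_percent(y, y_hat)
```

Because the error norm is divided by the deviation of the data from its mean, a noisy record like the fourth step can score poorly even when the model is reasonable, which is consistent with the filtering observation in the discussion.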


Table 1 shows the specific fit percentages of the two best models. The transfer function model fit the frequency domain data with a fit percentage of 91.2%. The other stage that was tested, the ALS130H, had extremely similar results.

DISCUSSION
The amplifier in the Aerotech Inc. stages is closed loop, so it is not a surprise that the unit gain models resulted in the best time-domain fit percentages. The poor fit percentage of the fourth step can be explained by the amount of noise in the data; filtering the data resulted in fit percentages similar to the other steps. The project revealed that the amplifier is well described by a simple unit gain model with a time delay. During the system identification process, a MATLAB script was written that allows the user to select input and output datasets and the order of the transfer function model. The program would print the fit

percentages and a graph displaying the validation data and the model output. It is possible to evaluate other parts of the stages using this script. One of the most revealing models would be one that effectively predicted current input versus position output. Such a model would predict the behavior of the system completely and would make systematic design and testing significantly easier.

REFERENCES
[1] S. Devasia, E. Eleftheriou, and S. R. Moheimani, "A survey of control issues in nanopositioning," IEEE Transactions on Control Systems Technology, vol. 15, no. 5, pp. 802-823, 2007.
[2] K. K. Tan, T. H. Lee, and S. Huang, Precision motion control: design and implementation. Springer Science & Business Media, 2007.
[3] L. R. Harriott, "Limits of lithography," Proceedings of the IEEE, vol. 89, no. 3, pp. 366-374, 2001.
[4] Q. Zou, K. Leang, E. Sadoun, M. Reed, and S. Devasia, "Control issues in high-speed AFM for biological applications: collagen imaging example," Asian Journal of Control, vol. 6, no. 2, pp. 1159-1167, 2005.
[5] R. M. Schmidt, G. Schitter, and A. Rankers, The Design of High Performance Mechatronics: High-Tech Functionality by Multidisciplinary System Integration. IOS Press, 2014.
[6] F. Villegas, R. Hecker, M. Peña, D. Vicente, and G. Flores, "Modeling of a linear motor feed drive including pre-rolling friction and aperiodic cogging and ripple," International Journal of Advanced Manufacturing Technology 73 (2014).
[7] G. E. P. Box et al., Time series analysis: forecasting and control. John Wiley & Sons, 2015.
[8] M. E. Khan and F. Khan, "A comparative study of white box, black box and grey box testing techniques," Int. J. Adv. Comput. Sci. Appl 3, no. 6 (2012).

ACKNOWLEDGEMENTS
Special thanks to the University of Pittsburgh for such a wonderful summer research opportunity. Additionally, thanks go to Dr. Jeffery Vipperman and Tim Ryan for assistance throughout the project.
Funding was provided by the Swanson School of Engineering and the Office of the Provost.


CREATING AN OSTEOCHONDRAL BIOREACTOR FOR THE SCREENING OF TREATMENTS FOR OSTEOARTHRITIS
Derek A. Nichols
Swanson School of Engineering, Department of Mechanical Engineering and Materials Science
University of Pittsburgh, PA, USA
Email: dan44@pitt.edu, Web: http://www.pitt.edu/~dan44/

INTRODUCTION
A bioreactor is an apparatus in which tissues or cells are cultured, and it can be used to monitor the response to candidate drugs. In microfluidic systems, an exit for air bubbles is necessary, as they tend to build up along the flow path; therefore, the flow path design must allow for the removal of bubbles without obstructing the transport of drugs and nutrients to the cells/tissues [1]. An example bioreactor and its negative (i.e., the flow path) are shown in Figures 1 and 2 below.

Velocities through the central chamber were measured in CFX Post. The design was altered by changing the diameter and height of the channel, and the central velocities were plotted against these dimensions to determine any relationships between design features and central velocity. Each model was assessed based on the velocity of the fluid through the middle of the central chamber, as this is a fair representation of drug exposure.

RESULTS
The first feature to be altered was the height of the step function channel. This was increased from 0.50 mm to 2.50 mm in 0.25 mm increments. The velocity through the central chamber was measured and plotted against the step height, as shown in Figure 3.

Figure 1: The physical bioreactor

Figure 2: Flow path of an example bioreactor

Cells are hosted in the central chamber within a scaffold with low permeability, resulting in a relatively low amount of drug exposure, since most of the fluid will travel through the surrounding channel [1]. The goal of this project was to develop a bioreactor design that would maximize drug exposure, which can be achieved by maximizing the velocity of the fluid through the central chamber.

METHODS
Models of the flow path were created using SolidWorks, a computer-aided design software, and tested using ANSYS Fluid Flow (CFX), a finite element analysis software. A volume flow rate of 1 mL/day was placed at the inlet, and the outlet was open to the environment. The central chamber was treated as filled with GelMA, a hydrogel with a permeability of 1×10^(-16) m² and a porosity of 0.8 [2].
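An order-of-magnitude Darcy-law estimate shows why so little flow passes through the low-permeability scaffold. The pressure gradient and medium viscosity below are assumed values, not taken from the simulations:

```python
# Order-of-magnitude Darcy estimate of flow through the GelMA scaffold.
# Permeability and porosity are from the abstract; the pressure gradient
# and medium viscosity are assumed values for illustration.
k = 1e-16        # GelMA permeability, m^2
phi = 0.8        # GelMA porosity
mu = 7e-4        # viscosity of culture medium at 37 C, Pa*s (assumed)
dp_dx = 100.0    # assumed pressure gradient across the chamber, Pa/m

v_superficial = k / mu * dp_dx        # Darcy (superficial) velocity, m/s
v_interstitial = v_superficial / phi  # pore-scale velocity
```

Even with this generous assumed gradient, the scaffold velocity is on the order of 10^-11 m/s, so nearly all of the flow bypasses the chamber through the surrounding channel unless the geometry is tuned as described below.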

Figure 3: Positive relationship between central velocity and step height

This is a linear relationship with a coefficient of determination (R²) of 0.9998, meaning the relationship can be expressed with the equation below, where V is the central velocity in meters per second and H is the height of the step function channel in meters.

V = 4.8817×10^-10 * H + 2.1137×10^-13    (1)
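The fit above is straightforward to reproduce. The sketch below generates illustrative points from the reported equation (not the original CFD data) and recovers the slope, intercept, and R² by least squares.

```python
import numpy as np

# Illustrative step heights (m); the study swept 0.50-2.50 mm in 0.25 mm steps.
H = np.arange(0.50e-3, 2.51e-3, 0.25e-3)

# Synthetic velocities generated from the reported fit (m/s); real CFD output
# would scatter slightly around this line.
V = 4.8817e-10 * H + 2.1137e-13

# Least-squares linear fit and coefficient of determination (R^2).
slope, intercept = np.polyfit(H, V, 1)
V_fit = slope * H + intercept
ss_res = np.sum((V - V_fit) ** 2)
ss_tot = np.sum((V - np.mean(V)) ** 2)
r_squared = 1 - ss_res / ss_tot
```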


The next feature to be altered was the diameter of the channel and pores. The pores can only be as large as the channel, and from previous simulations, it is seen that the flow is maximized when the pores are the same size as the channel; therefore, the channel and pores will always remain equal in size and will increase/decrease as one. Dimensions ranged from 0.35 mm to 0.65 mm and increased in increments of 0.05 mm. Velocity versus channel diameter can be seen in Figure 4.

Figure 4: Negative non-linear relationship between central velocity and channel/pore diameter

The relationship is clearly nonlinear; however, there is a consistent trend in the data. Taking the log (base 10) of both variables, as shown in Figure 5, reveals a nearly linear trend with a coefficient of determination of 0.9996.
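The linearization works because a power law V = a·D^b becomes a straight line in log-log space: log10 V = log10 a + b·log10 D. A minimal sketch, using illustrative points generated from the reported fit rather than the original simulation data:

```python
import numpy as np

# Illustrative diameters (m); the study swept 0.35-0.65 mm in 0.05 mm steps.
D = np.arange(0.35e-3, 0.651e-3, 0.05e-3)

# Synthetic velocities from the reported power law V = 7.328e-24 * D^-3.4645 (m/s).
V = 7.328e-24 * D ** -3.4645

# A power law is linear in log-log space, so a straight-line fit to the
# logged variables recovers the exponent b and the prefactor a.
b, log_a = np.polyfit(np.log10(D), np.log10(V), 1)
a = 10 ** log_a
```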

Figure 5: Negative linearized relationship between central velocity and channel/pore diameter

This linearized data is much easier to visualize and to represent with an equation. The relationship is given in Equation 2 below, where V is the central velocity in the central chamber in meters per second and D is the diameter of the channels and pores in meters.

V = 7.328×10^-24 * D^-3.4645    (2)

DISCUSSION The relationships expressed in Equations 1 and 2 show that the velocity of the fluid through the central chamber can be controlled simply by altering the geometry. With these relationships, it is apparent that the velocity of the fluid, and therefore the total drug exposure, can be maximized by maximizing the step height and minimizing the channel diameter. These dimensions are limited by the resolution of the 3D printer and the overall design of the model. The smallest void that can be printed with the 3D Systems Vyper (Rock Hill, SC) is 0.60 mm, meaning this is the minimum size that can be used for the channels. The step height can be extended only so far before it runs into other portions of the model; therefore, the maximum step height is 1.75 mm. This increase in central velocity is the result of an increase in the hydraulic resistance of the bioreactor. Hydraulic resistance is the resistance a volume of fluid experiences as it moves through the model. Hydraulic resistance and volume flow rate determine the pressure difference across the model, as expressed by Equation 3.

Δp = Q * RT    (3)

Where Δp is the pressure upstream minus the pressure downstream, Q is the volume flow rate, and RT is the hydraulic resistance. As the hydraulic resistance increases, the pressure difference will also increase, and an increased pressure is able to more effectively force fluid through the central chamber. For future work, each portion of the bioreactor will be studied in order to understand how each segment contributes to the flux through the central chamber. An array of bioreactors is also currently being studied in order to see how the pressure changes as fluid runs through multiple bioreactors. REFERENCES 1. Lozito et al. Stem Cell Research & Therapy 2013, 4(Suppl 1):S6. 2. Iannetti et al. PLOS ONE, submitted. ACKNOWLEDGEMENTS Simulations were run in the lab of Paolo Zunino, PhD. Funding was kindly provided by the Swanson School of Engineering and the Office of the Provost. I would also like to thank Dr. Riccardo Gottardi for his help and supervision.


Anti-tumor (M1) Macrophages Secrete Cytokines that Prime Breast Cancer Cells for Apoptosis

Maya McKeown1, Jennifer Guerriero PhD2, Anthony Letai MD PhD2
1 University of Pittsburgh Swanson School of Engineering, Pittsburgh, PA, 15213
2 Dana-Farber Cancer Institute, Medical Oncology, Boston, MA, 02215

Email: mam475@pitt.edu, Web: http://letailab.dana-farber.org/ INTRODUCTION Acquired resistance to chemotherapy is a persistent challenge in oncology. The goal of cancer immunotherapy is to break away from cytotoxic treatments and focus on harnessing the power of the immune system to eliminate tumors. In addition to manipulation of the adaptive immune system (as in T-cell checkpoint blockade inhibition), there is increasing appreciation of the innate immune system's role in facilitating an effective anti-tumor response. Tumor associated macrophages (TAMs) are the most abundant immune cell population in breast tumors and can represent up to 50% of breast tumor mass [1-2]. They release soluble factors that modulate many aspects of tumor progression, invasion, metastasis, and angiogenesis. They are traditionally classified into two phenotypes: anti-tumor (M1) or pro-tumor (M2) [3]. Clinically, a high tumor density of TAMs has been associated with chemoresistance and a worse clinical outcome [4]. One mechanism that prolongs cancer cell survival is the prevention of apoptosis, or programmed cell death. As shown in Figure 1, there are two main pathways of apoptosis. The extrinsic pathway is triggered by death ligands from outside a cell and may proceed either by a caspase cascade alone or by causing mitochondrial outer membrane permeabilization (MOMP) and release of cytochrome c, signaling a caspase cascade. The intrinsic pathway, triggered in response to intracellular stress such as DNA damage, activates the pro-apoptotic BCL-2 family proteins of the mitochondria, causing MOMP and likewise leading to apoptosis.

Figure 1. The pathways of apoptosis [5].

BH3 profiling is a tool unique to the Letai Lab that measures MOMP as an indication of how close cells are to the threshold of mitochondrial apoptosis, categorizing cells as either “primed” or “unprimed” for apoptosis. The priming of cancer cells is a predictor of clinical response to conventional chemotherapy [6]. BH3 profiling shows that macrophage phenotype can enhance apoptotic priming of tumor cells, also inducing chemosensitivity. We hypothesize that conditioned media from M1 stimulated macrophages contain factors that initially trigger the extrinsic apoptosis pathway via death ligands, but proceed through the mitochondrial pathway. Such utilization of the mitochondrial pathway could be measured as an increase in MOMP and therefore detected by BH3 profiling. Evaluation of cell viability after inhibition of caspase-8, the link between extrinsic apoptosis and the mitochondrial pathway, will specify whether this route is indeed the preferred apoptosis pathway in sensitive breast cancer cell lines. METHODS To study how M1-secreted cytokines may alter a tumor cell’s “priming,” we stimulated a murine macrophage cell line with Interferon gamma (IFN-γ) and lipopolysaccharide (LPS) to induce an M1 anti-tumor phenotype. We then collected media cultured by these M1-stimulated cells. Previously, BT20 breast tumor cells were identified as uniquely sensitive to the conditioned media treatment, so we used BT20 cells as a positive control. Cells were treated with control (LPS + IFNg), “CM M1” (conditioned media from M1 macrophages stimulated by LPS + IFNg), or CM M1 with caspase-8 Inhibitor (R&D Systems Inc., Minneapolis, MN) and BH3 profiled to assess the priming of cells in response to the conditioned media treatment with and without caspase-8 inhibition. CellTiter-Glo luminescent cell viability assay and Caspase-Glo assays (Promega, Madison,


WI) were also used to compare cell viability and caspase-3/7 and caspase-8 activity in response to CM treatment and caspase inhibition. These assays were run on cell lines previously identified as responsive or non-responsive to CM M1 treatment. DATA PROCESSING BH3 profiling measures mitochondrial outer membrane permeabilization (MOMP). As detailed in Ryan and Letai 2013, single cell suspensions are permeabilized with detergents and reducing agents to expose the mitochondrial matrix to treatment with pro-apoptotic BH3-only peptides [7]. Staining with JC-1 dye allows for automated fluorescence readings of MOMP by the Safire2 plate reader (Tecan, Medford, MA). The detection of MOMP is reported as the percentage of mitochondrial priming and is normalized to positive control DMSO and negative control Alamethicin peptides. Relative luminescence from the Glo assays was also read by the Safire2 plate reader. For the CellTiter-Glo assays, we compared data from the 20th minute of the luminescence read. For the Caspase-Glo studies, we calculated the area under the curve of the luminescent measurement at individual time points. All results were normalized to the control treatment group (LPS + IFNg). RESULTS BH3 profiling of BT20 cells shows that CM M1 treatment increases depolarization, i.e., "primes" tumor cells; however, upon addition of a caspase-8 inhibitor, apoptotic priming was rescued to levels similar to control, as shown in Figure 2. Since BH3 profiling specifically measures mitochondrial priming, this suggests the priming induced by conditioned media depends on caspase-8 activation.
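The normalization and area-under-the-curve steps described above amount to simple arithmetic; a minimal sketch with hypothetical plate-reader values (the numbers and the exact normalization convention are assumptions, not the lab's data):

```python
# Hypothetical JC-1 plate-reader values; readings are normalized between
# the DMSO and alamethicin control wells as described in the source.
dmso = 9000.0         # control bounding one end of the depolarization scale
alamethicin = 1000.0  # control bounding the other end
sample = 3400.0       # a treated well

# Map the sample onto a 0-100% priming scale between the two controls
# (one common convention; the lab's exact formula may differ).
percent_priming = 100.0 * (dmso - sample) / (dmso - alamethicin)

# Area under a hypothetical Caspase-Glo luminescence time course,
# trapezoidal rule over readings taken every 10 minutes.
t = [0.0, 10.0, 20.0, 30.0, 40.0]          # minutes
lum = [100.0, 240.0, 390.0, 410.0, 400.0]  # relative luminescence units
auc = sum((t[i + 1] - t[i]) * (lum[i] + lum[i + 1]) / 2 for i in range(len(t) - 1))
```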

Figure 2: BH3 profile of BT20 cells showing depolarization in response to the pro-apoptotic BH3-only peptides Bim, HRK, and Puma.

Results from the CellTiter-Glo and Caspase-Glo assays in Figure 3 show that in all cell lines, caspase-8 activity, as well as downstream caspase-3 activity, was strongly inhibited by the addition of the caspase-8 inhibitor to the CM M1 treatment.

Figure 3: Cell viability, caspase-3, and caspase-8 activity after 48-hour incubation.

More importantly, cell lines known to be responsive to CM M1 treatment had increased cell viability with the addition of the inhibitor. Not only are cells less primed for mitochondrial apoptosis, as evident in the BH3 profile, but we also show that lack of functional caspase-8 leads to a large reduction in cell death, or overall apoptosis. DISCUSSION The results support our hypothesis that a death ligand in the M1 conditioned medium activates the extrinsic apoptotic pathway, which activates caspase-8, but ultimately implicates some degree of mitochondrial apoptosis. Upon caspase-8 inhibition, the reduction in priming and increase in cell viability are most notable in positively responding cell lines. Understanding the differences between responding and non-responding cell lines, as well as the cellular signaling, will help discriminate which patients' tumors would be most susceptible to TAM reprogramming to an M1 state. REFERENCES 1. Kelly et al. Br J Cancer 57(2), 174-177, 1988. 2. Leek et al. J Leukoc Biol 56(4), 423-435, 1994. 3. Mantovani et al. Trends Immunol 23(11), 549-555, 2002. 4. Panni et al. Immunotherapy 5(10), 1075-1087, 2013. 5. Favaloro et al. Aging 4(5), 330-349, 2012. 6. Ni Chonghaile et al. Science 334(6059), 1129-1133, 2011. 7. Ryan and Letai. Methods 61(2), 156-164, 2013. ACKNOWLEDGEMENTS Thank you to the Dana-Farber Cancer Institute for providing the resources and mentorship for this project. This summer internship was funded by the Swanson School of Engineering and the Office of the Provost.


CHANGES IN PULMONARY ARTERIAL HEMODYNAMICS PRIOR TO LVAD IMPLANT AND THE ASSOCIATION WITH RV FAILURE Courtney Q. Vu, Timothy N. Bachman, Luigi Lagazi, Robert L. Kormos and Marc A. Simon Vascular Medicine Institute, Department of Bioengineering University of Pittsburgh, PA, USA Email: cqv1@pitt.edu, Web: http://www.vmi.pitt.edu/faculty/simon.html INTRODUCTION The left ventricle of the heart is responsible for pumping oxygenated blood to the rest of the body through the arteries, and the right ventricle (RV) is responsible for pumping deoxygenated blood to the lungs via the pulmonary artery. When the left ventricle is unable to perform this function, a patient is categorized as having left ventricular failure (LVF). If severe, LVF requires implantation of a left ventricular assist device (LVAD). Post LVAD implantation, patients are at risk for subsequent right ventricular failure (RVF), which occurs when the RV fails to adequately pump blood to the lungs. RVF occurs in 9 to 44 percent of patients [1]. In a study by Kormos et al., about 20 percent of patients who received a HeartMate II LVAD experienced some form of RVF [2]. RVF is usually first treated with inotropes to increase myocardial contractility [3]. However, in severe cases of RVF, patients require implantation of a right ventricular assist device (RVAD). Currently, the RVADs on the market are for short-term use only, and implantation of an RVAD after an LVAD increases the risks of morbidity and mortality [1]. Hemodynamic analysis is performed on data collected from a Swan-Ganz catheter, which is used to measure pulmonary pressures and cardiac output [4]. The catheter is commonly inserted through the right internal jugular vein. The gold standard for comprehensive analysis of hemodynamic measurements is high-fidelity micromanometry.
However, it is rarely used in the clinical setting due to its cost and fragility compared to the Swan-Ganz catheter. A recent study showed that measurements such as heart rate, systolic blood pressure, diastolic blood pressure, dP/dtmax, dP/dtmin, and cardiac index taken from a Swan-Ganz catheter differ negligibly from those taken with high-fidelity micromanometry in the right ventricle [5]. The purpose of the present study was to quantify changes in pulmonary arterial (PA) pulse pressure and compliance over time and to compare patients with RVF to those without. METHODS With permission from the University of Pittsburgh Institutional Review Board, screen captures of preoperative PA and RV waveforms from the right heart catheterization (RHC) lab and screen captures of PA waveforms at different points during the surgical procedure to implant an LVAD were collected. After obtaining the waveforms, they were re-digitized using custom Matlab software that creates piecewise splines to generate a best-fit curve of the waveform data (see Figure 1). The best-fit curve was also generated using waypoints such as local pressure minima and local pressure maxima [5]. The data were then split into individual beats, which were selected to create an average representative beat. Once the system created the average representative beat, an analysis was performed to collect heart rate, PA systolic blood pressure, PA diastolic blood pressure, mean PA pressure, and PA pulse pressure (defined as the difference between systolic and diastolic blood pressure). Compliance was calculated as stroke volume/pulse pressure, using preoperative stroke volume measurements taken from the RHC. Analysis was primarily focused on preoperative and perioperative timepoints before the implant, even though other timepoints were collected. RVF was defined in this study as inotrope usage for 14 days or longer, mechanical support from an RVAD, or hospitalization for RVF. Statistical analysis used a paired t-test to compare between timepoints and a Mann-Whitney U test to compare patients stratified by outcome.
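The derived quantities above are simple ratios; a minimal sketch with hypothetical values (not study data):

```python
# Hypothetical PA pressures (mmHg) and stroke volume (mL) for one patient;
# values are illustrative, not data from the study.
pa_systolic = 54.0
pa_diastolic = 28.0
stroke_volume = 55.0  # from the right heart catheterization report

# Pulse pressure = systolic minus diastolic (mmHg).
pulse_pressure = pa_systolic - pa_diastolic

# Compliance = stroke volume / pulse pressure (mL/mmHg).
compliance = stroke_volume / pulse_pressure
```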


RESULTS Out of 16 patients, 13 had analyzable data. For two of the patients, we used supplemental reports for stroke volume. Three of the patients experienced RVF, but only two had enough data to be used for compliance. Time between RHC and implant was 4.93 ± 3.26 days (range 2 to 14 days). PA pulse pressure showed a statistically significant decrease from the RHC (mean 26.41 ± 9.36) to the OR (mean 15.58 ± 6.58) pre-implant (p = 0.002), with only one patient having a slight increase from the RHC to the OR (see Figure 1). PA systolic blood pressure also showed a statistically significant decrease from the RHC (mean 53.86 ± 12.00) to the OR (mean 41.91 ± 12.25) (p = 0.003). There were no other significant differences between the timepoints with all subjects combined. When stratified by outcome, there were no significant differences between groups.

Figure 1: Top: A re-digitized pulmonary arterial waveform. Bottom: PA pulse pressure differences between timepoints with mean and standard deviation.

DISCUSSION In this prospective study of hemodynamics pre- and perioperatively during VAD implantation, there was a significant decrease in PA pulse pressure between the RHC and OR measurements. This was due to a decrease in PA systolic blood pressure. There were no significant hemodynamic differences between patients with RVF and those without. Limitations of the study include sample size and average representative beat generation. Future work will utilize the ECG for more accurate average representative beat generation, as well as analysis of more timepoints. Further, this study did not account for long-term RVF, due to the ongoing prospective nature of the trial. Data collection is ongoing. REFERENCES 1. Patlolla B, Beygui R, Haddad F. Right-ventricular failure following left ventricle assist device implantation. Current Opinion in Cardiology. 2013; 28: 223-233. 2. Kormos RL, Teuteberg JJ, Pagani FD, Russel SD, John R, Miller LW, Massey T, Milano CA, Moazami N, Sundareswaran KS, Farrar DJ, HeartMate II Clinical Investigators. Right ventricular failure in patients with the HeartMate II continuous-flow left ventricular assist device: incidence, risk factors, and effect on outcomes. The Journal of Thoracic and Cardiovascular Surgery. 2010; 139: 1316-1324. 3. Hauptman PJ, Mikolajczak P, George A, Mohr CJ, Hoover R, Swindle J, Schnitzler MA. Chronic inotropic therapy in end-stage heart failure. American Heart Journal. 2006; 152: 1096.e1-1096.e8. 4. Oudiz R, Langleben D. Cardiac catheterization in pulmonary arterial hypertension: an updated guide to proper use. Advances in Pulmonary Hypertension. 2005; 4. 5. Bachman TN, Bursic JJ, Simon MA, Champion HC. A novel acquisition technique to utilize Swan-Ganz catheter data as a surrogate for high-fidelity micromanometry within the right ventricle and pulmonary circuit. Cardiovascular Engineering. ACKNOWLEDGEMENTS Support was provided by the Swanson School of Engineering at the University of Pittsburgh.


DEVELOPMENT OF TWO-PHOTON CALCIUM IMAGING METHODS FOR CIRCUIT MAPPING IN MOUSE MOTOR CORTEX Dillon Thomas, Brett Saltrick, Sandra Okoro, and Bryan Hooks Department of Neurobiology, Department of Bioengineering University of Pittsburgh, PA, USA Email: dst19@pitt.edu INTRODUCTION The motor cortex of mammals is important for initiating and controlling movement, and damage to it during stroke or injury results in movement-related symptoms for patients. The cortex is a layered structure containing thousands of cells, and decades of neuroscience research have categorized these into different cell types that differ in structure and function. These cell types are connected in circuits in specific ways and thus may be specialized for performing specific kinds of computations in the circuitry of the motor cortex. Little is known about the specific connectivity of various cell types because, until recently, the means to identify different cell types reliably was lacking. We thus seek to develop a high-speed means to assess the connections of these specific cell types in the motor cortex. Ideally this will result in a method that permits faster mapping of brain circuits by using imaging to record responses from multiple cells per brain slice, rather than recording electrodes, which do not allow as many cells to be studied as quickly. METHODS The study used transgenic mice (PVCre mice, Jackson Labs) [1,2] to target expression to a subset of interneurons. These mice were crossed to Ai96 GCaMP6s mice (Ai96 RCL-GCaMP6s mice, Jackson Labs) [3-5] so that fast-spiking inhibitory neurons in cortex could be labeled with a calcium indicator to monitor neural activity. Brains were sliced into 300 µm thick slices and placed in oxygenated artificial cerebrospinal fluid (ACSF) to keep cells alive for imaging. A Leica TCS SP5 two-photon microscope was used to record calcium responses in cortex in vitro at 20x magnification.
The M1 region of the cortex was observed while being stimulated in separate trials with an LED and an electrical stimulus isolator6-8.

The microscope was tuned for two-photon excitation at a wavelength of 940 nm. To enhance excitatory responses, ACSF with a higher KCl concentration was also used to depolarize the slice. Images were analyzed in Matlab 2015b using FluoroSNNAP, software that deconvolves fluorescence signals and charts calcium events indicative of action potentials [9]. Before analysis, image stacks were stabilized with the software to account for motion and drift.
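Event detection of this kind typically starts from a ΔF/F trace computed against a slowly varying baseline. The sketch below is a generic illustration of that step; the rolling-percentile baseline, window size, and example trace are assumptions, not FluoroSNNAP's actual algorithm:

```python
import numpy as np

def delta_f_over_f(trace, window=20, percentile=10):
    """ΔF/F against a rolling lower-percentile baseline F0.

    A sliding window around each sample estimates the baseline as a low
    percentile of nearby values, which tracks slow drift while ignoring
    brief calcium transients.
    """
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    dff = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        f0 = np.percentile(trace[lo:hi], percentile)
        dff[i] = (trace[i] - f0) / f0
    return dff

# Hypothetical fluorescence trace: flat baseline of 100 with one transient.
trace = [100.0] * 30 + [160.0, 150.0, 130.0] + [100.0] * 30
dff = delta_f_over_f(trace)
```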

DATA PROCESSING Images from the Leica TCS SP5 were imported into ImageJ and organized into tiff stacks. These stacks were then run through FluoroSNNAP for analysis. Waveforms were produced based on the change in fluorescence of regions of interest within each stack. Control stacks with no stimulation were also analyzed for comparison against each method of stimulation. RESULTS Analysis of multiple cells through two-photon microscopy yielded results with a large amount of noise, as shown in Figure 1. For this region of interest (ROI), centered on the cell body of a neuron, the calcium fluorescence baseline is noisy and deviates over time. Other ROIs imaged in the tiff stack exhibited similar behavior in their calcium fluorescence response for all methods of stimulation. Since the spikes cannot be compared to a stable baseline, the start and end of action potentials cannot be deduced.

Figure 1: Calcium event detection generated from FluoroSNNAP plotting Calcium Fluorescence vs Time. The unusual shape of the action potential illustrated by calcium fluorescence as well as a shifting baseline fluorescence are common elements of noise encountered during experimentation.

DISCUSSION The chaotic nature of the calcium fluorescence indicates a major error in either our preparation of cells or our analysis of their response to stimuli. This noise may have been caused by electrical fluctuations in the neural network, residual light from optical stimulation, or use of an insufficient indicator of neural activity. We believe that a different indicator of neural activity, such as Oregon Green BAPTA-1, may allow enough noise to be eliminated to draw informative conclusions. Additionally, the method with which we are making slices of cortex could be proving fatal for the cells in the slice and may need to be done in a different fashion. Examining a group of cells in the motor cortex with two-photon microscopy did not produce interpretable action potential trains. Future experiments may be improved by using less disruptive indicators of neural activity that produce comprehensible results for two-photon microscopy. REFERENCES 1. Hippenmeyer S, Vrieseling E, Sigrist M, Portmann T, Laengle C, Ladle DR, et al. A developmental switch in the response of DRG neurons to ETS transcription factor signaling. PLoS Biol. 2005;3(5):e159. 2. https://www.jax.org/strain/017320

3. Madisen L, et al. Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance. Neuron 85, 942–958 (2015). 4. Chen TW, Wardill TJ, Sun Y, Pulver SR, Renninger SL, Baohan A, et al. (2013). Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300. 10.1038/nature12354. 5. https://www.jax.org/strain/024106 6. Boyden ES, Zhang F, Bamberg E, Nagel G, Deisseroth K. (2005) Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience 8, 1263-1268. 7. Petreanu L, Huber D, Sobczyk A, Svoboda K. (2007) Channelrhodopsin-2-assisted circuit mapping of long-range callosal projections. Nature Neuroscience 10, 663-668. 8. Hooks BM, et al. (2013) Organization of cortical and thalamic input to pyramidal neurons in mouse motor cortex. J Neurosci 33, 748-760. 9. http://www.seas.upenn.edu/~molneuro/FluoroSNNAP/fluorosnnap.pdf ACKNOWLEDGEMENTS Partial funding was provided by Dr. Bryan M. Hooks and the Swanson School of Engineering Department of Bioengineering.


WIRELESS MUSCLE STIMULATION DATA TRANSMISSION FOR PERIPHERAL NERVE PROSTHESIS DEVELOPMENT Adam L. Smoulder, Sudip Nag, Shih-Cheng Yen Singapore Institute for Neurotechnology National University of Singapore, Singapore Email: als299@pitt.edu, Web: http://www.sinapseinstitute.org/ INTRODUCTION Peripheral nerve injuries (PNIs) can lead to a loss of sensation and function in the corresponding areas of the body, which can permanently handicap an individual's capabilities in society [1]. Because surgical and regenerative nerve repair is currently hindered by both the nonselective nature of nerve regeneration [2] and the atrophy of denervated muscle [3], a viable near-term solution is a prosthesis that wirelessly connects the proximal end of the injured nerve to the muscles it controls and relays signals between the two. Laboratories at the Singapore Institute for Neurotechnology (SINAPSE) have mapped out a viable implementation of a prosthesis of this nature (Figure 1), focusing on brachial nerve injuries that limit hand mobility and function [4]. Nerve electrodes interface with the proximal end of the severed brachial nerve to receive nerve signals (Step 1). A microcontroller then amplifies and decodes the signal content (Step 2). Data is wirelessly transmitted to another microcontroller near the distal muscles (Step 3), and finally hand muscles are stimulated as dictated by the signals received (Step 4).

Animal experiments being performed for muscle stimulation have been limited by the need for a USB cable connecting a computer to the microcontroller on the outside of the animal to transmit stimulation data. This greatly limits the animals' movements during experimentation, which in turn limits the nature of the experiments being performed. The objective of this research was to interface the microcontroller currently used on the outside of the animal with a WiFi module to permit wireless transmission of muscle stimulation data, allowing greater flexibility and authenticity of animal limb movement in future experiments toward peripheral nerve prosthesis development. MATERIALS AND METHODS To balance power and cost efficiency, the NodeMCU Lua ESP8266 (ESP) was selected as the WiFi module for data transmission. The module was configured to accept AT-Commands (AT-Firmware v0.2). The Texas Instruments MSP430F2274 (MSP) is the microcontroller currently used on the animal side. Because the MSP and ESP require 3.3 V power sources, a TPS78833 LDO regulator was used to step down the 5 V source. TCP/IP protocol was used for data transmission. The MSP was programmed using IAR Embedded Workbench (C++). The program loaded onto the device caused it, upon reset, to send strings of AT-Commands readable to the ESP to configure the ESP as a TCP Server, then wait for a serial receive (Rx) interrupt triggered by the reception of user-decided data (in case further instructions are desired).
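The server-configuration sequence the MSP sends can be sketched with the common ESP8266 AT commands for opening a TCP server. The exact command set depends on the firmware build, so treat these as assumptions to check against the flashed AT firmware:

```python
def esp_tcp_server_commands(port):
    """AT-command sequence to configure an ESP8266 as a TCP server.

    Each command is terminated with CRLF, as the AT firmware expects.
    The station-mode and multiplexing settings here are assumptions;
    verify them against the firmware build actually flashed on the ESP.
    """
    return [
        "AT+CWMODE=1\r\n",             # station mode (join an existing network)
        "AT+CIPMUX=1\r\n",             # allow multiple connections (required for server mode)
        f"AT+CIPSERVER=1,{port}\r\n",  # start the TCP server on the given port
    ]

# Hypothetical port number for illustration.
commands = esp_tcp_server_commands(333)
```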

Figure 1. Conceptual design of peripheral nerve prosthesis device. This research is focused on the Muscle Stimulation, highlighted with the green box.

The final circuit schematic can be seen in Figure 2 below. The ESP and MSP’s UART serial transmit (Tx) and Rx pins were connected. To run the setup, the MSP was reset. This caused the MSP’s program to begin, configuring the ESP as a TCP Server with


a selected port number and default IP address. The computer was configured as a TCP Client and was connected to the ESP's TCP Server using the selected port number. Data transmission was performed, and its rate recorded, by sending 500-byte messages from the TCP Client at a frequency of 4 Hz (16 kbps) for 15 seconds and then recording how long it took the TCP Server to receive all of the data. Transmission was considered complete once oscilloscope readings connected to the serial data Rx/Tx channels showed no further data received. The ESP and MSP modules were reset between each trial.
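The measurement procedure, sending fixed-size messages at a set rate and timing how long the server takes to receive them, can be sketched with a loopback TCP pair standing in for the computer and the ESP (message count shortened from the 15-second runs; all names here are illustrative):

```python
import socket
import threading
import time

MSG_SIZE = 500  # bytes per message, as in the experiment
SEND_HZ = 4     # messages per second -> 500 * 4 * 8 = 16 kbps
N_MSGS = 8      # shortened from the 15-second runs in the study

received = bytearray()

def server(listener):
    # Accept one connection and collect bytes until the full payload arrives.
    conn, _ = listener.accept()
    while len(received) < MSG_SIZE * N_MSGS:
        data = conn.recv(4096)
        if not data:
            break
        received.extend(data)
    conn.close()

# Loopback TCP server standing in for the ESP8266 "TCP Server" role.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=server, args=(srv,))
t.start()

# Client role: send fixed-size messages at the target rate and time it.
client = socket.create_connection(("127.0.0.1", port))
start = time.monotonic()
for _ in range(N_MSGS):
    client.sendall(b"x" * MSG_SIZE)
    time.sleep(1 / SEND_HZ)
client.close()
t.join()
elapsed = time.monotonic() - start
srv.close()

# Effective receive rate in kbps, the figure of merit reported in the study.
kbps = len(received) * 8 / elapsed / 1000
```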


Figure 2. Final circuit schematic that will stand alone on the animal during muscle stimulation testing. The TPS78833 can take 5 V (displayed) or 9 V and outputs 3.3 V, which powers both the MSP and the ESP. The MSP and ESP are connected by their UART Rx/Tx lines.

RESULTS The average data transmission speed between the TCP Client and Server was 6.18 kbps (±2.22 kbps, n = 7). As Figure 3 shows, this is only 39% of the send rate; however, 100% of the data sent was received in all trials, meaning no bytes or packets were dropped. While this speed of data transmission is not particularly fast, it is sufficient for transmitting stimulation data, and it can be pushed further as needed with continued development and testing. Time limitations prevented further work from being performed, though increased message sizes and frequencies will be tested by those continuing this work.


Figure 3. Comparison of the data send rate of the computer’s 500 byte message and the data receive rate of the ESP/MSP setup.

DISCUSSION Data was successfully transmitted at a viable speed between the computer and microcontroller through the ESP8266 WiFi module. While the ESP did not transmit the data as fast as the computer sent it, the speed was fast enough for the muscle stimulation data being sent. Code and protocol for device setup have been laid out to allow future experiments to transmit muscle stimulation data wirelessly, allowing greater design flexibility in experiments toward development of a peripheral nerve prosthesis device. Future work will primarily entail connecting this setup to the device currently used for muscle stimulation, then utilizing it in experiments. In parallel, the data transmission speed will be optimized by trying transmission protocols other than TCP. REFERENCES 1. Ciaramitaro et al., J Peripher. Nerv. Syst. 15, 120-127, 2010. 2. Allodi et al., Prog. Neurobiol. 98, 16-37, 2012. 3. Sheshadri et al., 7th International IEEE/EMBS Conference on Neural Engineering, 593-596. 4. Nag et al., IEEE BioCAS Proceedings, 388-391, 2014. ACKNOWLEDGEMENTS Funding was provided by the Swanson School of Engineering Summer Internship Program.


QUANTITATIVE ANALYSIS OF THE EYE VASCULATURE Felipe Suntaxi, Ning-Jiun Jan, Andrew Voorhees, Konstantinos Verdelis, and Ian A. Sigal Ocular Biomechanics Laboratory, Department of Ophthalmology University of Pittsburgh, PA, USA Email: fms15@pitt.edu, Web: http://www.ocularbiomechanics.com INTRODUCTION The vasculature of the eye is highly complex, consisting of multiple subsystems that have evolved to provide consistent and robust perfusion. These subsystems comprise many different vascular layers, each supplying different portions of the eye and each having its own characteristic morphology and function. In addition, the opaque blood vessels are distributed such that they properly nourish all the tissues without interfering with the optical activity of the eye. This vascular arrangement is very complex; changes in its architecture can heighten the risk of impaired blood flow, which can in turn lead to alterations in visual function. Abnormal perfusion has been linked to several eye diseases, such as diabetic retinopathy [1] and open angle glaucoma [2], both of which can lead to blindness. Hence, understanding these diseases requires understanding the architecture of the eye vasculature. The majority of studies of morphological features of the eye vasculature have been qualitative [3], but there have also been a few studies that quantify the vasculature of the eye. Most of these analyses are done in 2D [4] using histological sections [5], which can limit the accuracy of certain morphological parameters. A 3D quantification would give more accurate measures of morphological features and better insight for the diagnosis and treatment of eye diseases. In studies of other organs, it has been shown that 3D microvasculature analysis is important for detecting changes in the structure and hemodynamics of vascular beds [6]. Our goal was to quantify the eye vasculature using 3D images and custom Matlab scripts.
METHODS A high-resolution (isotropic 7.96 µm pixel size) 3D image of a vascular corrosion cast of a monkey eye was obtained using micro-computed tomography (µCT). The images were manually segmented using 3D analysis software (Avizo). The segmented images were then skeletonized using image processing software (Fiji) to obtain the

centerline (one pixel diameter) of each vessel. Both the segmented volume and the corresponding skeleton were manipulated as binary matrices and analyzed using custom scripts in Matlab to calculate a set of parameters describing the vascular morphology. Branch nodes and end point nodes of the centerline skeleton were determined using an open source Matlab script [7]. Using this node and skeleton information, we then calculated our parameters. DATA PROCESSING The Matlab script returns all of the parameters from only two inputs: the binary volumes of the segmented images and the centerline skeleton. The software can also graph a 3D representation of the binary volume used. The parameters and their corresponding algorithms are listed in Table 1.

Table 1. Parameters Calculated and Algorithms
- Number of vessel segments: the number of segments of the centerline between nodes.
- Total Volume: the total number of voxels that make up the segmented volume, multiplied by the volume of each voxel in mm³.
- Branch Density: the ratio of the number of vessel segments to the total volume in mm³.
- Length: the sum of the Euclidean distances from voxel to voxel along the vessel segment of the centerline, multiplied by the voxel size in µm.
- Mean Diameter: for each point along the centerline, the shortest Euclidean distance to the vessel surface was calculated and multiplied by the voxel size in µm; the mean radius was then multiplied by two.
- Bifurcation Angle: the angle between two vessel segments branching from the same branch node, calculated locally around each branch node using a tortuosity threshold. For each segment, a vector is taken from a point of low tortuosity to the branch node, and the angle is calculated between pairs of vectors.
- Max and Min Diameter: on each vessel segment, the longest and shortest diameter, multiplied by the voxel size in µm.
- Tortuosity (Distance Metric): on each vessel segment, the ratio of the segment's centerline length to the Euclidean distance between its endpoints.
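The distance-metric tortuosity entry in Table 1 can be illustrated with a short sketch, written here in Python (the laboratory's scripts are in Matlab, and the function name is ours). Consistent with the reported values of greater than one, it returns the centerline path length divided by the straight-line distance between the endpoints.

```python
import numpy as np

def tortuosity(centerline_points):
    """Distance-metric tortuosity of one vessel segment.

    centerline_points: (N, 3) sequence of ordered voxel coordinates
    along the segment's one-pixel-wide centerline.
    Returns path length / chord length (>= 1 for any real curve).
    """
    pts = np.asarray(centerline_points, dtype=float)
    # Path length: sum of Euclidean steps between consecutive voxels
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    path_length = steps.sum()
    # Chord: straight-line distance between the two endpoint nodes
    chord = np.linalg.norm(pts[-1] - pts[0])
    return path_length / chord

# A straight segment has tortuosity 1; a right-angle bend is longer
straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
bent = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
print(tortuosity(straight))  # 1.0
print(tortuosity(bent))      # 2 / sqrt(2) ≈ 1.414
```

In practice the step lengths would be scaled by the 7.96 µm voxel size, which cancels in the ratio and is therefore omitted here.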

TESTING Accuracy of Parameters Using Vessel Model


In order to test the accuracy of the calculated parameters, we developed code that generates 3D binary representations of vessel-like structures. Using these vessel models of known dimensions, we can test the accuracy of most of the parameters. We tested our code using three different vessel models (500x500x500 pixels). The percent error was calculated for each vessel model, and the maximum error is reported for every parameter. Table 2 shows that most of the parameters performed well. Number of vessel segments, total volume, and branch density were calculated with the highest accuracy, since their calculation consists of simple element counting. Length, diameter, and bifurcation angle were also calculated with high accuracy (6.1% at most). This error can be attributed to slight inaccuracies in determining the true centerline skeleton. We are modifying the code to create a 3D vasculature model with curved vessels and varying vessel diameters within a segment, so the accuracies of max and min diameter and tortuosity are yet to be determined.

Table 2. Accuracy of parameters
- Number of vessel segments: 0%
- Total Volume: 0%
- Branch Density: 0.75%
- Length: 6.1%
- Mean Diameter: 3.4%
- Bifurcation Angle: 5.1%
- Max and Min Diameter: in process
- Tortuosity: in process

Measuring Parameters on 3D Images of Real Eyes We used our software to analyze portions of the central retinal artery, central retinal vein, choroid, and ciliary arteries. The segmented images were skeletonized, and the binary volumes were used as inputs to our software for quantification. From visual inspection, the skeletons were accurate. Table 3 displays the results for each vascular subsystem; for parameters measured per branch, the average over all branches is reported. RESULTS AND DISCUSSION The software proved adequate for quantifying the eye vasculature. This tool has the potential to identify clinically relevant vascular characteristics, such as choke points where blood flow can be compromised. The values obtained with the software are consistent with previous qualitative studies [3]. The code can still be improved to make the calculations more accurate and efficient, and we are working to improve robustness and efficiency when handling large volumes. Despite these limitations, the software provides accurate measures of clinically relevant [8, 9] 3D morphological parameters of the ocular vascular system. This 3D approach extends previous qualitative and 2D studies, giving deeper insight into eye anatomy.

Table 3. Quantification of different eye vasculature subsystems (columns: Choroid | Central Artery | Central Vein | Ciliary Arteries)
- Number of vessel segments: 14 | 74 | 15 | 77
- Total Volume (mm³): 0.0221 | 0.153 | 0.094 | 0.169
- Branch Density (segments per mm³): 630.868 | 481.762 | 158.203 | 454.298
- Length (µm): 760.18 | 804.592 | 1321.832 | 632.96
- Mean Diameter (µm): 35.984 | 47.304 | 72.704 | 38.896
- Bifurcation Angle (deg): 82.138 | 84.018 | 80.632 | 96.316
- Max Diameter (µm): 48.88 | 62.11 | 93.88 | 51.28
- Min Diameter (µm): 17.47 | 31.52 | 55.92 | 22.95
- Tortuosity: 1.137 | 1.179 | 1.224 | 1.198

REFERENCES 1. Kohner et al. Diabetes 44(6), 603-607, 1995. 2. Spraul. Vision Research 42(7), 923-932, 2002. 3. Zhang. Progress in Retinal and Eye Research 13(1), 243-270, 1994. 4. McLeod et al. Investigative Ophthalmology & Visual Science 35(11), 3799-3811, 1994. 5. Zhao et al. Eye 14, 445-449, 2000. 6. Oses et al. Arteriosclerosis, Thrombosis, and Vascular Biology 29(12), 2090-2092, 2009. 7. Kerschnitzki et al. Journal of Bone and Mineral Research 28(8), 1837-1845, 2013. 8. Stanton et al. Journal of Hypertension 13, 1724-1728, 1995. 9. Diedrich et al. BMC Bioinformatics 12(Suppl 10), S15.

ACKNOWLEDGEMENTS Funding by Swanson School of Engineering and the Office of the Provost, the Laboratory of Ocular Biomechanics, and the NIH R01 EY023966.


CATCH THE WAVE: USING PRIOR KNOWLEDGE OF ACTION POTENTIAL SHAPES TO IDENTIFY NEURONS IN CHRONIC RECORDINGS

Shruti K. Vempati1, Adam C. Snyder2,3, Matthew A. Smith1,2,4 1Department of Bioengineering, University of Pittsburgh, 2Center for the Neural Basis of Cognition, University of Pittsburgh, 3Department of Electrical and Computer Engineering, Carnegie Mellon University, 4Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA Email: skv7@pitt.edu, Web: http://smithlab.net

INTRODUCTION A key aim of neuroscience is determining computations performed by and interactions between individual neurons. This requires identifying when these individual neurons fire action potentials, through the use of electrode arrays implanted in the brain. However, extracellular voltage events recorded from these arrays in real biological systems are inherently noisy. This problem is exacerbated by the rapid increase in number of recording electrodes, often in the tens and sometimes hundreds per recording session. Spike sorting is the process by which action potentials are extracted from voltage waveforms and identified as belonging to particular neurons. Current sorting algorithms are time and labor intensive, often requiring manual oversight and correction. Additionally, these algorithms do not take the physiological processes of the neurons into account. With chronically implanted electrodes, the population of neurons recorded from remains fairly stable over consecutive days [1] and action potentials from different types of neurons have different and somewhat predictable characteristics [2]. We aim to use this prior knowledge about the statistics of real neural recordings to improve and expedite our sorting algorithm. METHODS Our supervised spike sorting algorithm, superSort, was developed in Matlab (Mathworks, Natick, MA). Voltage waveforms were initially passed to an artifact detector that detected noise based on voltage threshold crossing, high variance detection, and frequency domain analysis. We then performed principal components analysis (PCA), to approximate each waveform as a five-dimensional vector value. Waveforms with outlying values were excluded, and the PCA step was repeated. We iterated this process until no outliers remained. The

coefficients and the five components from the previous day’s sorted waveforms were used to reconstruct the waveforms and passed to the clustering step. If there were no prior components available, the components from that day were used. The five-dimensional waves were clustered into individual neuronal units using Gaussian mixture models. A randomly chosen subset of the data was used to select the model. The initial number and initial parameters of the Gaussian mixture components were chosen using the previous day’s sorted waveforms. If the previous initial parameters were not available, random initial parameters were used. The mixture model was trained using an expectation maximization algorithm and cross validation was used to determine the optimal model and number of Gaussian components. Following model selection, all data were fit to the model and cluster identities were assigned. To correct for waves that were previously thrown out due to slight time shifts, the waves were aligned and the iterative PCA and clustering steps were repeated. Lastly, any noise clusters were identified on the basis of biological plausibility and thrown out. DATA ANALYSIS We recorded extracellular voltage events with two 96-channel “Utah” arrays (Blackrock Microsystems) implanted in one adult male rhesus macaque monkey (Macaca mulatta). All procedures were approved by the Institutional Animal Care and Use Committee of the University of Pittsburgh and complied with guidelines set forth in the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals. Voltage waveforms on fifty channels recorded over ten consecutive days were sorted with the current standard sorting algorithm [3], a version of superSort without the prior knowledge implementation, and a version of superSort with the prior knowledge implementation.
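The iterated PCA-with-outlier-exclusion step described above can be sketched as follows. This is a Python/NumPy illustration of the general technique, not the superSort implementation (which is in Matlab); the function name and the z-score cutoff are assumptions, since the abstract does not give its thresholds.

```python
import numpy as np

def iterative_pca(waveforms, n_components=5, z_thresh=4.0, max_iter=10):
    """Project waveforms to a low-dimensional space, excluding outliers
    and re-fitting PCA until no outliers remain.

    waveforms: (n_waves, n_samples) array of voltage snippets.
    Returns (scores, components, keep_mask).
    """
    keep = np.ones(len(waveforms), dtype=bool)
    for _ in range(max_iter):
        X = waveforms[keep] - waveforms[keep].mean(axis=0)
        # PCA via SVD: rows of Vt are the principal components
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        comps = Vt[:n_components]
        scores = X @ comps.T            # five-dimensional representation
        # Flag waveforms whose scores lie far from the cloud
        z = np.abs((scores - scores.mean(0)) / (scores.std(0) + 1e-12))
        outlier = (z > z_thresh).any(axis=1)
        if not outlier.any():
            break
        idx = np.flatnonzero(keep)
        keep[idx[outlier]] = False      # exclude and re-fit
    return scores, comps, keep
```

In the abstract's pipeline the retained components from the previous day's sort would be reused in place of a fresh fit, and the scores would then be passed to the Gaussian mixture clustering step.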


RESULTS The current standard algorithm identified four "neurons" on a representative channel on the tenth consecutive day (Fig. 1A), while both versions of superSort identified two neurons (Fig. 1B-C).

amplitude noise waveforms as belonging to a neuron, while the version with prior knowledge implementation had fewer misclassifications. The use of prior knowledge also led to clearer characteristic shapes. DISCUSSION The current standard algorithm overestimated the number of neurons on the representative channel. While the true number of neurons is not known, it is more likely that two neurons fired action potentials near the channel. The four neurons identified by the current standard had overlapping waveforms while the neurons identified by superSort had clearer separation. The shapes of the neurons were distinct in all three sorts, but the differences between the neurons were clearest in the version of superSort with the prior knowledge implementation. This enabled clearer conclusions to be drawn about the shapes and activity of the neurons. Small improvements such as these, when accumulated over many recording channels, resulted in a substantial increase in accuracy of sorting. Additionally, the time needed for manual correction decreased with use of the prior knowledge algorithm due to the lower number of misclassifications. We plan to further improve our algorithm by allowing for a human expert to make minor adjustments to the sorted waveforms, thereby adjusting the prior models.

Fig 1. Voltage-time course of 10,000 sample action potential waveforms from a representative channel sorted with the current standard algorithm (A), superSort without prior knowledge (B), and superSort with prior knowledge (C). Four neurons were identified by the current standard (red, green, blue, and pink) while two units were identified by each version of superSort (red and green).

The superSort version without the prior knowledge implementation misclassified a group of low

REFERENCES [1] D. B. McMahon et al., "One month in the life of a neuron: longitudinal single-unit electrophysiology in the monkey visual system," J. Neurophysiology, vol. 112, pp. 1748-62, Oct. 2014. [2] D. A. McCormick et al., "Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex," J. Neurophysiology, vol. 54, pp. 782-806, Oct. 1985. [3] S. Shoham et al., "Robust, automatic spike sorting using mixtures of multivariate t-distributions," J. Neurosci. Methods, vol. 127, no. 2, pp. 111-122, Apr. 2003. ACKNOWLEDGEMENTS This project was jointly funded by Dr. Matthew A. Smith and the Department of Bioengineering at the Swanson School of Engineering.


DEVELOPMENT OF COMPUTATIONAL TOOLS FOR ANALYZING 3D IN VIVO DEFORMATIONS OF MONKEY OPTIC NERVE HEAD Ziyi Zhu, Huong Tran, Gadi Wollstein, Matt A. Smith and Ian A. Sigal Laboratory of Ocular Biomechanics, Departments of Ophthalmology and Bioengineering University of Pittsburgh, PA, USA Email: ziz12@pitt.edu INTRODUCTION Glaucoma is the second leading cause of blindness worldwide [1]. The neural tissue loss in glaucoma initiates in a region called the optic nerve head (ONH), where the axons converge and exit the eye through a collagenous structure called the lamina cribrosa (LC) (Figure 1). Elevated intraocular pressure (IOP) is the main risk factor for glaucoma, but patients vary in their sensitivity to elevated IOP, and the mechanism of the neural tissue loss remains unclear. Our study utilized optical coherence tomography (OCT), a noninvasive imaging modality that provides real-time, 3D, high-resolution images of the ONH region. The deformations of 4 important ONH structures were quantified: the neural tissue (inner limiting membrane, ILM), the Bruch membrane (BM), the scleral canal opening (Bruch membrane opening, BMO), and the anterior LC (ALC) (Figure 1).

Figure 1 Example markings on a 2D slice through the ONH of an in vivo OCT image. Yellow dot was BMO, green was BM, blue was ALC, and red was ILM.

The goal of this project was to develop MATLAB tools to analyze the ONH structures and calculate parameters that describe their morphology, including the BMO planarity, the minimum rim width and the minimum rim area, in an efficient and user-friendly manner that will enable analysis of the effects of IOP.

METHODS ONH regions of monkey eyes were imaged in vivo with OCT while controlling IOP. Manual markings were made on radial slices of the OCT image. These markings were then reconstructed into 3D surfaces by custom MATLAB tools previously developed in the laboratory. All surface depths were relative to the best-fit plane of the BMO, a structure commonly used as a reference plane [2]. Surface Depth (ALC, BM, ILM): Our program was designed to output distribution curves and a box plot of the surface depth, offering a more comprehensive view of the data than descriptive statistics alone. BMO Planarity: BMO planarity measured the extent to which the scleral canal deviated from a plane. It was computed as the average normal distance from the BMO to the best-fit plane, obtained using principal component analysis. Rim Width and Area: A decrease in neuroretinal rim tissue thickness has been shown to be an indicator of glaucoma progression [3,4]. The minimum rim width was measured as the shortest distance from BMO points to the ILM surface, and the minimum rim area was estimated by triangulation of the end points of the minimum rim width segments. All the functions were incorporated into a graphical user interface and were validated with a set of monkey eye data including cases under baseline (15 mmHg), low (5 mmHg) and high (30 mmHg) IOP. RESULTS Surface Depth (ALC, BM, ILM): The distribution curves and box plot of the ALC surface depths in the test cases are presented in Figure 2. The deformation of the ALC under low IOP was more noticeable than under high IOP.
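The BMO planarity computation (the average normal distance from the BMO points to a best-fit plane obtained by principal component analysis) can be sketched as follows. Python is used for illustration; the project's tools are in MATLAB, and the function name is ours.

```python
import numpy as np

def bmo_planarity(points):
    """Average normal distance from BMO points to their best-fit plane.

    The best-fit plane is found by principal component analysis: the
    plane normal is the direction of least variance of the centered
    points. points: (N, 3) array of BMO marking coordinates.
    """
    P = np.asarray(points, dtype=float)
    centered = P - P.mean(axis=0)
    # SVD of the centered cloud: the last right-singular vector is the
    # direction of least variance, i.e. the best-fit plane's normal.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]
    # Planarity = mean absolute distance along the normal
    return np.abs(centered @ normal).mean()

# Points lying exactly in the tilted plane z = x + 2y give ~zero planarity
flat = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 2], [1, 1, 3.0]])
print(bmo_planarity(flat))  # ≈ 0 (up to floating point)
```

With real markings the result would be scaled to µm; a perfectly planar canal yields zero, and larger values indicate greater deviation from a plane.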


All four parameters computed by the program for three test cases were summarized and compared with values from previous studies in Table 1.

Figure 2 Depth distribution curves (left) and box plot (right) of ALC depth

BMO Planarity: BMO planarity was plotted with respect to the BMO plane (Figure 3). The planarity increased under high IOP and decreased under low IOP (Table 1).

DISCUSSION Our project provided a tool to compute and visualize four parameters that are important for characterizing the morphology of the ONH structures: ALC, BM, and ILM surface depth; BMO planarity; and minimum rim width/area. The user-friendly design, along with the graphical interface, allows the user to extract valuable information from data efficiently. The program was tested with experimental data and showed consistency with previous studies. While a powerful tool, our MATLAB computational tools still have room for improvement. Specifically, we would like to improve robustness to incomplete data, such as that caused by blood vessel shadows. REFERENCES

Figure 3 3D view (left) and side view (right) of the BMO planarity under baseline condition. BMO markings (nasal-temporal side: red-blue dots) were plotted with respect to the outline of the BMO plane (black lines). Note that the z axis was stretched 20 times for illustration.

Rim Width and Area: Both minimum rim width and area (Figure 4) decreased as IOP increased and increased as IOP decreased (Table 1).

[1] I. C. Campbell et al., "Biomechanics of the Posterior," Journal of Biomechanical Engineering, vol. 136, pp. 0210051-18, Feb. 2014. [2] S. Lee et al., “Optic Nerve Head and Peripapillary Morphometrics in Myopic Glaucoma,” IOVS, vol. 55, No. 7, pp. 4378-4393, July. 2014. [3] N. G. Strouthidis et al., “Longitudinal Change Detected by Spectral Domain Optical Coherence Tomography in the Optic Nerve Head and Peripapillary Retina in Experimental Glaucoma,” IOVS, vol. 52, No. 3, Mar. 2011. [4] B. Fortune et al., “Experimental glaucoma causes optic nerve head neural rim tissue compression: a potentially important mechanism of axon injury,” IOVS, In-Press, 2016.

ACKNOWLEDGEMENTS The project was funded jointly by the National Institutes of Health (R01 EY023966 and EY025011), the Swanson School of Engineering and the Office of the Provost.

Table 1 ONH parameters of the three test cases and baseline values reported in other studies

Condition: Low IOP (5 mmHg) | High IOP (30 mmHg) | Baseline (15 mmHg) | Baseline in literature
Mean ALC depth (µm): 150.0 | 165.8 | 171.0 | ~175 [3]
BMO Planarity (µm): 7.12 | 9.98 | 7.80 | ~7 [2]
Minimum rim width (µm): 195.5 | 187.2 | 192.5 | ~300 [3]
Minimum rim area (mm²): 0.987 | 0.938 | 0.941 | 1.00±0.19 [4]


BIOMECHANICAL CONTRIBUTIONS OF UPPER CERVICAL LIGAMENTOUS STRUCTURES IN TYPE II ODONTOID FRACTURES Nicholas Vaudreuil MD, Rob Tisherman MD, Rahul Ramanathan, Robert Hartman PhD, Joon Lee MD, Kevin Bell PhD Ferguson Spine Laboratory, Department of Orthopaedic Surgery University of Pittsburgh, PA, USA Email: rar122@pitt.edu, Web: http://www.fergusonlab.pitt.edu INTRODUCTION Fractures of the odontoid process of the C2 vertebrae are a growing problem as the population ages [1]. One-year mortality has been reported as high as 26-37% in patients over 65 years old [2, 3]. Type II odontoid fractures, as classified under the Anderson and D'Alonzo system, represent the majority of odontoid fractures seen across all age groups [4, 5]. Studies over the last twenty years have shown that there is a large difference in survival between surgical and non-surgical repair of odontoid fractures in the elderly, with surgical repair showing more favorable outcomes [6, 7]. However, surgical repair has greater short-term risks [8], and currently there is no consensus on how odontoid fractures in the elderly should be treated. Cadaveric modeling of Type II odontoid fractures may allow their subcategorization by associated ligamentous injury. The purpose of this study was to measure biomechanical changes in a cadaveric model of odontoid fracture with combined ligamentous injury in the upper cervical spine. We hypothesized that injury to the odontoid along with the anterior longitudinal ligament (ALL) or the unilateral facet capsule (UF) would increase range of motion in flexion-extension, axial rotation, and lateral bending over odontoid fracture alone. Replay of the intact motion allows this study to quantify how each soft tissue structure contributes to the motion and stability of the spine in the setting of a Type II odontoid fracture.
METHODS The experiment was conducted using N=8 fresh-frozen cadaveric specimens (4M/4F, aged 49-60 years) of Occiput-C2 loaded on a serial linkage robot. Each spine specimen was maintained at -20°C until required for testing. After being thawed, dissected and instrumented, specimens were initially tested in an intact state, followed by mechanical fracturing of the odontoid, and then were randomly

assigned to either ALL transection at the C1-C2 level or unilateral facet capsule injury. Specimens underwent flexion-extension, lateral bending, axial rotation, and combined motions using hybrid-control fuzzy logic to 1.5 Nm on a 6-DOF serial linkage robot at each stage of injury. Forces/moments were obtained using a 6-DOF load cell, and the motion of each segment (skull, C1, C2) was measured with a VICON motion-tracking camera system. The robot uses hybrid control to minimize off-axis forces, which allows for better representation of cervical movement. The data output by the robot were then analyzed using a MATLAB program to observe trends in motion changes and other kinematics of the cervical joint. T-tests were performed and their respective p-values obtained for statistical analysis, with each state normalized to the intact state. RESULTS Robot data: By normalizing each state to the total motion achieved by the intact state at the prescribed moment target, we determined the individual contributions to stability of each tissue structure (odontoid, UF, ALL, and remaining soft tissue) (Figure 1).
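The normalization used to obtain the per-structure contributions can be illustrated with a short sketch (Python for illustration; the study's analysis is in MATLAB, and the state names and numbers below are hypothetical, not the study's data):

```python
def stability_contributions(rom_by_state, order):
    """Percent contribution of each sectioned structure to total motion.

    rom_by_state: dict mapping state name -> range of motion (deg) at the
    same moment target; `order` lists states from intact to final injury.
    Each structure's contribution is the ROM increase its sectioning
    caused, as a percentage of the intact ROM.
    """
    intact = rom_by_state[order[0]]
    contributions = {}
    for prev, cut in zip(order, order[1:]):
        # Increase in motion caused by sectioning this structure,
        # normalized to the intact range of motion
        delta = rom_by_state[cut] - rom_by_state[prev]
        contributions[cut] = 100.0 * delta / intact
    return contributions

rom = {"intact": 20.0, "odontoid_fx": 26.0, "odontoid_fx+UF": 29.0}
print(stability_contributions(rom, ["intact", "odontoid_fx", "odontoid_fx+UF"]))
# {'odontoid_fx': 30.0, 'odontoid_fx+UF': 15.0}
```

The remaining soft tissue's share is whatever fraction of the intact motion is not attributed to the sectioned structures.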


Figure 1: Percentage contribution to total motion for each tissue component involved in C1-C2 stability (odontoid fracture, unilateral facet, anterior longitudinal ligament, and soft tissue) in each motion (flexion, extension, lateral bending, axial rotation) normalized to the intact state.

Kinematic data: Across all motions, fracture of the odontoid significantly increased the range of motion. Transection of the unilateral facet capsule increased lateral bending motion 38.2%±21.8% over odontoid fracture alone. Combined injury to the ALL and one facet capsule increased extension ROM over odontoid fracture by 27.0%±18.0%. The AP translation of the C1-C2 joint was quantified for all motions. Odontoid fracture increased the AP translation to an average of 3.6 mm in flexion-extension. Performing axial rotation at maximum flexion and extension, to simulate maximal traumatic motion, did not show increased ROM changes or anterior-posterior translation over axial rotation at a neutral position. DISCUSSION Previously published biomechanical testing of upper cervical spines has shown that the C1-C2 capsules and ALL do not contribute to a significant increase in axial rotation or AP motion of the upper spine following type II odontoid fractures [9]. Our study found that these structures do contribute significantly to stability over the normal motion path of the specimen, with the anterior longitudinal ligament providing extension stability, and the facet capsule contributing to stability in lateral bending and axial rotation. Our study found that Type II odontoid fractures with associated ligamentous injury significantly increase motion in the FE and LB directions. Additionally, we observed significant increases in AP translation only after the ALL or facet capsule was transected. These findings suggest that evaluation for associated ligamentous injuries in the setting of Type II odontoid fracture may be justified when determining choice of operative versus nonoperative management. Both physical exam maneuvers, such as a C1-C2 subluxation test to assess AP translation, and magnetic resonance imaging (MRI) of ligamentous structures may be useful tools in making this determination. Future studies are needed to assess these clinical correlations.
Additional studies should also examine mechanical changes in the odontoid fracture model after surgical fixation.

SIGNIFICANCE: The management of non-displaced type II odontoid fractures in the elderly is a difficult question for clinicians. Our study aims to identify biomechanical changes in combined injuries with type II odontoid fracture, to determine who might benefit most from surgery in this patient population. REFERENCES: 1. H. Smith and S. Kerr, “Trends in epidemiology and management of type II odontoid fractures: 20-year experience at a model system spine injury tertiary referral center,” J. spinal …, vol. 23, no. 8, pp. 501–505, 2010. 2. A. P. White, R. Hashimoto, D. C. Norvell, and A. R. Vaccaro, “Morbidity and mortality related to odontoid fracture surgery in the elderly population,” Spine (Phila. Pa. 1976), vol. 35, no. 9 Suppl, pp. S146–57, Apr. 2010. 3. M. Venkatesan, J. R. Northover, J. B. Wild, N. Johnson, K. Lee, C. E. Uzoigwe, and J. R. Braybrooke, “Survival analysis of elderly patients with a fracture of the odontoid peg,” Bone Joint J., vol. 96-B, no. 1, pp. 88–93, Jan. 2014. 4. D. Pal, P. Sell, and M. Grevitt, “Type II odontoid fractures in the elderly: an evidence-based narrative review of management,” Eur. Spine J., vol. 20, no. 2, pp. 195–204, Feb. 2011. 5. J. Chapman, J. S. Smith, B. Kopjar, et al., “The AOSpine North America Geriatric Odontoid Fracture Mortality Study: a retrospective review of mortality outcomes for operative versus nonoperative treatment of 322 patients with long-term follow-up,” Spine (Phila. Pa. 1976), vol. 38, no. 13, pp. 1098–104, Jun. 2013. 6. A. Vaccaro, C. Kepler, and B. Kopjar, “Functional and quality-of-life outcomes in geriatric patients with type-II dens fracture,” J. Bone …, pp. 729–735, 2013. 7. M. J. Scheyerer, S. M. Zimmermann, H.-P. Simmen, G. A. Wanner, and C. M. Werner, “Treatment modality in type II odontoid fractures defines the outcome in elderly patients,” BMC Surg., vol. 13, p. 54, Jan. 2013. 8. B. I. Woods, J. B. Hohl, B. Braly, W. Donaldson, J. Kang, and J. Y.
Lee, “Mortality in elderly patients following operative and nonoperative management of odontoid fractures.,” J. Spinal Disord. Tech., vol. 27, no. 6, pp. 321–6, Aug. 2014. 9. C. M. J. McCabe, S. D. McLachlin, S. I. Bailey, K. R. Gurr, C. S. Bailey, and C. E. Dunning, “The effect of softtissue restraints after type II odontoid fractures in the elderly: a biomechanical study.,” Spine (Phila. Pa. 1976)., vol. 37, no. 12, pp. 1030–5, May 2012.

ACKNOWLEDGEMENTS: Thanks are extended to the Ferguson Laboratory for Orthopaedic and Spine Research, the University of Pittsburgh Swanson School of Engineering, and the Office of the Provost.


SOFTWARE DESIGN AND MECHANICAL VERIFICATION OF AN IMU SYSTEM TO MONITOR CERVICAL SPINE MOVEMENT Michelle Riffitts1, Marcus Allen2, Adrianna Oh3, Dr. Kevin Bell4 1University of Pittsburgh Department of Bioengineering, 2University of Pittsburgh Department of Mechanical Engineering, 3University of Pittsburgh School of Medicine, 4University of Pittsburgh Department of Orthopaedic Surgery Email: mir67@pitt.edu

INTRODUCTION interACTION (patent pending) is a mobile application in development that monitors the joint motion of the knee in order to report patient exercise data to physical therapists and clinicians. By using Bluetooth connected inertial measurement units (IMUs) patients can perform assigned physical therapy exercises in their home and share the data with their therapists via the interACTION system. The interACTION system is designed to allow for immediate feedback regarding the patient’s exercises and progress in rehabilitation. The existing system can be altered in order to provide a more flexible and versatile platform to monitor body movement, resulting in the opportunity for rapid software prototyping. The objectives of this project were to 1) develop a universal software interface, 2) expand the system to the cervical spine, and 3) test the resulting system’s precision.

MATERIALS AND METHODS The design criteria for this project included: connecting the IMUs via Bluetooth, aligning the IMUs to the body, running a pretest to ensure the sensors are working and recording properly, collecting data on the joint movement, and saving and exporting data for future use, all in a user-friendly interface that eases data collection for clinicians. Matlab was selected as the preferred programming environment, and through a graphical user interface (GUI), clinicians enter information about the exercise, including patient details and the joint to be monitored. Once data collection starts, quaternions are continuously collected from YEI 3-Space Bluetooth IMUs (Yost Labs, Portsmouth, OH) for joint angle calculation. Each IMU consists of an accelerometer, gyroscope, and magnetometer, each of which can be turned on and off. For exercises involving the cervical spine, one sensor is placed on the head and one on the chest, as pictured in Figure 1. By calculating the quaternion difference between the sensors and converting the difference to a transformation matrix that models the movement of the joint, the Euler angles of the cervical spine can be extracted. The order of rotation of the transformation matrix for cervical spine movement is XYZ, with rotation about the X axis being flexion/extension, rotation about the Y axis being lateral bending, and rotation about the Z axis being axial rotation, as shown in Figure 1.

Figure 1. Axis Orientation

To test the precision of the system, two sensors were placed on a gimbal with tri-axial rotation aligned to match the XYZ rotation order of the cervical spine, as shown in Figure 2. The top sensor was rotated while the other remained stationary. On a patient, this would be comparable to placing one sensor on the head and one on the chest, and rotating the one on the head to mimic neck motion. The unfixed sensor was rotated through 15 cycles of motion around each of the X, Y, and Z axes individually, with the intention of stopping at the same position each time.
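The quaternion-difference computation described above can be sketched as follows. Python is used for illustration (the system itself runs in Matlab), and the function names are ours; the XYZ decomposition matches the rotation order stated for the cervical spine.

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def relative_euler_xyz(q_chest, q_head):
    """Joint angles from two IMU quaternions: the head orientation
    relative to the chest, decomposed in XYZ order (flexion/extension,
    lateral bending, axial rotation), in degrees.
    """
    w, x, y, z = q_chest
    q_conj = (w, -x, -y, -z)            # inverse of a unit quaternion
    # Hamilton product q_conj * q_head gives the relative rotation
    w1, x1, y1, z1 = q_conj
    w2, x2, y2, z2 = q_head
    q_rel = (w1*w2 - x1*x2 - y1*y2 - z1*z2,
             w1*x2 + x1*w2 + y1*z2 - z1*y2,
             w1*y2 - x1*z2 + y1*w2 + z1*x2,
             w1*z2 + x1*y2 - y1*x2 + z1*w2)
    R = quat_to_matrix(q_rel)
    # XYZ decomposition: R = Rx(a) @ Ry(b) @ Rz(c)
    b = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    a = np.arctan2(-R[1, 2], R[2, 2])
    c = np.arctan2(-R[0, 1], R[0, 0])
    return np.degrees([a, b, c])

# Example: head rotated 90° about X relative to a level chest (flexion)
q_head = (0.7071067811865476, 0.7071067811865476, 0.0, 0.0)
print(relative_euler_xyz((1.0, 0.0, 0.0, 0.0), q_head))  # ≈ [90, 0, 0]
```

The decomposition order matters: for the cervical spine the abstract specifies XYZ, so flexion/extension is extracted first, then lateral bending, then axial rotation.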



Figure 2. Gimbal Setup RESULTS AND DISCUSSION Figure 3 shows the GUI the user sees when opening the interACTION program. The program is fully functional and allows the clinician to complete all aspects of joint monitoring. From start to finish, the clinician connects and aligns the sensors to the body, selects the order of Euler angle decomposition, runs a pretest to ensure the sensors are functioning properly, performs the exercises and collects data, and exports the data to Excel for further processing. The GUI allows clinicians to follow a simple set of instructions to perform and record data relating to joint movement. The goals set in the initial phase of this project regarding the software and GUI setup were accomplished via the fully functioning system.

As for the precision testing, the standard deviations calculated from the fifteen trials of moving the sensors on the gimbal system about each axis were: 0.37° for flexion (X axis) (Figure 4), 0.27° for extension (X axis), 0.30° for lateral bending (Y axis), and 0.35° for rotation (Z axis). The standard deviations for all movements are low, the largest being 0.35°. Figure 4 is a raw-data plot of fifteen cycles of flexion (rotation about the X axis) on the gimbal system, included to highlight the repeatability of the system.

Figure 3. Matlab interACTION GUI.

Figure 4. Representative plot showing 15 cycles of flexion/extension (angle of flexion in degrees versus data points).

CONCLUSIONS
Supported by the tests performed and the data collected, the interACTION system implemented in Matlab is a precise tool that can be expected to provide repeatable results regarding movement of the cervical spine. Future work should consider the overall accuracy of the system, along with the impact, if any, of rotation around more than one axis. The effect of the magnetometer in the IMU sensor should also be examined, including whether the drift of the sensor can be accounted for when the magnetometer is turned off. Ultimately, the system should also be tested on human subjects.

ACKNOWLEDGEMENTS
The generous support of the Swanson School of Engineering, the University of Pittsburgh Office of the Provost, the Ferguson Laboratory and the Coulter Foundation is gratefully acknowledged.


ENGINEERING THE BONE-CARTILAGE INTERFACE Kalon J. Overholt, Riccardo Gottardi, Rocky S. Tuan Center for Cellular and Molecular Engineering, Department of Orthopaedic Surgery University of Pittsburgh, PA, USA Email: kjo34@pitt.edu, Web: http://ccme.pitt.edu/ INTRODUCTION In synovial joints, articular cartilage provides a surface of contact to protect the underlying bone. Despite the close proximity of bone and cartilage, these tissues occupy extremely different environments in vivo. Chondrocytes (chondral tissue cells) thrive in a hypoxic environment low in glucose. Osteoblasts (bone cells), however, are best suited to a normoxic, glucose-rich medium [1]. Hence, the ideal conditions for chondral tissue are undesirable and potentially toxic for osseous tissue, and vice versa. As a result, engineered in vitro models have focused on either bone or cartilage studied in isolation. This study aims to validate a microphysiological tissue model that contains bone and cartilage together as an osteochondral (OC) complex. The cells that compose this biphasic tissue are sourced from mesenchymal stem cells (MSCs), multipotent stem cells that can be stimulated to differentiate into chondrocytes and osteoblasts. A stem cell-based microphysiological model will enable the study of communication between bone and cartilage at the osteochondral interface, a phenomenon that is only beginning to be understood. An important goal of this research is to model osteoarthritis (OA), a debilitating disease affecting millions of people worldwide. A locus of the disease has been identified at the OC junction, and we ultimately aim to identify potential treatments for OA [2]. METHODS This study utilized a novel 4-chambered bioreactor system [3] to separately circulate two media streams, specific to cartilage and bone, through osteochondral tissue cores.
To address the efficacy of creating a functional in vitro complex, one aspect of this project evaluated the bioreactor’s capacity to host native tissue plugs. Native osteochondral explants were obtained from the trochlear region of female human knees after total joint replacement surgery. The OC plugs were cultured in the bioreactor using

chondrogenic and osteogenic media streams (flow rate = 0.083 mL/h) for 28 days. The samples were subjected to live/dead staining and H&E staining at 3-, 7-, 14-, and 28-day time points. The effluent media provided another avenue of analysis through a lactate dehydrogenase assay. Fluorescein sodium salt, a small fluorescent molecule, was introduced into one flow stream (concentration = 0.6 mg/mL, flow rate = 0.1 mL/h) in order to quantitatively measure perfusion through the tissue. As Pan et al. indicated, fluorescein transport is a useful marker for studying the communication of bone and cartilage at their interface [4]. A separate aspect of the study involved differentiation analysis of MSC chondrogenesis, osteogenesis, and adipogenesis. The stem cells were derived from the bone marrow of female donors of 50, 53, and 75 years of age. These cells were expanded in both fetal bovine serum-containing media (FBS) and xeno-free media (XF) for 10 days, then seeded into 6-well plates (10^5 cells per well) and grown in chondrogenic, osteogenic, and adipogenic media for 28 days. The differentiated cells were assessed by histology, using Alcian blue, Alizarin red, and Oil Red O staining, respectively. Their differentiation capacity was also assessed using RT-PCR to detect chondrogenic, osteogenic, and adipogenic markers. DATA PROCESSING Bisected OC cores were imaged with live/dead fluorescence in representative areas of bone and cartilage, as well as at the interface. These samples were then sectioned to 6 microns, mounted and stained, and imaged. Fluorescein transport was quantified by collecting the effluent media for fluorescence spectroscopy. The resultant data showed the average of three simultaneous trials. PCR outcomes were assessed by calculating CT, dCT, and ddCT averaged over 2 replicates for chondrogenic genes (Aggrecan, Collagen II, SOX9) and osteogenic genes (BSP-2, Osteocalcin, Osteopontin).
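The CT, dCT, and ddCT calculation described for the PCR outcomes is presumably the standard Livak relative-quantification scheme, in which expression is reported as 2^-ddCT. A Python sketch under that assumption; the CT values and gene names in the example are hypothetical illustrations, not the study's measurements:

```python
def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Relative expression by the 2^-ddCT (Livak) method.

    dCT normalizes the gene of interest to a housekeeping gene within each
    condition; ddCT then compares the differentiated sample to the control.
    """
    dct_sample = ct_gene_sample - ct_ref_sample
    dct_control = ct_gene_control - ct_ref_control
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Hypothetical CT values: a chondrogenic marker in differentiated vs.
# undifferentiated MSCs, normalized to a housekeeping gene.
fc = fold_change(22.0, 18.0, 26.0, 18.0)
print(fc)  # ddCT = 4 - 8 = -4, so 2^4 = 16.0
```

A fold change well above 1 for lineage markers, as in this toy example, is what "strong differentiation" looks like numerically.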


RESULTS
Quantitative flow analysis determined that when the osteochondral plugs were placed in the bioreactor there was no undesirable mixing or leaking of media between the upper and lower chambers. However, the tissue was capable of transporting a fluorescent marker from the bottom stream to the top stream, beginning with no signal and increasing in intensity over a 3-day period (Figure 1). Live/dead staining demonstrated that the tissues remained viable in the bioreactor environment throughout a 14-day window. Between the 14- and 28-day time points, significant dieback occurred in the cartilage segment, with minimal cell loss in bone. Histological staining showed tidemarks and visual indications of healthy tissue. Table 1 displays representative data from RT-PCR; the gene expression and histology outcomes showed strong differentiation in all three lineages.

Figure 1: Fluorescence spectroscopy shows that native osteochondral tissue transports small molecules in the bioreactor environment. Fluorescein (376 Da) was introduced into the bottom (bone) medium stream and successfully perfused into the top (cartilage) medium stream.

Table 1: Osteogenic and chondrogenic gene expression.

DISCUSSION
The available data suggest that the bioreactor system can maintain tissue viability for 14 days or longer, a sufficient period to study the degradative effects of osteoarthritis. Over this interval, the bioreactor is suitable to host both native and engineered tissue complexes. The observed tissue dieback may have been a result of the high-impact load delivered when extracting the tissue cores. Our study of fluorescein transport has shown that the osteochondral interface is a permeable barrier, one that may allow slow transport of the cytokines that cause OA inflammation and degradation. In later studies, the bioreactor will be used to create engineered in vitro osteochondral constructs using differentiated MSCs. The experimentation undertaken in this project shows great promise for creating a realistic microphysiological model of osteoarthritis progression.

REFERENCES 1. Alexander, P., et al. Exp Biol Med (Maywood). 2014, 239, 1080. 2. Lin, H., et al. Mol Pharmaceutics. 2014, 11, 2203-2212. 3. Lozito, T.P., et al. Stem Cell Research & Therapy. 2013, 4(Suppl 1):S6. 4. Pan, J., et al. Journal of Orthopaedic Research. 2009, 27(10), 1347-1352. ACKNOWLEDGEMENTS University of Pittsburgh Swanson School of Engineering Summer Research Fellowship, Ri.MED Foundation, CASIS GA-2016-236.


PREDICTING MUSCLE FORCE OUTPUT USING EMG ACTIVITY Michael Adams, Tyler Simpson, Carl Beringer and Robert Gaunt, PhD Rehab Neural Engineering Lab, Swanson School of Engineering University of Pittsburgh, PA, USA Email: mra63@pitt.edu INTRODUCTION The fundamental goal of hand prosthetics is to accurately mimic human function. As the technology behind these devices has progressed, the ability to simulate human function has improved as well. In hand amputees with intact forearms, using information from the forearm muscles has been shown to be an effective method of prosthetic control [1]. Because these muscles are still functional, electromyography (EMG) can be used to record the electrical activity in the muscle. Through understanding how EMG activity correlates to hand movement in healthy individuals, we can use forearm muscle activity to drive hand prosthetics in a more realistic and functionally useful way. An important step in this process is understanding how EMG activity is correlated with the force output of the muscle. Research groups using surface electrodes have addressed this topic with reasonable success [1][2][3]. Unfortunately, surface electrodes suffer from many practical use problems, many of which are solved by intramuscular electrodes. However, no consensus exists for the most effective way to process EMG data recorded using intramuscular electrodes. The goal of this project was to adapt popular motor unit sorting methods to analyze intramuscular EMG activity and then predict muscle force output within the limitations of the experimental setup for our main project. METHODS The study involved a healthy adult male with no upper limb impairment. The participant was implanted with 16 intramuscular electrodes in the forearm and several surface electrodes on the upper arm. In addition, the participant wore a custom-made glove fitted with Hall Effect sensors to measure movement kinematics such as finger position and joint velocity.
The subject completed 217 trials with each lasting approximately 30 seconds. During these trials,

forearm EMG activity and hand/wrist kinematics were recorded as the participant completed a wide variety of finger and wrist movements. The focus of this study involved trials where the participant was tasked with pressing a button with a designated target force and then maintaining that force once the specified threshold was reached. The button press was repeated 6 times in each trial, and a separate trial was conducted for 2 N, 5 N, and 10 N target forces. DATA PROCESSING The majority of this project focused on processing and analyzing the data collected during the experiment. The first component of our analysis was filtering. The Hall Effect sensors used to measure hand kinematics were a significant source of noise in the raw EMG data. To remove this noise, a custom filtering protocol was developed and applied. In addition to magnetic noise filtering, the EMG signal was also high-pass filtered using a 4th-order Butterworth filter with a 500 Hz cutoff frequency to remove movement artifacts. After the identifiable noise in the EMG signal was filtered, we sought to extract the contribution of the individual motor units that compose the overall signal. To sort the motor units, we used the two most prominent sorting programs currently available: Offline Sorter and EMGLAB. Offline Sorter is most commonly used to sort action potential waveforms in the brain, but for the purposes of this project, it was used to sort spikes in the motor units of the forearm muscles. After configuring the sort parameters in the program, it was determined that the automatic K-Means sort method verified by a 3D cluster plot of the first three principal components would optimally fit our experiment. EMGLAB is a Matlab-based program and is currently the standard for sorting EMG activity recorded from the periphery [4]. EMGLAB automatically sorts the input signal based on waveform templates it identifies over a user-selected


time interval. EMGLAB required less effort to configure because its intended purpose more closely fit our desired functionality. Both Offline Sorter and EMGLAB output the sorted units with each timestamp at which the unit's waveform was identified. With this information, the overall fire frequency was calculated by summing the number of spikes across all units that occurred within 2 ms time bins. To determine how well this fire frequency rate corresponded with muscle force output, we ran a cross-correlation with the force data recorded from the button during the trials. RESULTS Overall, the correlation between fire frequency and force output was disappointingly inconsistent. Our analysis yielded correlations as high as r = 0.75 but as low as r = 0.06, with the average correlation across all trials being r = 0.27 ± 0.15. There was no clear evidence that one sorting method was more effective or yielded higher correlations than the other. Figure 1 illustrates the variability in the correlation resulting from the sorting methods (the two methods gave r = -0.209 and r = -0.337 on the same trial).
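The binning-and-correlation step described in the data processing can be sketched as follows. This is an illustrative Python version (the study's analysis was done in Matlab and specialized sorting software), using toy spike times rather than recorded data:

```python
import math

def firing_rate(spike_times_s, duration_s, bin_s=0.002):
    """Count spikes, pooled across all sorted units, in fixed-width time bins."""
    n_bins = int(math.ceil(duration_s / bin_s))
    counts = [0] * n_bins
    for t in spike_times_s:
        i = min(int(t / bin_s), n_bins - 1)
        counts[i] += 1
    return counts

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy check: a rate that scales with force should correlate strongly.
counts = firing_rate([0.0005, 0.0015, 0.003], duration_s=0.006)
force = [2.0, 1.0, 0.0]  # hypothetical force samples, one per 2 ms bin
r = pearson_r(counts, force)
print(counts, round(r, 2))  # counts == [2, 1, 0]; r ≈ 1.0
```

The low and variable r values reported below show how sensitive this pipeline is to the quality of the upstream unit sorting.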

DISCUSSION Two main problems limited the consistency of our results: our experimental setup did not lend itself to robust motor unit sorting, and our sort methods were unreliable. Because our project was an offshoot of a larger overall project, the experimental setup was not designed with high-accuracy motor unit sorting in mind. Whereas the studies we referenced had multiple electrodes in each muscle, our experiment had just one electrode, which made it impossible to compare and confirm the quality of our unit sorting. The sorting methods implemented in this study also presented problems. Offline Sorter had difficulty aligning spike waveform shapes, which led to errors in the unit sorting results. Some difficulty was expected because Offline Sorter is not optimized for EMG data, but the inconsistent results across trials made it difficult to mitigate this error. EMGLAB also had drawbacks in the context of our experiment. EMGLAB is no longer maintained, so many Matlab version compatibility problems arose which hindered the performance of the program. This most notably affected the automated sort feature, which was largely ineffective at template matching waveforms and resolving superpositions. The inconsistencies in both sort methods required manual sorting, which introduced error and inconsistency, and explains a large degree of the variability in correlations. Despite being unsuccessful in our effort to predict muscle force output with fire frequency, this project helped determine the limitations of our experimental setup. Moving forward we plan to examine more robust methods of predicting muscle force output and reconsider the methods by which data are recorded from the experiments. REFERENCES 1. R. Boostani and M. Moradi. Physiological Measurement 24, 309, 2003. 2. S. Muceli et al. The Journal of Physiology 593, 3789-3804, 2015. 3. F. Negro et al. Journal of Neural Engineering 13, 2016. 4. K. McGill et al. Journal of Neuroscience Methods 149, 121-133, 2005.

Figure 1: All graphs share the same 6-26 second time interval from one trial. The top two graphs show the result of sorting the same data with different sorting methods. Both show the spike times for each identified motor unit (black) and the resulting fire frequency (blue). The bottom graph shows the raw EMG signal (blue) mapped over the force exerted on the button (pink). The distinct differences in the top two graphs explain the inconsistent r values.

ACKNOWLEDGEMENTS This work was supported by the Swanson School of Engineering and the Office of the Provost.


EFFECTS OF PHASE-DELAYING OPTOGENETIC STIMULATION OF THE SUPRACHIASMATIC NUCLEUS ON MOOD-RELATED BEHAVIORS Christine Heisler a,b, Chelsea Vadnie, Ph.D. a, Ryan Logan, Ph.D. a, Colleen McClung, Ph.D. a
a Department of Psychiatry, University of Pittsburgh Medical School, Pittsburgh, PA, USA
b Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
Email: cnh27@pitt.edu, Web: http://tnp.pitt.edu/

INTRODUCTION Mood disorders such as major depressive disorder (MDD) affect about 9.5 percent of the adult U.S. population in a given year [1]. There are numerous associations between circadian rhythm dysregulations, or misalignment of an organism’s internal ‘clock’, and mood disorders [2-9]. For example, there is an association between the severity of phase delay, a type of circadian dysregulation in which activity onsets shift to a later time, and the severity of depression in MDD patients [7]. Also, MDD patients tend to exhibit eveningness, or heightened activity later in the day, further demonstrating the correlation between phase delays and mood [2,5,6]. However, the exact causal relationship between circadian phase delays and mood disruptions is unclear [3,10,11]. Therefore, the primary aim of this project is to better understand how phase-delaying of the suprachiasmatic nucleus (SCN) through optogenetic stimulation disrupts mood-related behaviors. The suprachiasmatic nucleus is the master pacemaker of circadian rhythms. The SCN receives light input through the retina and in turn regulates a variety of other circadian outputs, including neuronal firing rates in various brain regions and hormone production [4,5,11]. However, many other brain regions, including some that modulate mood, also receive light input [11]. Therefore, we will use optogenetics, a precise neuronal stimulation technique, to directly target the SCN and avoid triggering SCN-independent effects of light [11,12]. Observing the phase shifts and behavioral changes that occur following optogenetic stimulation of the SCN will offer a better understanding of the way in which it directly controls mood-related behaviors. METHODS The study consisted of fourteen transgenic vesicular GABA transporter Cre recombinase (Vgat Cre) x channelrhodopsin (ChR2) mice. Seven mice received stimulation and seven were no-stimulation handling controls. Following optical fiber implantation surgeries, all mice were singly housed in Piezo Sleep boxes in a reverse dark-light cycle for two weeks (lights on 5:00 pm, off 5:00 am) and then were kept under constant darkness for the remainder of the experiment. Food and water were provided ad libitum. Mice received chronically implantable fibers (400 µm diameter core, efficiency ≥85%) targeted above the SCN (A/P = -0.1 mm, M/L = 0.0 mm, D/V = -5.0 mm). The mice were optogenetically stimulated with a blue laser (473 nm) for one hour at a frequency of 8 Hz with a 10 ms pulse width at circadian time (CT) 15 every two days, where CT12 corresponds to the activity onset of the mice. A light pulse at CT15 has been shown to produce a robust phase delay in mice with a similar genetic background (C57/129); therefore, we chose to stimulate our mice at CT15 [10]. Our stimulation parameters were based on Jones et al. (2015), who demonstrated that mice will entrain to repeated optogenetic stimulation of the SCN [11]. After 16 days, mood-related behavior will be assessed using the following tests: open field, elevated plus-maze, locomotor, and forced swim test. The behavioral tests will be conducted during the active phase (CT 14-18) of the mice, ensuring minimal sleep disruption. The open field and elevated plus-maze assess anxiety-related behavior. The forced swim test assesses depression-related behavior. DATA PROCESSING Free-running activity data were collected using Piezo Sleep Software, which measures activity in terms of


the amount of pressure applied and motion within the cage. These values are exported to ClockLab in an actogram for activity onset analysis. Using these data, we calculated CT15 for each mouse and optogenetically stimulated them at this time. Figure 1 below is a double-plotted actogram in which each horizontal line represents a day and the vertical lines represent activity bouts over a 24 h period.
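Since CT12 is defined as each mouse's activity onset and, under free-running conditions, one circadian hour corresponds to tau/24 real hours (where tau is the free-running period), the clock time of CT15 follows directly from the onset time and period measured in the actogram. A sketch under those assumptions; the onset and period values below are hypothetical:

```python
def ct_to_clock_hours(onset_clock_h, free_running_period_h, target_ct):
    """Clock time (hours) of a circadian-time target, assuming activity
    onset defines CT12 and one circadian hour = tau/24 real hours."""
    circadian_hours_after_onset = target_ct - 12.0
    real_hours = circadian_hours_after_onset * free_running_period_h / 24.0
    return (onset_clock_h + real_hours) % 24.0

# Hypothetical mouse: activity onset at 17.5 h with a 23.6 h free-running period.
t_stim = ct_to_clock_hours(17.5, 23.6, 15.0)
print(f"{t_stim:.2f}")
```

Recomputing this per mouse, rather than using a fixed clock time, keeps the stimulus locked to CT15 as each animal's endogenous rhythm drifts in constant darkness.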

Figure 1: Activity of a stimulated mouse over time. Shaded grey area depicts when mice were placed in constant darkness. Stimulation at CT 15 marked with an arrow produced a clear delay of activity onset on the following day.

RESULTS Using a two-way repeated-measures ANOVA, we found a significant effect of stimulation [F(1,12) = 19.06, p = 0.0009] on the amount of phase delay shown by the mice, as shown in Figure 2 below.

Figure 2: Amount of phase shift in circadian activity exhibited by stimulated and control mice. *p < 0.01. n=7.

DISCUSSION In order to elucidate the relationship between phase delays and mood, we first established a model of phase delay. We were able to induce a phase delay by stimulating the SCN at CT15, the early part of the dark phase, similar to previous studies that induced phase delays after administration of a light pulse at CT15 [13]. Furthermore, we were able to induce chronic phase delays. Analysis of mood-related behaviors in response to this disruption will provide data to illuminate the relationship between chronic phase delays and mood disorders. This study of phase delays and future studies of phase advances may help elucidate the mood-related effects of chronic phase shifting, such as jetlag, and help to develop bright-light therapy paradigms for individualized treatment of patients with mood disorders. Phase delays that occurred in control mice could be due to natural shifting of endogenous rhythms in the free-running phase, external light factors, or handling stress. In future cohorts, we will minimize possible phase-shifting effects of external disruption to further validate the phase-delay model. REFERENCES 1. DBSA: Depression and Bipolar Support Alliance. 2. McClung et al. Society of Biological Psychiatry. 74 (4), 42-49, 2013. 3. Parekh et al. Front. Psychiatry. 6, 187, 2016. 4. Logan et al. Behav Neurosci. 128 (3), 387-412, 2014. 5. Hickie et al. BMC Medicine. 11, 79, 2013. 6. Emens et al. Psychiatry Res. 168, 259-261, 2009. 7. Lewy et al. J Clin Psychiatry. 76 (5), e662-e664, 2015. 8. Jud et al. Biol. Proced. 7 (1), 101-116, 2005. 9. Landgraf et al. Adv Therapy. 27, 796, 2010. 10. Honrado et al. J Comp Physiol A. 178, 563-570, 1996. 11. Jones et al. Nature Neuroscience. 18, 373-375, 2015. 12. Deisseroth. Scientific American. 2010. 13. Morin et al. J Biol Rhythms. 29 (5), 346-354, 2014. ACKNOWLEDGEMENTS Mice were kept in the Translational Neuroscience Lab at Bridgeside Point II, University of Pittsburgh Medical Center. Funding was provided jointly by Dr. Colleen McClung, the Swanson School of Engineering, and the Office of the Provost.


MAPPING THE EXTRACELLULAR MATRIX: AN AUTOMATED COMPARISON OF THE DISTRIBUTION OF EXTRACELLULAR MATRIX MOLECULES IN THE BRAIN Jessie R. Liu, Michel Modo Regenerative Imaging Laboratory, McGowan Institute for Regenerative Medicine University of Pittsburgh, PA, USA Email: jrl99@pitt.edu INTRODUCTION The extracellular matrix (ECM) comprises nearly 20% of the neural tissue volume, yet the distribution of ECM within normal brain tissue remains poorly documented. The ECM plays various functional roles in both the neuropil and neurovasculature. Thrombospondin, in the neuropil, for example, participates in cell signaling events such as cell migration and pre-synaptic differentiation [1]. Laminin, a main structural component of the basal lamina and mainly thought to be present in the vasculature, plays a role in some functions similar to those of thrombospondin, such as cell migration and synaptic formation [1, 2]. Additionally, laminin is important in the adhesion and attachment of neural cells [1]. Also in the neurovasculature, collagen IV is a fibrous matrix protein that can serve as an adhesive substrate for neurons, as well as play a role in axon guidance [1, 3]. With such integral roles in various functional cellular events, elucidating the normal distribution of these molecules can give insight into how they change under pathological conditions, such as stroke or Alzheimer's disease. Here, thrombospondin, collagen IV, and laminin are quantified using CellProfiler [4] in an automated, high-throughput approach, to help characterize the distribution of these molecules with major cell phenotypes in normal rat striatum. METHODS Male Sprague-Dawley rats (Taconic Labs) were perfused transcardially at 12 weeks of age with 4% paraformaldehyde and PBS prior to 30% sucrose cryoprotection. Fixed excised brains were sectioned at 50 µm on a cryostat (Leica) before immunohistochemistry.
Sections were washed 3x for 5 minutes each with PBS before overnight (~16 hours) incubation at 4 °C with primary antibodies diluted in PBS with 0.5% Triton X-100 (Sigma)—

Fox3 for neurons (1:500; ab177487, Abcam), GFAP for astrocytes (1:3000; ab4674, Abcam), RECA-1 for endothelial cells (1:100; ab9774, Abcam), collagen IV (1:150; ab6586, Abcam), and laminin (1:500; ab11575, Abcam). Corresponding Alexa Fluor secondary antibodies (1:500; Molecular Probes) diluted in PBS were applied for 1 hour at room temperature (21 °C). Secondary antibodies were removed and sections were washed 3x with PBS before counterstaining with the nuclear marker Hoechst 33342 (1 µg/ml in PBS) for 5 minutes. Sections were washed a final 3x with PBS before being coverslipped with Vectashield mounting medium (H1000, Vector Labs). Images were acquired with an AxioImager M2 microscope (Zeiss) in conjunction with Stereo Investigator software (MBF) before being processed in an automated CellProfiler pipeline. Modules in the pipeline removed background before identifying cell objects as part of certain phenotype populations. Identified phenotype populations were then analyzed by the pipeline to identify objects that also co-localized with ECM molecules. Additionally, the percent area coverage of each ECM molecule was quantified. The percentages of cell phenotype populations identified as co-localizing with ECM molecules were calculated. Averages were taken per section per animal and data were graphed using GraphPad Prism 7 as the mean ± standard deviation. Two-way ANOVA and post hoc Sidak tests were used with significance set at p<0.05. RESULTS Thrombospondin, collagen IV, and laminin all appear evenly distributed throughout the striatum (Figure 1A), as well as along the anterior-posterior axis. It was visually evident that almost all neurons and a moderate amount of astrocytes co-localized with thrombospondin and that almost all endothelial


cells co-localized with collagen IV and laminin, within the striatum. Indeed, neurons co-localized significantly more than astrocytes (p<0.0001) with an anterior to posterior average of 93.94% of neurons colocalizing with thrombospondin versus 47.18% of astrocytes (n=5) (Figure 1B). For endothelial cells, 85.47% co-localize with collagen IV (n=5), while a similar 76.58% co-localize with laminin (n=1). The percent area coverage of all three ECM molecules did not significantly differ between each other or anterior to posterior in the striatum and was, on average, 8.14% for thrombospondin, 5.73% for collagen IV, and 8.87% for laminin (Figure 1C). DISCUSSION For all three ECM molecules, the co-localizations do reflect the functional roles in which they participate. Thrombospondin, which can be produced by astrocytes [2], plays a role in many neuronal events including cell migration, synaptic formation, and synaptic plasticity and this active role is reflected in the high percentage of neurons that were identified as co-localizing with thrombospondin. Additionally, both collagen IV and laminin are known to be prominent molecules in the vasculature and the high percentage of endothelial cells that co-localize with both molecules demonstrates this association. With thrombospondin, the difference in co-localization between neurons and astrocytes may suggest that thrombospondin plays a more functional role in neurons than in astrocytes. Further, the similar

percent area coverage of each of the ECM molecules may suggest that each of the molecules has an equal spatial distribution while fulfilling different functional roles. The co-localization of thrombospondin with neurons and astrocytes and of collagen IV and laminin with endothelial cells, as well as the percent area coverage of each ECM molecule, gives insight into the compartmentalization of the ECM and its contact with cells in the normal brain. Although further analysis would be needed to understand the significance of the differences in co-localization, this automated quantification of the ECM is a rapid and unbiased method to map ECM molecules in the brain. REFERENCES 1. Roll, L. and A. Faissner. Front Cell Neurosci 8, 219, 2014. 2. Dityatev, A. et al. Trends Neurosci 33.11, 503-512, 2010. 3. Hubert, T. et al. Cell Mol Life Sci 66.7, 1223-1238, 2009. 4. Lamprecht, M.R. et al. Biotechniques 42.1, 71-75, 2007. ACKNOWLEDGEMENTS This research was partially funded by the National Institute of Neurological Disorders and Stroke (R01NS08226). Jessie R. Liu was supported by the Department of Radiology and the Swanson School of Engineering Bioengineering Department of the University of Pittsburgh.

Figure 1: A. Histology of the co-localization of thrombospondin (TSP) with neurons (Fox3) and astrocytes (GFAP) and of collagen IV and laminin with endothelial cells (RECA-1), with Hoechst labeling all nuclei. Scale bar represents 100 µm. B. Average percentage (over anterior to posterior) of neurons and astrocytes that co-localize with TSP, which significantly differed between neurons and astrocytes (p<0.0001). C. Percent area coverage of each ECM molecule, which did not significantly differ anterior to posterior. All error bars represent the standard deviation.
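The per-section co-localization percentages and per-animal averages described in the pipeline could be computed from per-object overlap flags along these lines. This is a hypothetical sketch of the bookkeeping only, not the actual CellProfiler export format:

```python
def percent_colocalized(overlap_flags):
    """Percentage of identified cell objects flagged as overlapping an ECM marker."""
    if not overlap_flags:
        return 0.0
    return 100.0 * sum(overlap_flags) / len(overlap_flags)

def per_animal_average(sections):
    """Average the per-section percentages for one animal."""
    pcts = [percent_colocalized(s) for s in sections]
    return sum(pcts) / len(pcts)

# Hypothetical flags (True = neuron co-localizes with thrombospondin)
# for two sections of one animal.
sections = [[True, True, True, False], [True, True, False, False]]
avg = per_animal_average(sections)
print(avg)  # (75.0 + 50.0) / 2 = 62.5
```

Averaging per section before averaging per animal, as the methods describe, keeps animals with different section counts from being weighted unevenly.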

