UNIVERSITY OF PITTSBURGH | SWANSON SCHOOL OF ENGINEERING
2019
UNDERGRADUATE RESEARCH PROGRAM
Welcome to the 2019 issue of the Swanson School of Engineering (SSoE) Summer Research Abstracts! Every year, the SSoE invites undergraduate students to propose a research topic of their interest to study for the summer and to identify a faculty member willing to serve as their mentor and sponsor their project. Students work on innovative research with leading scientists and engineers while spending their summer at the University of Pittsburgh or at other globally renowned institutions abroad. Several students spent their internships in Singapore at the National University of Singapore, while others were in Chile at the Universidad de Concepción or at the ORT Braude College in Karmi'el, Israel. Within the Pitt community, several departments outside of the SSoE hosted summer students: Physical Medicine and Rehabilitation, Orthopedic Surgery, Ophthalmology, Health and Rehabilitation Sciences, Psychiatry, and Cardiothoracic Surgery. It was exciting to see such a diverse range of fields in which our engineering students have become involved!

Multiple programs offer summer research opportunities to SSoE undergraduate students, the largest being the Summer Internship Program jointly sponsored by the SSoE and the Provost. This year, the program funded nearly 80 students, with generous support from the SSoE, the Office of the Senior Vice Chancellor for Research, the Office of the Vice Provost for Undergraduate Studies, Kennametal, and student mentors. Further, the SSoE study abroad program assisted students who participated in the international internships listed above. The following mentors provided support to our Summer Internship Program this year: Steven Abramowitch, Howard Aizenstein, Fabrisa Ambrosio, Heng Ban, Ipsita Banerjee, Matthew M. Barry, Mostafa Bedewy, Kurt Beschorner, Harvey Borovetz, Bryan Brown, Markus Chmielus, William Clark, Xinyan Tracy Cui, Richard Debski, Dan Ding, Susan Fullerton, Mark Gartner, Alan George, Brian Gleeson, Leanne Marie Gilbertson, Ren Hongliang, Jingtong Hu, Karl Johnson, Robert Kerestes, John Keith, Takashi D. Y. Kozai, Eon Soo Lee, Jung-Kun Lee, Sangyeop Lee, Paul Leu, Lei Li, Zhong Li, Steven R. Little, Michael T. Lotze, Louis Luangkesorn, Patrick Loughlin, Spandan Maiti, James McKone, Natasa Miskov-Zivanov, Michel Modo, Ian Nettleship, Julie A. Phillippi, Anne Robertson, Felipe Sanhueza, Anthony St. Leger, George Stetten, Nitish Thakor, Jonathan Vande Geest, Sachin Velankar, Goetz Veser, David A. Vorp, Douglas J. Weber, Justin Weinbaum, Feng Xiong, Kevin Fong Xuanyo, Judith Yang, Bo Zeng, Liang Zhan, and Jianying Zhang. We are grateful for their support and to each of the faculty mentors who opened their laboratories to the students this summer. We would also like to acknowledge the faculty reviewers from each of the six SSoE departments for their assistance in reviewing the proposals. Thank you for your time in this invaluable program!

As required by the internship program, each student submitted a poster abstract to a professional conference. A primary conference submission is Science 2019, hosted at the University of Pittsburgh every October. Bioengineering students, in particular, often submit their work to the Biomedical Engineering Society (BMES); all students were encouraged to submit their work to any professional conference(s) that their respective mentor(s) suggested. Interns and other summer students were also invited to submit abstracts for consideration for a full manuscript in Ingenium: Undergraduate Research in the Swanson School of Engineering. Ingenium provides undergraduate students with the experience of writing manuscripts, and graduate students, who form an Editorial Board, with experience in the peer-review and editing processes.

We hope you enjoy this compilation of the innovative, intellectually challenging research that our undergraduate students took part in during their summer internships at the SSoE. In presenting this work, we want to acknowledge, once again, all of the faculty mentors who made available their laboratories, their staff, and their personal time to assist the students in further igniting their interest in research.

David A. Vorp, Associate Dean for Research
Mary Besterfield-Sacre, Associate Dean for Academic Affairs
Student | Student Department | Mentor(s) | Mentor Primary Department(s) | Title
Departments are at the University of Pittsburgh unless otherwise noted. (* abstract withheld)
1
Rosh Bharthi
Bioengineering
Michael T. Lotze
TUMOR-DERIVED EXOSOMES EXPRESS p62/SQSTM-1 AND PROMOTE A TOLEROGENIC DENDRITIC CELL RESPONSE
2
Neharika Chodapaneedi
Bioengineering
Xinyan Tracy Cui
Bioengineering
ELECTROCHEMICAL DETECTION OF DOPAMINE AND MELATONIN
3
Michael Clancy
Bioengineering
Patrick Loughlin
Bioengineering
EXPLORING SENSORY-MOTOR CONTROL THROUGH VIRTUAL OBJECT MANIPULATION
4
Julie Constantinescu
Bioengineering
Spandan Maiti
Bioengineering
CORRELATIONS BETWEEN NON-INVASIVE CLINICAL MARKERS AND BIOMECHANICAL STRESS FOR PREDICTING AORTIC DISSECTION IN ANEURYSMAL PATIENTS
5
Juan Correa
Bioengineering
Douglas J. Weber
Bioengineering
FEATURE VALIDATION AND ONLINE VISUALIZATION OF FOREARM HD-EMG IN SUBJECTS WITH NEUROMUSCULAR DEFICITS
6
Srujan Dadi
Bioengineering
Nitish Thakor
Bioengineering, Johns Hopkins University
CHARACTERIZATION OF NOVEL RARE-EARTH DOPED NANOPARTICLES IN STEM CELL THERAPY FOR ISCHEMIC STROKE
7
Mitchell Dubaniewicz
Bioengineering
Takashi D. Y. Kozai
Bioengineering
*INHIBITION OF Na+/H+ EXCHANGER MODULATES MICROGLIA ACTIVATION FOLLOWING MICROELECTRODE IMPLANTATION
8
Arijit Dutta
Bioengineering
Eon Soo Lee
Mechanical and Industrial Engineering, New Jersey Institute of Technology
VISUALIZATION AND CHARACTERIZATION OF ETCHED-BASED ON-CHIP PLASMA SELF-SEPARATION
9
Larissa Fordyce
Bioengineering
David A. Vorp
Bioengineering
*MODELING WALL STRESS IN ABDOMINAL AORTIC ANEURYSMS USING MACHINE LEARNING
10
Julia Foust
Bioengineering
George Stetten
Bioengineering
STABILEYES – NEW ASSISTIVE TECHNOLOGY FOR NYSTAGMUS TO PRODUCE A STABLE REAL-TIME VIDEO IMAGE
11
Lauren Grice
Bioengineering
Michel Modo
Bioengineering
EVALUATION OF TRACTOGRAPHY TO VISUALIZE NEURONAL CONNECTIONS IN THE HUMAN TEMPORAL LOBE
12
Jocelyn Hawk
Bioengineering
Richard Debski
Bioengineering
3D RECONSTRUCTION OF THE GLENOHUMERAL CAPSULE IN PATIENTS WITH AND WITHOUT A SHOULDER DISLOCATION
13
Catherine Grace Hobayan
Bioengineering
Jianying Zhang
Chemical Engineering
EFFECT OF METFORMIN ADMINISTRATION ON TENDON WOUND HEALING
14
Vivan Hu
Bioengineering
Ren Hongliang
Bioengineering, National University of Singapore
GLIOMA SEGMENTATION USING 3D REVERSIBLE UNET
15
Patrick Iyasele
Bioengineering
Justin Weinbaum
Bioengineering
MECHANICAL CHARACTERIZATION OF SILK DERIVED VASCULAR GRAFTS FOR HUMAN ARTERIAL IMPLANTATION
16
Sneha Jeevan
Bioengineering
Nitish Thakor
Bioengineering, Johns Hopkins University
DEVELOPMENT OF SELECTIVE NEURAL ELECTRODE FOR CLOSED LOOP NEUROMODULATION OF NEUROGENIC BLADDER
17
Sara Kron
Bioengineering
William Clark
Mechanical Engineering and Materials Science
DEVELOPMENT OF A WALL-MOUNTED DYNAMOMETER FOR ARM AND SHOULDER PHYSICAL THERAPY
18
Jane Lasak
Bioengineering
Fabrisa Ambrosio
Physical Medicine and Rehabilitation
A NOVEL ROLE FOR PROCESSING BODIES IN AGE-RELATED IMPAIRMENT OF MUSCLE STEM CELL SELF-RENEWAL
19
Jinghang Li
Bioengineering
Jonathan Vande Geest
Bioengineering
*FINITE ELEMENT EVALUATION OF VARIOUS STENT MECHANICAL PROPERTIES IN A KNEE BENDING MECHANICAL ENVIRONMENT
20
Eileen Li
Bioengineering
Zhong Li
Orthopedic Surgery
ROBUST OSTEOGENESIS OF MESENCHYMAL STEM CELLS IN 3D BIOACTIVE HYDROGEL NANOCOMPOSITES REINFORCED WITH GRAPHENE NANOMATERIALS
21
Zixie Liang
Bioengineering
Bryan Brown
Bioengineering
MANUFACTURING A POLYELECTROLYTE COATING ON CONTACT LENSES USING AUTOMATED VS. MANUAL TECHNIQUES FOR THE TREATMENT OF DRY EYE DISEASE
22
Emily Lickert
Bioengineering
David A. Vorp
Bioengineering
CHARACTERIZATION OF IN VITRO PROTEIN RELEASE AND IN VIVO MACROPHAGE RECRUITMENT BY CYTOKINE RELEASING MICROSPHERES
23
Maxwell Lohss
Bioengineering
George Stetten
Bioengineering
THE CON-TACTOR: A NOVEL TACTILE STIMULATOR THAT MAKES AND BREAKS CONTACT WITH THE SKIN
24
Liam Martin
Bioengineering
Steven Abramowitch
Bioengineering
WHAT IS THE IMPACT OF PREGNANCY AND CHILDBIRTH ON THE COMBINED SACRUM/COCCYX SHAPE?
25
Nikita Patel
Bioengineering
Nitish Thakor
Bioengineering, Johns Hopkins University
APPLICATION OF LIMB CRYOCOMPRESSION TO REDUCE SYMPTOMS OF PACLITAXEL-INDUCED NEUROPATHY
26
Kevin Pietz
Bioengineering
Ipsita Banerjee
Chemical Engineering
BIOPRINTING OF iPSC-DERIVED ISLET ORGANOIDS
27
Seth Queen
Bioengineering
Jonathan Vande Geest
Bioengineering
INCREASING THE ALIGNMENT OF ELECTROSPUN BIOPOLYMER HYBRID MATERIALS FOR TISSUE ENGINEERING APPLICATIONS
28
Apurva Rane
Bioengineering
Anne Robertson
Mechanical Engineering and Materials Science
METHODOLOGY FOR COMPREHENSIVE HISTOLOGICAL ANALYSIS OF INTACT ANEURYSM SPECIMENS
29
Sreyas Ravi
Bioengineering
Spandan Maiti
Bioengineering
BIOMECHANICAL ASSESSMENT OF BICUSPID AND TRICUSPID AORTIC VALVE PHENOTYPE TO AORTIC DISSECTION RISK
30
Yannis Rigas
Bioengineering
Anthony St. Leger
Ophthalmology
ENGINEERING OCULAR PROBIOTICS TO MANIPULATE LOCAL IMMUNITY AND DISEASE
31
Yousif Shwetar
Bioengineering
Dan Ding
Health and Rehabilitation Sciences
COMPARISON OF PREDICTIVE EQUATIONS FOR RESTING ENERGY EXPENDITURE ESTIMATION IN MANUAL WHEELCHAIR USERS WITH SCI
32
McKenzie Sicke
Bioengineering
Nitish Thakor
Bioengineering, Johns Hopkins University
CHARACTERIZATION OF NOVEL RARE-EARTH DOPED NANOPARTICLES FOR CLINICAL APPLICATION
33
Thomas Skoff
Bioengineering
Howard Aizenstein
Psychiatry
UNDERSTANDING MECHANISMS OF LEARNING AND UTILIZATION OF IMPLICIT EMOTION REGULATION
34
Joseph Sukinik
Bioengineering
Harvey Borovetz
Chemical Engineering
DESIGNING A TESTING PROCEDURE RELATING EYE MOVEMENT AND POSTURAL CONTROL IN ADHD PATIENTS
35
Ashlinn Sweeney
Bioengineering
Jonathan Vande Geest
Bioengineering
HYDROGELS AS A SCAFFOLD FOR PIG LAMINA CRIBROSA CELLS
36
Patrick Tatlonghari
Bioengineering
Anne Robertson
Mechanical Engineering and Materials Science
CALCIFICATION IN CEREBRAL ARTERIES AND ITS RELEVANCE TO ANEURYSMS
37
Claire Tushak
Bioengineering
Kurt Beschorner
Bioengineering
EFFECT OF SHOE TREAD GEOMETRY AND MATERIAL HARDNESS ON SHOE-FLOOR FRICTION AND UNDER-SHOE FLUID FORCE ACROSS CONTINUED SHOE WEAR
38
Jessica Weber
Bioengineering
Douglas J. Weber
Bioengineering
THE EFFECT OF PROPRIOCEPTIVE INPUT ON BCI CONTROL
39
Benjamin Wong
Bioengineering
Nitish Thakor
Bioengineering, Johns Hopkins University
ELECTROMYOGRAPHY OF THE EXTERNAL URETHRAL SPHINCTER DURING STIMULATION OF THE PELVIC NERVE IN A RAT SPINAL CORD INJURY MODEL
40
Kevin Xu
Bioengineering
Mark Gartner
Bioengineering
A COST-EFFECTIVE CELLULAR & SATELLITE EQUIPPED WILDLIFE TRACKER
41
Eyram Akabua
Chemical and Petroleum Engineering
Goetz Veser
Chemical Engineering
INTENSIFYING OXIDATIVE DEHYDROGENATION VIA NANOSTRUCTURED CATALYSTS
42
Samantha Bunke
Chemical and Petroleum Engineering
Susan Fullerton
Chemical Engineering
*REPLACING PLASTIC PACKAGING WITH BIODEGRADABLE CALCIUM ALGINATE
43
Adam Carcella
Chemical and Petroleum Engineering
Steven R. Little
Chemical Engineering
*A CONTROLLED RELEASE RETINOIC ACID DELIVERY SYSTEM TO ENHANCE CILIOGENESIS OF THE SINONASAL EPITHELIUM
44
Claire Chouinard
Chemical and Petroleum Engineering
Felipe Sanhueza
Materials Science, Universidad de Concepción
THREE-DIMENSIONAL NICKEL FOAM AND GRAPHENE ELECTRODE IN MICROBIAL FUEL CELL APPLICATION: STUDY OF BIOFILM COMPATIBILITY
45
Ruby DeMaio
Chemical and Petroleum Engineering
Karl Johnson
Chemical Engineering
ATTEMPTING TO CHARACTERIZE HYDROXYL GROUP ROTATION ON GRAPHANOL
46
Michael Gresh-Sill
Chemical and Petroleum Engineering
Judith Yang
Chemical Engineering
DEVELOPING AUTOMATED METHODS TO EXTRACT ATOMIC DYNAMICS FROM IN SITU HRTEM MOVIES
47
Thomas Henry
Chemical and Petroleum Engineering
James McKone
Chemical Engineering
CHARACTERIZATION OF REDOX FLOW BATTERY KINETICS USING A FLOW CHANNEL ANALYTICAL PLATFORM
48
Allie Schmidt
Chemical and Petroleum Engineering
Karl Johnson
Chemical Engineering
THE EFFECT OF THE DIFFUSION OF COMPLEX MOLECULES THROUGH MOFs FOR THE CAPTURE AND DESTRUCTION OF TOXIC CHEMICALS
49
Kaitlyn Wintruba
Chemical and Petroleum Engineering
Julie A. Phillippi
Cardiothoracic Surgery
ADVENTITIAL EXTRACELLULAR MATRIX FROM ANEURYSMAL AORTA EXHIBITS LESS PERICYTE CONTRACTILITY
50
Yingqi Yi
Chemical and Petroleum Engineering
Lei Li
Chemical Engineering
WETTING TRANSPARENCY OF MONOLAYER GRAPHENE ON 4 WT% AGAROSE GEL
51
Ananya Mukherjee
Civil and Environmental Engineering
Leanne Marie Gilbertson
Civil and Environmental Engineering
THE ROLE OF OXYGEN FUNCTIONAL GROUPS IN GRAPHENE OXIDE MODIFIED GLASSY CARBON ELECTRODES FOR ELECTROCHEMICAL SENSING OF NADH
52
Evan Becker
Electrical and Computer Engineering
Natasa Miskov-Zivanov
Bioengineering
NESTED EVENT REPRESENTATION FOR AUTOMATED ASSEMBLY OF CELL SIGNALING NETWORK MODELS
53
Eli Brock
Electrical and Computer Engineering
Robert Kerestes
Electrical and Computer Engineering
EVALUATING DECARBONIZATION STRATEGIES FOR THE UNIVERSITY OF PITTSBURGH
54
Austin Champion
Electrical and Computer Engineering
Feng Xiong
Electrical and Computer Engineering
EMERGING MEMORY DEVICES: SILK BASED RRAM AND 2D MATERIAL BASED SYNAPTIC ANALOG MEMORY
55
Ryan Estatico
Electrical and Computer Engineering
Alan George
Electrical and Computer Engineering
THE SPEEDY DOCTOR
56
Richard Gall
Electrical and Computer Engineering
Alan George, Jiangyin Huang
Electrical and Computer Engineering
DESIGN OF USB TO UART PCB
57
Electrical and Computer Engineering Electrical and Computer Engineering
58
Sabrina Nguyen
Electrical and Computer Engineering
Robert Kerestes
Electrical and Computer Engineering
ANALYZING RENEWABLE GENERATION FOR THE UNIVERSITY OF PITTSBURGH
59
Matthew Reilly
Electrical and Computer Engineering
Feng Xiong
Electrical and Computer Engineering
EPITAXIAL GROWTH OF WO3 ON LaAlO3
60
Brendan Schuster
Electrical and Computer Engineering
Alan George
Electrical and Computer Engineering
TRADE STUDY OF SHREC CENTER NEXT GENERATION SPACE CO-PROCESSOR
61
Gouri Vinod
Electrical and Computer Engineering
Kevin Fong Xuanyo
CONVOLUTIONAL NEURAL NETWORKS FOR DEVICE COMPACT MODELING
62
Chen Wang
Electrical and Computer Engineering
Liang Zhan
Electrical and Computer Engineering, National University of Singapore Electrical and Computer Engineering
Jingtong Hu
FEASIBILITY STUDY OF KINETIC, THERMOELECTRIC AND RF ENERGY HARVESTING POWERED SENSOR SYSTEM
ELECTROCARDIOGRAM SYSTEM ON THE SMARTPHONE
63
Hongye Xu
Electrical and Computer Engineering
Jingtong Hu
Electrical and Computer Engineering
FEASIBILITY STUDY OF KINETIC, THERMOELECTRIC AND RF ENERGY HARVESTING POWERED SENSOR SYSTEM
64
Keting Zhao
Electrical and Computer Engineering
Jingtong Hu
Electrical and Computer Engineering
FEASIBILITY STUDY OF KINETIC, THERMOELECTRIC AND RF ENERGY HARVESTING POWERED SENSOR SYSTEM
65
Angela McComb
Industrial Engineering
Mostafa Bedewy
Industrial Engineering
LASER-INDUCED NANOCARBON FORMATION FOR TUNING SURFACE PROPERTIES OF COMMERCIAL POLYMERS
66
Sooraj Sharma
Industrial Engineering
Paul Leu
Industrial Engineering
DURABLE, ANTI-SOILING, SELF-CLEANING, AND ANTIREFLECTIVE SOLAR GLASS
67
Yiqi Tian
Industrial Engineering
Louis Luangkesorn
Industrial Engineering
THE EFFORTS OF APPLYING DAILY STRENGTH AND CONDITIONING RECORDS AND TECHNICAL TESTING DATA INTO ATHLETE INJURY PREDICTION MODELS
68
Fan Zhang
Industrial Engineering
Bo Zeng
Industrial Engineering
BRANCH AND BOUND FOR UNRESTRICTED CONTAINER RELOCATION PROBLEM
69
Megan Black
William Clark
Mechanical Engineering and Materials Science
GEMINI XPROJECT: TANKER WATER METERING
70
Joseph Damian
Mechanical Engineering and Materials Science
Jung-Kun Lee
Mechanical Engineering and Materials Science
FACILE PREPARATION OF BISMUTH VANADATE PHOTOANODES WITH DUAL OXYGEN EVOLUTION CATALYST LAYERS
71
Christine Determan
Mechanical Engineering and Materials Science
Ian Nettleship
Mechanical Engineering and Materials Science
DEVELOPMENT OF Mg-Ca-Y BASED ALLOYS FOR ENGINEERING/BIOMEDICAL APPLICATIONS
72
Brian Gentry
Mechanical Engineering and Materials Science
John Keith
Chemical Engineering
ACCOUNTING FOR LOCAL SOLVENT EFFECTS AND PROTONATION STATES IN MACROCYCLIC BINDING
73
Samuel Gershanok
Mechanical Engineering and Materials Science
Brian Gleeson
Mechanical Engineering and Materials Science
THE EFFECTS OF SURFACE DEFORMATION ON ALUMINA SCALE ESTABLISHMENT ON THE Ni-BASED ALLOY HAYNES-224™
74
Asher Hancock
Mechanical Engineering and Materials Science
Matthew M. Barry
Mechanical Engineering and Materials Science
NUMERICALLY RESOLVED RADIATION VIEW FACTORS VIA HYBRIDIZED CPU-GPU COMPUTING
75
Jeffrey Martin
Markus Chmielus
Mechanical Engineering and Materials Science
MANUAL BINDER JET PRINTER
76
Lorenzo Monteil
Mechanical Engineering and Materials Science
Markus Chmielus
Mechanical Engineering and Materials Science
EFFECT OF PRINT PARAMETERS ON DIMENSIONAL ACCURACY AND SINTERING BEHAVIOR OF BINDER JET 3D PRINTED WATER AND GAS ATOMIZED INCONEL 625
77
Tyler Paplham
Mechanical Engineering and Materials Science
Markus Chmielus
Mechanical Engineering and Materials Science
CHARACTERIZATION OF HIERARCHICAL STRUCTURES IN REMELTED NI-MN-GA SUBSTRATES FOR DIRECTED ENERGY DEPOSITION MANUFACTURING OF SINGLE CRYSTALS
78
Jerry Potts
Mechanical Engineering and Materials Science
Heng Ban
Mechanical Engineering and Materials Science
WIRELESS SIGNAL TRANSMISSION THROUGH HERMETIC WALLS IN NUCLEAR REACTORS
79
Pierangeli Rodriguez De Vecchis
Mechanical Engineering and Materials Science
Markus Chmielus
Mechanical Engineering and Materials Science
EFFECTS OF PRINTING PARAMETERS ON DENSITY AND MECHANICAL PROPERTIES OF BINDER JET PRINTED WC-Co
80
Maya Roman
Mechanical Engineering and Materials Science
William Clark
Mechanical Engineering and Materials Science
XPROJECT: TANGIBLE SECURITY FOR INTERNET OF THINGS DEVICES
81
Sarah Snavely
Mechanical Engineering and Materials Science
William Clark
Mechanical Engineering and Materials Science
XPROJECT: FURNACE DESIGN FOR EVALUATION OF HOT TOP MATERIAL
82
Benjamin Sumner
Mechanical Engineering and Materials Science
Sangyeop Lee
Mechanical Engineering and Materials Science
THE APPLICATION OF GPU COMPUTING TO NANOSCALE THERMAL TRANSPORT SIMULATIONS
83
Michael Ullman
Mechanical Engineering and Materials Science
Matthew M. Barry
Mechanical Engineering and Materials Science
ANALYTICAL MODEL VALIDATION FOR MELTING PROBE PERFORMANCE USING APPLIED COMPUTATIONAL FLUID DYNAMICS
84
Nikolas Vostal
Mechanical Engineering and Materials Science
Sachin Velankar
Chemical Engineering
ELECTROSPINNING CRIMPED MICROFIBERS FOR ARTERIAL GRAFT PRESSURE SUPPORT
85
Ji Zhou
Mechanical Engineering and Materials Science
Anne Robertson
Mechanical Engineering and Materials Science
DISTRIBUTION OF BLEBS ON INTRACRANIAL ANEURYSM WALLS
TUMOR-DERIVED EXOSOMES EXPRESS p62/SQSTM-1 AND PROMOTE A TOLEROGENIC DENDRITIC CELL RESPONSE
Rosh Bharthi, Carolina Gorgulho, Nils Ludwig, Theresa L. Whiteside, Michael T. Lotze
DAMP Laboratory, UPMC Hillman Cancer Center, Department of Immunology, University of Pittsburgh, PA, USA
Email: roshbhar@pitt.edu

INTRODUCTION
Exosomes are nanovesicles, 30-150 nm in size, that are secreted by cells. They are formed by the endosomal compartment: ESCRT proteins on endosome surfaces recognize ubiquitinated proteins and form intraluminal vesicles containing them [1]. The multivesicular body then fuses with the cell membrane, releasing the exosomes into the surroundings [1]. Tumor-derived exosomes (TEX) contain cargo that can modulate the immune response by suppressing certain cells and stimulating others. TEX hold a variety of components, including mRNA, miRNAs, signaling molecules, and cytoskeletal proteins [2]. TEX suppress CD8+ T cell proliferation and promote regulatory T cell (Treg) proliferation [2]. Additionally, TEX contain MHC molecules and cytokines that can stimulate antigen-presenting cells [2]. Dendritic cells (DCs), a type of antigen-presenting cell, ingest exosomes, with maturation regulated by the presence of DAMPs (damage-associated molecular patterns) and PAMPs (pathogen-associated molecular patterns). Tumor cells secrete DAMPs freely into tissues, which recruits and activates DCs and other antigen-presenting cells. TEX from gastric cancer cell lines contain HMGB1, a crucial DAMP molecule [3].

Autophagy is the process of degrading intracellular contents to regenerate amino acids, sugars, nucleotides, lipids, and other monomeric units [4]. This process is enhanced in cells under stress, such as tumor cells, in order to meet cellular demands. There is a direct relationship between autophagy and exosome release: increased autophagy is correlated with increased exosome production, although the mechanism is not known [5].
Additionally, TEX may contain cargo that induces increased autophagy in recipient cells [5]. Understanding the autophagy-TEX relationship is important, as it may reveal a pathway by which stressed tumor cells send signals to other tumor cells and to immune cells in the tumor microenvironment.

The purposes of this study were: 1) to determine whether autophagy induced by apoptotic deficiency was correlated with increased exosome production; 2) to identify whether the protein cargo contained notable DAMPs, autophagic proteins such as SQSTM-1/p62 (a scaffolding protein consumed during autophagy), or other signaling molecules; and 3) to evaluate whether TEX could promote DC maturation.

METHODS
Wild-type and apoptosis-defective HCT116 human colorectal tumor cells (p53 KO, PUMA KO, and BAX KO) were cultured in RPMI with 10% exosome-depleted fetal bovine serum for 72 hours. Exosomes were isolated from the supernatant of cultured cells via mini size-exclusion chromatography (mini-SEC), to limit soluble proteins in the sample, and were characterized by transmission electron microscopy (TEM) imaging, BCA protein estimation, and Western blot for the endosomal protein TSG101, which is present in nearly all exosomes. BCA protein estimation was used to quantify exosome production from the wild-type and knockout cell lines, which have been shown to exhibit increased autophagy. Western blot was used to determine whether HMGB1, SQSTM-1, LC3, and PD-L1 were present in exosomes. Monocyte-derived human immature dendritic cells (DCs) from healthy donors were co-incubated with 12 μg of exosomal protein for 24 h and 48 h. DCs were analyzed by flow cytometry for markers indicating activation, including CD14, CD25, CD80, CD86, and HLA-DR.

DATA PROCESSING
The flow cytometry data were gated for size and granularity, for living cells using Zombie viability dye, and for cells expressing CD11c, a marker for dendritic cells.
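The sequential gating described above (size/granularity, then viability, then CD11c) can be sketched with boolean masks on synthetic event data. The channel names and thresholds below are illustrative assumptions, not values from the study; real analyses would use dedicated tools such as FlowJo or a flow cytometry package.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Synthetic flow cytometry events: each key is a hypothetical channel intensity.
events = {
    "FSC": rng.normal(50_000, 15_000, n),   # forward scatter (size)
    "SSC": rng.normal(30_000, 10_000, n),   # side scatter (granularity)
    "Zombie": rng.lognormal(6, 1, n),       # viability dye (high = dead cell)
    "CD11c": rng.lognormal(7, 1, n),        # dendritic cell marker
}

# Sequential gates, mirroring the text: size/granularity -> live -> CD11c+.
# All cutoffs are illustrative only.
size_gate = (events["FSC"] > 20_000) & (events["SSC"] > 10_000)
live_gate = size_gate & (events["Zombie"] < 1_000)
dc_gate = live_gate & (events["CD11c"] > 1_500)

print(f"events passing size/granularity gate: {size_gate.sum()}")
print(f"live cells: {live_gate.sum()}")
print(f"CD11c+ dendritic cells: {dc_gate.sum()}")
```

Each gate is applied on top of the previous one, so the populations shrink monotonically, as in a conventional gating hierarchy.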
RESULTS
Exosomes were successfully isolated and characterized by TEM, BCA protein estimation, and Western blot for TSG101. BCA protein estimation indicates that there is no significant difference in exosome production among the cell variants (Figure 1). Western blot indicates that SQSTM-1 was present in TEX from all cell variants, but HMGB1, LC3, and PD-L1 were not detected. Finally, the flow cytometry data show that TEX cause a less tolerogenic response in DC maturation than whole tumor cells from each cell group. CD80 and CD86 expression in the exosome-treated groups was not as high as in the LPS-treated positive control group, whereas expression of both molecules in tumor cell-treated DCs was much lower (Figure 2).

Figure 2: Maturation of DCs by TEX. Flow cytometry data indicate the percent of the population expressing DC markers. "WT," "p53 KO," "PUMA KO," and "BAX KO" refer to DCs co-incubated with whole tumor cells from the respective cell line. Data are presented as mean ± SD; n = 3. ANOVA was run with a Tukey post-test. The orange line shows the mean of the control group.
Figure 1: BCA protein estimation data show the concentration of protein in isolated TEX samples, normalized per million cells. Error bars represent SEM; n = 4. Data were analyzed by ANOVA.
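The figure captions report one-way ANOVA with a Tukey post-test. A minimal sketch of that analysis on synthetic data follows; the group means and sample sizes are illustrative, not measurements from the study, and `scipy.stats.tukey_hsd` requires SciPy 1.11 or later.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(1)
# Hypothetical marker-expression percentages for four groups (n = 3 each);
# the numbers are illustrative, not data from the study.
wt = rng.normal(40, 3, 3)
p53 = rng.normal(42, 3, 3)
puma = rng.normal(41, 3, 3)
bax = rng.normal(60, 3, 3)  # deliberately shifted group

# One-way ANOVA across all groups.
f_stat, p_value = f_oneway(wt, p53, puma, bax)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post-test: all pairwise comparisons with family-wise error control.
res = tukey_hsd(wt, p53, puma, bax)
print(res)
```

The ANOVA tests whether any group differs; the Tukey post-test then identifies which pairs differ while controlling the family-wise error rate.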
DISCUSSION
Apoptosis-deficient cells with increased autophagy do not increase exosome secretion. HCT116 TEX contain p62/SQSTM-1 and have a modest suppressive role, though less than that observed with whole tumor cells. HMGB1 was not detected in our TEX, although it has been reported in TEX from gastric cancer cell lines; this could be because we isolated TEX via mini-SEC, limiting the presence of soluble proteins in the media that may permeate others' samples [3]. Future steps involve further characterizing the cargo carried by TEX and determining whether other autophagic proteins or DAMPs are released from colorectal and renal cell lines through this process. Additionally, TEX from other tumor cell lines and from clinical samples will be tested to observe their cargo and how they modulate the immune response.

REFERENCES
1. Edgar. BMC Biol 14:46, 1-7, 2016.
2. Whiteside. Future Oncol 13:28, 2583-2592, 2017.
3. Xu et al. J Cell Sci 131:15, 1-11, 2018.
4. Weiner, Lotze. NEJM 366:12, 1156-1158, 2012.
5. Zhang et al. Mol Cancer 17:146, 1-16, 2018.

ACKNOWLEDGEMENTS
Funding was provided by the Swanson School of Engineering, the DAMP Laboratory at UPMC Hillman Cancer Center, and the Office of the Provost at the University of Pittsburgh.
ELECTROCHEMICAL DETECTION OF DOPAMINE AND MELATONIN
Neha Chodapaneedi
NTE Laboratory, Department of Bioengineering, University of Pittsburgh, PA, USA
Email: nbc7@pitt.edu, Web: https://www.engineering.pitt.edu/CUI/

INTRODUCTION
Neurological disorders greatly impact patients' quality of life and are the leading cause of disability. These disorders often involve abnormalities in electrical and biochemical signaling that arise from neurotransmitter imbalances. For example, Parkinson's disease is caused by a lack of dopamine (DA), a neurotransmitter that plays a major role in regulating the body's motor system. Melatonin (MT) is an electroactive hormone whose functions range from regulating the circadian rhythm in the brain to providing neurorestorative effects in treating central nervous system injuries [1,2] and neuroprotection in diverse models of neurodegeneration, including Parkinson's disease [3,4]. To fully understand and treat neurological diseases, monitoring the concentrations of neurotransmitters and hormones in the brain is critically important. This work presents two optimized square wave voltammetry (SWV) electrochemical methods for the detection of: (1) MT, using carbon fiber microelectrodes (CFEs), both bare and PEDOT/CNT coated, and (2) DA, using a microelectrode array (MEA) with graphene contacts.

METHODS
The electrochemical properties of the CFEs were evaluated in vitro in artificial cerebrospinal fluid (aCSF) by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). During the CV tests, the working electrode potential was swept between -0.6 and 0.9 V vs. Ag/AgCl at three scan rates: 100 mV/s, 1 V/s, and 400 V/s. During the EIS measurements, a sine wave (10 mV RMS amplitude) was superimposed onto the open circuit potential while varying the frequency from 1 to 10^5 Hz. Both coated and uncoated CFEs were tested.
The CFEs were electrochemically coated starting from a freshly prepared PEDOT/CNT aqueous suspension of 1 mg/mL COOH-CNTs containing 0.01 M 3,4-ethylenedioxythiophene monomer. The electrochemical deposition was carried out in potentiostatic mode (75 mC/cm^2 charge cutoff), with the polymerization potential set to 0.9 V vs. the Ag/AgCl reference electrode.

SWV was used to electrochemically detect MT concentrations both in vivo and in vitro. The SWV waveform was repeatedly applied from 0.4 V to 0.9 V at 10 Hz, with a 50 mV pulse amplitude and a 5 mV step height, every 15 seconds; the potential was held at 0.4 V between scans. In vitro MT calibrations were performed using freshly prepared MT solutions dissolved in aCSF over a concentration range of 0.1 to 5 μM. Electrode sensitivity was determined from the linear slope of the calibration plot relating the MT peak current at 0.75 V to MT concentration. The in vivo performance of the CFEs was determined through acute surgical experiments conducted in the visual cortex of isoflurane-anesthetized (2% by volume) mice (weight 28-35 g; The Jackson Laboratory, Bar Harbor, ME). All animal procedures were performed according to protocols approved by the University of Pittsburgh Institutional Animal Care and Use Committee. We recorded the response to intraperitoneal (i.p.) injections of 30 mg/kg and 180 mg/kg doses of MT. In vivo MT concentration was determined for all in vivo experiments by converting the SWV peak current to MT concentration using the post-calibration curve of electrode sensitivity.

In addition to using CFEs to detect MT, an MEA with 32 graphene contacts was used to detect DA using SWV with a modified waveform. The MEA was fabricated by the Cohen-Karni Lab at Carnegie Mellon University. The electrode size is 50 × 50 μm^2, which was determined to be optimal for DA detection. The SWV waveform was repeatedly applied from -0.2 V to 0.4 V at 10 Hz, with a 50 mV pulse amplitude and a 5 mV step height, every 15 seconds; the potential was held at 0 V between scans.
DA sensitivity was tested in a contaminant solution of ascorbic acid (200 μM), uric acid (10 μM), and DOPAC (10 μM), over a concentration range of 0.01 to 5 μM.
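As a rough illustration, the staircase-plus-pulse potential sequence described for MT (0.4 V to 0.9 V, 5 mV steps, 50 mV pulse amplitude) can be generated as follows. This is a sketch of a generic SWV waveform under the stated parameters, not the lab's actual instrument code.

```python
import numpy as np

def swv_waveform(e_start, e_end, step=0.005, pulse=0.050):
    """Generate a square-wave voltammetry potential sequence (in volts).

    Each staircase step contributes one forward (staircase + pulse/2)
    and one reverse (staircase - pulse/2) half-cycle sample.
    """
    staircase = np.arange(e_start, e_end + step / 2, step)
    forward = staircase + pulse / 2
    reverse = staircase - pulse / 2
    # Interleave forward and reverse half-cycles into one sequence.
    seq = np.empty(2 * staircase.size)
    seq[0::2] = forward
    seq[1::2] = reverse
    return seq

# MT waveform from the text: 0.4 V -> 0.9 V, 5 mV steps, 50 mV pulses.
wave = swv_waveform(0.4, 0.9)
print(f"{wave.size} samples, range {wave.min():.3f} V to {wave.max():.3f} V")
```

The measured current difference between the forward and reverse half-cycles at each step is what produces the peak-shaped SWV signal.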
1.5
RESULTS We showed that PEDOT/CNT coatings drastically decrease impedance of the CFEs in all of the frequency ranges of EIS (Figure 1A). We optimized an in vitro SWV waveform in order to detect MT and obtain a linear calibration curve (Figure 1B). The calibration curves reported in Figure 1B show a higher MT sensitivity for coated CFEs, but with a higher variability. The in vitro SWV MT calibrations conducted on bare and coated CFEs in aCSF reveal clear MT peaks, respectively at 0.75V and 0.58V (Figure 1C/D). 10
5
10
4
B) 3.5 3.0
bare CFE PEDOT-CNT coated CFE
10
3
10
2
Current (nA)
|Z| (kW)
A)
10
1
10
2
10
3
4
10
y=0.574+0.0572
1.5 1.0
0
5
10
C)
1
2
3
4
5
MT concentration (µM
Frequency (Hz)
0.00 0.6 0.7 0.8 Potential ( V vs Ag/AgCl)
0.0
0.75
Current (nA)
Current (nA)
1.0
0.50 0.25
0.5
0.00 0.6
0.7 0.8 Potential (V vs Ag/AgCl)
0.0
30mg/kg MT -0.5
0.5
0.6
0.7
0.8
180 mg/kg MT 0.9
Potential (V vs Ag/AgCl)
-0.5
0.5
0.6
0.7
0.8
0.9
Potential (V vs Ag/AgCl)
Figure 2: A/B) SWV in response to respectively 30mg/kg (A) and 180 mg/kg (B) i.p. MT injections in vivo. In inset, the baseline subtracted SWV reveal the MT peaks at 0.75V, proportional to the corresponding concentration.
Furthermore, we determined an optimal SWV waveform able to detect small DA concentration in a contaminant solution of ascorbic acid (200 μM), uric acid (10 μM), and DOPAC (10 μM) using graphene based MEA. The SWV data collected in vitro in aCSF revealed a clear DA peak at 0.18V (Figure 3A). The calibration curve in figure 3B presents a linear trend only with the lower concentrations. A)
B)
2.0
0.0 0
0.5
B)
0.05
bare CFE PEDOT-CNT coated CFE
y=0.1079+0.0176
10
1.0
2.5
0.5
10 1
A) Current (nA)
DATA PROCESSING MATLAB scripts allowed us to calculate the calibration curves. MT and DA peaks were isolated from the nonfaradaic background current for each SWV scan by subtracting a modelled polynomial baseline. OriginPro 8.5 was used to do statistical analysis and plot the SWV data.
Current (nA)
1.5
D)
Figure 1: A) Impedance magnitude comparison of bare vs. coated CFE. B) Calibration curves of bare vs. coated CFEs in 0.1-5um MT concentration range. C/D) Average (n = 5) in vitro SWV MT calibrations conducted on bare CFEs (C) and PEDOT/CNT coated CFEs (D) in aCSF.
We were able to clearly detect MT in vivo after i.p. injection of 30mg/kg (Figure 2A) and 180kg/mg (Figure 2B) MT concentrations. The maximum concentrations detected in vivo are 0.09μM and 4.3μM for 30mg/kg and 180 mg/kg respectively.
Figure 3: A) In vitro SWV DA calibration conducted on the array reveals a clear DA peak. B) Calibration curves over the 0.1–5 µM concentration range for the different channels tested.
DISCUSSION We optimized two square wave voltammetry (SWV) electrochemical methods for the detection of (1) MT, using bare and PEDOT/CNT-coated carbon fiber microelectrodes (CFEs), and (2) DA, using microelectrode arrays (MEAs) with graphene contacts. To the best of our knowledge, we detected MT in vivo for the first time, using the optimized waveform in combination with CFEs. We were also able to detect a low concentration of DA in vitro, in the presence of interferents, using a 50 × 50 µm² graphene-based MEA. REFERENCES 1. M. J. Jou et al. J. Pineal Res. 2004; 37:55–70. 2. A. Golabchi et al. Biomaterials 2018; 180:225–239. 3. R. Sharma et al. Brain Research 2006; 1068:230–236. 4. J. C. Mayo et al. Endocrine 2005; 27:169–178. ACKNOWLEDGEMENTS I would like to thank Dr. Tracy Cui, Dr. Elisa Castagnola, the Swanson School of Engineering, and the Office of the Provost.
Exploring Sensory-Motor Control Through Virtual Object Manipulation Michael Clancy, Sudarshan Sekhar, Aaron Batista, Patrick Loughlin Department of Bioengineering, University of Pittsburgh Contact Author Email: M.Clancy@pitt.edu Introduction: A challenge in brain-computer interface design, and neuroscience in general, is understanding how the brain decodes sensory information about its surroundings and in turn generates neural signals that drive movement. To address this challenge, the standard approach has been to have subjects perform simple movements in virtual environments, such as reaching for, or tracking, an object while the activity of small groups of neurons is recorded. One such virtual task is the Critical Stability Task (CST) [Quick et al., J. NeuroPhys. 2018], wherein the subject must use ongoing sensory feedback to generate continuous hand movements that stabilize an inherently unstable system. Thus, the CST enforces a tight link between sensory information and motor response. The two aims of the current study were: (1) to implement and test modifications to the CST, and (2) to model and simulate the behavioral response of Rhesus monkeys performing the CST as a first step towards system identification of their underlying sensory-motor control. Methods Aim 1: Extensions of the CST The CST is a virtual object manipulation task wherein subjects must make hand movements in order to keep a cursor from drifting left or right away from screen-center. It is implemented via the discrete-time algorithm [Quick et al., J. NeuroPhys. 2018]

y[n+1] = a y[n] + (a − 1) x[n]    (1)

where x[n] and y[n] are the hand and cursor position, respectively, at time instant n, y[n+1] is the cursor position to be rendered at the next update, and a > 1 is a parameter that determines the difficulty of the task: larger values cause the cursor to diverge faster.
In order for the cursor not to move (i.e., such that y[n+1] = y[n]), the subject must make hand movements that are exactly equal and opposite to the current cursor position, x[n] = −y[n]. This ideal performance is not possible because of sensory-motor noise and delays, and hence subjects will lose control of the cursor at some value of the parameter a, called the "critical instability value" (CIV). In theory, only information about the position of the cursor is necessary to control the CST in this instantiation. Variations of the CST that require additional sensory information about the cursor, such
as its position and velocity, can be developed by increasing the "order" of the system. For example, in contrast to the first-order version of the CST in Eq. (1), the second-order implementation given by

y[n+1] = 2a y[n] − a² y[n−1] + (a − 1)² x[n]    (2)

cannot be stabilized by hand movements that depend only on the current cursor position, but rather requires hand movements given by

x[n] = K0 y[n] + K1 y[n−1]    (3)

in order for the cursor not to move, where K1 = a²/(a − 1)² and K0 = (1 − 2a)/(a − 1)². Hence, in this second-order version of the CST, hand movements depend on the current and prior cursor positions, which is analogous to requiring information about cursor position and velocity. In the first aim of this study, we implemented higher-order versions of the CST, to allow future experiments that investigate the sensory requirements necessary to perform the task. Aim 2: Simulation and Modeling The second aim of this study was to simulate the behavioral response of Rhesus monkeys performing the (first-order) CST. With reference to the feedback control model diagrammed in Fig. 1, the ultimate goal is to quantify the monkey's underlying sensory-motor control via system identification methods to compute the motor and sensory functions, Hm and Hs, respectively.
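The steady-state gains above can be verified numerically in a few lines (a sanity-check sketch, not part of the study's code; the scalar values of a and y below are arbitrary): with x[n] = −y[n] the first-order cursor of Eq. (1) stays put, and with the gains K0 and K1 of Eq. (3) so does the second-order cursor of Eq. (2).

```python
a = 1.5          # difficulty parameter (any a > 1 works here)
y = 0.7          # arbitrary steady cursor position

# First-order CST (Eq. 1): x[n] = -y[n] holds the cursor fixed.
y_next = a * y + (a - 1) * (-y)
assert abs(y_next - y) < 1e-12

# Second-order CST (Eq. 2) with the stabilizing gains of Eq. (3),
# evaluated at steady state, where y[n] = y[n-1] = y.
K1 = a**2 / (a - 1)**2
K0 = (1 - 2*a) / (a - 1)**2
x = K0 * y + K1 * y
y_next = 2*a*y - a**2*y + (a - 1)**2 * x
assert abs(y_next - y) < 1e-12
```

The check works because K0 + K1 = (1 − 2a + a²)/(a − 1)² = 1, so the ideal hand position exactly cancels the cursor's drift.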
Figure 1. A CST feedback control model. The blue pathway designates motor control based on sensory information (red) about the cursor. A future goal is to identify the unknown motor and sensory functions, Hm and Hs, by introducing motor and/or sensory perturbations (dm and ds).
Simulations of CST performance were generated by a neural network designed using the Deep Learning and Machine Learning toolboxes provided by MathWorks. A Long Short-Term Memory (LSTM) neural network was trained on Rhesus monkey behavioral data (hand and cursor motion) obtained during CST trials (140 training trials). At each time
increment, the network was trained to predict the subject's hand position at the next time increment, given information about the current cursor and hand. The cost function was the root mean squared error (RMSE) between the actual and predicted hand position. Different LSTM network architectures and types of training data (i.e., position only vs. position + velocity) were investigated. The best network contained one LSTM layer with ten neurons, a dropout rate of 45%, and used the current hand position along with the current cursor position and velocity in order to predict the next hand position. After training, the optimized network was tested on new data (37 trials) to assess its ability to predict the next hand position at each time increment. The network was also tested on its ability to perform the CST in place of a live subject; for these tests, the network was seeded with an initial hand position (i.e., hand position at the start of the trial) drawn from a distribution created from subject data. Results Aim 1: Fig. 2(a) shows the success rate as a function of increasing parameter value (i.e., increasing task difficulty) for a human subject (MC) performing the first-order CST, while 2(b) shows the success rate for the second-order CST. In both cases, performance degrades as the task becomes more difficult; the value at which the subject fails 50% of the time on average (i.e., the CIV) is marked by 'x' in each plot.
Figure 2. Success rate of the subject over a range of parameter values for the (a) first-order CST and (b) second-order CST. The best-fit curve (blue) is sigmoidal in (a) and linear in (b). The critical instability value (CIV), marked by 'x' and displayed in the top right corner, is the value at which the subject can successfully complete the CST 50% of the time.
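The shape of such a success-rate curve can be reproduced qualitatively with a toy simulation (an illustrative sketch, not the study's model: the one-step delay, noise level, escape threshold, and trial length are all assumptions): a proportional controller with delayed, noisy feedback stabilizes Eq. (1) at small a and fails as a grows.

```python
import numpy as np

def run_trial(a, n_steps=500, noise=0.05, limit=10.0, rng=None):
    """One first-order CST trial (Eq. 1) under delayed, noisy feedback."""
    rng = np.random.default_rng(0) if rng is None else rng
    y = np.zeros(n_steps + 1)
    y[0] = 0.1                                     # small initial offset
    for n in range(n_steps):
        seen = y[n - 1] if n >= 1 else y[0]        # one-update sensory delay
        x = -seen + noise * rng.standard_normal()  # noisy corrective movement
        y[n + 1] = a * y[n] + (a - 1) * x          # CST update, Eq. (1)
        if abs(y[n + 1]) > limit:
            return False                           # cursor escaped: failure
    return True

def success_rate(a, trials=50, seed=42):
    rng = np.random.default_rng(seed)
    return np.mean([run_trial(a, rng=rng) for _ in range(trials)])

easy, hard = success_rate(1.2), success_rate(2.5)
```

With a one-step delay the closed loop's unstable root is a − 1, so this toy subject's CIV sits near a = 2; sweeping a traces out a success-rate curve qualitatively like panel (a).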
Aim 2: Fig. 3(a) shows the cursor and hand position of a subject from one trial of the (first-order) CST, along with the hand position predicted by the neural network at each subsequent time increment, given the current cursor and hand information. The RMSE between the actual and predicted hand positions was 0.38 for this trial; the overall RMSE of the 37 test trials was 0.37 +/- 0.17. Fig. 3(b) shows three different trials of the neural network performing the CST.
Figure 3. (a) Cursor (red) and hand (green) position of a subject during one trial of the first-order CST, along with the predicted hand position (black dotted). (b) Three different trials of the neural network performing the CST.
Conclusion: A benefit of the CST is that successful task execution necessitates the use of sensory feedback to generate appropriate hand movements. In addition, it can probe a subject's skill and behavior over a range of task difficulty that can be experimentally controlled. Before this study, only a first-order version of the CST had been used (Eq. (1)), which in theory requires information about only the cursor position in order to succeed at the task. Higher-order system dynamics require additional sensory information, and hence place a greater demand on sensory-motor performance. In Aim 1 of this study, higher-order versions of the CST were successfully implemented and tested (Eq. (2) & Fig. 2). We found that, indeed, the 2nd-order version was more challenging than the 1st-order version, as evidenced by the much smaller parameter values for which the subject was successful (Fig. 2). The higher-order CST opens up avenues for future experimental investigation into the sensory information utilized by subjects to perform the task. The neural network simulations conducted under Aim 2 provided a first step toward system identification of the motor and sensory pathways during CST performance. Interestingly, while in theory the first-order CST requires sensory information only about the position of the cursor, the best CST performance by a neural network was achieved utilizing information about the cursor position and velocity, as well as the previous hand position. Future research will explore whether subjects utilize similar sensory information and whether this changes with task difficulty. Acknowledgements: Funding provided by the University of Pittsburgh Swanson School of Engineering, and NIH R01 HD090125.
CORRELATIONS BETWEEN NON-INVASIVE CLINICAL MARKERS AND BIOMECHANICAL STRESS FOR PREDICTING AORTIC DISSECTION IN ANEURYSMAL PATIENTS Julie A. Constantinescu Computational Solid Biomechanics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: jac395@pitt.edu INTRODUCTION Type A Aortic Dissection (TAAD) is a highly lethal disease that starts from an intimal tear in the inner layer of an aneurysmal ascending aorta and delaminates the aortic wall. Currently, the clinical indicator for predicting dissection risk is a maximum orthogonal diameter of the ascending aorta greater than 5.5 cm. However, clinical records revealed that as many as 62% of patients with aortic diameters less than 5.5 cm experienced a dissection event [1]. Therefore, an improved metric for predicting dissection is needed. From a biomechanics point of view, dissection involves biomechanical failure of the aortic wall, which occurs when stress exceeds strength. The objective of my work is to identify possible correlations between normalized aortic wall stress and simple patient-specific clinical data, such as height, body mass index (BMI), and body surface area (BSA). Identification of strong and statistically significant correlations will lead toward the establishment of a biomechanically informed clinical metric to predict patient-specific dissection risk. METHODS Ascending aortic aneurysmal patients with bicuspid aortic valves (BAV, n=19) and tricuspid aortic valves (TAV, n=20) and patients who experienced aortic dissection (TAAD, n=14) were chosen as the study population, each with at least three longitudinal CT scans. From these scans, solid models of the aorta were reconstructed, smoothed, and meshed for finite element analysis.
Aortic models were loaded to an internal pressure of 200 mmHg using a custom membrane finite element program with cohort-specific stiffness parameters [2] and the 95th percentile longitudinal (LONG) and circumferential (CIRC) wall stress was extracted. The 95th percentile stress was chosen to remove high stress artifacts near boundaries but still capture
the high stress magnitude and location. The 95th percentile stress was normalized by three different measurements of aortic diameter at diastole: one from clinical records (ECHO), one as measured by the surgeon (AoD), and one derived from the CT scan (CT). These diameters represent increasingly accurate and patient-specific measurements. Aortic Height Index (AHI) was defined as aortic diameter, in mm, as measured by the echocardiogram, divided by the patient's height in cm. BMI Index was defined as aortic diameter, in mm, divided by the patient's BMI, in kg/m². Aortic Index (AI) was defined as aortic diameter, in mm, divided by the patient's BSA, in m². SPSS (IBM, New York, US) was used to perform the statistical analysis. RESULTS AHI, BMI Index, and AI were plotted against ECHO, AoD, and CT LONG and CIRC stresses and separated by the BAV, TAV, and TAAD populations. Across all three indices, there were no statistically significant correlations with any of the normalized LONG and CIRC stresses for the TAV population. As such, the following figures only include the statistically significant correlations. In Figure 1a, the ECHO LONG stress of the TAAD population is moderately to strongly correlated with AHI (r = -0.792, ρ = -0.452). Figure 1b reveals that the ECHO LONG stress of the BAV population is moderately correlated with BMI Index (r = 0.409, ρ = 0.453). For AHI (Fig. 1c), the ECHO CIRC stress of the TAAD population shows a strong correlation (r = -0.662, ρ = -0.614).
Figure 1: Comparison of ECHO LONG and CIRC aortic wall stress with AHI for the TAAD cohort (a,c) and with BMI Index for the BAV cohort (b). r and ρ denote Pearson's and Spearman's correlation coefficients, respectively, and n represents the number of patients in each cohort.
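The index definitions and the two correlation statistics can be sketched as follows (illustrative random data only; the cohort values are made up, and the study used SPSS rather than Python — Spearman's ρ is computed here as Pearson's r on ranks):

```python
import numpy as np

# Hypothetical cohort (n = 14, TAAD-sized); all values are illustrative.
rng = np.random.default_rng(7)
n = 14
diameter = rng.uniform(40, 55, n)     # aortic diameter (mm, from echo)
height = rng.uniform(155, 190, n)     # patient height (cm)
bmi = rng.uniform(20, 35, n)          # body mass index (kg/m^2)
bsa = rng.uniform(1.6, 2.3, n)        # body surface area (m^2)
stress = rng.uniform(100, 300, n)     # 95th-percentile wall stress

ahi = diameter / height               # Aortic Height Index
bmi_index = diameter / bmi            # BMI Index
ai = diameter / bsa                   # Aortic Index

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman's rho is Pearson's r computed on the ranks of the data
    # (valid here because continuous draws have no ties).
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

r, rho = pearson(ahi, stress), spearman(ahi, stress)
```

On real cohort data one would also compute p-values (e.g., via `scipy.stats.pearsonr` and `spearmanr`) before calling a correlation statistically significant.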
For the BAV population, both the AoD LONG (r = 0.452, ρ = -0.407) and CIRC stresses (r = -0.462, ρ = 0.468) are moderately and negatively correlated with AHI (Fig. 2a,d). For the TAAD population, we see a moderate and positive correlation between AoD LONG stress and BMI Index (r = 0.426, ρ = 0.353) and a strong and positive correlation between AoD LONG stress and AI (r = 0.722, ρ = 0.629) (Fig. 2b,c). Likewise, for the TAAD population, in Figures 2e and 2f we see moderate to strong correlations between AoD CIRC stress and BMI Index (r = 0.452, ρ = 0.468) and between AoD CIRC stress and AI (r = 0.622, ρ = 0.406).
Figure 3: Comparison of CT LONG aortic wall stress with AI for the TAAD cohort (a) and CT CIRC aortic wall stress with AHI for the TAAD cohort (b).
DISCUSSION Taken together, these results indicate that there are moderate and negative correlations between AHI and ECHO LONG and CIRC wall stress for the TAAD population, and between AHI and AoD LONG and CIRC wall stress for the BAV population. Additionally, there is a moderate and positive correlation between CT CIRC stress and AHI for the TAAD population. With respect to BMI Index, there is a moderate correlation with ECHO LONG stress for the BAV population and a moderate correlation with AoD LONG and CIRC stresses for the TAAD population. Finally, the AoD LONG and CIRC stresses of the TAAD population are strongly correlated with AI, and the CT LONG stress of the TAAD population is strongly correlated with AI as well. Our analysis reveals that, of the three methods of normalizing stress, the AoD LONG and CIRC stresses yielded the most instances of statistically significant correlations with the clinical markers studied. Future research will include additional non-invasive patient-specific stiffness data and more patients from each population.
Figure 2: Comparison of AoD LONG and CIRC aortic wall stress with AHI for the BAV cohort (a,d), BMI Index for the TAAD cohort (b,e) and AI for the TAAD cohort (c,f).
In Figure 3a, the CT LONG stress of the TAAD population is positively and moderately to strongly correlated with AI (r = 0.535, ρ = 0.494). In Figure 3b, we see a similar, positive and moderate correlation between CT CIRC stress and AHI (r = 0.338, ρ = 0.474).
REFERENCES 1. Erbel, R et al. Eur Heart J 35, 2873-2926, 2014. 2. Pichamuthu, J et al. Ann Thorac Surg 96, 2154, 2013. ACKNOWLEDGEMENTS Thank you to my lab mentor Ronald Fortunato for his guidance and assistance this summer. Summer research funding was provided by the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh.
FEATURE VALIDATION AND ONLINE VISUALIZATION OF FOREARM HD-EMG IN SUBJECTS WITH NEUROMUSCULAR DEFICITS J. Sebastian Correa, Jordyn Ting, Devapratim Sarma, Douglas J. Weber Rehab Neural Engineering Lab, Department of Bioengineering University of Pittsburgh, PA, USA Email: jsc82@pitt.edu, Web: http://www.rnel.pitt.edu/ INTRODUCTION Spinal cord injuries (SCI) damage the corticospinal tract, disrupting the body's motor and sensory systems and leaving survivors reliant on caretakers to perform activities of daily living. Damage to the corticospinal tract can make muscle fibers discharge spontaneously, or not at all, because the signal from the brain to the muscle fibers is hindered. These electrical potentials can be measured by electromyography (EMG) through electrodes on the surface of the skin. High-density EMG (HD-EMG) electrode sleeve arrays have been developed to measure EMG using a large number of tightly spaced electrodes, yielding high spatial resolution of EMG signals across one or more muscles (Fig. 1). The HD-EMG sleeve's high-density coverage allows for the assessment of activity across the entire forearm. This is important when studying pathological changes in the EMG of SCI patients, as it allows for a more accurate judgment of the deficits in the EMG. To use these signals to control assistive devices, a correct processing technique must be established to extract the EMG features. These processing steps are critical for accurate control of assistive devices, which could significantly improve the lives of SCI patients. Commonly evaluated features include the integral of average value, the variance, the root-mean-square (RMS), and the number of zero crossings [1]. We analyzed different filtering parameters to determine the appropriate processing steps before extracting EMG features. Filtering EMG data involves eliminating noise from different frequency components to reveal activity-related signals. The signal-to-noise ratio (SNR) of the EMG data can be analyzed to measure the quality of the EMG signals before calculating these features. SNR was evaluated across multiple band-pass filtering parameters in SCI and able-bodied datasets to determine the ideal settings. METHODS Two individuals with cervical spinal cord injuries and one able-bodied subject participated in this study. Participants were seated in front of a monitor and presented with cues instructing them to attempt one of four hand movements in a random order: the cylindrical grasp, tripod grasp, lateral grasp, and open hand. Muscle activity was recorded from the forearm using the HD-EMG sleeve during the experiment. The raw data were recorded at a 10 kHz sampling rate using 150 monopolar channels (Intan Technologies, Los Angeles, CA). Raw EMG data collected from the electrode sleeve array were processed in MATLAB (MathWorks, Natick, MA) before extracting any features (Fig. 2). In post-processing, the monopolar signals at adjacent electrodes were differenced to remove common noise across the electrode array. One hundred thirty-five bipolar signals were used for the remaining analysis after removal of the most proximal row of electrodes. The bipolar signals were filtered using a 4th-order Butterworth filter between varying frequencies. The data shown in Figure 4 were filtered with the parameters of 30-410 Hz. Various EMG features were extracted from the filtered signals consecutively in 100 ms bins; the RMS was used in this analysis. The SNR of the EMG data was calculated to measure the quality of the data, by taking the ratio of the average of the filtered and rectified data in the active period to that of the rest period. The active (red) and rest (white) periods are shown in the plots in Fig. 2. The SNR for all 135 bipolar channels was calculated and averaged to yield one SNR value
Figure 2. Procedure to process raw data into an RMS feature set by first differencing raw monopolar signals to remove noise and then filtering. Data shown comes from a single channel of the sleeve array for a patient with SCI (top) and an able-bodied participant (bottom).
across varying band-pass filter parameters. The low-frequency cutoff varied from 20-100 Hz in 10 Hz increments, while the high-frequency cutoff varied from 400-500 Hz in 10 Hz increments. RESULTS SNR values were dramatically lower in SCI than in able-bodied participants, and this trend held across all four hand movements (Fig. 3). The filter parameters of 30-410 Hz were found to give the best SNR for the EMG data. The low-frequency cutoff of 30 Hz proved optimal across all four hand movements. The high-frequency cutoff of 410 Hz was optimal only for the three grasp movements; the hand-open movement was optimized at a high cutoff of 440 Hz, where the SNR was 1.1847. The 410 Hz cutoff was still chosen as the optimal setting, as it provided the highest SNR in the majority of hand movements. DISCUSSION The large difference in SNR between the SCI and able-bodied participants is likely due to damage of the corticospinal tract in SCI patients, which weakens their forearm EMG signals. This damage may also cause spontaneously
discharging muscle fibers, which impede the quality of the EMG data and therefore the SNR. This effect was most notable in the tripod grip, where the SNR of the SCI patient was only about 25% of the able-bodied SNR. This may be because the tripod grasp requires the most control over the individual fingers compared to the other grips, and fine control may be heavily affected after SCI. Differences in the hand and finger control demanded by each grasp also led the two groups to differ in which of the four movements produced the lowest SNR: while the hand-open movement had the lowest SNR for able-bodied participants, the tripod grasp had the lowest among SCI participants. The ideal setting of 30-410 Hz used in the filter for SCI data will need to be tested in further work to show that it is useful for controlling assistive devices. If this technology is to be used to enhance the lives of those affected by SCI, the most accurate method of extracting EMG features will be necessary. REFERENCES 1. Zardoshti-Kermani et al. IEEE Transactions on Rehabilitation Engineering 3, 324-333, 1995. ACKNOWLEDGEMENTS I would like to thank Dr. Douglas Weber, the Swanson School of Engineering, the Office of the Provost, Jordyn Ting, Devapratim Sarma, Caroline Schoenewald, and Eli Sigman for their contributions to this paper.
CHARACTERIZATION OF NOVEL RARE-EARTH DOPED NANOPARTICLES IN STEM CELL THERAPY FOR ISCHEMIC STROKE Srujan Dadi, McKenzie Sicke, Yuan Jun, and Dr. Nitish Thakor Singapore Institute for Neurotechnology, Department of Bioengineering National University of Singapore, Singapore Email: SRD76@pitt.edu INTRODUCTION Nanoparticles (NPs) are small structures, typically under 100 nm, with a wide range of clinical applications in real-time, noninvasive medical imaging and diagnostics [1]. Recently developed rare-earth doped nanoparticles (RENPs) can be modified for specific uses such as fluorescence, cell targeting, photoacoustic (PA) imaging, and more [2]. Such particles are currently being made by collaborators at the Singapore University of Technology and Design. These specific nanoparticles contain a NaYF4 core doped with rare-earth elements and have a wide variety of imaging uses, such as fluorescence and PA imaging. One disease of focus for this group is ischemic stroke. Ischemic stroke is a medical emergency that requires immediate medical attention, typically within 4.5 hours of stroke onset, to recover the penumbra region around the infarct area before irreversible necrosis occurs [3]. A potential treatment has emerged by way of stem cell injection, theorized to preserve the penumbra by releasing trophic factors at the ischemic site that enhance endogenous healing mechanisms [4]. In particular, the use of mesenchymal stem cells (MSCs) has been shown to decrease stroke infarct size in rats [4]. While MSC therapy has shown potential for clinical significance, tracking these cells post-injection is essential for further studies and examination of treatment efficacy. RENPs with high PA capabilities, such as those created by collaborators, offer the opportunity to integrate this imaging.
This study aims to explore the biocompatibility characteristics of the most recently developed RENPs, to quantify their in vitro cytotoxicity, and to establish procedures for evaluating the in vivo efficacy of IV MSC therapy after RENP uptake in a photothrombotic ischemia (PTI) rodent model.
METHODS Rat mononuclear bone marrow cells were isolated and purified from the tibia and femur of a male 4-6 week old Sprague-Dawley rat after sacrifice. Once the passaged stem cells reached 80% cell density, the culture was used for the MTS assay to examine the cytotoxicity of the RENPs. The RENPs were added to solution and diluted to four different concentrations (2.5, 25, 100, and 200 µg/mL) to be analyzed against blank and control groups. The plate was incubated for one day, followed by examination with a microplate reader using 490 nm light to measure absorbance. Cell viability for the various NP-containing wells was calculated using absorbance relative to the control. A photothrombotic ischemia (PTI) Sprague-Dawley rat model was used for in vivo MSC injection testing. A cranial window was created over a distal branch of the middle cerebral artery (MCA). A photosensitizer was injected into the tail vein to circulate to the brain of the rat. A 532 nm laser illuminated the vessel for approximately 15 minutes, interacting with the photosensitizer to generate the occlusion. After 3 days, the rat was sacrificed for examination of the infarct area. The brain was removed for fixation, slicing, and TTC staining for observation of the infarct area. RESULTS There was no significant difference in cytotoxicity observed across the RENP concentrations tested, as shown in Figure 1. Each MSC sample maintained an average viability greater than 80%, demonstrating reasonable biocompatibility. Post-surgical functionality improvements in the PTI rat indicated that the occlusion may have resolved itself, and TTC staining revealed an infarct size that was smaller than those observed in control subjects of previous PTI studies, shown in Figure 2.
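The viability calculation can be written out explicitly (a sketch with made-up absorbance values, not the study's measurements; `viability_pct` is a hypothetical helper name): background-subtract each well with the blank, then normalize to the untreated control.

```python
# MTS viability: background-subtract with the blank well, then express the
# sample absorbance as a percentage of the control absorbance.
def viability_pct(sample_abs, control_abs, blank_abs):
    return 100.0 * (sample_abs - blank_abs) / (control_abs - blank_abs)

blank, control = 0.08, 0.95                          # hypothetical 490 nm readings
wells = {2.5: 0.90, 25: 0.88, 100: 0.84, 200: 0.81}  # ug/mL -> absorbance
viabilities = {c: viability_pct(a, control, blank) for c, a in wells.items()}
```

With these made-up readings every well comes out above 80% viability, the biocompatibility threshold the abstract refers to.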
Figure 1: MSC viability at different concentrations of nanoparticles. The MTS assay for cytotoxicity of RENPs in MSCs was conducted with MSC viability calculated as percent absorbance relative to the control. No significant difference was found between average viability for any of the experimental groups (p > 0.05).
Figure 2: TTC stain of a brain slice from a PTI-induced control-group rat. The circled region is the observable infarct area; this white tissue is indicative of blood loss from this region of the brain. The volume of this slice is to be compared to the volume of the infarct area for a PTI + stem cell group rat.
DISCUSSION The TTC-stained brain in Figure 2 further suggests that the occlusion was not fully completed, as the observable infarct area is neither as white nor as large as expected [5]. This suggests that the occlusion either did not form properly or resolved quickly after initial stroke induction. These data were meant to be compared with results from the PTI + stem cell group; however, the control-group test may need to be repeated with procedural alterations. While all data collected for this study were preliminary and cannot support conclusive results, they do suggest that the current RENP design has good biocompatibility. Looking forward, these data are promising for moving the project toward clinical implementation. The PTI protocol tested will need to be improved for future data collection to evaluate the efficacy of RENP-aided MSC therapy.
REFERENCES 1. Yhee et al. "Nanoparticle-Based Combination Therapy for Cancer Treatment," Current Pharmaceutical Design, 2015. 2. Christian et al. "Nanoparticles: Structure, Properties, Preparation and Behaviour in Environmental Media," Ecotoxicology, 2008. 3. Manning et al. "Acute Ischemic Stroke Topical Reviews," Stroke, 2014. 4. Hao et al. "Stem Cell-Based Therapies for Ischemic Stroke," BioMed Research International, 2014. 5. Liao et al. "Improving Neurovascular Outcomes with Bilateral Forepaw Stimulation in a Rat Photothrombotic Ischemic Stroke Model," Neurophotonics, 2014. ACKNOWLEDGEMENTS I would like to thank SINAPSE, the National University of Singapore, and Dr. Nitish Thakor for encouraging me to excel in the research environment and pursue a path of my own during the past 8 weeks. Thank you to Yuan Jun, Gayathiri Magarajah, and Dr. Aishwarya Bandla for answering all my questions and helping me understand the applications, purpose, and vision of the project, as well as allowing me to contribute a small part to their larger focus. I would like to thank McKenzie Sicke for working through both the failures and successes of these projects with me. I would also like to thank Nikita Patel, Evan Liu, Taylor Medina, Benjamin Wong, and Rupsa Acharya for contributing in ideation to my project as well as enlightening me on the interesting work they conduct. I would like to thank the University of Pittsburgh Swanson School of Engineering, the Office of the Provost, and the Office of Engineering International Programs for providing me with the opportunity to conduct this research through the SERIUS program, as well as providing me with funding for the work I had the honor of partaking in.
INHIBITION OF Na+/H+ EXCHANGER MODULATES MICROGLIA ACTIVATION FOLLOWING MICROELECTRODE IMPLANTATION Mitchell Dubaniewicz, James Eles, Steven Wellman, Franca Cambi, Dandan Sun, and Takashi Kozai Bio-Integrating Optoelectric Neural Interface Cybernetics (B.I.O.N.I.C.) Lab, Department of Bioengineering University of Pittsburgh, PA, USA Email: mtd48@pitt.edu, Web: http://www.bioniclab.org/ INTRODUCTION Neural electrodes are an important tool for neuroscience research and show great potential for clinical use. However, the use of neural interfaces to treat neurological disorders and control prosthetics is limited by biological challenges that impair chronic recording performance. One biological failure mode, glial scarring around implants, is implicated in reducing the efficacy of chronic electrode applications. Microglia are a significant contributor to scarring: upon cortical implantation they transition to an activated state, migrate, and encapsulate probes [1].
Na+/H+ exchanger isoform-1 (NHE1) is necessary for various microglial functions and has been implicated in pro-inflammatory responses to tissue injury [2]. HOE-642 (cariporide) is an inhibitor of NHE1 and has been shown to depress microglial activation and the inflammatory response in an ischemic stroke model [2, 3]. Therefore, we hypothesized that HOE-642 treatment can attenuate microglial activation and coverage of an intracortical array. METHODS All protocols were approved by the University of Pittsburgh Institutional Animal Care and Use Committee. Transgenic mice expressing green fluorescent protein (GFP) in microglia (CX3CR1-GFP) were anesthetically induced with intraperitoneally administered ketamine/xylazine (90/8 mg/kg). A unilateral 4 x 4 mm craniotomy was performed over somatosensory cortex, and a nonfunctional, four-shank Michigan-style microelectrode array was implanted to a depth of 300 µm below the pial surface. Craniotomies were sealed with a silicone elastomer and glass coverslip to create an optical imaging window consistent with previous experiments [4]. HOE-treated mice (n=5) were additionally intraperitoneally injected with 0.25 mg/kg HOE-642 twice a day, eight hours apart, immediately following implantation and throughout the duration of the experiment. Animals were anesthetized lightly with isoflurane (<1.5%), and z-stack images were taken with a two-photon scanning laser microscope at 6, 12, 24, 48, and 72 hours post-implantation using sulforhodamine 101 (SR101) as a vascular contrast agent (Fig. 1A).
Figure 1: A) 2D plane within a merged z-stack at 12 hours in a HOE-642-treated mouse showing microglia (green) and blood vessels (red). Probe outlined in yellow. Example ramified (B) and transitional (C) microglia. D) Thresholded GFP (microglia) signal on surface of probe (yellow outline).
Z-stacks were analyzed using ImageJ (National Institutes of Health). Microglial activation was quantified through changes in process extension, soma migration, and morphology. Microglia typically reside in a ramified state but can enter a transitional state with fewer, longer projections directed toward the probe when activated. Microglia were visually labeled as ramified (1) or transitional (0) (Fig. 1B-C). Their distance from the probe throughout time was also recorded to calculate cell body velocity in the direction of the probe.
Additionally, a transitional index (T-index), based on the length of the longest process extending toward (n) and away (f) from the probe, and a directionality index (D-index), based on the number of processes toward (n) and away (f) from the probe, were calculated with the following formula: Index = (f − n)/(f + n) + 1. Microglial encapsulation of the probe was quantified as the percent surface coverage. Z-stacks were rotated so that the surface of the electrode was parallel to the plane of view. Then, a binary mask of a sum projection of the fluorescence within 10 µm above the probe surface was created using an IsoData threshold (Fig. 1D). The percent of area with signal within a manually drawn outline of the probe was measured. Two-way ANOVA with Tukey post-hoc tested significance of GFP-labeled surface coverage. RESULTS In HOE mice, all microglia within 50 µm of the probe were transitional (0) at 6 and 72 hours (Fig. 2). All microglia at 200-250 µm were initially ramified at 6 hours and later averaged 0.75 ramification at 72 hours. Similarly, at 200-250 µm, the T-index decreased from 0.99 to 0.86 from 6 to 72 hours. The D-index decreased from 1.10 to 0.89. As can be seen in Figure 2, the radius of activation, where fifty percent of microglia are expected to be activated, was 108.8 µm at 6 hours and 177.9 µm at 72 hours. This can be compared to previous control data, where microglia had a radius of activation of 130 µm at 6 hours while all microglia within 300 µm were activated by 72 hours [5]. Average soma velocity of microglia in the HOE-treated mice toward the probe was -0.34 µm/h at 6-12 hr, 0.98 µm/h at 12-24 hr, 0.67 µm/h at 24-48 hr, and 0.52 µm/h at 48-72 hr.
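Both indices reduce to the same expression. A minimal sketch of the calculation (the function name is illustrative, not from the original analysis):

```python
def process_index(f, n):
    """Transitional/directionality index: (f - n) / (f + n) + 1.

    For the T-index, f and n are the lengths of the longest process
    away from and toward the probe; for the D-index, they are the
    numbers of processes away from and toward the probe. The result
    ranges from 0 (fully probe-directed) through 1 (symmetric) to 2
    (fully probe-averse)."""
    return (f - n) / (f + n) + 1
```

Values below 1, such as the T-index falling from 0.99 to 0.86, indicate processes increasingly biased toward the probe.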
Figure 2: Ramification of microglia in HOE-642-treated mice with distance from the probe surface in time. Logistic regressions shown.
HOE mice (n=5 for 6-48 hr, n=4 for 72 hr) had significantly less surface coverage (p<0.001) compared to control mice (n=5) (Fig. 3). Temporally, the surface coverage was relatively stagnant for the first 48 hours, but coverage at 72 hours was significantly higher than that at 12 and 24 hours.
Figure 3: Mean+SEM percent surface coverage of probe in HOE-642-treated (blue) vs. control (red) mice throughout time (* p<0.05, ** p<0.01).
DISCUSSION The negative velocity at 6-12 hours could reflect a lack of motility combined with swelling between the probe and microglia. Initiation of cell body migration towards the probe after 12 hours is consistent with previous work [5]. The decrease in microglial encapsulation and scarring of HOE-treated mice, as shown by the lower surface coverage, suggests that pharmacological inhibition of the Na+/H+ exchanger has the potential to increase electrode-tissue integration and recording efficacy in neural electrode applications. REFERENCES 1. Kozai et al. J Neural Eng 9, 066001, 2012. 2. Song et al. Glia 66, 2279-2298, 2018. 3. Shi et al. J Neurochem 119, 124-135, 2011. 4. Kozai et al. J Neurosci Methods 258, 46-55, 2016. 5. Wellman et al. Biomaterials 164, 121-133, 2018. ACKNOWLEDGEMENTS This work was supported by the Swanson School of Engineering, the Office of the Provost, NIH NINDS R01NS094396 and a diversity supplement to this parent grant, and NIH NINDS R21NS10809.
Visualization and Characterization of Etched-Based On-Chip Plasma Self-Separation Arijit Dutta, Mentor: Benjamin Ingis, Advisor: Dr. Eon Soo Lee Advanced Energy Systems and Microdevices Laboratory Department of Mechanical and Industrial Engineering New Jersey Institute of Technology, NJ, USA Email: ard116@pitt.edu, Web: http://www.lee-research.org/ INTRODUCTION One of the most important aspects of cancer treatment is detecting it as soon as possible. A new, quicker, and more cost-effective solution is being developed in the form of a micro biochip which, using a few microliters of the patient's blood, can detect the type and progression of cancer based on biomarkers in the blood. Such a device could be used at regular doctor appointments and give results within a few minutes of taking blood. A very important aspect of this device is the filtration of the blood. The antigens, typically in the fluid plasma part of blood, must be filtered from the red blood cells to improve the accuracy of the sensors. The filtration of blood on a microscopic scale has been researched, but not much progress has been made. In this experiment, Armour Etch cream is used on a microscope slide. This cream, typically found at local hardware stores for frosting glass, etches microscopic cavities into the slide. Silicone-polymer channels, made of polydimethylsiloxane (PDMS), are placed on the etched pattern. Blood-mimicking fluid (BMF) is then put into the supply channel and filters through to the sensing channel, where the sensors would be placed, by means of the microscopic cavities. METHODS There are two parts to this specific project. The first part is to run and record the tests with the different parameters being tested. The other part is to develop an algorithm to automatically calculate the instantaneous flowrate.
The purpose of this specific research project was to quantitatively determine how different parameters affect the flowrate of BMF into the sensing channel. The different parameters included the varying distance gap between the two channels, the amount of time the cream was allowed on the slide before being washed off, and the number of times the cream was placed on the slide. Figure 1 gives a descriptive image of what the testing environment looks like through a microscope. Figure 1- A picture of what a test run looks like under a microscope. The u-shaped figure is the sensing channel and the gray rectangle below is the supply channel. The large region between them is the etched region.
The first course of action was to develop a procedure for obtaining a consistent, clean-cut etched region. The solution was to laser-cut a rectangle, 2000 by 3000 μm, from a sticker label sheet. This sheet would be placed on the slide, and the cream would be applied to the open rectangular region. After determining the proper etching method, testing began. However, it was not very successful, as the PDMS channels were not sealing properly to the slide. This allowed the BMF to leak into the unsealed regions instead of the channels, ruining the experiment. To deal with this issue, several devices were created to ensure easier, more accurate, and firmer channel placement. Creating and using these tools did help yield more successful trials, but the results were not consistent enough to draw data from. A possible cause for this problem was the mold from which the channels were produced. Upon closer inspection, some residue, most likely from the developmental process, was left on parts of the mold. To test this, several casts were made from the mold and the same channel pairs were tested from each cast. All three channel pairs showed leakage from the same area. The dark black line on either side of the etched region meant that filtering was happening on the sides of the etched region rather than through the region itself, as seen in Figure 2. This meant that at least some parts of the mold had an innate problem that would make any channels produced from it prone to seal failure.
The MATLAB program was used on the few successful tests. On these tests, the program would pick up and record change when none was present. To fix this, the image processor must be refined further to pick up only change that is actually happening. CONCLUSION AND FUTURE WORK The work completed this summer, though it did not specifically accomplish the objectives listed above, advances the state of the testing. Thus, whenever the lab decides to continue with this testing, it will be further along in the process.
Figure 2-Example of seal leakage from the mold testing.
The other part of this specific experiment was to create an algorithm to quantitatively calculate the flowrate of BMF into the sensing channel. To do this, a MATLAB program was developed to track the movement of the dark meniscus of the liquid. With the microscope camera held constant, the only movement recorded was that of the BMF. The code took each frame and subtracted the previous frame from it to obtain the change. The change was recorded in binary: pixels where change occurred were recorded as white, while the rest remained black. This process continued frame by frame until the end. The total change was given as a number of pixels, which was multiplied by the area each pixel represents, about 0.003025 mm^2, and by the channel depth to obtain a volume. From this, the flowrate was obtained as well as the total volume. Figures 3 and 4 show what the process and result look like.
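The frame-differencing step described above can be sketched as follows (the original analysis was written in MATLAB; this is a pure-Python illustration, and the function name, change threshold, and nested-list frame format are assumptions for the sketch):

```python
PIXEL_AREA_MM2 = 0.003025  # approximate area represented by one pixel, from the text


def frame_change_volume(prev, curr, depth_mm, threshold=10):
    """Count pixels that changed between two grayscale frames and
    convert the count to a volume estimate (pixel area x channel depth).

    prev/curr: equal-sized 2-D lists of 0-255 intensities. A pixel is
    'changed' (white in the binary difference image) when its absolute
    intensity difference exceeds the threshold."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(c - p) > threshold
    )
    return changed * PIXEL_AREA_MM2 * depth_mm  # mm^3 of newly filled channel
```

Summing this per-frame volume gives the total volume, and dividing by the frame interval gives the instantaneous flowrate.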
Figure 3- MATLAB code. The left side is what is shown in real life through the microscope. The right side is the binary image that picks up what will be used for the total pixel amount.
As for the actual etching parameter testing, a solution must be determined for the etching mold problem. Either the working channels within the etching mold are identified and are the only ones used for testing, or a whole new etching mold must be produced. With a new mold produced, it must be verified that there is no significant developmental residue that could possibly affect the channel cast production. Once this is determined, the different parameters can be tested. The other aspect of the future work involves editing the MATLAB code so that it works as accurately as possible. For example, as stated before, the main problem with the code is that it picks up flow that is not actually happening, skewing the results. The possible solution would be to refine the image processing further so that only flow that is occurring is picked up. Once the etching mold and MATLAB code problems are fixed, the parameter testing can finally begin to yield results. REFERENCES 1. Nunna, B.B., Mandal, D., Zhuang, S., Lee, E.S. (2017). 'Innovative Point-of-Care (POC) Micro Biochip for Early Stage Ovarian Cancer Diagnostics.' Sensors & Transducers, Volume 214, Issue 7, July 2017, Pages 12-20. ACKNOWLEDGEMENTS This work was supported by the NSF grant EEC18523.
Figure 4- The flowrate per second (left) and the total volume each second (right).
MODELING WALL STRESS IN ABDOMINAL AORTIC ANEURYSMS USING MACHINE LEARNING Larissa Fordyce1, Timothy Chung2, and David Vorp1,2,3,4,5,6,7 University of Pittsburgh Departments of 1Bioengineering, 2Surgery, 3Cardiothoracic Surgery, and 4 Chemical and Petroleum Engineering, 5Clinical & Translational Sciences Institute, 6Center for Vascular Remodeling and Regeneration, and 7McGowan Institute for Regenerative Medicine, Pittsburgh, PA Email: lnf16@pitt.edu, Web: https://www.engineering.pitt.edu/vorplab/ INTRODUCTION Abdominal aortic aneurysms (AAAs) occur when the wall of the aorta enlarges into a balloon-like shape. AAAs cause about 15,000 deaths in the US each year, with a mortality rate around 90% once ruptured [1]. Recent studies have sought to improve our understanding of AAA rupture risk by analyzing the morphological factors that are most pertinent to risk, finding that pressure-induced wall stress may affect AAA rupture risk [2] [3]. Computational simulations such as finite element analysis (FEA) are the standard method of modeling AAA wall stress but are time consuming and impractical for large scale implementation. Conversely, new machine learning (ML) algorithms are trained to predict a target value and can be applied to make accurate predictions quickly. These ML algorithms may be useful for predicting wall stress much more rapidly than simulations. Therefore, this study aimed to predict wall stress in AAAs with ML to replace computational simulations as a part of AAA risk analysis. METHODS Computed tomography angiogram images of five AAA cases were reconstructed as a triangular mesh to be analyzed using ABAQUS simulation software. The simulated von Mises (N/cm^2) wall stress values were taken as ground truth values for these cases. 
Files for each of these five cases containing the nodes, elements, and connectivity information of the triangular meshes were then implemented in a MATLAB (MathWorks, vR2019a) script to calculate morphological features for each node such as distance to the centerline, tortuosity, and Gaussian curvature. The script exported these features along with the von Mises stress values computed by ABAQUS simulations, resulting in an individual data file for each case. These five individual datasets were then combined to create a large combined dataset, used for optimization of the ML model. In a
Python script, the combined dataset was randomly split into 80% training data and 20% testing data. The training data was implemented in the Tree-Based Pipeline Optimization Tool, which compares combinations of different data processing techniques, ML algorithms, and algorithm hyperparameters and chooses the one “pipeline” that performs the best on that specific dataset [4]. While optimizing the pipelines, their performances were analyzed with 10-fold cross validation scored by negative mean squared error (NMSE). The pipeline with the best NMSE was used to make predictions of stress for the 20% test data, and the average percent error of the test predictions was calculated. This trained model was saved and exported to be used for predicting the stress values of the individual case datasets. In a separate Python script, the trained model was uploaded and used to predict stress for all nodes in each of the five cases. Percent errors between the ABAQUS values and the predictions were calculated. These errors were averaged for each case, then averaged over the five cases, to examine the model’s prediction ability. Next, the stress predictions, ABAQUS stress values, and errors were exported to another MATLAB (MathWorks, vR2019a) script that processed these values for modeling in ParaView (National Technology & Engineering Solutions of Sandia, LLC (NTESS), Kitware Inc, v5.7.0). The ABAQUS stress values, ML predicted stress values, and percent error for each prediction were then visualized in ParaView to compare the ML predictions to the ABAQUS simulations. RESULTS During optimization, the Random Forest Regression model had the best cross validation score of -0.1336 NMSE. The predictions made on the 20% testing set during training had a percent error of 5.69% ± 19.01% from the ABAQUS values. After loading the
saved model to predict stress for all individual cases, the average error was 0.88% ± 0.12% across the five cases, and these stress predictions were not statistically different from those of the ABAQUS simulations (p = 0.94). The model took an average of 2.42 ± 0.45 seconds to complete predictions for each case. Figure 1 shows the front and back sides of one AAA case used in this study, which had an average prediction percent error of 0.95% ± 4.89%. The maximum error in this case was 216.69%, with that particular node having an ABAQUS stress value of 1.42 N/cm^2 and a predicted stress of 4.49 N/cm^2.
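The 10-fold cross-validation scoring by NMSE used to rank pipelines can be sketched as follows (a model-agnostic, pure-Python illustration; `fit` and `predict` are stand-ins for any candidate regressor and are not TPOT's actual API):

```python
import random


def kfold_nmse(X, y, fit, predict, k=10, seed=0):
    """k-fold cross-validation scored by negative mean squared error.

    fit(X_train, y_train) -> model; predict(model, x) -> prediction.
    Returns the mean of the per-fold -MSE scores, so values closer
    to zero are better."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint held-out folds
    scores = []
    for test_idx in folds:
        held_out = set(test_idx)
        train_idx = [i for i in idx if i not in held_out]
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        mse = sum((predict(model, X[i]) - y[i]) ** 2 for i in test_idx) / len(test_idx)
        scores.append(-mse)
    return sum(scores) / len(scores)
```

Because the score is negated, "best" means highest (least negative), matching the reported -0.1336 NMSE for the winning Random Forest pipeline.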
Figure 1. ParaView Visualization of Representative AAA. Column A) von Mises stress values from the ABAQUS simulation. Column B) von Mises stress values predicted via machine learning with the Random Forest model. Column C) Percent error between the ABAQUS stress values and ML predictions. To improve visualization, the percent error is scaled approximately to the 99th percentile (6.26%) so that any error value greater than the 99th percentile will appear as dark red.
DISCUSSION The low average errors of stress prediction during training and testing of the Random Forest model indicate a promising future for ML models to replace computational analyses of AAA wall stress. An area worth studying further is the uncharacteristically high maximum error in some cases. For example, the case pictured in Figure 1 had a high maximum error;
however, the average error (0.95% ± 4.89%) and the 99th percentile error (6.26%) are still relatively low. Further study should be performed to determine the cause of such anomalies and to find a solution to improve the model’s ability in the areas of high error. A limitation of this study is that there were only five training cases for the model, and more training data will result in a more robust model. Similarly, this model was not cross validated using new cases, which is an important way to confirm the model’s conditioning. Future work will include more rigorous validation using a larger training set and including new cases for testing, as well as examining ways to improve areas with high error. Overall, based on the low percent errors of stress predictions made by the Random Forest regression model, our results are encouraging for the potential replacement of computational simulations with machine learning for the estimation of AAA wall stress. REFERENCES 1. McGloughlin, T.M. & Doyle, B.J. New approaches to abdominal aortic aneurysm rupture risk assessment: engineering insights with clinical gain. Arterioscler Thromb Vasc Biol. 2010;30:1687-1694. 2. Shum J, Martufi G, Di Martino E, et al. Quantitative assessment of abdominal aortic aneurysm geometry. Ann Biomed Eng. 2011;39(1):277–286. 3. Vorp, D.A. Biomechanics of abdominal aortic aneurysm. J Biomech. 2007;40(9):1887–1902. 4. Olson, R.S., Urbanowicz R.J., Andrews P.C., Lavender N.A., Kidd L.C., Moore J.H. Automating Biomedical Data Science Through Tree-Based Pipeline Optimization. EvoApplications 2016: Applications of Evolutionary Computation, 2016;9597:123-137. ACKNOWLEDGEMENTS We thank the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh for funding that made this research project possible.
STABILEYES – NEW ASSISTIVE TECHNOLOGY FOR NYSTAGMUS TO PRODUCE A STABLE REAL-TIME VIDEO IMAGE Julia Foust1, Linghai Wang1, Holly Stants2, William Smith2, Roberta Klatzky3, George Stetten1 1 Visualization and Image Analysis Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA 2 University of Pittsburgh Medical Center, PA, USA 3 Carnegie Mellon University, PA, USA Email: jsf53@pitt.edu, Web: http://vialab.org/index.html INTRODUCTION Nystagmus is an intricate visual condition in which the eyes make involuntary, repetitive motions that often result in a blurred or unstable visual field and reduced depth perception. The condition has an estimated prevalence of 0.24% in the general population (approximately 785,000 individuals in the United States) [1]. Individuals with congenital nystagmus, which develops at birth or shortly thereafter, can often adapt to these hindrances. However, acquired nystagmus, which develops later in life, often coincides with other neurological conditions and poses an overall greater threat to quality of life, to which it is harder to adjust [2]. Current treatment options for nystagmus include corrective eyeglasses or contact lenses, medications, surgery, and rehabilitation therapy [3]. Unfortunately, specialized technology developed for such therapy often has imperfect success and/or is prohibitively expensive, especially for people with limited means. One example of such technology uses custom software with off-the-shelf gaming hardware (Tobii-EyeX) to correct pathological nystagmus [4]. Our research focuses on making this type of technology readily available to individuals with acquired pendular nystagmus, in which the abnormal eye motion is roughly sinusoidal, in their everyday lives.
Our free application (app) will run on any smartphone or tablet, using the front-facing camera to detect periodic eye motion and, subsequently, to stabilize a real-time image of the environment, captured by the back-facing camera, for the user. We present here a method for detecting eye motion using the simple strategy of finding repetitive motion in the image with first-order moment calculations, which will inform the image translation.
METHODS Pupil-tracking is an established technology, generally requiring either high quality cameras or specialized hardware mounted around the eyes. We intend instead to use the lower quality front-facing cameras in mobile smart devices, which are readily available to most people. The premise of our system is that we need only detect the frequency and phase of eye movements, not actual pupil location, to stabilize the image for the user with nystagmus. Our present algorithm is implemented in C++ using the open-source computer vision library, OpenCV. We take advantage of OpenCV’s face classifier to identify a rectangular region containing both eyes. We then use a sum-absolute-difference motion tracking algorithm to track this region of the face while tolerating some facial drift. As the eyes move back and forth within this region, the changing location of the iris (colored part of the eye) against the sclera (white part of the eye) should produce a first-order moment (“center of mass”) that varies temporally in phase with eye motion. To test this algorithm in an efficient and controlled manner, we developed a system to create “pseudonystagmus” videos constructed from sets of facial images of normal volunteers. Images of each of three volunteers were captured by the internal camera of a laptop computer while the subject focused on a dot at known locations along the x-axis of the computer display. These volunteers included two females and one male of varying ethnicities, all of whom were in the age range of 19 to 21 years old. Two of the four sets of images included the subject wearing prescription eyeglasses (as one participant recorded two sets of images, one with and one without eyeglasses). Each set of images was used to create four “pseudo-nystagmus” videos showing sinusoidal eye movement with a frequency of 0.01 or 0.05 cycles per frame, resulting in a total of 16 test videos.
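The first-order-moment idea above can be sketched as follows (a simplified, pure-Python illustration; the actual implementation is in C++ with OpenCV, and the intensity inversion used to make the dark iris dominate is an assumption of this sketch):

```python
def horizontal_moment(region):
    """First-order moment (horizontal 'center of mass') of an eye region.

    region: 2-D list of 0-255 grayscale intensities. Each pixel's x
    position is weighted by inverted intensity (255 - value), so the
    dark iris against the bright sclera dominates. As the iris moves
    back and forth, this value oscillates in phase with the eye motion."""
    total = 0.0
    weight = 0.0
    for row in region:
        for x, value in enumerate(row):
            w = 255 - value  # dark pixels carry the most weight
            total += w * x
            weight += w
    return total / weight if weight else 0.0
```

Tracking this single scalar frame by frame yields the oscillating trace whose frequency and phase are all the stabilization step needs, without locating the pupil itself.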
DATA PROCESSING The testing of our algorithm was completed using qualitative and quantitative analyses. During the processing of a “pseudo-nystagmus” video, each frame was automatically displayed with a box drawn around the region being used for moment calculations. This was observed to gauge the accuracy of the algorithm in estimating this region correctly. We then examined the data produced by the algorithm, graphing the first-order moment along the x-axis vs. frame for each video and assessing their expected periodicity visually. Finally, we applied a Fast Fourier Transform (FFT) to quantify the dominant frequency of each processed video. RESULTS Preliminary results for this method using the 16 “pseudo-nystagmus” videos showed reliable tracking of periodic eye motion with an acceptable signal-to-noise ratio. Visual observations of the graphs showed clear periodic motion detected by the algorithm for all videos. Furthermore, 14 of these 16 videos had periodicity verified and frequency correctly identified by FFT analysis. Thus, 87.5% of the “pseudo-nystagmus” videos were both qualitatively and quantitatively determined to be periodic with a known frequency. Figure 1 shows the moment vs. frame data from one of the successful videos. The average speed of the algorithm (running
Figure 1: Computed first-order moments of automatically identified region containing both eyes when testing a “pseudo-nystagmus” video with eye motions modeling a sine wave of frequency 0.01 cycles per frame.
on an Apple Macintosh laptop) was approximately 39 frames per second, fast enough to run in real time.
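The FFT verification step can be sketched as follows (a naive O(n²) discrete Fourier transform for clarity; a real implementation would use an FFT routine such as numpy.fft):

```python
import cmath
import math


def dominant_frequency(signal):
    """Dominant frequency (in cycles per frame) of a moment-vs-frame
    trace, found by taking the DFT bin with the largest magnitude.
    The mean is subtracted so the DC component is ignored."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    mags = []
    for k in range(1, n // 2 + 1):
        coef = sum(c * cmath.exp(-2j * math.pi * k * t / n)
                   for t, c in enumerate(centered))
        mags.append((abs(coef), k))
    return max(mags)[1] / n  # strongest bin, converted to cycles/frame
```

Applied to a "pseudo-nystagmus" trace, this recovers the known 0.01 or 0.05 cycles-per-frame frequency when the periodicity is detectable.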
DISCUSSION We are developing a new app for any mobile smart device, using this ubiquitous platform to provide a stabilized image of the environment in real time for patients with acquired pendular nystagmus. Our initial results are encouraging in terms of reliability and speed. The qualitative and quantitative analyses demonstrated the effectiveness of first-order moment calculations in detecting periodic eye motion. Given that acquired pendular nystagmus typically exhibits a frequency in the range of 2 to 7 cycles per second, our algorithm’s processing rate is promising towards the goal of a real-time app [5]. We are now working to incorporate a phase-locked loop into our algorithm to extract the frequency and phase information from the first-order moment calculations. Once completed, this will be used to move the image on the device’s screen back and forth at the same rate as the user’s eye motion, with the user manually adjusting the amplitude of the shift for optimal improvement of visual interpretation of the environment. We are expecting to combine the StabilEyes app with a new peripheral device called FingerSight, which we are also developing in our lab. FingerSight employs a miniature camera and four vibrators mounted on the finger via a ring to guide individuals with visual impairments. In this collective system, the view of the environment being stabilized would be acquired by moving the hand instead of the mobile smart device. REFERENCES 1. Sarvananthan et al. Investigative Ophthalmology & Visual Science 50.11, 5201-5206, 2009. 2. American Nystagmus Network, 2019. 3. Thurtell et al. “Treatment of Nystagmus and Saccadic Oscillations,” University of Iowa Healthcare, 2013. 4. Pölzer et al. 2017 39th Annual International Conference of the IEEE of the EMBC, July 11-15, 2017, Seogwipo, South Korea. 5. Straubea et al. European Journal of Neurology 19.1, 6-14, 2011. 
ACKNOWLEDGEMENTS Funded through a Brackenridge fellowship from the Honors College and by a grant from the Center for Medical Innovation, both at the University of Pittsburgh.
EVALUATION OF TRACTOGRAPHY TO VISUALIZE NEURONAL CONNECTIONS IN THE HUMAN TEMPORAL LOBE Lauren Grice, Chandler Fountain, and Michel Modo Regenerative Imaging Laboratory, McGowan Institute for Regenerative Medicine University of Pittsburgh, PA, USA Email: leg68@pitt.edu, Web: http://www.radiology.pitt.edu/ril.html INTRODUCTION Historically, research on neurological disease has focused on modeling whole brain structures to identify abnormalities. However, because of technological limitations, little research has been performed to establish a map of white matter connections between grey matter structures in the human brain. When considering connectivity within the brain, the temporal lobe (TL) region is of particular interest as it contains important grey matter structures like the hippocampus and amygdala which play significant roles in memory and processing of emotions. Additionally, the TL is targeted by debilitating conditions like epilepsy and Alzheimer’s disease. In an effort to better understand the function of the TL, this project addresses the need for visual maps of microstructural anatomy in a healthy, human TL to develop accurate representations of hippocampal connections and their patterns of cortical innervation. In this study, mesoscale, T2-weighted, diffusion tensor imaging (DTI) was used to quantify changes in neuronal connectivity. DTI is a specialized form of magnetic resonance imaging (MRI) that is sensitive to the magnitude and orientation of water movement [1]. Software can be used to detect the water diffusion vectors in an image to calculate both the mean diffusivity (MD) and fractional anisotropy (FA), or the directional dependence of diffusion in tissue [2]. Furthermore, using computational methods, the vectors in each voxel, or unit, of a DT image can be weighted by FA and “traced” to create streamlines, or three-dimensional representations of neuronal bundles. 
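The two scalar maps mentioned above follow directly from the eigenvalues of the fitted diffusion tensor; a sketch using the standard DTI definitions (the function name is illustrative):

```python
import math


def md_fa(l1, l2, l3):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the
    three eigenvalues of a diffusion tensor, per the standard DTI
    definitions: MD is the eigenvalue mean, and FA measures how far
    diffusion departs from isotropy on a 0-1 scale."""
    md = (l1 + l2 + l3) / 3.0
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den else 0.0
    return md, fa
```

An isotropic voxel (equal eigenvalues) gives FA = 0, while diffusion confined to one axis gives FA = 1, which is why coherent white-matter bundles stand out on FA maps and can be traced into streamlines.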
In healthy brain tissue, the flow of water molecules in white and grey matter demonstrates low diffusion and tends to be directionally-dependent, or anisotropic. Because of this diffusion behavior, constructed streamlines in DTIs of healthy brains should demonstrate high microstructural organization. However, in conditions like epilepsy and Alzheimer’s disease, brain cells die and axons degenerate, resulting in altered diffusion behavior and in turn altered streamline organization.
Therefore, the knowledge of normal neuronal connections in the TL produced by this study will serve as a valuable point of comparison for future research on underlying structural abnormalities in cases of neurological disease. METHODS The images analyzed in this study were from three whole, post-mortem, left human TLs. Diffusion weighted MR data were acquired on an 11.7T Bruker BioSpin MRI (DTI EPI; TR = 500 ms; TE = 0.384 ms; b-values = 1000, 2000, 4000, 8000 s/mm2; number of directions per shell = 40, slice thickness = 0.5 mm, FOV = 62.7 mm x 3.8 mm x 97.9 mm; Matrix = 128 x 128 x 196; total number of voxels = 1,218,380; Resolution = 0.01256 mm3/voxel). Using DSI Studio, a tractography software, three-dimensional maps of anatomical regions of interest (ROIs) were manually drawn on the MR images showing MD. ROIs delineated the following grey matter structures: the hippocampus (HC), subiculum, amygdala (AG), caudate, putamen, and TL gyri (i.e. the temporal pole, superior temporal gyrus, middle temporal gyrus, inferior temporal gyrus (ITG), fusiform gyrus, and parahippocampal gyrus). The ROIs were drawn by selecting appropriate voxels on each slice of the DT image. Then, the selected voxels for each ROI were compiled to create a 3-D representation from which a corresponding FA value was extracted. Deterministic tractography was performed to trace FA vectors and form streamlines within and extending from the hippocampus. Streamline direction interpolation was determined using a Euler tracking algorithm terminated at a seed (beginning point of streamline) count equal to 10x the total number of voxels in each sample. Tractographic reconstruction parameters included a FA threshold of 0.02, an angular threshold of 60°, a tracing step size of 0.25 mm (half the size of a voxel edge), minimum streamline length of 1 mm (length of a connection between at least two voxels), and a maximum length of 60 mm (sample TL height).
Figure 1. (A) Transverse view of one of the TL MD maps with the hippocampus delineated by a 3D ROI. (B) Result of streamline reconstruction by computational detection and tracing of the FA vector in each voxel in the HC ROI. Streamlines within, extending from, and ending in the HC are shown. Streamline colors represent 3D direction in space, with red=lateral/medial, green=superior/inferior, and blue=anterior/posterior. Tractography reveals four bundles of neurons extending from the HC. At the anterior HC, streamlines connect to the ITG and AG, while at the posterior HC, streamlines join the cingulum bundle (cgb) and splenium of the corpus callosum (scc). (C) Isolated streamlines seeded (beginning) in the ITG ROI and ending in the HC ROI.
RESULTS Figure 1 illustrates extra- and intrahippocampal streamlines that were successfully reconstructed. Furthermore, this approach demonstrates for the first time that visual representations of both white and grey matter connectivity in the TL can be probed using tractography. The visualizations of neuronal connections in figures 1B and 1C show a precise location for streamline termination as well as provide a qualitative understanding of their arrangement. In figure 1B, at the anterior end of the HC, efferent neurons connect to two grey matter structures: the AG and ITG. Connections to the AG are few in number, but highly organized. Streamlines extending to the ITG terminate on both medial and lateral edges of an anterior segment of the gyrus. Conversely, as shown in figure 1C, efferent connections from the ITG appear to fan out within the HC. In figure 1B, at the tail, or fimbria, of the posterior HC, axonal projections join the splenium of the corpus callosum (scc) and a telencephalic white matter tract, the cingulum bundle (cgb). This result verifies the previously known connection of the HC to the cgb and supports the fact that DTI can contribute to the discovery of grey matter interconnectivity within the TL [3]. DISCUSSION The tractography results from this study reveal that the main grey matter structures a healthy adult HC
connects with are the AG and ITG. Previous studies have shown that functions such as perception, imagination, and episodic memory engage the anterior HC [4]. Since the HC plays a major role in memory formation and has been shown to atrophy in cases of Alzheimer's disease and dementia, it is possible that a change in the interconnectivity of the HC with the AG and ITG could be an indication of disease [5]. Chiefly, though, the significance of reconstructing streamlines from DTIs was to gain an accurate, comprehensive, and novel characterization of tissue microstructure within the TL and, specifically, the HC. Importantly, reconstructed streamlines from this study will later help to characterize the pathological basis of diseases that manifest in the TL, like epilepsy and Alzheimer's disease. REFERENCES 1. Mori & Zhang. Neuron 51.5, 527-539, 2006. 2. C.H. Sotak. NMR Biomed 15, 561-569, 2002. 3. Wu et al. Front Neuroanat 10, 10-84, 2016. 4. Zeidman & Maguire. Nature Reviews Neuroscience 17, 173-182, 2016. 5. Mu & Gage. Mol Neurodegener, 6, 6-85, 2011. ACKNOWLEDGEMENTS Lauren Grice was supported by the University of Pittsburgh Swanson School of Engineering and Office of the Provost throughout the entirety of this research.
3D Reconstruction of the Glenohumeral Capsule in Patients with and without a Shoulder Dislocation Jocelyn L. Hawk, Robert T. Tisherman, Christopher M. Gibbs, Volker Musahl, Albert Lin, Richard E. Debski Orthopaedic Robotics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: jlh283@pitt.edu Web: https://www.engineering.pitt.edu/labs/ORL/ INTRODUCTION The glenohumeral joint is the most commonly dislocated joint, usually via an anterior shoulder dislocation [1]. This type of injury can result in permanent deformation of the glenohumeral capsule, which causes greater capsular laxity and increased capsular volume [2]. A common surgical procedure to reduce capsular volume is capsular plication, but currently how the capsule is plicated is largely subjective and does not take into account the magnitude and location of non-recoverable strain in the capsule. Thus, there exists a need to quantify injury to the glenohumeral capsule and individualize surgical repair. It is currently unknown whether changes in capsular volume following a shoulder dislocation can be quantified using MR arthrogram. Therefore, the aim of this study was to reconstruct 3D models of the glenohumeral capsule from MR arthrograms to assess capsular volume in healthy patients and patients who have undergone one or more anterior shoulder dislocations. METHODS MR arthrograms of the glenohumeral joint in healthy subjects (n=8) and subjects who had sustained at least one anterior shoulder dislocation (n=8) were acquired. The capsular space was defined as the space within the glenohumeral capsule filled with the contrast agent during the MR arthrogram. The capsular space, humerus, and glenoid were segmented in MIMICS (version 17.0, Materialise NV, Belgium) from the coronal, sagittal, and axial views for each subject. This created a mesh, a 3D model made up of triangles, for each view of each subject.
Meshes created from each view were then combined in MeshLab by overlaying them (version 1.3.4, ISTI, Italy) to make a higher resolution mesh for each subject (Figure 1), and the volume of the capsular space was determined using MeshLab. The volume of the
capsular space was also calculated with the superior portion removed because the inferior region of the capsule was expected to experience the greatest injury. This was standardized between subjects by removing any capsular space above the greater tuberosity of the humeral head. These volumes were then normalized to the size of the humeral head by fitting a sphere to the humeral head and dividing the capsular volume by the radius of the sphere cubed. The capsular volumes of each group were compared with a two-sample t-test with significance set at p<0.05.
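The normalization and comparison described above can be sketched as follows. All volumes, radii, and group sizes below are hypothetical (not the study's data), and the pooled-variance two-sample t statistic is implemented directly, with the tabulated two-tailed critical value for alpha = 0.05 and df = 14.

```python
import numpy as np

def normalized_volume(vol_mm3, radius_mm):
    """Normalize capsular volume by the cube of the fitted humeral-head
    radius, giving a dimensionless, size-independent quantity."""
    return vol_mm3 / radius_mm ** 3

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# hypothetical capsular volumes (mm^3) and fitted humeral-head radii (mm), n=8 per group
healthy_vol = [18000, 19500, 17000, 20000, 18500, 17500, 19000, 20500]
healthy_rad = [24.0, 25.1, 23.5, 25.8, 24.4, 23.9, 24.9, 25.5]
injured_vol = [30000, 32000, 28500, 33500, 31000, 29500, 34000, 30500]
injured_rad = [24.2, 25.0, 23.7, 25.6, 24.5, 24.0, 25.2, 24.8]

healthy = [normalized_volume(v, r) for v, r in zip(healthy_vol, healthy_rad)]
injured = [normalized_volume(v, r) for v, r in zip(injured_vol, injured_rad)]

t_stat = two_sample_t(healthy, injured)
T_CRIT = 2.145           # two-tailed Student t critical value, alpha = 0.05, df = 14
significant = abs(t_stat) > T_CRIT
```

Dividing by the radius cubed keeps the units consistent: both volume and radius cubed scale with the cube of the joint's linear size, so the ratio compares shape rather than absolute size.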
Figure 1: Posterior view of a 3D reconstruction of the
glenohumeral capsular space (green) and humeral head (blue).
RESULTS The average total capsular volume of the injured group was found to be 65% larger than the capsular volume of the healthy group (p=0.027) (Figure 2). There was no significant difference in capsular volume when the superior part of the capsule was removed (Figure 3). A power analysis conducted for capsular volume with the superior portion removed indicated that an additional 26 subjects would be needed to reach statistical significance.
Figure 2: Total normalized glenohumeral capsular volume in healthy subjects and injured subjects calculated from 3D reconstruction of MR arthrogram. Normalized capsular volume was 65% larger in the injured subjects than in the healthy subjects.
DISCUSSION 3D models of the capsular space for the glenohumeral joint were successfully reconstructed from MR arthrograms and were able to show a significant difference in capsular volume between healthy and injured subjects. The findings of this study are consistent with previous literature in that capsular volume was found to increase following a shoulder dislocation [3]. This is due to the capsule becoming stretched and permanently deformed during the dislocation. This study has shown that capsular volume can be quantified by 3D reconstruction of MR arthrograms. This method was able to quantify total capsular volume but may not be ideal for examining injury to specific regions of the capsule due to the large slice thickness of the MR arthrograms used to create the models. Also, the joint position and the amount of contrast agent injected into the joint were not standardized between subjects. Future studies should utilize a methodology to assess injury to specific regions of the capsule by determining the volume of each region following a shoulder dislocation. By quantifying capsular injury using MRI, surgical repair to the glenohumeral capsule following a shoulder dislocation can be individualized from patient to patient. Injury-specific repair could reduce the chance of recurrent shoulder instability, improving the patient's quality of life. REFERENCES 1. Abrams et al. JBJS, 2014. 2. Park et al. 2014. 3. Dietz et al. KSSTA, 2005.
Figure 3: Normalized glenohumeral capsular volume with superior portion removed in healthy subjects and injured subjects, calculated from 3D reconstruction of MR arthrogram. No significant difference was found.
ACKNOWLEDGEMENTS This research was conducted at the Orthopaedic Robotics Laboratory and was funded by the Swanson School of Engineering and the Office of the Provost.
EFFECT OF METFORMIN ADMINISTRATION ON TENDON WOUND HEALING Catherine Grace P. Hobayan, Arthur R. McDowell, Jianying Zhang, and James H. Wang MechanoBiology Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: cph28@pitt.edu, Web: http://pitt.edu/~mechbio INTRODUCTION Metformin is a hypoglycemic anti-inflammatory drug commonly used for treatment of Type II diabetes. High mobility group box 1 (HMGB1) is an alarmin protein released from necrotic cells to induce inflammatory responses in the human body. Metformin has been shown to bind to the acidic tail of HMGB1 and inhibit its inflammatory activity in a concentration-dependent manner [1]. The inflammatory activity of HMGB1 has also been shown to contribute to tendon wound healing mechanisms by recruiting paratenon cells to migrate to the wounded area [2]. No prior studies have analyzed the effect of metformin in the context of tendinopathy. In this pilot study, we hypothesize that metformin reduces the extent to which wounded tendons heal because of its inhibition of HMGB1 activity. This information will be beneficial for obese patients with Type II diabetes who regularly take metformin and have tendinopathy. METHODS In this pilot study, we used one experimental group and one control group with 3 mice per group. Two-month-old Ai9 transgenic male mice were used because they are genetically engineered such that the cells of interest express fluorescent markers [3]. Each mouse received 100 μL of 20 mg/mL tamoxifen injections for 5 consecutive days in order to induce the fluorescence of Scleraxis (Scx) and α-smooth muscle actin (α-SMA), which are markers of tendon and paratenon cells, respectively [3]. One week after the tamoxifen injections, the experimental mice received an intraperitoneal injection of 160 mg/kg metformin per day for 3 consecutive days.
One week after the metformin injections, the tendon-bone interfaces of the Achilles tendons of all mice were surgically wounded using a 0.5 mm biopsy punch.
Frozen tissue sections of each tendon were collected 4 weeks after surgery. Fluorescence microscopy was used to trace the origin of cells that migrated to the wounded area by visualizing Scx- and α-SMA-positive cells. These methods were used to analyze whether metformin affects the extent of tendon wound healing by affecting cell migration to the wounded area. DATA PROCESSING Fluorescence and brightfield images for visualizing Scx and α-SMA were merged. Cell count was determined manually by counting the number of red and green dots in the images. RESULTS Figure 1 shows that cell densities for both tenocytes and paratenon cells were higher in the absence of metformin, based on the fluorescence microscopy images shown in Figure 2.
Figure 1: Cell count for tenocytes and paratenon cells in the right Achilles tendons of 2-month-old Ai9 transgenic mice.
Figure 2 shows that 4 weeks after surgery, paratenon cells migrate into the center of the tendon in the control group, but this does not occur when metformin is administered in the experimental group. DISCUSSION The results may indicate that metformin has inhibited the activity of HMGB1 and prevented proper wound
healing by blocking the recruitment and migration of paratenon cells to the wounded area. This is consistent with the aforementioned literature in that the presence of metformin inhibited inflammatory activity that may have contributed to proper tendon wound healing. We have demonstrated that metformin administration before tendon injury inhibits the migration of paratenon cells to the wounded area of an Achilles tendon. Future studies should incorporate longer time points (e.g., 8 weeks after surgery) and involve metformin administration both before and after injury, as well as analysis of patellar tendon wound healing. Hematoxylin & eosin staining should be used to analyze alterations in the tissue structure of wounded tendons in response to metformin. Immunohistochemistry should also be
used to determine the presence and activity of HMGB1 in future studies. Furthermore, similar studies should be done to analyze the effects of metformin in female mice, as well as mice of different age groups. REFERENCES 1. Horiuchi et al. J Biol Chem 292(20), 8436-8446, 2017. 2. Akbar et al. RMD Open 3(2), e000456, 2017. 3. Dyment et al. PLoS ONE., 9(4), e96113, 2014. ACKNOWLEDGEMENTS We thank Dr. Feng Li, Dr. Kelly Williamson, the Division of Laboratory Animal Resources (DLAR), and the Swanson School of Engineering (SSOE) Undergraduate Summer Research Internship for their support on this project.
Figure 2: Fluorescence microscopy images (20x) of right Achilles tendons from 2-month-old Ai9 transgenic mice from control and experimental groups at 4 weeks after surgical wounding.
GLIOMA SEGMENTATION USING 3D REVERSIBLE U-NET Vivian Hu, Mobarakol Islam, and Hongliang Ren Medical Mechatronics Lab, Department of Biomedical Engineering National University of Singapore, Singapore, Singapore Email: vih15@pitt.edu INTRODUCTION Primary brain tumors (PBTs) develop in brain tissue (neurons or glial cells) or regions directly surrounding the brain such as blood vessels, nerves, and glands [1]. Malignant PBTs affect around 200,000 people every year [2]. The evaluation of brain tumor MRI scans is important for diagnosing and treating patients with brain cancer. Appropriate treatment is chosen by monitoring progression of the tumor with intensive neuroimaging protocols before and after the treatment [3]. Gliomas, a type of primary brain tumor, are fatal, yet variations in appearance, shape, and histology make them difficult to assess. In current clinical routines and studies, the evaluation of MRI scans is limited to qualitative measurements (presence of a characteristic tissue) or basic quantitative measurements (largest visible diameter of a lesion) [3]. Additionally, on an MRI scan, subregions of the tumor differ in intensity, shape, and size [4]. An automatic, accurate, and reproducible method of segmenting brain tumor sub-regions of MRI scans would improve the diagnosis of brain cancer and treatment planning. To achieve segmentation of glioma multimodal MRI scans, deep learning is utilized. Convolutional neural networks (CNNs) take input images and perform tasks such as classification, segmentation, or detection of objects. Segmentation of an image involves classifying each pixel into a category label and is useful for biomedical applications such as examining cells or tissues. A partially reversible U-Net architecture is used to predict segmentation of data for the Multimodal Brain Tumor Segmentation Benchmark (BraTS) 2018 Validation Data: Segmentation Task.
Mean dice scores for the enhancing tumor, whole tumor, and tumor core sub-regions are determined.
METHOD The training dataset used was the BraTS 2018 database [3][5]. It contains multimodal MRI scans for 285 patients: 210 with high-grade glioma and 75 with low-grade glioma. The BraTS 2018 validation set consists of multimodal MRI scans for 66 patients. Each patient had MRI scans in the following four modalities: T1, T1c, T2, and FLAIR (Figure 1).
Figure 1: The BraTS 2018 dataset contains multimodal MRI scans with the modalities T1, T1c, T2, and FLAIR. It also contains a manually annotated segmentation of tumor subregions, or ground truth (GT). The red, yellow, and green labels represent necrotic, enhancing, and edema regions.
Each image is a 3D scan of the brain with dimensions of 240 x 240 x 155. Labels for tumor sub-regions were enhancing tumor (ET), tumor core (TC), and whole tumor (WT). The annotated labels are colored red, yellow, and green and represent necrotic, enhancing, and edema regions. ITK-SNAP was used to visualize the MRI scans [6]. The model architecture used was the partially reversible U-Net, implemented in the PyTorch framework [7]. Images of dimension 240 x 240 x 155 are randomly cropped to 128 x 128 x 128 voxels. Data are normalized to zero mean and unit standard deviation. Data augmentation is added by random horizontal flipping. Training and testing were performed using an Nvidia GeForce GTX 1080 Ti. The model was trained using a batch size of 1 for 150 epochs. After training, the model was tested on the validation data to determine the epochs that produced the best mean dice score for the whole tumor. Once the epochs with the highest mean dice scores were determined, the model was tested on the BraTS 2018 Segmentation Task validation data.
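The preprocessing steps described here (random 128 x 128 x 128 crop, zero-mean/unit-standard-deviation normalization, random horizontal flip) can be sketched as follows. This is an illustrative NumPy version, not the authors' PyTorch code, and the random volume at the bottom stands in for a real scan.

```python
import numpy as np

def preprocess(volume, crop=(128, 128, 128), rng=None):
    """Randomly crop a 3D MRI volume to `crop`, normalize to zero mean and
    unit standard deviation, and apply a random horizontal flip."""
    rng = rng or np.random.default_rng()
    # pick a random corner so the crop fits inside the volume
    starts = [int(rng.integers(0, d - c + 1)) for d, c in zip(volume.shape, crop)]
    out = volume[tuple(slice(s, s + c) for s, c in zip(starts, crop))]
    out = out.astype(np.float32)
    out = (out - out.mean()) / (out.std() + 1e-8)   # zero mean, unit std
    if rng.random() < 0.5:                          # horizontal flip augmentation
        out = out[::-1, :, :]
    return out

# stand-in for one modality of a BraTS scan (240 x 240 x 155)
scan = np.random.rand(240, 240, 155).astype(np.float32)
patch = preprocess(scan)
```

In the actual pipeline the same crop and flip would be applied consistently across all four modalities and the label volume, so that inputs and targets stay aligned.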
RESULTS After training the partially reversible U-Net model, the top 5 epochs that produced the highest mean dice score for the whole tumor based on annotated validation data were determined. Once the epochs producing the best mean dice were identified, they were used to predict segmentation labels for the official BraTS 2018 Validation Data: Segmentation Task. The mean dice for the ET, WT, and TC sub-regions was calculated. For epoch 114, the model obtained mean dice scores of 0.727, 0.893, and 0.764 for the enhancing tumor, whole tumor, and tumor core sub-regions, respectively. Using the weights for epoch 114, predicted labels for the BraTS 2018 Validation Data: Segmentation Task are visualized (Figure 2).
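The dice score used throughout measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch with toy 2D masks (the convention of returning 1.0 for two empty masks is an assumption, not something stated in the text):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0          # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy masks: predicted tumor overlaps ground truth in a 4x4 slice
truth = np.zeros((4, 4), int)
truth[1:3, 1:3] = 1          # 4 labeled voxels
pred = np.zeros((4, 4), int)
pred[1:3, 1:4] = 1           # 6 predicted voxels, 4 of which overlap
score = dice(pred, truth)    # 2*4 / (6 + 4) = 0.8
```

For the BraTS sub-regions, the same function would be applied per region (ET, WT, TC) after converting the multi-class label map into one binary mask per region, then averaged over patients to get the reported mean dice.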
CONCLUSION Using the partially reversible U-Net architecture, multimodal MRI scans of low-grade and high-grade gliomas were segmented into whole tumor, enhancing tumor, and tumor core sub-regions with adequate mean dice scores. This architecture shows promise as a method to analyze gliomas, potentially leading to improved diagnosis and treatment for patients with brain cancer. 3D concurrent spatial and channel squeeze & excitation (scSE) will be added to enhance segmentation accuracy by bootstrapping the global spatial representation; the concept of scSE blocks is adapted from 2D application and will be utilized for 3D segmentation [8]. In the future, this model can be extended to other biomedical segmentation tasks. REFERENCES 1. Brain Tumors. American Association of Neurological
Figure 2: The T1c, T2, and FLAIR modalities of the brain tumor visualized with the predicted tumor sub-regions.
DISCUSSION The glioma MRI scans of 281 patients were used to train a partially reversible U-Net model that was then used to predict the segmentation of 66 patients. When tested on validation data with known segmentation annotations, the model achieved dice accuracy up to 0.872 when run for 150 epochs. The difference could be due to overfitting to the training data. When the top 5 epochs with the highest dice accuracy on validation data were submitted to the BraTS 2018 Validation Data: Segmentation Task, epoch 114 achieved the highest mean dice for the ET, WT, and TC, at values of 0.727, 0.893, and 0.764, respectively. Although annotations are not given for the 66 patients in the challenge validation data, predictions of the tumor sub-regions are visualized in Figure 2.
Surgeons. 2019. 2. Kheirollahi, M., et al. Advanced Biomedical Research, 4 (2015). 3. Menze, B. H., et al. IEEE Transactions on Medical Imaging, 34(10) (2014). 4. Multimodal Brain Tumor Segmentation Challenge 2018. Section for Biomedical Image Analysis (SBIA). 2019. 5. Bakas, S., et al. Scientific Data, 4, 170117 (2017). 6. Yushkevich, P. A., et al. Neuroimage, 31(3), 1116-1128 (2006). 7. Brügger, R., et al. arXiv preprint arXiv:1906.06148 (2019). 8. Roy, A. G., et al. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.
ACKNOWLEDGEMENTS I acknowledge support from the Swanson School of Engineering and Office of Engineering International Programs for providing funds. I would like to thank Mobarakol Islam and Dr. Hongliang Ren for their assistance during my project for the SERIUS Program at NUS.
Mechanical Characterization of Silk Derived Vascular Grafts for Human Arterial Implantation Patrick Iyasele, Eoghan Cunnane, Katherine Lorentz, Justin Weinbaum, and David Vorp Vascular Bioengineering Laboratory, McGowan Institute for Regenerative Medicine University of Pittsburgh, PA, USA Email: pii4@pitt.edu, Web: www.engineering.pitt.edu/vorplab/ INTRODUCTION Cardiovascular disease is the leading cause of death worldwide, with most deaths associated with coronary heart disease, cerebrovascular disease, peripheral arterial disease, and deep vein thrombosis. These diseases often result from the narrowing and blockage of blood vessels, leading to reduced blood flow and tissue damage due to inadequate nutrient supply [1]. The preferred treatment for occluded small diameter arteries (such as the coronary arteries) is revascularization surgery utilizing vascular grafts. During this surgery, a graft is used to replace or bypass the damaged or occluded vessel. Around 400,000 coronary artery bypass grafting (CABG) procedures are performed each year in the United States alone [2]. The saphenous vein and internal thoracic artery are commonly used for autografts, but these are limited in quantity and require invasive surgery to harvest and utilize [1]. Tissue engineered vascular grafts (TEVGs) are currently being studied and developed as alternatives. An ideal TEVG should match the mechanical properties of native tissues. Tangential modulus is a useful property to ascertain in this regard, as it describes the stiffness of a material at the specific strains applied during uniaxial extension testing. Stiffness determines a vessel's mechanical reaction to cardiac pressure waves. Replicating the vessel stiffness of native tissue is critical to developing accurate blood vessel replacements [3]. The TEVG tested in this study is made from Bombyx mori (BM) silk. Silk from B.
mori not only has impressive mechanical properties but also has environmental stability, biocompatibility, controlled proteolytic biodegradability, morphologic flexibility, and the ability for amino acid side chain modification to immobilize growth factors [4]. The objective of this study was to calculate the tangential modulus of a BM silk derived TEVG and compare it to the tangential modulus of the native vessels it is intended to replace.
METHODS The grafts were fabricated by loading a 6% BM silk solution (by weight) into molds, which have an inner diameter of 5.5 mm, an outer diameter of 7.1 mm, and a length of 15 cm. They were frozen at -20° C and then lyophilized. An electrospun layer made up of a mixture of silk and polycaprolactone (PCL) was then added at 200 RPM and 50 cycles/min to provide structural support and strength [3]. The scaffold was bulk seeded with 120 million human adipose-derived stem cells using a custom vacuum rotational seeding device. Two sheep carotid arteries were explanted. The explanted carotids and the TEVG were cut into ring specimens which were tested under uniaxial tension to determine their stress-strain response and tangential modulus at both low (1.3) strain ratio and high (1.9) strain ratio (Figure 1 a,b).
Figure 1a: Silk TEVG stress curve. Low 1.3 strain ratio region labeled orange. High 1.9 strain ratio region labeled red.
Figure 1b: Left Sheep carotid stress curve. Low strain ratio region labeled orange. High strain ratio region labeled red.
DATA PROCESSING The diameter, width, and wall thickness of each ring specimen were measured in ImageJ from captured images, using a pixel-to-millimeter scaling factor derived from a ruler included in the frame. Force and extension values were extracted from an Instron 5543A uniaxial
extension tester to be converted into stress and strain ratio values. Stress was calculated by equation 1:

Stress = Force / Area (eq. 1)

Strain ratio was calculated using equation 2:

Strain Ratio = Extension / Diameter + 1 (eq. 2)
The low and high strain ratio region tangential moduli were obtained by isolating the stress-strain curve at a 1.2-1.4 strain ratio region and at a 1.8-2.0 strain ratio region, respectively. Tangential modulus was calculated by taking the linear slope at the low and high strain ratio regions. RESULTS Five replicates for both sheep carotid arteries as well as the TEVG were tested. The tangential modulus for the TEVG was 0.11 ± 0.02 MPa in the low strain ratio region and 0.15 ± 0.03 MPa in the high strain ratio region (Figure 2). The high strain ratio region tangential modulus was significantly higher for the left and right carotids compared to the TEVG (4.3 ± 1.3 MPa and 3.6 ± 0.66 MPa vs. 0.15 ± 0.03 MPa, respectively; p=0.0002 and 0.0001 using one-way ANOVA, α = 0.05). The low strain ratio region tangential modulus was not statistically different between the left and right carotid artery and the TEVG (0.07 ± 0.03 MPa and 0.12 ± 0.04 MPa vs. 0.11 ± 0.02 MPa, respectively; p=0.927 and 0.202).
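The stress and strain-ratio conversions (eqs. 1 and 2) and the slope-based tangential modulus can be sketched on hypothetical data. The force, extension, and geometry values below are illustrative, and the synthetic curve is linear, so both region slopes come out equal; real stress-strain curves are nonlinear, which is why the low- and high-region moduli reported above differ.

```python
import numpy as np

def tangential_modulus(strain_ratio, stress, lo, hi):
    """Slope of a linear fit to the stress vs. strain-ratio curve over [lo, hi]."""
    mask = (strain_ratio >= lo) & (strain_ratio <= hi)
    slope, _ = np.polyfit(strain_ratio[mask], stress[mask], 1)
    return slope

# hypothetical ring specimen: force (N), extension (mm), geometry (mm, mm^2)
force = np.linspace(0.0, 2.0, 50)
extension = np.linspace(0.0, 5.0, 50)
diameter, area = 5.0, 2.0

stress = force / area                     # eq. 1 (N/mm^2 = MPa)
strain_ratio = extension / diameter + 1   # eq. 2 (dimensionless)

low_mod = tangential_modulus(strain_ratio, stress, 1.2, 1.4)    # 1.2-1.4 region
high_mod = tangential_modulus(strain_ratio, stress, 1.8, 2.0)   # 1.8-2.0 region
```

With force in newtons and area in square millimeters, the stress (and hence the modulus) comes out directly in MPa, matching the units reported in the results.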
Figure 2: Tangential moduli between the left and right sheep carotids and the silk graft at low and high strain ratio regions.
DISCUSSION The BM silk TEVG has a similar tangential modulus to a native sheep carotid artery at low strain ratios but is significantly different at high strain ratios. The difference in tangential modulus between the high strain ratio regions of the TEVG and sheep carotids may be because the TEVG was not remodeled by macrophages and arterial cells as it would be post-implantation [5]. Vascular remodeling is an active process of structural change that involves changes in at least four cellular processes: cell growth, cell death, cell migration, and the synthesis or degradation of extracellular matrix [6]. Cells would not have added any structural proteins to the vessel to increase stiffness since they did not have an implantation or in vitro culture period. This study could be improved by increasing the sample size of all groups tested. The limited sample size hinders us from determining if this is a real effect and from determining if our fabrication technique is consistent. A future study would be to repeat the scaffold seeding and tension test to obtain more replicates for the data set. REFERENCES 1. Pashneh-Tala, Sam, et al. "The Tissue-Engineered Vascular Graft-Past, Present, and Future." Tissue Engineering Part B: Reviews, vol. 22, no. 1 (2016): 68-100. 2. Go, Alan S., et al. "Heart disease and stroke statistics--2013 update: a report from the American Heart Association." Circulation, vol. 127, no. 1 (2013): e6-e245. 3. Stankus, John J., et al. "Fabrication of cell microintegrated blood vessel constructs through electrohydrodynamic atomization." Biomaterials, vol. 28, no. 17 (2007): 2738-46. 4. Vepari, Charu, and David L. Kaplan. "Silk as a Biomaterial." Progress in Polymer Science, vol. 32, no. 8-9 (2007): 991-1007. 5. Theodoridis, Karolina, et al. Tissue Engineering Part C: Methods, Dec 2017, ahead of print. 6. Renna, Nicolás F., et al. "Pathophysiology of Vascular Remodeling in Hypertension." International Journal of Hypertension, vol. 2013, Article ID 808353, 2013. ACKNOWLEDGEMENTS Thanks to the Swanson School of Engineering and Dr. Vorp for funding that made this work possible. Thanks to Dr. Jonathan Vande Geest for his generous gift of sheep arteries.
DEVELOPMENT OF SELECTIVE NEURAL ELECTRODE FOR CLOSED LOOP NEUROMODULATION OF NEUROGENIC BLADDER Sneha Jeevan, Marlena Raczkowska, Nitish Thakor Singapore Institute of Neurotechnology, Department of Bioengineering National University of Singapore, Singapore, Singapore Email: snj10@pitt.edu, Web: http://sinapse.nus.edu.sg INTRODUCTION Micturition, or urination, involves complex and highly distributed neural pathways and organ structures to voluntarily expel urine from the body. The primary function of the lower urinary tract (LUT) is to transform the flow of urine into an intermittent evacuation from the body [1]. Because the spinal cord plays such an essential role in the function of the LUT, spinal cord lesions can cause severe damage that leads to disorders such as lower urinary tract dysfunction (LUTD) [2]. After spinal cord injury, the excitatory and inhibitory inputs to the bladder and external urethral sphincter (EUS) are lost while the storage reflexes are preserved, causing loss of voluntary control of the lower urinary tract. Lower urinary tract dysfunction due to spinal cord injury leads to neurogenic bladder, resulting in urinary incontinence and poor quality of life. To combat LUTD, sacral anterior root stimulation (SARS) acts as a form of open-loop stimulation to modulate bladder function. The Brindley system combines SARS and dorsal root rhizotomy to abolish neurogenic detrusor overactivity. It consists of an internal unit comprising electrodes, connecting cables, and a receiver block. The electrodes, as shown in Fig. 3A, are attached to the anterior roots S2-S4 within the spinal cord, the areas primarily responsible for voluntary micturition. The electrodes are then guided to an internal receiver [3], which can be activated by the external component of the system. The external stimulating device is placed on the skin over the implanted receiver to evoke stimuli [4].
By emitting radiofrequency waves to induce electrical stimuli, stimulation programs can then evoke micturition in the patient. While the procedure has shown to have good clinical results in the restoration of bladder function in spinal cord injury patients, it is not easily applicable in clinical practice. Furthermore, the open-loop basis of the system, with no feedback signals or possibility of adjustment of the
stimulation, can result in only partial voiding and nerve habituation due to overstimulation. To adjust the system to better suit patients' needs, a closed-loop stimulation system is required. Closed-loop systems utilize an adaptive approach to allow for minimally invasive procedures and further control by the patient. Such systems act as a feedback control system to restore neural function. By implementing a feedback control approach for bladder emptying, the stimulation signals will better cater to the patient's individual needs. This is particularly important in patients with underactive neurogenic bladder, as the open-loop nature of currently used treatments, such as SARS, can result in overstimulation of the nerves and increased post-void residual volume [5]. Conditional stimulation given in a closed-loop system allows the bladder to fill to a greater volume before micturition, preventing voiding loss and maintaining urinary continence. To further increase voiding efficiency, selective neuromodulation that targets different branches of nerves can be used within the adaptive closed-loop system. Building on this closed-loop approach to neural modulation, we can create a selective neural electrode for use in a closed-loop system. The newly created electrode will stimulate the parasympathetic branches of the pelvic nerve to promote precise bladder contractions, improve different-branch stimulation, and thereby increase voiding efficiency. METHODS Electrodes must have high corrosion resistance, a high modulus of elasticity, and high durability. Platinum-iridium and titanium-based electrodes have been shown to be most effective in meeting these criteria. Platinum has been used in a variety of medical devices to treat ailments such as heart disease, stroke, neurological disorders, and other life-threatening conditions. Platinum-iridium is highly biocompatible, durable, radiopaque, and
electrically conductive. A 100 μm Pt-Ir wire would extend in a 6-pronged ring out of the hook. Such a design would allow the electrode to grasp the specific parasympathetic branch within the pelvic nerve, promoting selective stimulation of the nerve and thereby causing the bladder to contract and induce voiding. To test the efficacy of the electrode, rat models that have undergone surgical spinal cord injury would be needed. After an incision in the upper pelvic area, the modified hook electrode would wrap around the distinct branches. ENG and EMG recordings would be taken to determine the voiding efficiency of the altered hook electrode. RESULTS Selective stimulation of branches of the pelvic nerve has been shown to affect the volume voided by the rat models previously used. The previously used hook electrodes were able to adequately wrap around the targeted parasympathetic nerve branch. The voided volume was able to reach high values at lower stimulation amplitudes, showing that closed-loop neuromodulation systems can efficiently increase voiding while decreasing the intensity of the stimulation signal. DISCUSSION The use of closed-loop selective neuromodulation in treating underactive neurogenic bladder shows higher voiding efficiency and accuracy. The feedback control provided by the closed-loop stimulation system allowed for more voiding volume without needing stronger stimulation signals. Selective branch stimulation of the pelvic nerve proved to minimize the need for strong stimulation to induce micturition in the rat. The electrode proposed will need to be placed in rat
models in further trials to determine if the design adequately performs different branch stimulation. REFERENCES 1. Podnar S. Neurophysiology of the neurogenic lower urinary tract disorders. Clinical Neurophysiology 118(7):1423-1437, 2007. 2. Tai C, Roppolo JR, de Groat WC. Spinal reflex control of micturition after spinal cord injury. Restor Neurol Neurosci 24(2):69-78, 2006. 3. Janssen D, Martens FMJ, Wall L, van Breda HMK, Heesakkers J. Clinical utility of neurostimulation devices in the treatment of overactive bladder: current perspectives. Medical Devices: Evidence and Research 10:109-122, 2017. 4. Martens FM, Heesakkers JP. Clinical results of a Brindley procedure: sacral anterior root stimulation in combination with a rhizotomy of the dorsal roots. Adv Urol 2011:709708, 2011. 5. Raczkowska MN, Peh WYX, Teh Y, Alam M, Yen S, Thakor NV. Closed-loop bladder neuromodulation therapy in spinal cord injury rat model. 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA, pp. 147-150, 2019. ACKNOWLEDGEMENTS I would like to thank my Principal Investigator, Dr. Nitish Thakor, for allowing me the opportunity to work in his research lab at SINAPSE. I would also like to thank my mentor, Marlena Raczkowska, for all the advice and guidance she has given me over the course of this internship and for inspiring me to further pursue the field of bioelectronic medicine. This research was funded by the Swanson School of Engineering, the Office of the Provost, and the Office of Engineering International Programs.
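The conditional, threshold-gated stimulation strategy described in this abstract can be illustrated with a minimal control-loop sketch. All signal names, thresholds, and amplitudes below are hypothetical illustrations for the general idea, not the actual experimental parameters:

```python
# Minimal sketch of conditional (closed-loop) stimulation logic.
# Thresholds, amplitudes, and the fullness estimate are hypothetical.

FULLNESS_THRESHOLD = 0.8   # normalized bladder-fullness estimate
STIM_AMPLITUDE = 0.3       # normalized stimulation amplitude

def closed_loop_step(fullness_estimate, stimulating):
    """Decide the stimulator command for one control cycle.

    Open-loop stimulation (e.g., SARS) runs on a fixed schedule; here
    stimulation is gated on the sensed bladder state, so the bladder
    fills further before micturition is triggered and overstimulation
    is avoided.
    """
    if not stimulating and fullness_estimate >= FULLNESS_THRESHOLD:
        return ("stimulate", STIM_AMPLITUDE)   # trigger voiding
    if stimulating and fullness_estimate < 0.1:
        return ("stop", 0.0)                   # bladder emptied
    return ("hold", STIM_AMPLITUDE if stimulating else 0.0)

# Example: bladder fills, stimulation triggers, bladder empties
states = [0.5, 0.85, 0.4, 0.05]
stimulating = False
log = []
for s in states:
    cmd, amp = closed_loop_step(s, stimulating)
    if cmd == "stimulate":
        stimulating = True
    elif cmd == "stop":
        stimulating = False
    log.append(cmd)
print(log)  # ['hold', 'stimulate', 'hold', 'stop']
```

In a real system the fullness estimate would come from decoded ENG/pressure signals rather than a normalized scalar.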
Development of a Wall-Mounted Dynamometer for Arm and Shoulder Physical Therapy Sara E. Kron University of Pittsburgh Makerspaces, Innovation and Entrepreneurship University of Pittsburgh, PA, USA Email: sek154@pitt.edu INTRODUCTION The purpose of this project was to continue development of a previously iterated custom dynamometer system for measuring force production in throwing athletes. These measurements are typically carried out with a handheld dynamometer that measures peak isometric force in a variety of arm positions, including thumb-up scaption, palm-down abduction, side-lying ER neutral, side-lying IR neutral, and serratus flexion. A significant problem with this method is the variability that occurs in the use of the handheld device. Measured forces are not consistent because it is difficult to replicate the position and orientation of the sensor from one test to the next, and the holder of the sensor does not provide rigid support, so a pure isometric force is not measured. To address these shortcomings, a rigid mounting structure was developed in a previous project that allows the dynamometer to be held in multiple positions and orientations, accommodating a variety of athlete heights while allowing the full set of desired measurements. The present project involves further improvements to the custom system. In previous phases of the project, a rod with a commercial handheld dynamometer was attached to an adjustable linear slider and inserted into a wall-mounted rail, as shown in Figure 1. In doing so, the system could be adjusted to accommodate different heights for multiple testing positions. This system exhibited the desired functionality and ease of use but had significant deflection about the vertical axis of the system. As a result, this phase of the project was tasked primarily with improving the rigidity of the system.
Figure 1: Custom dynamometer system mounted to wall, where red arrow shows the primary direction of deflection.
METHODS A functional duplicate of the previous system was built in order to better understand and measure the reported deflection. To quantify this behavior, the deflection of the dynamometer under a given applied load was measured. The system was tested in three different positions: two were measurement configurations for typical athletes, and the third was chosen to produce the minimum deflection because of its alignment with the wall fasteners. After gathering this initial data, two new prototypes were ideated, modeled, machined, and assembled. Testing using the aforementioned protocol revealed that neither of these systems significantly decreased the deflection measured in the initial system. Because most of the movement resulted from torsion in the rail itself, further ideation proposed that a flatter, plate-like rail would inherently decrease the possible torsion and improve the system. A new design was developed to address the deformation, as shown in Figure 2.
Figure 2: The revised design implemented a flatter, plate-like rail to reduce torsion.
DATA PROCESSING For the three measured positions, as shown in Figure 3, deflection distances from the dynamometer to a static object were measured while static forces were applied to the dynamometer. Improvement to the system was gauged by comparing the deflection of the different systems in the same positions.
Figure 3: Calipers were used to measure the resting distance from a ladder before and while a force was applied to the system.
RESULTS As noted above, the initial proposed solutions did not significantly decrease the measured deflection, bringing the previous maximum deflection of 0.99 inches down only 1.0%, to 0.98 inches. The final system design, which uses a flatter wall-mounted rail, initially decreased deflection, but the screws used to fix the rail to the testing wall unexpectedly began to pull out of the wall after a large force was applied. Although the total deflection of this system is nearly the same as that of the original, the fact that the anchors themselves are moving indicates that at least some of the deflection is due to the anchors rather than to the flexibility of the system. Once proper anchors are implemented, the system should therefore improve. DISCUSSION Although this phase was unable to obtain reliable quantitative data on the improvement in deflection of the wall-mounted system, due to unexpected failures in the mounting techniques, the CAD model, deflection calculations, and vendor data on the components used all suggest that the new design will significantly decrease deflection compared to the previous system. In the next phase of the project, different wall anchors and/or mounting techniques should be implemented to allow for more accurate testing and definitive conclusions. ACKNOWLEDGEMENTS Dr. Clark and Brandon Barber, as well as my fellow interns Megan Black, Maya Roman, and Sarah Snavely, acted as excellent resources. The Swanson School of Engineering and the Office of the Provost jointly funded this project.
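The improvement metric used above is a simple relative reduction in measured deflection; a minimal sketch using the deflection values reported in RESULTS (the helper name is our own):

```python
# Sketch of the deflection comparison used to gauge improvement.
# Values are those reported in RESULTS; the function name is illustrative.

def percent_reduction(baseline, improved):
    """Percent decrease in deflection relative to the baseline system."""
    return (baseline - improved) / baseline * 100.0

original = 0.99   # inches, deflection of the original rail system
prototype = 0.98  # inches, deflection of the first prototype revisions

print(round(percent_reduction(original, prototype), 1))  # 1.0
```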
A NOVEL ROLE FOR PROCESSING BODIES IN AGE-RELATED IMPAIRMENT OF MUSCLE STEM CELL SELF-RENEWAL Jane E. Lasak, Amin Cheikhi, Amrita Sahu, Fabrisia Ambrosio McGowan Institute for Regenerative Medicine, Department of Physical Medicine and Rehabilitation University of Pittsburgh, PA, USA Email: Jal281@pitt.edu Web: https://mirm-pitt.net/ INTRODUCTION Aging results in impaired physical function and decreased regenerative capacity due to metabolic and biochemical changes within the skeletal muscle. Muscle stem cells (MuSCs) represent the primary reserve cell population responsible for dictating the skeletal muscle regenerative cascade. As a result of aging, MuSCs display a decreased capacity to become activated after injury and self-renew1. The concept of self-renewal relies on the transfer of distinct mRNA decapping and degradation proteins via sub-cellular structures2. One distinct feature that affects this transfer is the cytoplasmic environment. When exposed to harsh environments, a cell transforms into a solid-like state, inhibiting the mobility of proteins and other cellular structures that transport nutrients2. Changes in acidity or electric potential gradients may lead to this result3, ultimately impairing stem cell function. We hypothesize that a group of cytoplasmic, non-membranous structures known as processing bodies (p-bodies) is responsible for the storage and transfer of mRNAs that dictate MuSC self-renewal4. Moreover, we predict that p-body formation is repressed in aged MuSCs due to their weakened cellular pathways, such as Notch signaling, thereby delaying the cell's response to environmental cues5. To test this hypothesis, we investigated markers of self-renewal and p-body formation in young and old MuSCs in vitro.
METHODS Young (3-4 months) and old (22 months) male C57BL/6 mice were injured using a 10 μL injection of cardiotoxin (1 μg/mL) into the bilateral tibialis anterior muscles. The next day, MuSCs were isolated from all limb muscles using fluorescence-activated cell sorting with the surface markers Sca1 (-) and α-7 (+), as previously described1. Isolated cells were cultured in growth media for 48 hours at 37°C and 5% CO2. Cells were fixed in 2% paraformaldehyde
and subsequently prepared for immunofluorescence staining. DCP1A and H2A.Z proteins were fluorescently tagged with antibodies marking p-bodies and MuSC asymmetry, respectively (Invitrogen Rb α DCP1A [EPR13822], 1:250; Cell Signaling Technologies Rb α H2A.Z, 1:400). Measurements of MuSC asymmetry indicate whether a cell is reverting to a quiescent state, a feature directly related to self-renewal capacity. Since it has been suggested that p-body formation is linked to changes in the cytoplasmic environment, such as intracellular pH2, we also seeded young and aged MuSCs on 35 mm MatTek dishes. Using an Invitrogen pHrodo Green AM fluorescent probe, pH variance before and after a stressor cocktail (valinomycin and nigericin) was analyzed over time by widefield live-cell imaging. After measuring pH intensity for 21 minutes, the stressor cocktail was added, and pH intensity was captured for another 10 minutes. Images and corresponding intensity values were captured every 3 minutes for both the differential interference contrast (DIC) and FITC channels. DATA PROCESSING To detect discrete p-body structures, Z-stack renderings were captured using a Zeiss semi-confocal microscope. The slice with the most focused image was used to measure the total intensity and area of H2A.Z and DCP1A signal for selected regions of interest (ROIs) via ImageJ analysis. To normalize the data, intensity-per-area values were calculated for both H2A.Z and DCP1A. These data were plotted using GraphPad Prism software. Live-cell imaging data were analyzed using NIS Elements software. Using the DIC channel, ROIs were selected, and measurements of pH intensity, expressed in the FITC channel, were
recorded for each ROI at each time segment before and after the addition of the stressor cocktail. To normalize the data, sum-intensity-per-area values were calculated at each time point per cell. Using these values, the average sum intensity per area over all time points was calculated for MuSCs before and after drug treatment, and graphical representations were developed using GraphPad Prism software. RESULTS As shown in Figure 1A, discrete p-body structures were more visible within the cytoplasm of young MuSCs. In addition, there was a significant correlation between the average intensity per cell of DCP1A and H2A.Z signal in young MuSCs (p<0.05). However, this relationship was disrupted with aging (p>0.05) (Figure 1B). Live-cell imaging revealed that old MuSCs were initially more acidic than their young counterparts, which is consistent with their decreased formation of p-bodies (data not shown). Interestingly, whereas young MuSCs maintained a stable, more basic pH even in the presence of stress, the pH of old MuSCs increased in basicity by 30% (data not shown).
DISCUSSION These experimental findings suggest that old MuSCs display a decreased capacity to form discrete p-body structures in vitro. In addition, the age-dependent correlation between DCP1A and H2A.Z intensities in MuSCs suggests that p-body formation may contribute to cellular self-renewal capacity. Furthermore, we posit that the more acidic environment of old MuSCs may contribute to the inhibition of p-body formation. This environmental change could lead to a weakening of cellular signaling and communication among MuSC populations. Since it is already established that aged stem cells display a decreased regenerative capacity5, it is plausible that p-bodies found in young and aged MuSCs carry altered mRNA components that do not as effectively support self-renewal. Future studies designed to investigate the p-body cargo that may facilitate this self-renewal process are warranted. In addition, it would be interesting to interrogate whether and how potential differences in p-body size and abundance may dictate age-related changes in MuSC function. REFERENCES 1. Sahu et al., Nature Communications, 9, 2018. 2. Munder et al., eLife, 5, e09347, 2016. 3. Iwasa et al., Stem Cells Int., 2017, 2017. 4. Aizer et al., Company of Biologists, 127, 4443-4456, 2014. 5. Conboy et al., Cell Cycle, 4, 407-410, 2005. ACKNOWLEDGEMENTS Authors are grateful for help from Zachary Clemens, Abish Pius, Sunita Shinde, Sruthi Sivakumar, and the Center for Biological Imaging. Funding for this research project was provided by Dr. Fabrisia Ambrosio, the University of Pittsburgh Swanson School of Engineering, and the Office of the Provost.
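The intensity-per-area normalization described in DATA PROCESSING amounts to dividing each ROI's summed signal by its area and then averaging across time points; a minimal sketch (ROI numbers are made up for illustration):

```python
# Sketch of the ROI normalization described in DATA PROCESSING.
# (total signal, area) pairs below are illustrative, not measured data.

def intensity_per_area(rois):
    """Normalize each ROI's summed signal by its area."""
    return [total / area for total, area in rois]

def mean_over_time(series):
    """Average intensity/area across imaging time points for one cell."""
    return sum(series) / len(series)

# Hypothetical ROIs: (total DCP1A signal, ROI area in px^2)
dcp1a_rois = [(1200.0, 400.0), (900.0, 300.0)]
print(intensity_per_area(dcp1a_rois))  # [3.0, 3.0]

# Hypothetical per-timepoint intensity/area values for one cell
print(mean_over_time([2.0, 2.5, 3.0]))  # 2.5
```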
FINITE ELEMENT EVALUATION OF VARIOUS STENT MECHANICAL PROPERTIES IN A KNEE BENDING MECHANICAL ENVIRONMENT Jinghang Li and Jonathan Vande Geest Soft Tissue Biomechanics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: jil202@pitt.edu, Web: https://www.engineering.pitt.edu/stbl/ INTRODUCTION Stenting is a common treatment to restore normal blood flow in a clotted artery. Stents scaffolding a narrowed vessel open are subjected to stresses from the cyclic pressure generated by the heart as well as mechanical loads from the deployment environment. For example, stents deployed in the popliteal artery experience large stresses due to patients' daily knee-bending activity. The commercially available stents on the market are most commonly made from Nitinol and Cobalt-Chromium. In this study, using the finite element method, we compared the maximum stresses after bending on stents of different material properties, including Nitinol, Cobalt-Chromium, and a compliance-matched material developed in the Vande Geest laboratory. METHODS The commercially available software Abaqus CAE was used to create the finite element models of the stent and the balloon. Because of the stent's symmetry, only 1/8 of the geometry was modeled to save computational time. A user-defined superelastic material was used to mimic the Nitinol response, and an elastic material with a Young's modulus of 210 GPa and a Poisson's ratio of 0.29 was imposed for Cobalt-Chromium. Additionally, we used an isotropic hyperelastic material for the stent geometry. For the hyperelastic material, we chose a neo-Hookean material model with C10 = 64 MPa and D = 0. The C10 value is from our previous study in the Soft Tissue Biomechanics Laboratory (STBL) on vascular graft compliance optimization [1]. C3D8I and SFM3D4R elements were used to mesh the stent and the balloon, respectively. The simulations were conducted in two steps.
First, a radial displacement of 1.69 mm was applied to complete the balloon expansion. Interaction between the stent and the balloon was modeled as frictionless hard contact. After the stent was fully expanded, the
deformed geometry was exported. Then a separate model was created to simulate the bending process. In the new model, the artery is 11 mm long and 0.4 mm thick, with an inner diameter of 2 mm. A hyperelastic material model was used for the artery geometry with C10 = 4.7 kPa [1]. The stent was assembled at the center of the artery, as shown below in Figure 1. The stent outer surface and the artery inner surface were then tied together. The two artery ends were fixed in all directions except for a rotational displacement of 40 degrees about the principal axis. All simulations had a total time period of 1 s in each step. An automatic time incrementation scheme with an initial increment of 0.1 s was used for all steps. All simulations were carried out using the static general solver (Dassault Systemes 2016).
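For reference, the neo-Hookean model named in METHODS corresponds to a strain-energy density of the standard (Abaqus-convention) form below; note that with the reported D = 0 the volumetric term is omitted and the material is treated as fully incompressible:

```latex
% Neo-Hookean strain-energy density (Abaqus convention):
%   \bar{I}_1 = first deviatoric strain invariant, J = elastic volume ratio
W = C_{10}\left(\bar{I}_1 - 3\right) + \frac{1}{D}\left(J - 1\right)^2
% Values used in this study: C_{10} = 64~\text{MPa} for the stent,
% C_{10} = 4.7~\text{kPa} for the artery; D = 0 (incompressible).
```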
RESULTS Tables 1 and 2 report the maximum values of Von Mises and principal stress on the stent and artery for each material case. Additionally, a contour plot of the assembly after bending is also shown below.

Table 1. Maximum values of Von Mises and principal stress on the inner artery surface after bending for the different stent materials.
Material             Von Mises (kPa)   Max. Principal Stress (kPa)
Nitinol              4.07              6.5
Cobalt-Chromium      4.19              6.81
Compliance Matched   3.4               5.41

Table 2. Maximum values of Von Mises and principal stress on the stent geometry after bending for the different stent materials.
Material             Von Mises (MPa)   Max. Principal Stress (MPa)
Nitinol              40.9              42.6
Cobalt-Chromium      52.9              55.2
Compliance Matched   2.73              2.95
Figure 1. Contour plot of the stent and artery assembly before the bending deformation. The stent is assembled around the center of the artery. Additionally, the stent outer surface and the artery inner surface are tied together.
DISCUSSION In conclusion, the maximum principal stress and Von Mises stress values within the stent are similar for the two metallic materials but much larger than those of our hyperelastic material. In a dynamic mechanical environment like that of the bending knee, higher stresses generally result in a higher fracture rate of the device. Use of our novel material in this application may therefore be an attractive option to reduce device failure rates when treating peripheral arterial disease with an endovascular device. Stresses within the artery are similar for all materials tested; however, our device did have the lowest values overall. This may suggest that using our material could result in a lower degree of stress-mediated vascular smooth muscle cell activation, proliferation, and subsequent intimal hyperplasia. Further tests are required to explore our material in this application using in-vitro and in-vivo assessment techniques. REFERENCE: [1] Tamimi, Ehab A., et al. "Computationally Optimizing the Compliance of Multilayered Biomimetic Tissue Engineered Vascular Grafts." Journal of Biomechanical Engineering, vol. 141, no. 6, 2019, p. 061003. doi:10.1115/1.4042902. ACKNOWLEDGEMENTS Computational resources were provided by Prof. Vande Geest at the University of Pittsburgh. The study was co-funded by the Soft Tissue Biomechanics Lab, the Swanson School of Engineering, and the Office of the Provost.
Figure 2. Contour plot of the stent and artery assembly after the bending deformation. Each artery end has a rotational displacement of 40 degrees about the principal axis.
ROBUST OSTEOGENESIS OF MESENCHYMAL STEM CELLS IN 3D BIOACTIVE HYDROGEL NANOCOMPOSITES REINFORCED WITH GRAPHENE NANOMATERIALS Eileen Li a,b, Zhong Li b, Colin Del Duke b, Hang Lin a,b
a Department of Bioengineering, b Department of Orthopedic Surgery, Center for Cellular and Molecular Engineering, University of Pittsburgh School of Medicine, PA, USA
Email: enl19@pitt.edu, Web: https://www.ccme.pitt.edu/
INTRODUCTION Large bone defects, fracture delayed unions, and non-unions have become more prevalent. A major issue in the treatment of bone injury is obtaining functional bone grafts for repair. Currently, autografts, allografts, and xenografts are widely used clinically for bone defect management. Autografts remain the "gold standard" treatment but have very limited availability. This method essentially creates another site of injury for the patient and introduces the chance of further complications [1]. Allografts and xenografts are harvested from deceased donors and animals, respectively, making availability less of an issue. However, they pose high risks of disease transmission and immunological rejection [1]. To address this, bone grafts engineered from stem cells and synthetic, bioactive materials are attracting increasing attention. Nanomaterials, with their unique physical and chemical characteristics, have been used in a broad array of biomedical applications. It has been reported that certain nanomaterials can help promote protein adsorption and trigger signaling pathways, which may be exploited to direct cell behavior [2]. For example, if used appropriately, nanomaterials may help upregulate osteogenic differentiation of mesenchymal stem cells (MSCs) for creating tissue-engineered bone, thereby eliminating the need for, and complications associated with, autografts, allografts, or xenografts. Recent studies have utilized 2D graphene nanomaterials and their derivatives, such as graphene oxide (GO), in the hope that these nanomaterials can provide satisfactory mechanical and biological environments for stem cell-based bone tissue engineering. Silica-coated graphene oxide (SiGO) is of particular interest in bone tissue engineering, as silicon (Si) is a key trace element reported to be essential for maintaining healthy bone and promoting bone remodeling [3].
In this research, we hypothesize that the incorporation of SiGO nanosheets in 3D hydrogel scaffolds can significantly enhance the osteogenic differentiation of human MSCs for potential bone repair and regeneration applications. METHODS With IRB approval (University of Washington), MSCs were isolated from the femoral heads and trabecular bone of human patients undergoing total knee arthroplasty. GO was synthesized using a modified Hummers method and then converted to SiGO nanosheets via a sol-gel method reported in our previous study. The SiGO nanosheets were combined with 15% methacrylated gelatin (SiGO/GelMA) at 1 mg/mL. This composite solution was used to resuspend MSCs at 20 million cells/mL and photocrosslinked to create 3D cell-containing scaffolds. The scaffolds were cultured in osteogenic media (DMEM with 10% FBS, 1% Antibiotic-Antimycotic, 10 nM dexamethasone, 0.1 mM L-ascorbic acid 2-phosphate, 10 mM beta-glycerophosphate, and 10 nM vitamin D3) for 4 weeks with no supplemental osteogenic growth factors. The cytocompatibility of SiGO/GelMA was analyzed using the Live/Dead cell viability assay. Real-time polymerase chain reaction (RT-PCR) was performed to analyze osteogenic gene expression. Histological staining and immunohistochemistry (IHC) were used to further assess osteogenesis quality. Pure GelMA scaffolds were prepared and cultured under identical conditions for comparison. RESULTS The viability of cells encapsulated in the scaffolds was unaffected by SiGO addition, as shown by Live/Dead staining (Figure 1A). The expression levels of major osteogenic marker genes, including osteocalcin (OCN) and bone morphogenetic protein 2 (BMP2) (Figure 1B), were quantified with RT-PCR
and were generally highest in the SiGO/GelMA group. Alizarin red, Von Kossa, and calcein green staining all indicated significantly more homogeneous and robust MSC calcification in SiGO/GelMA scaffolds than in the other groups (Figure 1C-D). Immunohistochemistry staining identified the highest amounts of alkaline phosphatase (ALP) and OCN in the SiGO scaffolds (Figure 1E).
DISCUSSION The high expression levels of major osteogenic marker genes indicate that the MSCs have undergone osteogenic differentiation. These marker genes were expressed at higher levels in SiGO-containing scaffolds, suggesting that SiGO is more osteoinductive than unmodified GO. Furthermore, the Alizarin red and Von Kossa staining, which indicate calcium and phosphate deposition, respectively, show more robust and homogeneous mineralization in the SiGO group. This is further supported by the fluorescent calcein green staining, which demonstrates a homogeneous 3D distribution of calcium minerals throughout the SiGO/GelMA scaffolds in larger quantity than in pure GelMA. This suggests that the scaffold biomaterial affects the transport of certain proteins that assist in upregulating osteogenesis in the MSCs, which warrants further investigation. It is worth noting that the SiGO nanomaterial did not elicit any adverse effect on cell viability, as shown by the Live/Dead assay. Overall, SiGO provides a robust environment for the differentiation of MSCs into bone tissue. CONCLUSION The results suggest that, in comparison to other nanomaterials, SiGO may hold immense potential in MSC-based bone tissue engineering and regeneration. We believe the mechanically strong core and biologically active shell of SiGO nanoplatelets synergistically promote osteogenic differentiation. In future research, mechanical testing will be performed on the scaffolds to quantify their compressive modulus, and western blotting will be used to decipher the molecular mechanisms underlying the osteoinductive properties of SiGO. REFERENCES 1.
Aitken G, et al. Benefits and associated risks of using allografts, autograft and synthetic bone fusion material for patients and service providers. JBI Database of Systematic Reviews and Implementation Reports 8(8), 1-13, 2010. 2. McMahon R, et al. Development of nanomaterials for bone repair and regeneration. J Biomed Mater Res 101B(2), 387-397, 2013. 3. Gotz W, et al. Effects of Silicon compounds of biomineralization, osteogenesis, and hard tissue formation. Pharmaceutics 11(3) 117-144, 2019.
Figure 1. A) Live/Dead staining of MSCs in GelMA and SiGO/GelMA on day 4 (scale bar = 200 µm). B) RT-PCR results after 4 weeks of culture. C) Alizarin red staining of scaffolds after 4-week culture. D) Fluorescent imaging of calcein green in scaffolds after 4-week culture. E) ALP and OCN staining of scaffolds after 4-week culture (scale bar = 1 mm).
ACKNOWLEDGEMENTS Thanks to the Department of Orthopedic Surgery and the Center for Cellular and Molecular Engineering; special thanks to Dr. Hang Lin and Dr. Zhong Li.
MANUFACTURING A POLYELECTROLYTE COATING ON CONTACT LENSES USING AUTOMATED VS. MANUAL TECHNIQUES FOR THE TREATMENT OF DRY EYE DISEASE
Zixie Liang 1, Alexis Nolfi 1,2, and Bryan Brown 1,2
1 University of Pittsburgh, 2 McGowan Institute for Regenerative Medicine, Pittsburgh, PA
INTRODUCTION Dry eye disease (DED) is a prevalent disease in the US and worldwide, affecting millions of people, especially those middle-aged and older [1]. Patients with dry eye disease often experience visual disturbances, eye dryness, irritation, and light sensitivity, which decrease quality of life [2]. Current treatments provide only transient, temporary relief and do not change the underlying disease [3]. Many treatments are also costly and have side effects [3]. Therefore, a new treatment for DED is desired. Past research revealed that DED is a disease with a core mechanism of inflammation and that this inflammation is mediated by macrophages [4]. A treatment for DED may be achieved by manipulating the polarization of macrophages to shift from a pro-inflammatory (M1) to an anti-inflammatory (M2) phenotype [4]. Previous studies focusing on polypropylene mesh have shown that immunomodulatory cytokines that promote an M2 phenotype can be released from a nanometer-thickness polymer coating loaded onto the surface of a biomaterial implant [5]. Therefore, we propose to use this immune-modifying drug with a polymer delivery system to create a coating on a contact lens. We hypothesize that this will give a sustained release of drug over time, remedying the shortcomings of current DED treatments. To manufacture these coated lenses, we produced them either in an automated way using a machine or manually by hand. The machine-coated and hand-coated methods were compared in this experiment. METHODS A layer-by-layer procedure was conducted in order to deposit a uniform coating capable of releasing immune-modifying drugs. Chitosan (2 mg/mL) was used as the polycation and dermatan sulfate (2 mg/mL) as the polyanion in order to build up base layers. For manually produced lenses, lenses were placed in a cage and moved by hand between polymer solutions and washes (Fig. 1). Residual polymer was tapped off in between steps. For lenses
made in an automated way, lenses were secured between tweezers and clamped into a Silar Controller (MTI Corporation, Richmond, CA) automated staining apparatus, where the machine moved the lenses between solutions and washes (Fig. 2). After 10 cycles of non-drug-containing base polymer layers were added, a mixture of immune-modifying drug (1.5 µg/mL) and dermatan sulfate (2 mg/mL), along with chitosan, was used to build up another 40 drug-containing layers on the lenses. Uncoated lenses were used as a control.
Figure 1. Set up of hand dipping method (i), and the lens cage (ii) used to secure lens.
Figure 2. Set up of machine dipping method (i), and the tweezer (ii) used to secure lens.
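The layer-by-layer sequence described in METHODS, 10 base cycles followed by 40 drug-containing cycles, with washes between dips, can be sketched as a simple loop. The step labels and helper function are illustrative only; the real process moves lenses between solutions by hand or by the automated apparatus:

```python
# Sketch of the layer-by-layer dip sequence described in METHODS.
# Step labels are descriptive placeholders, not instrument commands.

def lbl_sequence(base_cycles=10, drug_cycles=40):
    """Return the ordered dip/wash steps for one coated lens."""
    steps = []
    for _ in range(base_cycles):
        steps += ["chitosan", "wash", "dermatan sulfate", "wash"]
    for _ in range(drug_cycles):
        steps += ["chitosan", "wash", "drug + dermatan sulfate", "wash"]
    return steps

steps = lbl_sequence()
print(len(steps))  # 200 dips/washes in total
print(steps[:4])   # ['chitosan', 'wash', 'dermatan sulfate', 'wash']
```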
Alcian blue staining, which stains glycosaminoglycan components blue, was performed to confirm that lenses had been successfully coated with polymers in a uniform and conformal way. The stain was observed and captured by camera. A controlled drug release assay and subsequent ELISA were performed to test whether the drug was released in a relatively sustained way and whether there were differences in release amount or kinetics due to differences in manufacturing. A graph was
generated from the ELISA assay showing cumulative release of drug over time. RESULTS Images of Alcian blue-stained lenses are shown below in Figure 3. Both the hand dipped lens (i) and the machine dipped lens (iii) were stained blue, while the uncoated control lens (ii) remained clear. The hand dipped lens appears more uniformly stained, while the machine dipped lens has an unstained portion at the top. At the same time, the machine dipped lens shows darker staining than the hand dipped lens. Figure 3. Alcian blue staining of hand dipped (i), uncoated control (ii), and machine dipped (iii) lenses.
From the ELISA, a graph of cumulative drug release over time was generated and is shown in Figure 4. For both coating methods, the amount of drug released increases rapidly in the first 50 hours and reaches its maximum at around 100 hours. However, the machine dipped method produced lenses that released more immune-modifying drug than the hand dipped method. The uncoated control lens shows no release of drug.
Figure 4. Cumulative release of drug comparing lenses produced in an automated way (red circles) and lenses produced in a manual way (blue circles). Uncoated control lenses (green circles) show no release of drug.
DISCUSSION The blue stain on the hand dipped and machine dipped lenses indicates that both the manual and automated coating methods can successfully load the coating onto the lens. The unstained portion on the machine dipped lens shows that the machine dipped method is
less uniform, due to the tweezer used for attachment to the machine. The darker staining on the machine dipped lens suggests a larger amount of polymer deposition. This result also corresponds with the cumulative release analysis, which shows that manually dipped lenses, while having similar release kinetics, tend to release less drug overall than lenses dipped in an automated way. This could be attributed to the more thorough polymer removal and drying between steps in the manual method. CONCLUSION This investigation demonstrated that a uniform coating consisting of chitosan and dermatan sulfate can be applied to contact lenses in order to deliver immune-modifying drugs, and that this coating is more uniformly applied when using a non-automated manufacturing technique. The length of release is the same for both the automated and non-automated methods, but the automated lenses released more immune-modifying drug than the non-automated lenses. Future studies will continue to investigate the amount and release time of immune-modifying drugs from contact lenses manufactured using numerous techniques in order to identify the best technique for project scale-up. REFERENCES 1. DA Schaumberg, DA Sullivan, et al. August 2003. Prevalence of dry eye syndrome among US women. Am J Ophthalmol 136(2):318-326. 2. W Stevenson, SK Chauhan, et al. January 2010. Dry eye disease: an immune-mediated ocular surface disorder. Arch Ophthalmol 130(1):90-100. 3. 2007. The definition and classification of dry eye disease: report of the Definition and Classification Subcommittee of the International Dry Eye WorkShop. Ocul Surf 5(2):75-92. 4. I You, T Coursey, et al. August 2015. Macrophage Phenotype in the Ocular Surface of Experimental Murine Dry Eye Disease. Arch Immunol Ther Exp (Warsz) 63(4):299-304. 5. D Hachim, S LoPresti, et al. October 2016.
Shifts in macrophage phenotype at the biomaterial interface via IL-4 eluting coatings are associated with improved implant integration. Biomaterials 112:95-107. ACKNOWLEDGEMENTS Study conducted at the McGowan Institute for Regenerative Medicine under Dr. Bryan Brown. Many thanks to Alexis Nolfi for guiding the experiments and for being an excellent mentor.
CHARACTERIZATION OF IN VITRO PROTEIN RELEASE AND IN VIVO MACROPHAGE RECRUITMENT BY CYTOKINE RELEASING MICROSPHERES Emily L. Lickert, Katherine L. Lorentz, Brittany R. Rodriguez, Mostafa Shehabeldin, Morgan V. Fedorchak, Justin S. Weinbaum, Steven R. Little, Charles S. Sfeir, David A. Vorp Vascular Bioengineering Laboratory, Department of Bioengineering, University of Pittsburgh, Pittsburgh PA, USA Email: eml103@pitt.edu INTRODUCTION Cardiovascular disease is the leading cause of death in the United States, claiming approximately 610,000 lives annually [1]. It is estimated that 25% of the deaths that occur each year in the US are due to cardiovascular disease. In many cases of the disease, plaque buildup occurs within the arteries, blocking blood flow, and revascularization is required. Presently, there are two widely used methods for replacement during small-diameter (<5 mm) revascularization. One commonly performed revascularization method is a coronary artery bypass, which utilizes small-diameter autologous grafts as the channel for bypass. The clinical gold standard for autologous grafts uses the saphenous vein as the graft replacement. On the surface, the ease of harvesting the saphenous vein makes it seemingly the best option, but there are multiple risk factors at work. If the patient's vein has already been removed, or is damaged in any way, this option is immediately unavailable. Veins also possess different mechanical properties than arteries, leading to graft stenosis over time. Since this method of revascularization relies on veins being used in place of arteries, complications arise, making it a less-than-ideal option. A second type of small-diameter vascular graft involves synthetically manufactured grafts, whose benefits include faster production and reproducibility. They, however, have low biocompatibility because synthetic materials are exposed to blood, which commonly leads to thrombosis and graft rejection.
Our lab has been working towards developing a cell-free tissue engineered vascular graft (TEVG) using a silk scaffold combined with an inflammatory mediating cytokine to produce a more biocompatible, clinically ideal graft option. This approach aims to combine the best attributes of both approaches into a more
biocompatible, reproducible graft option. The goal of this summer research was to characterize the release profile of our inflammatory mediating cytokine from within our silk scaffolds and later observe the effects of cytokine delivery in vivo. METHODS A proprietary silk solution was mixed with poly(lactic-co-glycolic acid) (PLGA) microparticles (MPs). Two groups were assessed in this study: inflammatory mediating cytokine-loaded MPs and blank (PBS)-loaded MPs. The silk solutions were injected into custom cylindrical molds mimicking the size of a rat aorta. The molds were then allowed to gel before being frozen, lyophilized, ethanol treated, and electrospun with a mixture of silk and polycaprolactone (PCL) in hexafluoro-2-propanol (HFIP) to increase mechanical stability. The cytokine- (n=10) and blank-loaded (n=10) scaffolds were implanted into Lewis rats as abdominal aortic interpositional grafts. After 1 wk in vivo, the patency of the scaffolds was determined using angiography, and the scaffolds were explanted. Immunofluorescent chemistry (IFC) staining was then used on the slides to determine the amount of macrophage recruitment that occurred after 1 wk in vivo. Additionally, cytokine (n=12) and blank (n=12) scaffolds were each placed into 1 mL of PBS in an end-over-end turner at 37°C. Samples of the releasate were collected each day for 8 d, and an ELISA was used to determine the protein release. Patency for the in vivo study was determined by the visual flow of contrast through the graft via angiogram, meaning the graft remained clear of blood clots or other occlusions after 1 wk in vivo. Images were taken of the IFC-stained slides on a microscope, and the representative images are presented with 40% brightness and 20% contrast adjustments (Fig. 1). Macrophage recruitment was determined through the images by the differences in color between the primary-delete control image and the sample image. Positive staining of macrophages within the
explants was observed at the 550 nm wavelength and is expressed in red. An ELISA was performed on the samples produced by the in vitro study. Absorbance readings were taken per the ELISA protocol and normalized according to the standard curve. Averages were taken for each day, and these data points are depicted graphically via scatter plot for visual comparison (Fig. 2).
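The standard-curve normalization described above can be sketched in a few lines of Python; the concentrations, absorbances, and replicate readings below are illustrative placeholders, not data from this study, and a linear standard curve is assumed:

```python
import numpy as np

# Hypothetical standard curve: known cytokine concentrations (pg/mL) and
# their measured absorbances. All values are illustrative, not study data.
std_conc = np.array([0.0, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([0.05, 0.12, 0.20, 0.38, 0.71, 1.35])

# Fit a linear standard curve: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def absorbance_to_conc(a):
    """Invert the fitted curve: convert a sample absorbance to concentration."""
    return (a - intercept) / slope

# Normalize replicate releasate readings for one day and average them.
day1_abs = np.array([1.20, 1.28, 1.15])   # replicate absorbances, day 1
day1_conc = absorbance_to_conc(day1_abs)
print(day1_conc.mean())                   # mean concentration for the day
```

In practice ELISA standard curves are often fit with a four-parameter logistic rather than a straight line; the linear fit here is only to show the inversion and averaging steps.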
Fig 1. Representative images of the IFC-stained explanted aortic grafts. Pinkish hue portrays overall macrophage presence in the sample.
Fig 2. Graph showing that 93% of overall cytokine release from within the scaffolds occurred within the first day under in vitro conditions.
RESULTS The in vivo study resulted in 100% (10/10) of cytokine-loaded MP TEVGs remaining patent, while only 50% (5/10) of blank MP TEVGs remained patent after 1 wk in vivo. We also qualitatively observed an increased number of macrophages (shown in red) present in and around the cytokine-loaded MP explants versus the blank MP explants after 1 wk in vivo (Fig. 1). In vitro release assays show that cytokines embedded into the scaffolds release 93% of their overall release within the first day (Fig. 2). DISCUSSION We determined that delivery of inflammatory mediating cytokines within our silk-based TEVGs increases their 1 wk macrophage infiltration as well as their patency rate. This suggests a connection between cytokine delivery and patency within the first week in vivo. The prevention of acute thrombosis with the delivery of inflammatory cytokines may be due to early macrophage cellularization of the scaffold pores [2]. To fully characterize the immune response in vivo due to cytokine release, a study similar to this one would need to be performed, but with different time points at which explants occur. To see the cytokine release described by the in vitro assay in vivo, explants would need to occur 1-3 days post implantation. In vitro assays suggest that almost all (93%) of the cytokine release occurs within the first 24 h. These assays are designed to replicate the circumstances the graft would experience once placed in vivo, but it is important to note this experiment's limitations. It replicates conditions such as temperature, CO2 levels, and some minor agitation, but it does not fully replicate the trauma the graft undergoes due to blood flow post implantation. We hypothesize that, based on the in vitro assay data (Fig. 2), the early release of an inflammatory chemoattractant corresponds to early cellular infiltration. Future projects will hopefully confirm an increased number of macrophages within the first 24 h in vivo by explanting grafts earlier than 1 wk. Loading artificial MSCs into TEVGs could serve as a clinically ideal alternative to cell-filled grafts for small-diameter vascular grafts. The combination of the data generated from this project has helped us better understand how various types of MP-loaded TEVGs affect patency rates in vivo. REFERENCES 1. Centers for Disease Control and Prevention, 2015. 2. Turner et al. Molecular Cell Research 1843, 2014. ACKNOWLEDGEMENTS Funding was provided by the NIH (R01HL130077), and both the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh.
THE CON-TACTOR: A NOVEL TACTILE STIMULATOR THAT MAKES AND BREAKS CONTACT WITH THE SKIN
Maxwell Lohss1, Roberta Klatzky, PhD2, George Stetten, MD, PhD1 1Visualization & Image Analysis Lab, Bioengineering Dept., University of Pittsburgh, PA, USA; 2Psychology Dept., Carnegie Mellon University Email: max.lohss@pitt.edu, Web: https://www.vialab.org/index.html
INTRODUCTION Affecting 40 million people in the US, peripheral neuropathy is a growing public health concern resulting in numbness, pain, and weakness from small nerve fiber damage [1]. Diagnosing peripheral neuropathy typically involves physical examinations and skin biopsies, which are often performed after permanent nerve damage has already occurred [2]. To assist with early detection of peripheral neuropathy, we aim to develop a novel tactile stimulator. Haptic devices called tactors are commonly used in consumer equipment and research to stimulate the sense of touch with vibration, generally through continual contact with the skin. For example, a technology called FingerSight, developed in our laboratory, uses a video camera to communicate with tactors mounted on the finger to permit a blind person to feel the visual environment [3]. Some issues with such tactors include difficulty in distinguishing temporal sequences and decreased sensitivity to the vibration over time [4]. The goal of our work is to improve on traditional tactors and develop a device for quantifiable diagnosis of peripheral neuropathy by providing the user with an easily detectable sensation that facilitates temporal pattern perception and avoids the loss of sensitivity resulting from continuous vibration. To this end, we have designed a novel haptic stimulator that makes and breaks contact with the skin, thereby creating tactile sensations in the form of repeated discrete onsets and offsets. Our new device, which we call the Con-Tactor, contains a delicate lever arm with a foot-like projection that uses electrical conductivity between the device and the skin to control its movement. METHODS Device Description & Design The Con-Tactor prototype was designed as a handheld tool for convenient application of the vibrating tip to various locations on the skin as shown in Figure 1. A cantilever arm extends from the fulcrum
through a housing where the coil of a solenoid is attached to push the lever up or down. This motion is relative to magnets mounted within the solenoid and above the housing, thus constituting a configuration known as a voice coil. The lever extends beyond the housing, where a gold-plated tip can make and break contact with the skin. The design maximizes displacement of the tip for relatively small currents in the coil, while providing the small driving force required at the skin for impact detection. The current in the coil is controlled by an Arduino microcontroller connected to a custom linear amplifier circuit capable of producing both positive and negative currents. A separate conductivity circuit detects the skin as a resistance of up to 30 MΩ through the gold-plated tip relative to a ground plate. A set of LEDs is used to indicate skin contact and provide feedback to adjust displacement of the cantilever arm. The Arduino program generates square wave voltage inputs at frequencies ranging from 1-40 Hz and coil voltages in the range of ±1.5 V.
Figure 1: Con-Tactor prototype constructed from parts which were 3D printed in polylactic acid (PLA). The cantilever beam functions as a third-class lever controlled by the push and pull of the solenoid electromagnet.
Calibration & Classification of Device The spring constant of the cantilever beam was calculated using a US-Trader-Pro Class II mass scale. The Con-Tactor was held in a PanaVise clamp
and leveled horizontally. Then the clamp was slowly lowered until contact was made between the gold-plated tip and the scale platform. Spacers ranging from 1 mm to 10.5 mm were placed between the gold-plated tip and the scale platform. After zeroing the scale for the mass of the spacer, force measurements were recorded for the corresponding displacement distance. Figure 2 shows the resulting linear relationship between displacement and force. Using Hooke's Law, the cantilever spring constant was calculated to be 0.0032 N/mm.
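The spring-constant calculation is a straight-line fit of force against displacement; a minimal Python sketch, using synthetic readings chosen to lie near the reported 0.0032 N/mm slope (not the actual measurements):

```python
import numpy as np

# Synthetic displacement (mm) vs. force (N) readings, chosen to fall near
# the reported spring constant of 0.0032 N/mm; illustrative values only.
displacement_mm = np.array([1.0, 2.5, 4.0, 6.0, 8.0, 10.5])
force_n = np.array([0.0033, 0.0081, 0.0127, 0.0191, 0.0255, 0.0337])

# Hooke's law F = k*x: the slope of a linear fit of force on displacement
# is the spring constant k (the intercept should be near zero).
k, f0 = np.polyfit(displacement_mm, force_n, 1)

# Goodness of fit (R^2), analogous to the value reported with Figure 2.
r = np.corrcoef(displacement_mm, force_n)[0, 1]
r_squared = r ** 2

print(k, r_squared)
```

The slope `k` comes out in N/mm because of the chosen units; a near-zero intercept `f0` is a quick sanity check that the scale was zeroed correctly.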
Figure 2: Displacement versus force for the Con-Tactor cantilever beam. The relationship was linear, with an R2 value of 0.9985, demonstrating that the device functions as a classical cantilever spring.
Preliminary Testing In-house testing of the Con-Tactor was performed on the thenar eminence at the base of the thumb. The hand was held in place while the Con-Tactor was secured in the PanaVise clamp as shown in Figure 3. The position of the Con-Tactor was adjusted using a calibration algorithm to establish proper make-and-break behavior of the cantilever movement. Preliminary tests were performed at frequencies from 1-40 Hz.
RESULTS Stimulation by the Con-Tactor was easily perceived on the thenar eminence, demonstrating that the lever-arm design provides the necessary displacement for making and breaking contact with the skin. Generation of a square wave voltage showed large displacement, with the mechanical resonance of the lever arm contributing significantly to its movement. Resonance of the cantilever was achieved at 33 Hz (±2 Hz), showing approximately 1 cm displacement in both directions from equilibrium. Placing 1.5 V across the solenoid resulted in a maximum force of 0.0042 N. DISCUSSION Based on preliminary tests, the Con-Tactor shows promise as a stimulation device for research into the sensation of low-intensity impact and for diagnostic point-localization tests for neurological disorders in the clinical setting. Formal testing will include free magnitude estimation to characterize how perceptual intensity varies with input parameters and skin sites, as well as threshold measurements. The fast-adapting and slow-adapting nerve fibers in the skin are stimulated at different frequencies, ranging from 0.4-500 Hz depending on the skin mechanoreceptor being stimulated [5]. The vibrators used to establish these thresholds were in constant contact with the skin, resulting in sustained skin deformation. By allowing the skin to rebound between contact cycles, the Con-Tactor will likely show a different relationship between frequency and free magnitude estimation. Through further analysis of tactile perception during make-and-break skin contact, we aim to quantify early nerve fiber damage in patients experiencing peripheral neuropathy. REFERENCES 1. A. Hovaguimian et al. Curr Pain Headache Rep. 15:193-200, 2011. 2. S. Yagihashi et al. J Diabetes Investig. 2:18-32, 2011. 3. G. Stetten et al. IEEE 2:1-9, 2014. 4. D. Pyo et al. EuroHaptics 7283:133-138, 2012. 5. J. Wolfe et al. Sensation & Perception 5:420-461, 2018.
Figure 3: Testing on the thenar eminence. The hand was secured with the palm facing upward. The calibration algorithm was used to position the lever arm at the same contact position for all testing.
ACKNOWLEDGEMENTS Funded through the Swanson School of Engineering REU, NSF grant IIS-1518630, and the Center for Medical Innovation (CMI) at the University of Pittsburgh.
What is the Impact of Pregnancy and Childbirth on the Combined Sacrum/Coccyx Shape? Liam C. Martin, Megan R. Routzong and Steven D. Abramowitch Translational Biomechanics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: lcm50@pitt.edu, Web: https://www.engineering.pitt.edu/StevenAbramowitch/ INTRODUCTION Hormonal changes during pregnancy are known to cause tissue remodeling, resulting in connective tissue softening at the pubic symphysis and sacroiliac joints, presumably to facilitate vaginal delivery [1]. Previous work by our lab demonstrated significant mechanical engagement of the fetal head with the sacrum and coccyx during vaginal delivery, causing the muscles and connective tissues anchored to and engaged with these bones to stretch [2]. This suggests two potential sources of permanent pelvic shape changes: increases in pressure and tissue remodeling during pregnancy, and/or injury during vaginal delivery. Thus, we aimed to determine whether this maternal soft tissue softening during pregnancy or possible injury during delivery results in a measurable change in the combined sacrum/coccyx shape by comparing midsagittal angles and curvature indices between nulliparous, gravid, and parous women. We hypothesized that these measures would differ significantly between groups and be consistent with remodeling that would facilitate vaginal delivery. METHODS This retrospective, IRB-approved study consisted of 62 female subjects between the ages of 20 and 49 who had a computed tomography (CT) or magnetic resonance imaging (MRI) pelvic scan. Subjects were sorted into groups based on their parity (number of births) and gravidity (number of pregnancies), resulting in 20 nulliparous (have never given birth), 16 gravid and vaginally nulliparous (pregnant and have never given birth vaginally), and 26 parous (have given birth) subjects.
Using Horos™, the midsagittal slice was identified, and 13 sacrum and coccyx angle and curvature measurements were made using definitions from previous literature [3]. Our values were found to be consistent with the existing literature, providing validation that the measurements were performed correctly. Statistical analyses were conducted in SPSS™ and consisted of a one-way independent MANOVA followed by univariate ANOVAs with multiple comparisons and Benjamini-Hochberg corrections. Homogeneity of variances was tested, and independent samples were assumed [4]. Measures with significant differences between groups (p<0.05) were followed up with additional multiple comparisons.
Figure 1. This image visualizes the three significant measures. The pink line is the sacral curved length, the red line is the coccygeal curved length, and they combine to create the sacrococcygeal curved length. The superior blue line is the sacral straight length, while the inferior is the coccygeal straight length, and these combine to create an included angle, the sacrococcygeal angle. The yellow line is the sacrococcygeal straight length.
RESULTS Overall, differences between groups (nulliparous, gravid, and parous) were significant at the multivariate level (p<0.001). After correcting for multiple comparisons, three of the measures had significant univariate results: the coccygeal curvature index (p<0.001), sacrococcygeal curvature index (p=0.008), and sacrococcygeal angle (p=0.010) (Table 1) (Figure 1). For these significant measures, the only groups that differed significantly were the nulliparous and gravid groups, though only
the coccygeal curvature index results are shown visually (Figure 2a). When further separating those groups into subgroups based on parity, we can isolate the effect of pregnancy from that of vaginal delivery (Figure 2b). Qualitatively, we see that these subgroups differ more for the gravid group than for the parous group.
Figure 2. A) A boxplot displaying the coccygeal curvature indices for each group; only significant p-values are shown. B) Each subgroup defined by parity.
DISCUSSION Overall, these results support the hypothesis that pregnancy and childbirth result in significant changes to the combined maternal sacrum/coccyx shape. This is consistent with shape changes that would be more favorable for vaginal delivery and in some cases appear to be permanent. A curvature index is the ratio of the straight length to the curved length, so a structure with a value of 100 is perfectly straight while those with lower values are more curved. These data show that
the curvatures become straighter and the sacrum and coccyx more aligned during pregnancy. This suggests that the coccyx moves posteriorly with respect to the sacrum during pregnancy and does not fully return to its nulliparous shape after delivery. This is demonstrated as the parous shapes regress back towards the nulliparous values but still straddle the gravid and nulliparous values, which is why the parous values do not differ from the other groups. Only the nulliparous and gravid groups differed significantly. This implies that permanent change may occur during pregnancy or delivery, as the nulliparous shapes are not fully recovered. The differences with increasing parity for the vaginally nulliparous, gravid group further suggest that pregnancy alone results in unrecoverable pelvic shape changes, as these subjects never had a vaginal delivery. While this study cannot directly determine whether these changes during pregnancy are a result of tissue remodeling or of increases in pressure from the growing fetus, future studies investigating circulating hormone levels (e.g., relaxin) are recommended. Additionally, longitudinal studies should focus on the effect of multiple pregnancies and mode of delivery to further explain the differences seen here. This study only examined midsagittal shape differences but provides motivation for a 3D analysis utilizing segmented shapes to investigate shape variations in the entire bony pelvis. REFERENCES [1] Soma-Pillay, P. Cardiovascular Journal of Africa, 27(2), 89-94, 2016. [2] Routzong, M. R. Interface Focus, 9(4), 2019. [3] Woon, J. T. K. European Spine Journal, 22(4), 863-70, 2013. [4] Benjamini, Y. Journal of the Royal Statistical Society, 57(1), 289-300, 1995.
ACKNOWLEDGEMENTS The Swanson School of Engineering undergraduate research grant and NSF GRFP Grant #1747452 are acknowledged for supporting this research.
Table 1: Averages, standard deviations, and univariate p-values of the significant measures
Measure | Nulliparous (mean ± std) | Gravid (mean ± std) | Parous (mean ± std) | ANOVA p-value
Coccygeal Curvature Index | 78.7 ± 6.6 | 89.2 ± 10.0 | 80.0 ± 5.5 | <0.001
Sacrococcygeal Curvature Index | 73.3 ± 5.8 | 79.2 ± 3.7 | 77.6 ± 5.4 | 0.008
Sacrococcygeal Angle | 92.8° ± 10.9° | 109.3° ± 9.4° | 101.9° ± 11.0° | 0.010
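The Benjamini-Hochberg step used in the analysis above is a standard false-discovery-rate procedure; a minimal Python sketch follows (the first three p-values match those reported in Table 1, while the remaining ones are illustrative stand-ins for the non-significant measures):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array: which hypotheses are rejected under BH FDR control."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                      # ranks, smallest p first
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds             # p_(i) <= (i/m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank meeting the criterion
        reject[order[:k + 1]] = True           # reject all smaller-ranked p-values
    return reject

# Three significant p-values from Table 1, plus illustrative non-significant ones.
pvals = [0.001, 0.008, 0.010, 0.20, 0.45, 0.60]
print(benjamini_hochberg(pvals))
```

With these inputs the first three hypotheses are rejected and the rest retained, matching the pattern of significant measures reported above.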
APPLICATION OF LIMB CRYOCOMPRESSION TO REDUCE SYMPTOMS OF PACLITAXEL-INDUCED NEUROPATHY Nikita Patel, Aishwarya Bandla, PhD and Nitish V. Thakor, PhD Singapore Institute for Neurotechnology, National University of Singapore Singapore, Singapore Email: nap82@pitt.edu INTRODUCTION Chemotherapy-induced peripheral neuropathy (CIPN) is a common dose-limiting side effect of the chemotherapeutic drug paclitaxel [1]. Currently, there is no effective CIPN prevention strategy, and severe cases of CIPN result in chemotherapy dose reduction or cessation of treatment [2]. A systematic review of studies reporting the prevalence of CIPN in patients with various types of cancer showed the prevalence of CIPN in patients undergoing chemotherapy to be 68.1% (57.7-78.4) when measured within one month after cessation of chemotherapy, and 30.0% (6.4-53.5) when measured at 6 months or later [2]. Though CIPN prevalence decreases with time, 30.0% of patients still suffer from CIPN symptoms 6 months or more after termination of chemotherapy treatment. Limb cryocompression is currently being investigated in a clinical trial for efficacy in the prevention and amelioration of CIPN in patients undergoing paclitaxel chemotherapy [3]. This study introduces a device to administer limb cryocompression to reduce symptoms associated with CIPN. Study objectives include identifying correlations between sensory nerve conduction data and patient age, ethnicity, and cooling temperature to help identify the optimal parameters at which nerve conduction is best restored. METHODS Nerve conduction studies are an important diagnostic tool for identifying neuropathy and nerve damage. In this clinical trial, nerve conduction studies served as the primary method for assessing the extent of peripheral neuropathy. Nerve conduction study data were recorded pre-chemotherapy (time point 1), post-chemotherapy (time point 2), and 3 months post-chemotherapy (time point 3).
All nerve conduction study data were recorded by the physicians involved in the clinical trial in Microsoft Excel. Data analysis was
conducted using MATLAB. Left- and right-side nerve conduction values were averaged, and error is reported as the standard error of the mean. Nerve conduction parameters tested include onset latency (ms), peak latency (ms), peak amplitude (uV), and velocity (m/s). DATA PROCESSING All data were compiled in Microsoft Excel, and analyzed and graphed using a MATLAB script. RESULTS Nerve conduction data for the ulnar digit V nerve show no significant difference across all parameters at time point 1 (pre-chemotherapy). At time point 2, patients under the age of 50 show a peak amplitude of 14.3 ± 0.8 uV, whereas patients above the age of 50 show a peak amplitude of 10 ± 2.5 uV. At time point 3, patients under 50 and patients above 50 showed significant differences across all parameters tested. Onset latency, peak latency, and peak amplitude were significantly higher for patients under 50, whereas velocity was significantly higher for patients above 50 (Figure 1). All other sensory nerves tested showed no significant correlation between nerve conduction study data and patient age.
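The left/right averaging and standard-error computation described in the Methods can be sketched in Python (the authors used MATLAB; the amplitude values below are illustrative, not trial data):

```python
import numpy as np

# Hypothetical peak-amplitude readings (uV) for one nerve across patients:
# paired left- and right-side measurements; values are illustrative only.
left = np.array([14.1, 15.0, 13.8, 14.9])
right = np.array([14.5, 14.2, 13.6, 15.3])

# Average the left and right sides per patient, as described in the Methods.
per_patient = (left + right) / 2

# Report the group mean with standard error of the mean (SEM) as the error bar.
mean = per_patient.mean()
sem = per_patient.std(ddof=1) / np.sqrt(len(per_patient))
print(mean, sem)
```

Using `ddof=1` gives the sample standard deviation, which is the conventional numerator for the SEM when each patient is treated as one independent observation.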
Figure 1. Nerve conduction study data for the sensory ulnar digit V nerve grouped by patient age. Study parameters were taken from 6 patients under the age of 50 (red line) and 5 patients above the age of 50 (black line).
Nerve conduction data for the superficial peroneal (foot) nerve showed no significant difference across all parameters at time point 1 (pre-chemotherapy). At time point 2 (post-chemotherapy), onset latency, peak latency, and velocity showed significantly different values for patients of Chinese and Malay descent. Patients of Chinese descent showed significantly higher latency values and significantly lower velocity values than patients of Malay descent. At time point 3, only peak amplitude values significantly differed between patients of Chinese and Malay descent; patients of Malay descent showed significantly higher peak amplitude values than patients of Chinese descent (Figure 2).
Figure 3. Nerve conduction study data for the ulnar digit V nerve grouped by patient ethnicity. Study parameters were taken from 8 patients of Chinese descent (red line) and 3 patients of Malay descent (black line).
DISCUSSION Though nerve preservation varied significantly between patients under and above the age of 50, for patients under the age of 50 peak amplitude values were higher while latency values were higher and velocity values were lower. This indicates higher preservation of peak amplitude, but lower preservation of latency and velocity, in patients under 50 as opposed to patients above 50 at the ulnar digit V nerve. Figure 2. Nerve conduction study data for the superficial peroneal (foot) nerve grouped by patient ethnicity. Study parameters were taken from 8 patients of Chinese descent (red line) and 3 patients of Malay descent (black line).
Nerve conduction data for the ulnar digit V nerve showed no significant differences in onset latency, peak latency, or velocity at time point 1. For peak amplitude, patients of Chinese descent showed significantly higher values than patients of Malay descent. At time point 2, only peak latency values showed no significant difference; onset latency and peak amplitude values were significantly higher, and velocity values significantly lower, for patients of Chinese descent. At time point 3, patients of Chinese descent showed significantly higher latency and peak amplitude values, and significantly lower velocity values (Figure 3).
There was considerable variability in the correlations between nerve conduction study parameters and patient ethnicity. Due to this variability, no significant conclusions can be drawn about differences in the effect of cryocompression treatment when grouped by patient ethnicity. Future work includes analyzing motor nerve conduction study data and increasing the subject pool at time point 3, which may allow more significant correlations to be determined. REFERENCES [1] Quasthoff, S. et al., J Neurol (2002) 249: 9. [2] Seretny, M., et al., PAIN® (2014) 155(12): 2461-2470. [3] R. Sundar, et al., Frontiers in Oncology (2017). ACKNOWLEDGEMENTS University of Pittsburgh Swanson School of Engineering, the Office of the Provost, and the Office of Engineering International Programs.
BIOPRINTING OF iPSC-DERIVED ISLET ORGANOIDS Kevin Pietz, Remziye Erdogan, Shivani Gandhi, Ravikumar K, PhD, Prashant Kumta, PhD, Ipsita Banerjee, PhD Banerjee Laboratory, Department of Bioengineering, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: kwp6@pitt.edu INTRODUCTION Bioprinting provides opportunities not yet encountered in the field of regenerative medicine and organoid development. Printing of bioinks containing iPSC (induced pluripotent stem cell)-derived islet organoids can help the field of Type 2 Diabetes research, with applications in culture platforms, organ-on-a-chip devices for multi-modal drug testing, and vascularized organoids. However, bioprinting of iPSCs remains largely unexplored, and much work must be done to progress the field. Here we attempt to print iPSC-derived islet cells in an alginate bioink and monitor viability post-printing as a first step on the road towards iPSC differentiation in vascularized 3D printed structures. Hinton et al. [1] have developed an alginate-based printing protocol with a gelatin-CaCl2 support bath. Some fibroblast printing was completed with this technique, but this does not indicate whether the less robust pluripotent cells can survive the same process. Previous work by Richardson et al. [2] indicated that alginate is a good candidate for a 3D support structure for iPSC growth and differentiation. This concept, coupled with the Freeform printing technique [1], is expected to have potential applications for 3D printing of iPSCs. However, previous studies show the difficulties of single-cell iPSC printing and the challenges that remain to be overcome. METHODS Printing materials were prepared so as to achieve a balance between biocompatibility and physical printing parameters, such as viscosity. An alginate-based bioink and a gelatin-CaCl2 support bath were determined to best fit cell printing requirements.
Alginic acid (2 % w/v) was dissolved in warm DMEM/F12 medium with continuous stirring in order to formulate the bioink. The DMEM/F12 helped with viability by providing nutritional support to the cells prior to and during the printing
process. Consumer grade gelatin (4.5 % w/v) was dissolved in varying concentrations of CaCl2 (50-70 mM) in water to make a gelatin bath based on the protocol used by Hinton et al. [1] under sterile conditions. A consumer grade blender was used to prepare the gelatin and CaCl2 support bath. Printing of cell laden bioink was conducted using the Bio X bioprinter from Cellink. The STL files of the 3D structures printed were prepared in the 3D design software, Inventor. iPSCs were cultured in a pluripotent state using mTeSR stem cell medium until ready for testing. These cells were then dissociated into single cells and encapsulated in alginate using the protocol previously used in Richardson et al. [2] to allow for three-dimensional support and growth into clusters of cells. They were then differentiated using the growth factor combination used in Veres et al. [3] until the maturation phase of the pancreatic islet development process.
Figure 1: Image of alginate (stained for visibility) printed into the gelatin-CaCl2 support bath in the form of a spiral.
In preparation for cell printing, cells were decapsulated from alginate using EDTA solution, resuspended in the bioink, and loaded into the bioprinter for printing. The cell solution was printed with a 22G nozzle at 5 kPa printing pressure and 12 mm/s printing speed for different structures, such as spiral and sinusoidal strings.
The printed structures were removed from the gelatin support bath and placed in a culture dish, where they were rinsed to remove CaCl2 and given growth media. Live/dead images were taken after 24 h in culture to assess cell viability. Calcein and EthD-1 dyes were used with fluorescent imaging to label live and dead cells, respectively. Phase images were taken to check for decapsulation from the alginate support structure. RESULTS The optimization of printing pressure, speed, and bioink composition yielded parameters appropriate for smooth printing of the structures (Fig. 1) and for cell survival. Multiple bioinks at various concentrations were tested, including polyethylene oxide, alginate alone, partially polymerized alginate, alginate-collagen, and alginate-agarose combinations. These bioinks failed to meet either the physical printing or the biocompatibility requirements. Factors such as viscosity and the ability to maintain structure were important for printing capabilities, while cell toxicity was important to consider once the structures were printed using cell-laden bioink.
Figure 2: iPSC clusters suspended in the alginate structure post-printing. The edge of the alginate print can be seen at the top of the image.
The alginate-based bioink printed into a gelatin-CaCl2 support bath was determined to be the optimal configuration. The liquid alginate extruded smoothly from the nozzle yet maintained its shape once polymerized in the support bath. The alginate supported cells without causing harm, and the shear force during printing was not high enough to damage cells during extrusion. Phase images reveal iPSC clusters with no decapsulation (Fig. 2). Live/dead images show high viability on the day immediately after printing (Fig. 3): large clusters of cells remain alive (green regions), while death is seen primarily in the smaller clusters or single cells (red dots).
Figure 3: Live/dead image of Y1 iPSC clusters one day post-printing. Green shows live cells, while red shows dead cells.
DISCUSSION Bioprinting of 3D structures is a challenging pursuit in part because of the large number of parameters that affect the final structure, such as print speed, print pressure, nozzle diameter, bioink composition, and support bath composition. Bioprints succeed most of the time, but occasionally the alginate forms a bunched-up structure rather than the designed print; this is thought to be due to a high CaCl2 concentration and a large nozzle size. Lower Ca2+ concentrations and smaller nozzles are currently being tested to determine whether a different combination of parameters resolves this issue and makes printing more consistent. The live/dead images show high viability for large cell aggregates but not for small clusters of cells. The death seen in the small clusters is consistent with previous work in our lab, which indicates that single cells and smaller groups of cells are less likely to remain viable than larger aggregates. Further studies are currently underway to examine the phenotype of printed cells and their ability to survive extended culture. REFERENCES 1. Hinton et al. Science Advances, vol. 1, no. 9, 2015. 2. Richardson et al. Tissue Engineering, vol. 20, no. 23 & 24, 2014. 3. Veres et al. Nature, vol. 569, pp. 368-373, 2019. ACKNOWLEDGEMENTS Thank you to Connor Wiegand for helping prepare cells for experiments. Thank you to the NSF, the Swanson School of Engineering, and the Office of the Provost for funding.
Increasing the Alignment of Electrospun Biopolymer Hybrid Materials for Tissue Engineering Applications Seth Queen, Reza Behkam, Jinghang Li, Jonathan Vande Geest Soft Tissue Biomechanics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: seq4@pitt.edu, Web: https://www.engineering.pitt.edu/stbl/ INTRODUCTION Electrospinning is a cost-efficient method of producing polymer nanofibers. Many of the polymers used in electrospinning are compatible with human biology, and the orientation of the fibers can alter both how cells tend to grow and the strength of the polymer construct. As the fibers align in a parallel orientation, cell growth increases and the polymer becomes stronger. At the Soft Tissue Biomechanics Laboratory (STBL), electrospinning is a common practice for our vascular team. Many of our projects require electrospun constructs to be implanted into arteries to help combat vascular thrombosis or restenosis. This requires high cell proliferation and a polymer strong enough to withstand the mechanically challenging environment found in arteries, particularly the peripheral artery near the knee, where bending, twisting, and compression all occur. The objective of this project is to determine which electrospinning parameters allow for a more parallel orientation of fibers. METHODS Each construct analyzed was spun from a 10% (w/v) solution of polycaprolactone (PCL) in 1,1,1,3,3,3-hexafluoro-2-propanol (HFP). The PCL used to make our solution has a molecular weight of 80,000 (Sigma-Aldrich) and required 24 hours to completely dissolve. The prepared solution was placed into a 5-mL syringe with an 11.99 mm diameter needle. The solution is charged at 15 kV and directed along the electric field to a grounded 90 mm diameter mandrel. The linear speed (300 mm/s), construct length (30 mm), and distance between the needle point and the mandrel (10 cm) are held constant.
The independent variable is the rotational speed of the mandrel, and the dependent variable is the fiber angle relative to the principal axis. The rotational speeds tested are 500 rpm, 1500 rpm, and 2500 rpm. The PCL was then removed from the mandrel and placed between two slides as one sample. At each speed, only one sample was collected. Each sample was viewed under STBL's multiphoton microscope and imaged in hyper-stack format with a 10x objective. Each hyper-stack was then analyzed using a previously published code [1], modified by Reza Behkam and Jinghang Li. The results will then allow us to test our hypothesis on a smaller scale, using a 1 mm diameter rod rather than the 90 mm diameter mandrel. These samples will then be analyzed in the same way as the previous samples. DATA PROCESSING The images produced by the multiphoton microscope are processed by the code to determine the principal axes. Once these are determined, the code divides the images into smaller sections, which are then analyzed to compare the average fiber angle to the principal axes. Within these smaller images, any black noise is excluded by the code. The angles were then compiled into histogram plots for easy comparison among experimental groups. RESULTS It was determined that the higher the rotational speed, the more parallel the fiber orientation. Removal of the sheet from the 90 mm mandrel required patience and uniform pulling in order to get the best results with no hindered fibers. The 1500 rpm sample was more aligned than the 500 rpm sample, and the 2500 rpm sample was more aligned than the 1500 rpm sample.
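The angle-binning step of the data processing can be sketched as follows, assuming the fiber angles have already been extracted from the image sections into an array; the helper names and the 10-degree bin width are illustrative assumptions, not the published code [1].

```python
import numpy as np

def angle_histogram(angles_deg, bin_width=10.0):
    """Bin fiber angles (degrees) into fixed-width bins for the
    histogram plots compared across rotational speeds."""
    lo = np.floor(angles_deg.min() / bin_width) * bin_width
    hi = np.ceil(angles_deg.max() / bin_width) * bin_width
    bins = np.arange(lo, hi + bin_width, bin_width)
    return np.histogram(angles_deg, bins=bins)

def alignment_spread(angles_deg):
    """Standard deviation of fiber angle; a smaller spread means the
    fibers are more nearly parallel to one another."""
    return float(np.std(angles_deg))
```

Comparing `alignment_spread` between the 500 rpm and 2500 rpm samples gives a single scalar version of the alignment trend reported in the results.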
Moving to the 1 mm rod, we removed the 1500 rpm group. This was done because the previous samples showed that fiber alignment keeps increasing with speed rather than reaching a threshold at a certain rotational speed. When comparing the two images below (Fig. 1A and B), the increased alignment of the fibers as the speed increases from 500 to 2500 rpm is clearly visible.
DISCUSSION The results of this experiment match our hypothesis that as the rotational speed increased, the fiber orientation became more aligned. It should be noted that the data come from a limited sample size (n = 1). If our data hold for a larger sample size, then all constructs can be produced at higher rotational speeds, creating constructs with increased cell proliferation and overall strength. With increased fiber alignment, we expect such constructs to withstand the torsion, bending, and compression of the mechanically challenging environment found in arteries. To move forward, this experiment should first be repeated to confirm that the results are consistent. From there, we can look into how linear speed affects fiber alignment.
Figure 1: (A) Image of fibers after spinning at a rotational speed of 500 rpm, with principal directions. (B) Image of fibers after spinning at a rotational speed of 2500 rpm, with principal directions.
Figure 2 below shows the histogram plots of the angles collected in Figure 1. As can be seen, the histogram for the rotational speed of 500 rpm has angles ranging from 47.9° to 127.9°, with most bins having 2-4 fibers, while the 2500 rpm histogram shows that the majority of the angles range from 74.6° to 104.6°, with two fibers between 54.6° and 64.6° and one fiber between 114.6° and 124.6°.
REFERENCES 1. Sander, E.A. and Barocas, V.H. "Comparison of 2D Fiber Network Orientation Measurement Methods," Journal of Biomedical Materials Research Part A, 88, pp. 322-331, 2008. ACKNOWLEDGEMENTS Support for this project was provided by the SSOE summer research program as well as the Center for Medical Innovation (CMI, grant F_207-2017, PI: Vande Geest).
Figure 2: Histograms of fiber angle (degrees) versus frequency for the (A) 500 rpm and (B) 2500 rpm samples.
METHODOLOGY FOR COMPREHENSIVE HISTOLOGICAL ANALYSIS OF INTACT ANEURYSM SPECIMENS Apurva Rane, Eliisa Ollikainen, MD PhD, Anne Robertson PhD Robertson Laboratory, Department of Mechanical Engineering University of Pittsburgh, PA, USA Email: apr38@pitt.edu INTRODUCTION Rupture of intracranial (brain) aneurysms (IAs) at cerebral artery bifurcations is the most common cause of subarachnoid hemorrhage (SAH).[1] SAH is a sudden manifestation of the often fatal (30-40%) intracranial bleeding that may occur without warning in anyone with an intracranial aneurysm, and the risk of rupture cannot be predicted.[1,2] The subsequent intracranial bleeding may cause disability in patients who survive. Risk factors for IA formation and rupture are smoking, female sex, and hypertension. Hypercholesterolemia, a risk factor for atherosclerosis, plays an unknown role in IAs. Currently, rupture is prevented by clipping the IA neck or filling the IA sac with a platinum coil [2] to isolate the IA wall from the circulation. A central objective of the research in our group is to determine how the processes of IA wall degeneration and repair are related to mechanical and biomechanical factors/properties in the IA. Prior studies in our group have focused on associations between local changes to the IA wall and both local hemodynamics and mechanical stresses. More recent work has assessed these changes using histological sections from selected, limited IA wall areas. However, due to the heterogeneous nature of the IA tissue, two challenges are associated with such selective use of histological sections: i) the location of the histological images must be mapped back to the sample and ii) a large number of sections are needed to properly represent the diverse wall phenotypes found across the IA specimen. This paper describes the techniques developed during the summer to address both of these challenges.
METHODS Aneurysm samples were harvested following surgical clipping of IA domes at three collaborating clinical sites: Allegheny General Hospital (Pittsburgh, Pennsylvania), the University of Illinois at Chicago (Chicago, Illinois), and Helsinki University Hospital (Helsinki, Finland). Patients gave informed consent, and the study was approved by the institutional review boards of the collaborating institutions. Clinical information, 3D imaging, intraoperative videos, ex vivo micro-computed tomography (micro-CT; Skyscan 1272, Bruker Micro-CT, Kontich, Belgium), and multiphoton microscopy (MPM) were collected and interpreted [3]. These tools make it possible to visualize the collagen fiber architecture and cellular content throughout the wall thickness in a nondestructive manner in intact samples. Prior to imaging, the sample was deliberately oriented with north and south directions. Small ink markings (made using Davidson Marking Systems tissue dyes) were placed on both sides of the tissue sample (Fig. 1) in a grid-like pattern; these can be viewed and used as fiducial markers during subsequent MPM and histological analysis. Alternating dye colors were used to help distinguish tissue regions. Images of the marked samples were taken using a dissection microscope. The sample was then snap frozen in liquid nitrogen and cryosectioned in sequences of ten slides with 5-micron-thick sections followed by four slides with 8-micron-thick sections (in all cases, three sections per slide). Adjacent 5-micron sections were stained with a repeating set of classical histology stains such as hematoxylin and eosin (nucleus) and Oil Red O (neutral lipid). Immunohistochemical stains were also performed, including aSMA (alpha smooth muscle cells), CD31 and CD34 (endothelial cells), and CD45 (leukocytes). The 8-micron sections were used for MPM imaging of collagen fibers. Due to the heterogeneous structure of the IA wall, the staining sequence was repeated every 250 microns, forming numerous units of similarly stained sets of slides.
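The repeating sectioning sequence above can be tallied to locate each staining unit in depth. The sketch below, with assumed function names, encodes ten 5-micron slides followed by four 8-micron slides at three sections per slide, which comes to 246 microns per unit, consistent with the roughly 250-micron repeat stated above.

```python
# (number of slides, section thickness in microns) for one staining unit:
# ten slides of 5-micron sections, then four slides of 8-micron sections.
UNIT = [(10, 5), (4, 8)]
SECTIONS_PER_SLIDE = 3

def unit_thickness_um():
    """Tissue depth consumed by one full staining unit."""
    return sum(slides * SECTIONS_PER_SLIDE * thickness
               for slides, thickness in UNIT)

def unit_start_depth_um(unit_index):
    """Approximate depth (microns) at which a given unit begins,
    counting units from zero at the first cut face."""
    return unit_index * unit_thickness_um()
```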
Table 1: Choices for histological assessment of aneurysm tissue

Marker | What marker stains for | Purpose
Hematoxylin and eosin | Cell nucleus | IEL presence, thrombus presence, fibrous structure presence
Oil Red O | Lipids | Presence of lipid pool and/or lipid strands
Alpha smooth muscle actin | Alpha smooth muscle cells | Wall types focused on aSMA cell orientation
CD31 | Endothelium | Indicates whether the endothelium is intact
CD34 | Neovessels | Indicates whether the intact endothelium is in neovessels
CD45 | Leukocytes | Indicates presence of leukocytes and if scattered/dense
CD68 | Macrophages | Indicates presence of macrophages and if scattered/dense
DATA PROCESSING After staining, the slides were scanned using a Zeiss Axio Scan slide scanner at 40X magnification. This enabled sections from the entire tissue sample to be imaged at very high resolution, overcoming one of the previous challenges. IA walls were assessed from these images. Regions of the walls were classified into one of four wall types (A-D), previously defined by Frösen et al., 2004. Wall type A is the “normal wall” with intact endothelium and linearly organized smooth muscle cells (SMCs). Wall type B is a thickened wall with disorganized SMCs. Wall type C is a hypocellular wall with myointimal hyperplasia and organized luminal thrombus. Lastly, wall type D is an extremely thin hypocellular wall with organized luminal thrombus. [4] Here, hypocellular means sparse in SMCs. The IAs were also scored for the presence or absence of myointimal hyperplasia, decellularization, fibrosis, and fresh or organized thrombosis. The status of the luminal endothelium was scored according to the presence or absence of an intact EC monolayer by CD31 staining.[2] RESULTS
Figure 1: Mapping of histological sections to the IA sample. (A) Dissection scope image of the sample with lines for each slide set. (B) Representative section stained with Oil Red O. (C) Plane of histological section on the sample.
Using the dye markings, visible both in the dissection image of the intact sample, Fig. 1(A), and on the surface of the histological sections, Fig. 1(B), lines were marked on images from the dissection scope showing the location of the 250-micron sets, Fig. 1(A). This was done in an iterative process to adjust for damaged sections that could not be
used. Missing sections within a set resulted in the non-uniform spacing of the sets in Fig. 1(A). DISCUSSION Assessment of histology in tissue sections can provide detailed information about a tissue sample. However, it is a destructive process that produces independent slices that ideally must be mapped back to the intact sample. The approach used here provided a methodology for a more comprehensive understanding of the histology. First, a protocol was developed to serially section the entire aneurysm. This was possible using a new instrument for automated slide scanning and associated software that stitched images together. Thus, a more complete analysis could be performed because of the sheer number of slices that could be analyzed. Additionally, the physical mapping of the tissue sections back to the physical samples was made possible using tissue dyes that appeared in both the dissection scope images and the tissue slices. These combined tools enabled a comprehensive analysis of sections across the entire sample, a critical approach because IA walls are highly heterogeneous. REFERENCES 1. Cebral J et al. Int J Numer Meth Biomed Engng. 2018; e3133. 2. Ollikainen E. Thesis, Atherosclerotic and inflammatory changes in saccular intracranial aneurysms. Helsinki University Print, 2018. 3. Nieuwkamp DJ, Setz LE, Algra A, et al. Lancet Neurol 2009; 8:635-42. 4. Frösen J et al. Stroke. 2004; 35:2287-2293. ACKNOWLEDGEMENTS The authors gratefully acknowledge support from the National Institute of Neurological Disorders and Stroke 1R01NS097457-01 (AR, EO, PSG, AMR), a research fellowship from the Swanson School of Engineering (AR), and support from the Office of the Provost (AR). The authors also gratefully acknowledge the expertise and work of Benjamin Popp in scanning the slides using the Zeiss slide scanner.
BIOMECHANICAL ASSESSMENT OF BICUSPID AND TRICUSPID AORTIC VALVE PHENOTYPE TO AORTIC DISSECTION RISK Sreyas Ravi Center of Bioengineering, Department of Bioengineering University of Pittsburgh, PA, USA Email: srr57@pitt.edu INTRODUCTION Acute ascending aortic dissection, also known as type A aortic dissection (TAAD), is a condition in which a tear in the intimal layer of the aortic wall allows blood to collect within and delaminate the vessel wall. As fluid continues to enter between the wall layers, the wall itself becomes pressurized, compromising flow by constricting the vessel as aneurysmal volume increases. Once the aneurysm grows too large, patients are at risk of vessel blockage and, possibly, rupture of the vessel itself. Without surgery, aortic dissection is lethal, and, despite advancements in imaging and surgical procedures, there remains a lack of understanding of the risk factors involved in the development of aneurysms in the ascending limb of the aorta. Currently, surgical repair of the ascending aorta is based upon an aneurysmal diameter greater than 5.5 cm [1]. However, data from the international registry of acute aortic dissection suggest that 62% of patients with TAAD have aortic diameters distinctly less than 5.5 cm, demonstrating the necessity for an improved metric for characterizing aneurysms [2]. One factor highly contested as an aneurysmal risk factor is aortic valve phenotype. Typically, the aortic valve is a tricuspid valve (TAV), but bicuspid aortic valves (BAV) are the most common congenital heart condition, occurring in about two percent of the population [3]. Clinicians commonly hold that BAV patients are at greater risk for dissection than TAV patients. Thus, quantification of aortic wall stress has the potential to elucidate this as a dissection risk indicator.
In order to investigate whether BAV patients are biomechanically at greater risk for aortic dissection, stable BAV, stable TAV, and dissected TAAD patients were characterized in terms of longitudinal and circumferential stresses, providing insight into how the biomechanics of these groups may play a role in treating these patients.
METHODS The study consisted of 55 subjects collected by the Department of Cardiothoracic Surgery and the University of Pittsburgh Medical Center in Shadyside. Scans were collected with patient consent and IRB approval. CT scans of patient chest cavities were taken, and GE proprietary segmentation software within Volume Viewer was used to isolate the aorta, generating a 3-dimensional image. The images were exported as DICOMs within a new set of aorta-specific axial slices. Any artifacts in the resulting surface model were removed in Meshmixer version 4.0 (Autodesk). Additionally, surface models were smoothed to ensure that they contained one surface while retaining enough detail to capture the shape of the aorta. The smoothed model was then meshed with 3-noded triangular elements in Trelis (Sandia National Labs). The resulting finite element meshes for the aortic models contained 30,000-50,000 elements for computational model formation. Two different pipelines were utilized to simulate stress conditions for the pressurized vessels. In the initial analysis of the aortic surfaces, stiffness parameters were taken from the literature, in which parameters were obtained from tensile loading of surgically removed aortas from respective patients. A custom nonlinear finite element software was used to pressurize the model to 200 mmHg (approximately 26.67 kPa). The material of the aortic wall was taken to be isotropic and hyperelastic [2]. Finalized models were viewed in Paraview (Kitware) to display the map of longitudinal and circumferential stresses across the aortic surface model. Nodal stresses across the map were exported to an Excel document, in which the mean and standard deviation were calculated and the maximum was taken at the 95th percentile of stresses to exclude noise. When
comparing between phenotypes, IBM SPSS was utilized to create bar charts of the calculated averages of the longitudinal and circumferential stresses. The same software was used to create longitudinal scatter plots, in which stresses were plotted against days before the last scan or, in the case of TAAD patients, the day of dissection. RESULTS The bar charts in Figure 1 display the averaged longitudinal and circumferential stresses experienced by the aortic surfaces when pressurized to 200 mmHg. Stress values greater than two standard deviations from the control mean were excluded from the averages.
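The two noise-handling steps described above, taking the peak stress at the 95th percentile of nodal values and excluding cohort values beyond two standard deviations of the control mean, can be sketched as follows. The function names are illustrative; the authors performed these steps in Excel and SPSS.

```python
import numpy as np

def peak_stress(nodal_stresses, percentile=95.0):
    """Peak wall stress taken at the 95th percentile of nodal values
    rather than the absolute maximum, to exclude numerical noise."""
    return float(np.percentile(nodal_stresses, percentile))

def exclude_outliers(values, control_mean, control_sd):
    """Drop values more than two standard deviations from the control
    mean before averaging across a cohort."""
    values = np.asarray(values, dtype=float)
    return values[np.abs(values - control_mean) <= 2.0 * control_sd]
```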
DISCUSSION Fig. 1 reveals that both the longitudinal and circumferential stresses for dissection patients are statistically significantly elevated compared to aneurysmal BAV and TAV patients (p = 0.01 for both stress components and both cohorts). Interestingly, both stress components for the BAV and TAV cohorts were statistically similar. From a biomechanics perspective, these results are contrary to published literature. Although, clinically, BAV patients are thought to be more vulnerable to dissection than other patients with aneurysms, the results of this study show that biomechanically BAV may not be as high a risk as previously thought. Demonstrating biomechanical stresses similar to TAV patients, BAV and TAV phenotypes may have similar impacts on aneurysmal development, possibly negating BAV as a biomechanical risk factor in ascending aortic dissection. These findings support the use of biomechanics-based paradigms to predict aortic dissection. By investigating patient groups, we can continue to develop a patient-specific metric based on biomechanics rather than aortic dimensions that will improve our ability to classify aneurysms and predict dissection patterns. With this new approach, physicians can better determine whether intervention is required to treat aneurysms.
Figure 1: Aortic surfaces with mapped longitudinal (A) and circumferential (B) stresses, with the number of subjects for the BAV, TAV, and TAAD phenotypes displayed in the bars themselves. Statistical significance compared to dissected patients is displayed above the BAV and TAV stable patients. REFERENCES [1] Emerel, L et al. Journal of Thoracic and Cardiovascular Surgery. 10.1016/j.jtcvs.2018.10.116. [2] Pichamuthu, J et al. The Annals of Thoracic Surgery. 96:2147-54, 2013. [3] Borger MA, et al. J Thorac Cardiovasc Surg. 2018; 156:473-480.
ACKNOWLEDGEMENTS The first author would like to thank Dr. Spandan Maiti for allowing him to conduct research in his lab, Dr. Thomas Gleason from the University of Pittsburgh Medical Center for the patient aortic CT scans, and the Swanson School of Engineering for funding this research.
ENGINEERING OCULAR PROBIOTICS TO MANIPULATE LOCAL IMMUNITY AND DISEASE Yannis Rigas, Benjamin Treat, Anthony St. Leger Ocular Microbiome and Immunity Laboratory, University of Pittsburgh School of Medicine University of Pittsburgh, PA, USA Email: YER4@pitt.edu INTRODUCTION Recently, our lab discovered that the conjunctiva (the mucosal tissue under and around the eyelid) is robustly colonized by specific Corynebacterium species. Our lab uses a Corynebacterium mastitidis model, which can colonize mice and tune the immune response on the ocular surface. Specifically, C. mast was able to induce the production of interleukin (IL)-17 from immune cells, which triggered a series of downstream events that led to the production and release of anti-microbials into the tears. This mechanism significantly enhanced the eye’s ability to resist infection by the blinding pathogens Pseudomonas aeruginosa and Candida albicans [1]. Since C. mast can colonize the ocular surface, the long-term goal of this project is to engineer C. mast so that it can act as a natural vehicle to deliver therapeutics locally at the ocular surface to mitigate the effects of diseases like Dry Eye Disease and Sjögren’s Syndrome. A similar approach has been successfully implemented in models of colitis [2]. Here, we took the first steps toward developing an ocular probiotic by genetically modifying the genome of C. mast so that mCherry, a fluorescent protein, would be expressed, allowing for real-time in vivo detection of C. mast throughout the life of the host. In this study, we have shown that our genetically modified strain of C. mast fluoresces, colonizes the eye, and is able to elicit an immune response, confirming that our approach may be modified in the future to observe how specific gene(s) of interest may affect ocular immunity and disease. METHODS We modified C.
mast using a plasmid containing genes for the expression of mCherry, Kanamycin resistance, and a transposase. Electroporation induced pore formation in the bacterial cell wall allowing for the entry of the plasmid into the cytosol of the cell. From there, the transposase
coordinates random transposon integration into the bacterial genome. We screened for successful integration by applying stringent antibiotic selection for our transposon insertions. Candidates were further screened under a fluorescent microscope to detect red fluorescence, which would indicate mCherry expression and integration at an actively transcribed genomic locus. Colony PCR for the mCherry gene eliminated any candidates that may have been auto-fluorescent and ensured that only transposon mutants were chosen. Full genome sequencing will be done to determine the location of the transposon insertion. We then tested the candidates for the ability to colonize mice and induce an immune response similar to that of the WT strain of C. mast. Mice inoculated with the four candidates were also put through further immunological tests to determine whether these bacteria maintained the same immune response as WT C. mast. DATA PROCESSING Flow cytometry was used on the four candidates as well as a control C. mast to measure the amount of fluorescence as well as the immune response. The mean fluorescence of these 5 samples was used to determine mCherry fluorescence intensity. In addition, PCR screening of candidates exhibited a single band on the gel when testing for both C. mast identity and the presence of the transposon insert; these bands provided further confirmation that we had genetically modified C. mast and not a contaminant. Since γδ T cells produce IL-17, we were able to quantitatively analyze how the immune response was affected by the introduction of the candidates. RESULTS Despite all the candidates harboring the transposon and maintaining antibiotic resistance, there were
variable levels of fluorescence due to the random nature of transposon insertion of mCherry into the bacterial genome. Each candidate expressed more fluorescence than wild-type C. mast; however, one candidate expressed 5-fold more fluorescence than the other candidates (Figure 1B-C). Upon ocular colonization with the candidates, all four successfully colonized the eye, suggesting that this process did not alter any genes critical for colonization. As seen in Figure 2, mice inoculated with the candidates recruited neutrophils, while IL-17-producing γδ T cells were also present in the conjunctiva, similar to WT C. mast.
Figure 2. Flow cytometry measuring the amounts of γδTCR+ cells and neutrophils present following inoculation with the four candidates as well as WT C. mast.
DISCUSSION The varying amounts of fluorescence were to be expected, as transposon insertions are notoriously variable and some insertion sites do not allow transgene expression. It is promising that all candidates exhibited more fluorescence than wild-type C. mast. In addition, the fact that all four samples were able to colonize and induce an immune response means that we have not deleted genes required for colonization and/or ocular immunogenicity. By continuing to generate transposon mutants in the same fashion, it will be possible to screen for loss of colonization and begin to determine the genes vital for this process. By genetically engineering C. mast, we have also paved the way for further genetic manipulations that can lead to the development of future therapeutics for ocular diseases. REFERENCES 1. St. Leger et al. Immunity 47, 148-158.e5, 2017. 2. Steidler et al. Science 289, 1352-1355, 2000.
Figure 1. Screening C. mast transposon insert candidates for mCherry expression. A. Fluorescence microscopy of individual WT C. mast or transposon candidate bacteria. B and C. Flow cytometry measuring WT or transposon candidate mCherry expression levels and mean fluorescence intensity.
ACKNOWLEDGEMENTS Anthony St. Leger, PhD, Benjamin Treat, PhD, Dana Previte, PhD, Kate Carroll, PhD, Hongmin Yun, PhD, Heather Buresch, the Swanson School of Engineering, and the Office of the Provost.
COMPARISON OF PREDICTIVE EQUATIONS FOR RESTING ENERGY EXPENDITURE ESTIMATION IN MANUAL WHEELCHAIR USERS WITH SCI Yousif Shwetar, Akhila Veerubhotla, and Dan Ding Human Engineering Research Laboratories, Department of Veterans Affairs, Pittsburgh PA University of Pittsburgh, PA, USA Email: yjs4@pitt.edu, Web: http://herl.pitt.edu INTRODUCTION Adults with disabilities have a 66% higher rate of obesity when compared to individuals without disabilities (1). Additionally, manual wheelchair users (MWUs) are considered to have the lowest levels of physical activity (PA) among those with disabilities. As a result of exhibiting lower levels of PA, MWUs have a higher incidence of primary health complications such as type 2 diabetes and cardiovascular diseases, and secondary health complications such as pain, weight gain, fatigue, and depression. For successful weight management and prevention of primary/secondary health complications, accurate assessment of total daily energy expenditure (TDEE) is imperative. Among the different components of TDEE, resting energy expenditure (REE) contributes about 60%-80% of TDEE in those with SCI (2). Although REE can be accurately measured for each individual with SCI using direct and indirect calorimetry, these techniques require trained users and specialized equipment that are not readily available or always feasible to use. In light of this, studies have developed predictive equations to estimate REE in individuals with SCI based on demographic and/or anthropometric variables. However, none of these predictive REE equations have been assessed using an out-of-sample data set. The goal of this study is to compare existing predictive REE equations for MWUs with SCI against REE measured using a metabolic cart (criterion measure).
METHODS A literature search was conducted to obtain all studies that had developed a predictive REE equation for the SCI population. Four predictive equations were obtained from three publications (3 – 5), utilizing variables such as weight, months since injury (MSI), and anatomical level of lesion (AL), as shown in Table 1. Note that AL was numbered increasing from the top vertebra down; for example, T2 was given a value of 9. Data were gathered from two studies across three locations: the Human Engineering Research Laboratories (HERL) in Pittsburgh, PA; the Human Performance Lab at the Lakeshore Foundation in Birmingham, AL; and the James J. Peters Veterans Affairs Medical Center in Bronx, NY. Both studies were approved by the US Department of Veterans Affairs Central Institutional Review Board. The inclusion/exclusion criteria for both studies were that individuals 1) were within the age range of 18 – 65, 2) were medically stable and at least one year post injury, and 3) used a manual wheelchair as their primary means of mobility for at least 40 hours/week. A total of 49 individuals with SCI were recruited (9F, 40M; mean age 40.8 ± 12.5 years, mean weight 82.0 ± 18.9 kg, mean AL 16 ± 3, mean MSI 143 ± 119). Once subjects gave consent and completed a demographic questionnaire, anthropometric
Table 1: Predictive REE Equations for SCI-MWUs

Author        | Population | Sample Size and Diagnoses                                     | Equation
Cox (3)       | SCI        | Total – 22 (quadriplegia, paraplegia, Brown-Sequard syndrome) | REE = 23.4 × (Weight)
Cox (3)       | Paraplegic | –                                                             | REE = 27.9 × (Weight)
Alexander (4) | Paraplegic | Paraplegia – 24                                               | REE = 21.4 × (Weight)
Foley (5)     | SCI        | SCI – 21                                                      | REE = 413.317 + 11.4 × (AL) + 13 × (Weight) – 0.672 × (MSI)

SCI – Spinal Cord Injury, AL – Anatomical Level of Injury, MSI – Months Since Injury, Weight in kg
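For concreteness, the four equations in Table 1 can be written out directly; a minimal Python sketch (function names are my own; weight in kg, AL numbered from the top vertebra down so that T2 = 9):

```python
def ree_cox_all(weight_kg):
    """Cox et al. (3), all SCI subjects: predicted REE in kcal/day."""
    return 23.4 * weight_kg

def ree_cox_para(weight_kg):
    """Cox et al. (3), paraplegic subgroup."""
    return 27.9 * weight_kg

def ree_alexander(weight_kg):
    """Alexander et al. (4), paraplegia."""
    return 21.4 * weight_kg

def ree_foley(weight_kg, al, msi):
    """Foley et al. (5): AL is anatomical level of injury, MSI is months since injury."""
    return 413.317 + 11.4 * al + 13 * weight_kg - 0.672 * msi
```

For an 80 kg subject injured at T2 (AL = 9) 100 months ago, the Foley equation gives 413.317 + 11.4×9 + 13×80 − 0.672×100 = 1488.7 kcal/day.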
measurements were done. Subjects were then equipped with a portable K4b2 metabolic cart (COSMED Inc, Rome, Italy) and asked to rest for 20 minutes without moving, talking, or falling asleep while energy expenditure (EE) was measured. The portable metabolic cart recorded per-minute oxygen intake and per-minute carbon dioxide output. These values were then entered into the Weir equation, giving EE in units of kcal/min. DATA PROCESSING Subject data including weight, AL, and MSI were entered into the predictive equations to calculate each individual's predicted REE. EE data from the K4b2 in units of kcal/min were averaged across the entire 20-minute resting period and converted to kcal/day, giving the criterion REE measurement. The predictive equations and the criterion measurement were calculated using MATLAB 2018b (MathWorks Inc, Natick, MA, USA), while all statistical analysis was performed using IBM SPSS Statistics v. 25 (IBM, Armonk, NY, USA).
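The per-minute EE computation and the conversion to a daily criterion REE can be sketched as follows, assuming the abbreviated Weir equation (no urinary nitrogen term), with VO2 and VCO2 in L/min:

```python
def weir_kcal_per_min(vo2_l_min, vco2_l_min):
    """Abbreviated Weir equation: EE in kcal/min from VO2 and VCO2 (L/min)."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def criterion_ree(vo2_series, vco2_series):
    """Average per-minute EE over the resting period, converted to kcal/day."""
    per_min = [weir_kcal_per_min(o, c) for o, c in zip(vo2_series, vco2_series)]
    return 1440 * sum(per_min) / len(per_min)  # 1440 minutes per day
```

For example, a steady resting VO2 of 0.25 L/min and VCO2 of 0.20 L/min over the 20 minutes corresponds to about 1.21 kcal/min, or roughly 1737 kcal/day.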
RESULTS Outputs of each predictive equation for all subjects were compared with their criterion measure, giving mean absolute error (MAE), mean absolute percent error (MAPE), and intraclass correlation (ICC) single-measure coefficients, all shown in Table 2. MAE measurements for three body mass index (BMI) subgroups, including normal weight (18.5 – 24.9), overweight (25.0 – 29.9), and obese (>30.0), can be found in Table 3.
DISCUSSION Across all groups, none of the SCI-specific equations accurately predicted REE when compared to the criterion measure. This is shown by the poor agreement obtained from single-measure ICC coefficients, with all values below 0.5. These equations may be inaccurate due to the use of weight as a predictor variable. Weight does not distinguish fat mass from fat-free mass, which can result in large differences in actual REE measurements. The development of future REE equations should utilize more direct determinants of metabolism, such as fat-free mass. Future studies should also use more standard methods of measuring REE, such as a ventilated hood with participants resting in the supine position for at least 30 minutes.
REFERENCES 1. Rimmer JH, Wang E. Obesity prevalence among a group of Chicago residents with disabilities. Archives of Physical Medicine and Rehabilitation. 2. Buchholz AC, Pencharz PB. Energy expenditure in chronic spinal cord injury. Current Opinion in Clinical Nutrition and Metabolic Care. 2004;7(6):635. 3. Cox SA, Weiss SM, Posuniak EA, Worthington P, Prioleau M, Heffley G. Energy expenditure after spinal cord injury: an evaluation of stable rehabilitating patients. J Trauma. 1985;25(5):419. 4. Alexander LR, Spungen AM, Liu MH, Losada M, Bauman WA. Resting metabolic rate in subjects with paraplegia: the effect of pressure sores. Archives of Physical Medicine and Rehabilitation. 1995;76(9):819-22. 5. Foley S, Langbein WE, Williams KJ, Collins E, Wydra N, Nemchausky B. Estimation of resting energy expenditure in persons with spinal cord injuries. Journal of the American Dietetic Association. 2004;104:17.
ACKNOWLEDGEMENTS The study was conducted at HERL. This paper does not represent the views of the Department of Veterans Affairs or the United States Government.
Table 2: MAE, MAPE, and ICC (Single Measure) for Each Equation

Equation         | MAE (kcal/day) | MAPE (%) | ICC (Single Measure)
Cox (All)        | 383 ± 368      | 21 ± 21  | 0.242
Cox (Paraplegia) | 546 ± 433      | 31 ± 29  | 0.180
Alexander        | 396 ± 355      | 20 ± 18  | 0.225
Foley            | 427 ± 336      | 21 ± 14  | 0.173

Table 3: MAE (kcal/day) Across BMI Groups

Equation         | Normal Weight | Overweight | Obese
Cox (All)        | 331 ± 326     | 302 ± 364  | 370 ± 300
Cox (Paraplegia) | 497 ± 371     | 516 ± 401  | 764 ± 336
Alexander        | 358 ± 319     | 307 ± 359  | 307 ± 303
Foley            | 451 ± 349     | 414 ± 363  | 149 ± 328
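The MAE and MAPE values in Tables 2 and 3 follow from simple per-subject error formulas; a minimal sketch (the ICC single-measure coefficients would require a separate mixed-model computation, which the study performed in SPSS):

```python
import numpy as np

def mae(predicted, criterion):
    """Mean absolute error (± SD) between predicted and criterion REE, kcal/day."""
    err = np.abs(np.asarray(predicted, float) - np.asarray(criterion, float))
    return err.mean(), err.std()

def mape(predicted, criterion):
    """Mean absolute percent error (± SD) relative to the criterion measure."""
    predicted = np.asarray(predicted, float)
    criterion = np.asarray(criterion, float)
    pct = 100 * np.abs(predicted - criterion) / criterion
    return pct.mean(), pct.std()
```

Subsetting the same arrays by BMI group before calling `mae` reproduces the structure of Table 3.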
Characterization of Novel Rare-Earth Doped Nanoparticles for Clinical Application McKenzie Sicke, Srujan Dadi, Yuan Jun, and Prof. Nitish Thakor SINAPSE Lab, Department of Bioengineering National University of Singapore, Singapore Email: mms223@pitt.edu INTRODUCTION A. Nanoparticles Nanoparticle technology has attracted rising interest for clinical applications in medical imaging and diagnostics. Rare-earth doped nanoparticles (RENPs) can be modified for specific uses such as fluorescence, cell targeting, photoacoustic (PA) imaging, and more [3]. Iterations of these particles have been developed by collaborators at the Singapore University of Technology and Design to create a particle with multiple imaging and therapeutic uses for treating diseases such as ischemic stroke. B. Ischemic Stroke and Stem Cell Therapy Ischemic stroke is a medical emergency that requires immediate medical attention, typically within 4.5 hours of stroke onset, to recover the penumbra region around the infarct area before irreversible necrosis occurs [6]. Mesenchymal stem cells (MSCs) have been shown to be an advantageous treatment option, migrating to the lesion site, modulating the immune response, and stimulating neuronal proliferation and remodeling [7]. IV administration has shown promising results as a noninvasive mode of delivery [8]. Treatment of ischemic stroke can be studied using a photothrombotic ischemia (PTI) model developed according to Watson et al [9]. C. RENP-Aided MSC Therapy While MSC therapy has shown potential for clinical significance, tracking these cells post-injection is essential for further studies and examination of treatment efficacy. RENPs offer the opportunity to integrate this imaging. This study aims to explore the biocompatibility characteristics of the most recently developed RENPs, to quantify their in vitro cytotoxicity, and to establish procedures for evaluating the in vivo efficacy of IV MSC therapy after RENP uptake in a PTI rodent model.
Evaluation and quantification of these properties can provide insight into the feasibility of pairing these techniques clinically.
METHODS A. MSC Culture Rat mononuclear bone marrow cells were isolated from the tibia and femur of a male 4-6 week old Sprague-Dawley rat after sacrifice. Cells were purified and incubated at 37 °C in rat MSC maintenance medium. Cultures were repeatedly passaged to provide sufficient cells for the study. B. In vitro Cytotoxicity For the in vitro cytotoxicity study, NaYF4:Yb20Er2@NaYF4 RENPs, synthesized by thermal decomposition of rare-earth trifluoroacetate precursors and surface modified with hyaluronic acid (HA) using a ligand exchange process, were obtained for use in an MTS assay with solutions of RENPs at experimental dilutions. The samples were analyzed in a microplate reader, and percent viability was calculated for each RENP dilution group using relative absorbances. C. In vivo PTI Model A protocol for in vivo study of RENP-MSC-paired intravenous stroke intervention was also initially implemented by an experienced research assistant using a PTI rodent model established according to Watson et al [9]. Arterial occlusion is induced in a male 4-6 week old Sprague-Dawley rat via a thrombotic clot created by IV photosensitizer administration and activation by illuminating the artery of choice with a laser at the designated activation wavelength [10]. After PTI induction, a solution of PBS was injected via the tail vein to serve as a control against future iterations of the study. At 48 h after PTI onset, the rat was sacrificed and the brain was rapidly removed for TTC stain analysis. The infarcted tissue remained unstained (i.e., white), whereas normal tissue was stained red. RESULTS A. MSC Culture Proper MSC growth and morphology are shown in Figure 1. Cells were passaged once after successful growth, demonstrating the optimized protocol.
Figure 1. Growth and morphology of MSCs isolated from rat bone marrow. Left: 4x image of the original culture after 5 days of incubation, scale bar 0.2 mm. Right: 40x image of first-passage stem cells after 3 days of growth, scale bar 0.05 mm. MSC morphology shows proper adhesion to the surface of the culture flask for continued survival.
B. MTS Assay A first trial of the MTS assay was conducted to obtain preliminary data on the cytotoxicity of the RENPs with regard to the MSCs. A sample size of n = 3 wells was used for each group. A one-way ANOVA found no significant difference in average MSC viability among the tested concentrations of RENP. Results are shown in Figure 2.
Figure 2. MTS assay for cytotoxicity of RENPs in MSCs, with MSC viability calculated as percent absorbance relative to the control. No significant difference was found between average viability for any of the experimental groups (p > 0.05).
C. PTI Study Arterial occlusion was visibly noted after administration of Rose Bengal and illumination of the vessel (Figure 3).
Figure 3. Visualization of cortical blood vessel occlusion after PTI surgery. The effect of focal PTI induction is demonstrated by comparing the selected cortical blood vessel before (left) and after (right) Rose Bengal administration and illumination, with visible occlusion observed after clot generation.
The TTC staining of the rat brain section shows a visible infarct area (Figure 4), demonstrating that the PTI induction successfully created an occlusion in a distal branch of the MCA. Demonstration of the procedure for the control group has provided a basis for the full experimental study.
Figure 4. TTC stain of a brain slice from the PTI-induced control group rat. The circled region is the observable infarct area; white tissue indicates loss of blood supply to this region of the brain.
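The viability calculation and one-way ANOVA used for the MTS assay can be sketched as follows; the absorbance readings below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

def percent_viability(abs_wells, abs_control_wells):
    """Viability of each well as percent absorbance relative to the mean control."""
    return 100 * np.asarray(abs_wells, float) / np.mean(abs_control_wells)

# Hypothetical absorbance readings, n = 3 wells per group
control = [1.00, 0.98, 1.02]
dilutions = {
    "1:10":   [0.95, 0.97, 0.93],
    "1:100":  [0.99, 0.96, 1.01],
    "1:1000": [1.00, 0.99, 0.97],
}
viability = {k: percent_viability(v, control) for k, v in dilutions.items()}

# One-way ANOVA across dilution groups; p > 0.05 would indicate no
# significant difference in viability, as reported for the RENPs
f_stat, p_value = stats.f_oneway(*viability.values())
```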
DISCUSSION The MTS assay represents only preliminary data, since a single assay was run; however, the results demonstrate MSC viability steadily above 80%. This suggests that the new RENP tested may have improved biocompatibility compared to previous generations. In the representative trial of the PTI study, the occlusion may not have been complete. The infarct area observed in the TTC stain was fairly small compared to prior control results from other studies [10]. This may suggest that a partial clot auto-resolved after initial stroke induction. Procedural alterations may need to be made before future studies are conducted. Ultimately, the results obtained from these short-term studies are not meant to provide stand-alone data. However, they contribute important proof of concept for further studies characterizing the clinical applications and compatibility of RENPs, specifically with regard to ischemic stroke treatment. The decreased cytotoxicity shown in the single-trial MTS study of the new RENP generation gives an indication that the specific coating design may be approaching a more optimal form. REFERENCES [3] Sheng et al. Materials Science & Engineering, 70, 340-346, 2017. [6] Manning et al. Stroke, 45, 640-644, 2014. [7] Velthoven et al. Pediatric Research, 71, 4, 2012. [8] Hao et al. BioMed Research International, 2014. [9] Watson et al. Neurol., 17(5), 497-504, 1985. [10] Liao et al. Neurophotonics, 1(1), 011007, 2014. ACKNOWLEDGEMENTS Funding provided by the Swanson School of Engineering. Research conducted with the SERIUS program at the National University of Singapore.
Understanding Mechanisms of Learning and Utilization of Implicit Emotion Regulation Thomas R. Skoff, Akiko Mizuno, PhD, Maria Ly, BSc, Nishita Muppidi, Xiao Yang, PhD, Faiha Khan, BSc, Maria Kovacs, PhD, Howard Aizenstein, MD, PhD, Helmet T. Karim, PhD Geriatric Psychiatry Neuroimaging Laboratory, Departments of Psychiatry and Bioengineering Email: trs95@pitt.edu, Website: http://gpn.pitt.edu/
INTRODUCTION Emotion dysregulation is a core feature of major depressive disorder (MDD) (1). Emotion regulation is a process taught in psychotherapy to help alter emotions felt about particular situations. Implicit emotion regulation is a process that is automatic and involves regulation of emotions without explicit awareness (2). However, the mechanisms of how individuals learn and utilize implicit emotion regulation are unknown. Past studies have utilized money as a reward during tasks (3); however, there are implicit/automatic rewards in other cues, such as a face with a sad or happy expression. Major depressive disorder, which is characterized by emotion dysregulation, is a major public health challenge. Approximately forty percent of patients drop from care within the first month of treatment, and over half who remain in treatment do not respond (4). Failure to respond to treatment can increase suicide risk and contribute to worsening of medical comorbidities, disability, cognitive impairment, and death. Research shows that single doses of antidepressants can alter neural networks, which may alter automatic or implicit emotion regulation (5). It is possible that by utilizing novel and sensitive tasks, we may be able to detect these subtle changes in behavior and determine early on whether someone will benefit from treatment or not.
We developed a computerized reward learning task (Implicit Bias Emotion Learning Task, IBELT) that uses emotional faces as a reward rather than more explicit rewards such as money. Participants are shown a face and have to choose between two names that "fit best" – this is completely subjective. Unbeknownst to the participants, choosing one side results in the next face having a sad expression, while choosing the other side results in the next face having a happy expression (this is probabilistic and only happens 80% of the time). Once they learn this association, the coding of the sides is flipped to understand how quickly they implement a learned regulation strategy.
Since their choice is subjective, we hypothesized that those without MDD would learn to choose the side that results in a happy face more often – a form of implicit emotion regulation. We further hypothesized that those with MDD would either learn this process more slowly, reflecting psychomotor retardation, or would learn but choose the side that results in a sad face, reflecting a process meant to reduce prediction error – i.e., depressed individuals may see the world negatively, and by seeing more negative faces, this reflects their worldview and results in lower prediction error.
We conducted an analysis using a Hierarchical Drift-Diffusion Model (HDDM), which fits four parameters: initial bias; time to start learning; learning rate; and decision boundary (likelihood to choose one side over another at the end). Based on our hypotheses, groups would not differ by initial bias or time to start learning, but either the depressed group would have a lower learning rate and no difference in decision boundary, or the groups would have the same learning rate with the depressed group having a flipped decision boundary (preferring the negative side). We conducted a preliminary analysis on a sample of 19 participants.
METHODS We recruited 19 participants (age 18-50) with and without major depressive disorder. In these participants, we developed and tested the first implicit emotion regulation task in MATLAB using Psychtoolbox. Participants are shown a random face and two sex-matched names on the bottom of the screen (sex and race match demographics of Pittsburgh, PA). Participants are instructed to select a name that they subjectively decide "best fits" the face. Unbeknownst to the participants, choosing one side results in the next face being sad, and the other results in the next face being happy (Figure 1) – this occurs 80% of the time. The sides are randomized across participants.
Figure 1: Participants are shown a face; one side results in a sad face and the other results in a happy face (e.g., selecting Joe produces a happy face while Jeff produces a sad face). This is an implicit reward.
Participants are shown 80 faces, at which point the bias is flipped (i.e., the sides associated with happy and sad faces are swapped) and they are shown another 40 faces. We used the HDDM, coded in Google Colab using Python 2.7.15+, to fit four parameters of learning: (1) initial bias – how likely they are to choose one side over another in the beginning; (2) non-decision time – the amount of time it takes before learning occurs; (3) drift rate – the learning rate; and (4) decision boundary – how likely they are to choose positive over negative faces. We then fit the model to the last 40 trials in a similar way to measure how well they implement learned emotion regulation strategies. RESULTS Our preliminary analysis compared participants with a propensity to choose sad faces with those with a propensity to choose happy faces. We found that those with a propensity to choose happy faces had a shorter non-decision time (p=0.16) and a slower drift rate towards happy faces (p=0.05) (Figure 2), but did not show differences in starting bias or decision threshold. CONCLUSIONS Our preliminary analysis suggests that those with a propensity for choosing sad faces spent a longer time prior to learning and had a faster learning rate towards sad faces. We will test an association between these factors and major depressive disorder in a follow-up analysis. We may be able to utilize this task to identify subtle, early changes in behavior that may indicate future response.
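The four fitted parameters have a natural generative interpretation. The sketch below simulates single drift-diffusion trials using those parameters; it is an illustration of the model's mechanics, not the study's HDDM fitting code, and all parameter values are made up:

```python
import numpy as np

def simulate_ddm_trial(drift, boundary, start_bias, non_decision,
                       dt=0.001, noise=1.0, rng=None):
    """Simulate one drift-diffusion trial.
    Evidence starts at start_bias * boundary and accumulates with the given
    drift until it crosses 0 (lower choice) or boundary (upper choice).
    Returns (choice, reaction_time); choice 1 = upper boundary."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = start_bias * boundary, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), non_decision + t

# With a positive drift toward the "happy" boundary and an unbiased start,
# most simulated choices land on the upper (happy) boundary
rng = np.random.default_rng(0)
trials = [simulate_ddm_trial(drift=3.0, boundary=1.0, start_bias=0.5,
                             non_decision=0.3, rng=rng) for _ in range(200)]
upper_rate = sum(c for c, _ in trials) / len(trials)
```

A larger drift rate speeds learning toward one boundary, a shifted start_bias reproduces initial bias, and non_decision adds a fixed delay to every response time.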
Figure 2: Participants who lean towards sad faces have a faster learning rate and a longer non-decision time.
REFERENCES 1. Albert KM, Potter GG, Boyd BD, Kang H, Taylor WD. Brain network functional connectivity and cognitive performance in major depressive disorder. Journal of Psychiatric Research. 2019;110:51-6. 2. Gyurak A, Gross JJ, Etkin A. Explicit and implicit emotion regulation: a dual-process framework. Cognition and Emotion. 2011;25(3):400-12. 3. Kim H, Shimojo S, O'Doherty JP. Overlapping responses for the expectation of juice and money rewards in human ventromedial prefrontal cortex. Cerebral Cortex. 2010;21(4):769-76. 4. Olgiati P, Serretti A, Souery D, Kasper S, Kraus C, Montgomery S, et al. Attrition in treatment-resistant depression: predictors and clinical impact. International Clinical Psychopharmacology. 2019;34(4):161-9. 5. Palhano-Fontes F, Barreto D, Onias H, Andrade KC, Novaes MM, Pessoa JA, et al. Rapid antidepressant effects of the psychedelic ayahuasca in treatment-resistant depression: a randomized placebo-controlled trial. Psychological Medicine. 2019;49(4):655-63. ACKNOWLEDGEMENTS Partial funding was provided by the Geriatric Psychiatry Neuroimaging Laboratory, the Swanson School of Engineering, and the Office of the Provost.
Designing a testing procedure relating eye movement and postural control in ADHD patients Joseph Sukink 1, Orit Braun Benyamin, PhD.2 1 University of Pittsburgh Department of Bioengineering, Pittsburgh, PA, USA 2 Ort Braude College Department of Mechanical Engineering, Karmiel, Israel Email: jos205@pitt.edu INTRODUCTION ADHD is one of the most common medical diagnoses made regarding children and adolescents, with the behavioral disorder affecting as much as 5-11% of children in the USA, or almost 6.4 million children nationwide [1]. This behavioral disorder can manifest in three ways: inattentive, hyperactive, or a combination of both [2]. It can lead to children performing poorly in school or to difficulties in home life due to their inability to pay attention or their impulsivity. Additionally, it can lead to increased stress for family and friends, and depression and marital discord are commonly cited as well [1]. One of the major difficulties surrounding ADHD treatment is that diagnosis is currently confusing and difficult [1]. A plethora of factors affects accurate diagnosis, including but not limited to variability among children and their environments, as well as differences in the perceived prevalence of ADHD from country to country. The lack of knowledge about what really causes ADHD, and about what distinguishes having ADHD from typical childhood behavior, means that ADHD is often under-diagnosed, misdiagnosed, and undertreated. Several articles cite postural control as a good indicator of ADHD [4-5]. Our hypothesis is that eye movement could also play a large part in ADHD diagnosis, and so we sought to design an experiment examining the relationship between eye movement and postural control during various cognitive tasks. METHODS To approach this relationship, we designed an experimental setup that tracks both eye movement and sway during several cognitive tasks.
The subjects were instructed to stand on the force plate with the smallest base of stability, heels and toes together, at a constant distance from a computer monitor. The monitor was connected to a Tobii Dynavox PCEye Mini eye tracker, which tracked the eye's location on the screen in real time. We
originally sought to see how increased cognitive load would affect sway area and eye movement by asking subjects to complete a series of tasks ranging from staring at a stationary dot, to following a moving dot, to locating Waldo within a chaotic image. We hoped to relate eye saccades and fixations to Center of Pressure (COP) position and velocity [6]. However, the eye tracker we selected was not precise enough for velocity measurements. Therefore, we decided to add visual distractions and observe how they would affect ADHD patients. Anecdotally, children with ADHD, being very inattentive and impulsive, are quick to notice or be distracted by sounds or objects outside the given task. Thus, we added distractions to each task given to the subjects. Each subject was given four separate tasks. The first was to stare at a dot in the middle of the screen. The second was to follow a dot moving in a lemniscate pattern around the screen. The third was another stationary dot, and the last was to follow a dot moving towards the center of a spiral. After these four, the subject was given some time to relax before being given the same tasks with added distractions. Each task lasted thirty seconds, and the subjects were given a 15-second break between tasks. Before each trial, subjects were given consent forms as well as questionnaires, such as the DSM criteria for identifying ADHD, to verify each subject's claim of whether or not they have ADHD. Subjects with inconclusive answers were excluded, and all ADHD subjects were asked to refrain from using any prescribed medication for 24 hours before testing. PRELIMINARY RESULTS After implementing each of the various stages, we observed a large difference between adults with ADHD and children with ADHD, suggesting that children can grow to control these impulses [7].
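The COP-based sway measures used in this setup (mediolateral range, mean velocity, sway area) can be computed directly from force-plate traces; a minimal sketch, using the 95% confidence ellipse as one common definition of sway area (function and variable names are my own):

```python
import numpy as np

def sway_metrics(cop_ml, cop_ap, fs):
    """Sway parameters from mediolateral (ML) / anteroposterior (AP) COP traces.
    fs is the force-plate sampling rate in Hz."""
    cop_ml = np.asarray(cop_ml, float)
    cop_ap = np.asarray(cop_ap, float)
    ml_range = cop_ml.max() - cop_ml.min()
    # Mean ML velocity: total ML path length divided by trial duration
    duration = (len(cop_ml) - 1) / fs
    ml_mean_velocity = np.sum(np.abs(np.diff(cop_ml))) / duration
    # 95% confidence-ellipse sway area from the ML/AP covariance matrix
    eigvals = np.linalg.eigvalsh(np.cov(cop_ml, cop_ap))
    sway_area = np.pi * 5.991 * np.sqrt(np.prod(eigvals))  # chi2(2) 95% ~ 5.991
    return ml_range, ml_mean_velocity, sway_area
```

Computing these metrics over short windows (e.g., the two seconds before and after each distraction) allows per-disturbance comparisons between subjects.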
Focusing on teenagers, we generated preliminary results for a girl with ADHD and one without ADHD.
The figures below show the differences between the no-distraction and distraction conditions for the girl with ADHD.
Figure 1: Center of Pressure and eye movement plots for the second task without distractions.
Figure 2: Center of Pressure and eye movement plots for the second task with three distractions at 7, 14, and 21 s.
Based on the above plots, we examined the two seconds before and after each disturbance. We noticed that, in general, the mediolateral sway range, mediolateral mean velocity, and sway area increased when the first disturbance was introduced, but as the following distractions were introduced each of them began to decrease, potentially indicating that the subjects began to adjust to the distractions or anticipate them [8]. The subject with ADHD appeared to have much larger changes in sway parameters when introduced to the distractions but experienced the same acclimation behavior. The other behavior we noticed was that the eye movement of the ADHD subject was more erratic than that of the non-ADHD subject following a distraction. The non-ADHD subject merely glanced at the distraction, while the subject with ADHD continued to glance around the screen. This may be due to the ADHD subject being rattled or exhibiting additional distraction from the task at hand, momentarily forgetting what she is supposed to do.
FUTURE PLANS In the future we hope to recruit more teenage subjects (ages 13-16) to see how adding multiple types of distractions, including loud noises or flashes of light, would affect both the eye movement and the sway pattern of ADHD and non-ADHD subjects. Additionally, we hope to add parameters regarding saccades and fixations of eye movement and how they correspond with sway patterns [6]. This project has the potential to influence and reshape how we currently diagnose ADHD, a consistent and unfortunate problem plaguing children today.
REFERENCES [1] Hamed, Alaa M et al. “Why the Diagnosis of Attention Deficit Hyperactivity Disorder Matters.” Frontiers in psychiatry vol. 6 168. 26 Nov. 2015 [2] Attention Deficit Disorder Association. “What is ADHD?”. (1998). [3] Mash EJ, Johnston C. J Consult Clin Psychol. 1983 Feb; 51(1):86-99. [4] Schmid, Maurizio, Conforto, Silvia et al. “Cognitive load affects postural control in children”. (2007). [5] Shorer, Zamir et al. “Postural control among children with and without attention deficit hyperactivity disorder in single and dual conditions”. (2 Feb 2012). [6] Deans, Pamela, O’Laughlin, Liz et al. “Use of Eye Movement Tracking in the Differential Diagnosis of Attention Deficit Disorder (ADHD) and Reading Disability”. Psychology. (August 21, 2010) [7] Weiss, G., & Hechtman, L. T. Hyperactive children grown up: ADHD in children, adolescents, and adults. (1993). [8] Sjöwall, Douglas, Roth, Linda, et al. “Multiple deficits in ADHD: executive dysfunction, delay aversion, reaction time variability, and emotional deficits”. Journal of Child Psychology and Psychiatry. (2012). ACKNOWLEDGEMENTS We would like to thank the Swanson School of Engineering, the Office of the Provost, and the International Program at the University of Pittsburgh for providing the funding for this project.
HYDROGELS AS A SCAFFOLD FOR PIG LAMINA CRIBROSA CELLS Ashlinn M. Sweeney, Jr-Jiun Liou, and Jonathan P. Vande Geest Soft Tissue Biomechanics Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: ams622@pitt.edu, Web: https://www.engineering.pitt.edu/stbl/ INTRODUCTION Primary open angle glaucoma (POAG) is the leading cause of irreversible blindness worldwide. Racioethnic background is known to play a role in the prevalence of POAG: those of African descent and Hispanic ethnicity are at higher risk than those of European descent. POAG is linked to extracellular matrix remodeling in the posterior pole of the eye. These mechanical changes are also related to a buildup of intraocular pressure and elevated TGFβ2 expression, but the mechanism linking them to the pathogenesis of POAG remains unclear. To investigate the mechanisms that cause POAG, we are attempting to build a model system by seeding lamina cribrosa (LC) cells and astrocytes into a decellularized eye at the same cell density as the original tissue. This process is depicted in the schematic in Figure 1. One important step in this process is finding a method to deliver the cells to the decellularized eye in a way that also restores the mechanical properties of the eye.
Figure 1: Schematic of the project. 1) Decellularize - remove the cells and genetic material. 2) Isolate cells - harvest and culture a primary cell line. 3) Recellularize - grow the isolated cells in the decellularized tissue.
METHODS Different hydrogels (PEG 2000, PEG 10000, GelMA, and ColMA) were seeded with pig lamina cribrosa cells and cultured for multiple days. The poly(ethylene glycol)-diacrylate (PEG) gels were made using poly(N-isopropylacrylamide) and the crosslinkers ammonium persulfate and tetramethylethylenediamine. GelMA was made by combining gelatin methacrylate and the photoinitiator lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP). ColMA was made by combining collagen methacrylate with the photoinitiator Irgacure and a neutralizing solution. At different time points, Live/Dead staining was performed using ethidium homodimer-1 and calcein.
RESULTS Rheology performed on the PEG hydrogels showed that PEG 2000 had a storage modulus of 1.41 ± 0.797 kPa and PEG 10000 had a storage modulus of 11.3 ± 1.07 kPa. Both gels held pressure when the decellularized eye was infiltrated with them. The relatively strong mechanical properties of the PEG gels were encouraging; however, when the gels were seeded with cells, very few of the cells survived. The GelMA and ColMA gels were much more promising with regard to cell viability. Representative images are shown in Figure 2.
Figure 2: Live/Dead staining 4 days after cell seeding at 10x. Almost all the cells in both PEG gels are dead. GelMA shows significantly more live cells than ColMA and the PEG gels. Scale bar is 100 µm.
DISCUSSION It is possible that something in the PEG hydrogel is killing the cells. A future study could test a PEG hydrogel using different crosslinkers, functionalize PEG with RGD peptides, or combine the PEG hydrogel with another gel such as GelMA or ColMA. Rheology and the pressurization test should be performed on GelMA and ColMA to
compare them to the PEG gels and see if they have adequate mechanical properties. ACKNOWLEDGEMENTS I would like to thank the Swanson School of Engineering's Summer Internship Program and NIH grant R01EY020890 for funding.
Calcification in Cerebral Arteries and its Relevance to Aneurysms Patrick Tatlonghari; Chao Sang; Piyusha Gade, PhD; Anne M. Robertson, PhD Robertson Laboratory, Department of Mechanical Engineering University of Pittsburgh, PA, USA Email: pat54@pitt.edu INTRODUCTION Calcification in the vascular system is a common phenomenon that arises in response to several pathological conditions, such as aging, diet, and disease. Two types of calcification occur in the vascular wall: non-atherosclerotic calcification in the media and atherosclerotic calcification in the intima. Medial calcification occurs via rapid mineralization due to an upregulation of calcifying genes in the absence of lipid pools [1]. Atherosclerotic calcification in the intima is a slow, inflammation-driven process in which the prevalence of lipid deposits is sufficient to cause calcification without upregulating those genes [2]. In either case, calcification in the arterial wall often leads to adverse consequences. While the formation of calcification is widely understood, its role in cerebral vascular disease has received only limited attention. A recent study found that calcification, both atherosclerotic and non-atherosclerotic, is more prevalent in intracranial aneurysms (IAs) than previously reported and may play an important role in rupture risk [3]. The present study aims to determine whether this high prevalence is already present in the walls of the cerebral vessels of the Circle of Willis, a circular junction of arteries at the inferior part of the brain where most cerebral aneurysms are found, or whether the increased prevalence of calcification is a feature of the pathology of IAs. METHODS A Circle of Willis obtained at autopsy from an 88-year-old female patient was fixed in 4% PFA and sectioned into 14 segments for high-resolution micro-CT scanning (Skyscan 1272, Bruker Micro-CT, Kontich, Belgium).
Samples were prepared with dye and bead markers and placed in a 1.5 mL clear microcentrifuge tube atop a moist Kimwipe to prevent dehydration. Styrofoam was placed in the tube to prevent movement of the sample. After mounting, samples were scanned 180 degrees around the vertical axis with a scan resolution of 3 µm, rotation step size of 0.2 degrees, 50 kV and 200 µA, and frame averaging of 8. 3D reconstructed images were created from micro-CT data using NRecon (Bruker Micro-CT, Kontich, Belgium). Images were processed to segment out the wall components and Styrofoam using Simpleware ScanIP (Synopsys, Mountain View, USA) [4]. Briefly, three distinct masks were generated from the 3D images: a tissue mask, a calcified tissue mask, and a non-calcified tissue mask. The tissue mask was generated using segmentation, morphological filters, and Gaussian smoothing to remove unwanted components. The calcification mask was generated using thresholding to segment out the higher-density material and a series of Boolean operations to remove surface noise caught in the threshold. A last set of Boolean operations was performed to subtract the calcification mask from the tissue mask to create a non-calcified tissue mask. Mask volumes were then calculated to determine the volume fraction of calcification.
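The mask arithmetic described above (tissue and calcification masks combined by a Boolean subtraction, then reduced to a volume fraction) can be sketched with voxel arrays. The arrays and the roughly 3% seeding rate below are synthetic stand-ins, not data from the study:

```python
import numpy as np

# Hypothetical 3D voxel masks standing in for the ScanIP output:
# True where a voxel belongs to the mask. Shapes and values are illustrative.
rng = np.random.default_rng(0)
tissue_mask = np.ones((50, 50, 50), dtype=bool)        # whole-tissue mask
calcification_mask = rng.random((50, 50, 50)) < 0.03   # ~3% calcified voxels

# Boolean subtraction, as described: non-calcified = tissue AND NOT calcified
non_calcified_mask = tissue_mask & ~calcification_mask

# Volume fraction of calcification from voxel counts (the physical voxel
# volume cancels in the ratio, so counts suffice).
volume_fraction = 100.0 * calcification_mask.sum() / tissue_mask.sum()
print(f"calcification volume fraction: {volume_fraction:.2f}%")
```

Multiplying the voxel counts by the scan's voxel volume would give absolute mask volumes, but as the comment notes, the fraction itself is unit-free.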
RESULTS Calcification was present in all segments of the Circle of Willis, with volume fractions ranging from 0.0571% to 7.35%. Yellow atherosclerotic plaques were visible on the internal carotid arteries (ICA) and middle cerebral arteries (MCA). Such plaque can occur with or without associated calcification. The ICA showed a large calcified area associated with the atherosclerotic plaque (Figure 1: II-IV). The MCA, however, showed a yellow plaque with no calcification associated with it (Figure 1: V-VII). Non-atherosclerotic calcifications (no associated lipid pools in micro-CT or yellowed region) were found in all segments. The basilar artery had only non-atherosclerotic calcification, with a volume fraction of 3.22% (Figure 1: VIII-X).
DISCUSSION This study is the first to demonstrate that calcification can be present throughout the Circle of Willis and can display both atherosclerotic and non-atherosclerotic calcification types. As in extracerebral vessels, atherosclerosis can exist without calcification. The range of volume fractions of calcification was consistent with those reported for IAs [3]. These results suggest that the high prevalence of calcification already exists in the cerebral vessels prior to aneurysm formation. Further studies are needed to increase the sample size and to extend the quantitative comparisons with size and spatial distribution in IAs.
Figure 1. Images of the full Circle of Willis (I) and 3 representative samples from this vascular segment (II-X) showing atherosclerotic yellow tissue from the dissecting scope (II,V,VIII). Calcification can be seen as bright spots in the slices from 3D reconstructed micro-CT data (III,VI,IX) and as a yellow color in the ScanIP 3D image (IV,VII,X).
REFERENCES [1] Amann K. et al. Clin J Am Soc Nephrol. 2008; 3(6): 1599-1605. [2] Lanzer P. et al. Eur Heart J. 2014; 35: 1515-1525. [3] Gade P.S. et al. Arterioscler Thromb Vasc Biol. (in press, 2019). [4] Cebral J.R. et al. Int J Numer Method Biomed Eng. 2018.
ACKNOWLEDGEMENTS The authors gratefully acknowledge support from the National Institute of Neurological Disorders and Stroke 1R01NS097457-01 (PT, CS, PG, AMR), a research fellowship from the Swanson School of Engineering (PT), and support from the Office of the Provost (PT). Human cerebral arteries were provided by the Alzheimer Disease Research Center (ADRC) (Grant No. NIA P50 AG005133).
EFFECT OF SHOE TREAD GEOMETRY AND MATERIAL HARDNESS ON SHOE-FLOOR FRICTION AND UNDER-SHOE FLUID FORCE ACROSS CONTINUED SHOE WEAR Claire M. Tushak, Paul J. Walter, Sarah L. Hemler and Kurt E. Beschorner Human Movement and Balance Laboratory, Department of Bioengineering University of Pittsburgh, PA, USA Email: cmt98@pitt.edu, Web: https://www.engineering.pitt.edu/hmbl/ INTRODUCTION Falls resulting from slipping are among the most common causes of non-fatal workplace injuries [1]. More than 225,000 non-fatal workplace injury cases in the private industry resulted from slips, trips, and falls in 2017 [1]. Slips often result from a decrease in friction between the shoe and the floor in the presence of a liquid contaminant [2]. The available coefficient of friction (ACOF) is a measure of the friction between the surfaces of the shoe tread and the floor. Research has shown that when the ACOF falls below the required coefficient of friction (RCOF) for dry walking, there is increased risk of slipping [3]. Previous studies have found that tread wear results in decreased ACOF and increased slip risk [2,4]. The underlying mechanism of this reduced friction is increased under-shoe fluid pressure, indicating reduced efficacy in draining under-shoe fluids [2]. Outsole design (tread geometry and hardness) also influences friction values in boundary lubrication [5]. Additionally, prior studies have reported that volume loss in wear is inversely proportional to material hardness in elastomers [6]. However, the impact of tread design on wear-initiated loss in friction has not been systematically studied. This research aims to further the understanding of slip risk under progressive wear across shoes of varying tread geometry and material hardness. Specifically, the effects of wear, tread design, and material hardness on ACOF and fluid force will be tested. METHODS Nine custom right shoes were manufactured and provided for this research.
These shoes included three tread geometries each developed at three material hardness levels. A custom accelerated wear apparatus was used to simulate shoe wear. Abrasive paper was slid across
the heel of each shoe at 9.33 m/s for 10 seconds at three angles of 17°±0.5°, 7°±0.5°, and 2°±0.5° defined relative to the horizontal, consistent with a prior study [4]. Abrasive grease (Formax, No. F26) was applied prior to wear at each angle to minimize temperature increase. Shoes were worn at a 6° inversion angle to replicate typical gait. Three cycles of wear were completed. ACOF and fluid force were measured using a custom portable slip tester. This device consists of motors (LinMot®, Elkhorn, WI, USA) to control vertical force and sliding speed, a 6DOF force plate (BERTEC Corporation, Columbus, OH, USA), and four fluid pressure sensors (Gems® 3100R10PG08F002). Shoes were tested on vinyl composite tile with a diluted glycerol solution (90% glycerol, 10% water by volume), which was spread prior to each trial. The onset of a slip was mimicked by maintaining the shoe's angle at 17°±0.5° defined relative to the horizontal, speed at 0.3 m/s, and normal force at 250 N±10 N [4]. Shoes were tested at a 6° inversion angle to replicate typical gait. Each ACOF and fluid force measurement is the average of five slip trials at consistent mediolateral positions across the array of fluid pressure sensors. ACOF and fluid pressures were measured after each wear cycle. DATA PROCESSING ACOF was calculated as the magnitude of the resultant shear force divided by the normal force. Fluid force was calculated using numerical integration of fluid pressure, as in prior research [4]. A repeated measures ANOVA statistical analysis was performed on the collected data. Independent variables included shoe tread design and material hardness (nominal), as well as wear cycle (continuous). The dependent variables were ACOF and fluid force (continuous). Interaction effects of wear cycle and shoe tread design as well as of wear
cycle and shoe material hardness were tested for significance. A logarithmic transformation was used for fluid force data in order to achieve normally distributed residuals. RESULTS Consistent with previous literature [4], there was a general decrease in ACOF measurements and increase in fluid force for each shoe across wear cycles (Figures 1, 2). The average decrease in ACOF was 0.055; the average increase in fluid force was 3.1 N.
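The ACOF and fluid-force computations described in the data processing section can be sketched as follows. The force traces, sensor positions, and contact width are invented placeholders, and the trapezoidal integration across the sensor array is an assumption rather than the exact scheme of [4]:

```python
import numpy as np

# Illustrative force-plate and pressure-sensor data (all values hypothetical).
t = np.linspace(0.0, 0.5, 500)                  # s, one simulated slip trial
fx = 20.0 + 2.0 * np.sin(8 * np.pi * t)        # N, shear force components
fy = 5.0 * np.cos(8 * np.pi * t)
fz = 250.0 + np.random.default_rng(1).normal(0, 2, t.size)  # N, normal force

# ACOF = resultant shear force magnitude / normal force, averaged over the trial
acof = np.mean(np.hypot(fx, fy) / fz)

# Fluid force: numerical integration of fluid pressure over the sensed region.
sensor_x = np.array([0.00, 0.01, 0.02, 0.03])       # m, sensor positions (assumed)
pressure = np.array([1200.0, 2500.0, 2300.0, 900.0])  # Pa, peak pressures (assumed)
width = 0.05                                         # m, assumed contact width
# trapezoidal rule across the sensor line, times an assumed contact width
fluid_force = np.sum((pressure[:-1] + pressure[1:]) / 2 * np.diff(sensor_x)) * width

# Log transform applied to fluid force before the repeated-measures ANOVA
log_fluid_force = np.log(fluid_force)
print(acof, fluid_force)
```

With these placeholder numbers the fluid force comes out on the order of a few newtons, the same order as the average increase reported in the results.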
Tread design (p = 0.0003) and wear (p = 0.0006) influenced ACOF. The interaction effect of tread and wear on ACOF was not significant. The effects of hardness as well as the interaction of hardness and wear on ACOF were not significant. While the effect of tread on fluid force was significant (p = 0.0018), the effects of wear and the interaction of tread and wear on fluid force were not significant. Neither the effect of hardness nor the interaction effect of hardness and wear on fluid force was significant. DISCUSSION Consistent with prior research, tread type influenced ACOF [5]. ACOF also decreased as wear increased, consistent with previous studies [2,4]. Increased fluid pressure is the underlying mechanism of reduced friction with wear [2], which could explain why fluid force was also influenced by tread.
Figure 1: Changes in ACOF across wear. Each color represents one of the three tread types. Solid, dashed, and dash-dot lines represent low, intermediate, and high hardness levels, respectively.
The lack of impact of material hardness, as well as its interaction with wear, on ACOF and fluid force may indicate that these design factors did not modulate the impact of wear on slip risk. Inducing additional wear may provide more conclusive results concerning the impact of wear on fluid force, due to larger worn regions leading to higher fluid forces [4]. REFERENCES 1. U.S. DOL BLS. Nonfatal Cases Involving Days Away from Work (2017). 2. Beschorner et al. J Biomech 47, 458-463, 2014. 3. Burnfield, Powers. Ergo 49, 982-995, 2006. 4. Hemler et al. Appl Ergo 80, 35-42, 2019. 5. Jones et al. Appl Ergo 68, 304-312, 2018. 6. Hakami et al. Trib Intl. 135, 46-54, 2019. ACKNOWLEDGEMENTS This work was funded by NIOSH R01OH010940, the Swanson School of Engineering, and the University of Pittsburgh Office of the Provost. VF Corporation provided custom footwear for this study.
Figure 2: Changes in fluid force across wear. Each color represents one of the three tread types. Solid, dashed, and dash-dot lines represent low, intermediate, and high hardness levels, respectively.
THE EFFECT OF PROPRIOCEPTIVE INPUT ON BCI CONTROL Jessica C. Weber, Monica F. Liu, Douglas J. Weber Rehabilitation and Neural Engineering Laboratories, Department of Bioengineering University of Pittsburgh, PA, USA Email: jcw85@pitt.edu, Web: http://rnel.pitt.edu INTRODUCTION Whether a person is reaching for a coffee cup or conducting precise neurosurgery, sensory feedback is necessary for movement. Sensory feedback is used to guide and correct movement, and the way our body receives sensory information is a multifaceted, complex process. The primary somatosensory cortex (S1) encodes proprioceptive information such as arm kinematics. In contrast, motor commands have been shown to be represented in the primary motor cortex (M1). M1 neurons encode the force, direction, extent, and speed of a movement. Along with motor inputs, sensory inputs have been observed in the primary motor cortex, assisting in the sensory guidance of movement. Thus, sensory feedback has been shown to influence movement, and vice versa. However, the neural mechanisms underlying how sensory feedback influences movement are not well established. In this project, we will examine these neural mechanisms using a brain-computer interface (BCI). Planning and executing a movement rely on both visual and proprioceptive feedback. Neurons in M1 are tuned to factors such as velocity of movement or endpoint position. However, how sensory feedback changes the activity of neurons in M1 is unclear. When recording neural activity of a population, we cannot observe all the neurons that are contributing to movement. By using a BCI, we explicitly control the exact neural activity that maps to a particular movement. This allows us to directly observe how different sensory inputs lead to changes in the population of neurons that control movement. Ultimately, we aim to improve BCI control by integrating sensory input, informing the future development of BCIs.
Ideally, improved BCI control could lead to better prosthetics and improve the quality of life of people with spinal cord injury and other neurological disorders. METHODS
In this study, a human participant with spinocerebellar degeneration was implanted with two 96-channel intracortical microelectrode arrays in the hand and arm region of her primary motor cortex (M1). Neural signals recorded from these arrays were used for velocity-based control of a robotic arm. The subject was asked to move the robotic arm horizontally across a line as many times as possible in a 1-minute period. She did so under four different conditions: no vision and no proprioception, vision and no proprioception, no vision and proprioception, and vision and proprioception. Both neural data and kinematic data were recorded as the robot arm moved back and forth across a neutral point. Movement of the subject's arm for the proprioceptive conditions was recorded with motion capture (mocap) cameras, and position and velocity data were recorded from sensors on the robotic arm. Typically, BCI decoders are trained on subjects observing and imagining movement, i.e., with no proprioceptive feedback. In this experiment, the BCI decoder was trained in different sessions with vision alone or with both vision and proprioception. DATA PROCESSING Motion capture data of the robotic arm were aligned with sensor data from the robot arm by cross-correlating the traces and finding the lag time that gave the best correlation between the two. Neural data were collected in peristimulus time histograms, which were smoothed to produce a firing rate. Firing rates could then be compared to the kinematic mappings of motion to see how well neural activity followed movement. The first question we address here is: does sensory feedback improve BCI control? We have found that when the BCI is trained with vision-only feedback, the subject performs worse. However, when the BCI is trained with both vision and proprioception, performance improves. We aim to identify the neural mechanisms that account for this difference.
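The alignment step, cross-correlating the mocap trace with the robot-arm sensor trace and taking the lag of maximum correlation, might look like this on synthetic signals (the 37-sample lag and the noise level are arbitrary choices for illustration):

```python
import numpy as np

# Toy traces standing in for mocap and robot-sensor velocity (hypothetical).
rng = np.random.default_rng(2)
n, true_lag = 1000, 37                 # samples; sensor lags mocap by 37
mocap = rng.normal(size=n)
sensor = np.roll(mocap, true_lag) + 0.1 * rng.normal(size=n)

# Full cross-correlation of the demeaned traces; the lag with the highest
# correlation is the alignment offset between the two recordings.
lags = np.arange(-n + 1, n)
xcorr = np.correlate(sensor - sensor.mean(), mocap - mocap.mean(), mode="full")
best_lag = lags[np.argmax(xcorr)]
print(best_lag)
```

With NumPy's `mode="full"` convention, output index `i` corresponds to lag `i - (n - 1)`, which is why `lags` runs from `-n + 1` to `n - 1`.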
The second question in this project is whether subjects can learn to use sensory feedback in BCI
control. If control improves, there are two possible hypotheses: the subject is either learning to modulate neurons relative to the neurons' sensory-modulated activity level rather than a standard baseline, or the subject is learning to control the BCI with a subpopulation of neurons that are not modulated by sensory input. We will use linear discriminant analysis (LDA) to identify the axis in neural space that maximally separates the proprioception and no-proprioception conditions and compute the projection of this "LDA axis" onto the neural axis used in the BCI decoder. The larger the projection of the LDA axis onto the BCI decoding axis, the more the sensory activity is affecting neurons used for motor control. We hypothesize that the projection of the LDA axis onto the BCI decoder axis decreases as the subject improves at using proprioceptive feedback. RESULTS With no sensory input, BCI control is poor, with an average of 5.75 crossings with the vision-only decoder and 6.4 crossings with the vision and proprioception decoder. Receiving sensory feedback significantly improves BCI control with both types of BCI decoders (p<0.01). However, there was no difference in improvement between the vision-only and the vision and proprioception decoders.
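The planned LDA-projection analysis can be sketched as follows; the simulated population, the sensory offset on five units, and the decoder axis are all illustrative assumptions, not the study's recorded data or actual decoder:

```python
import numpy as np

# Illustrative neural population activity (rows = trials, cols = units).
rng = np.random.default_rng(3)
n_units = 20
base = rng.normal(size=(200, n_units))
offset = np.zeros(n_units)
offset[:5] = 2.0                 # assumed sensory drive on 5 of the units
no_prop = base                   # no-proprioception trials
prop = base + offset             # proprioception trials

# Two-class LDA direction with shared covariance: w proportional to
# inv(Sigma) @ (mu1 - mu0), normalized to unit length.
mu0, mu1 = no_prop.mean(axis=0), prop.mean(axis=0)
residuals = np.vstack([no_prop - mu0, prop - mu1])
cov = np.cov(residuals.T)
w = np.linalg.solve(cov, mu1 - mu0)
w /= np.linalg.norm(w)

# Hypothetical unit-norm BCI decoding axis; the quantity of interest is
# the magnitude of the LDA axis's projection onto it.
decoder_axis = np.zeros(n_units)
decoder_axis[0] = 1.0
projection = abs(w @ decoder_axis)
print(projection)
```

In this toy setup the sensory offset loads on the decoder's first unit, so the projection is large; under the stated hypothesis, learning would shrink this number over sessions.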
Figure 1: Crossings per Minute; Blue represents the Vision Only BCI, Red represents the Vision + Proprioception BCI. We further examined the neural activity across these conditions. Close correlation between the sensor velocity and the firing rate indicates good BCI control, because it shows that the movement of the robotic arm is coupled to the neural activity. It is evident that, when proprioceptive input is added while utilizing the vision + proprioception decoder, BCI control improves. Although initially there is a lack of relation between firing rate and sensor velocity, the firing of the neural population seems to spike right before a change in sensor velocity, indicating control of movement. This may imply that the patient has found a subpopulation of neurons to help with BCI control.
Figure 2: Firing Rate (Blue) and Sensor Velocity (Red); Left column represents the Vision Only BCI and the right column represents the Vision + Proprioception BCI DISCUSSION By using crossings as an indication of success, it is evident that when proprioceptive input was utilized with the vision and proprioception BCI, BCI control improved. The theory is that there is a newly learned pattern of activity based on sensory input that is being utilized to improve control. A next step for analysis would be to see why proprioceptive input could be improving BCI control. It must be determined whether the patient is learning to use a subpopulation of neurons not modulated by sensory input to control the BCI. Using LDA, we will be able to see to what extent sensory input is influencing the control space. REFERENCES 1. Cunningham, John P, and Byron M. Yu. “Dimensionality Reduction for Large-Scale Neural Recordings”. Nature Neuroscience, Nov. 2014, www.nature.com/articles/nn.3776.pdf 2. Hatsopoulos, Nicholas G, and Aaron J Suminski. “Sensing with the motor cortex.” Neuron vol. 72,3 (2011): 477-87. doi:10.1016/j.neuron.2011.10.020 3. Oby, Emily R, et al. “New Neural Activity Patterns Emerge with Long-Term Learning.” PNAS, 2019, www.pnas.org/content/pnas/116/30/15210.full.pdf 4. Sadtler, Patrick T., et al. “Neural Constraints on Learning.” Nature News, Nature Publishing Group, 27 Aug. 2014, www.nature.com/articles/nature13665 ACKNOWLEDGEMENTS I would like to thank Dr. Robert Gaunt for making the data available to us for this paper.
ELECTROMYOGRAPHY OF THE EXTERNAL URETHRAL SPHINCTER DURING STIMULATION OF THE PELVIC NERVE IN A RAT SPINAL CORD INJURY MODEL Benjamin M. Wong1,2, Marlena N. Raczkowska2, Joseph D. Miller2,3 and Nitish V. Thakor2,4 1 University of Pittsburgh, PA, USA 2 N.1 Institute for Health, National University of Singapore, Singapore 3 NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 4 Johns Hopkins University, MD, USA Email: bmw98@pitt.edu INTRODUCTION Spinal cord injury (SCI) impairs bladder function in patients, causing bladder sphincter dyssynergia. Commercially, sacral anterior root stimulator (SARS) devices are available for SCI patients to allow voiding, but SARS devices use an open-loop system [1]. Thus, stimulations are tonic, with no feedback for modulation [1]. SARS devices require a more invasive surgery for implantation, which results in irreversible nerve damage. Closed-loop neuromodulation offers an adaptable form of stimulation treatment. Unlike an open-loop system, stimulation intensity and other stimulation parameters can adjust according to feedback within the system [1]. The closed-loop system can directly stimulate the pelvic nerve to induce voiding, so surgery is less invasive. This study focuses on the electromyography (EMG) of the external urethral sphincter (EUS) muscle to understand its relationship with closed-loop neuromodulation of the pelvic nerve in the case of SCI. METHODS Nine anesthetized rats were used for stimulation experimentation. SCI was induced by transecting the spinal cord between the T8 and T9 vertebrae. Platinum iridium electrodes were hooked on the pelvic nerve to deliver stimulation. Figure 1 depicts the experimental setup for stimulating the pelvic nerve. Each rat received stimulations of 150 µs pulse width with increasing stimulation amplitudes: 50, 100, 200, 300, 400 and 500 µA. These stimulation sessions started at 1 Hz and were then repeated at 10 Hz.
Some 600 and 700 µA stimulations were recorded to further characterize the effect of increasing the amplitude. At 500 µA and 1 Hz, a trial with a 300 µs pulse width was recorded for most rats to compare the effects of pulse width. At 10 Hz, pelvic nerve cutting was performed near the stimulation site to
observe the effects of nerve transection. EUS EMG data was recorded utilizing Intan Technologies equipment.
Figure 1: Experimental set-up for spinal cord transection and closed-loop neuromodulation [2].
DATA PROCESSING Each stimulation trial recorded bladder pressure, stimulation pulse, and the filtered EUS EMG as shown in Figure 2. The EMG recordings were preprocessed through MATLAB. Compound muscle action potentials (CMAP) and area under the curve (AUC) were used as quantifiable responses for EUS EMG during stimulation. These quantities allow for comparisons between rats and different parameters to be made. Statistical analysis employed the paired t-test.
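A sketch of how CMAP counts and AUC might be extracted from a rectified EMG trace, followed by the paired comparison across rats. The threshold-crossing CMAP detector, the rectangle-rule AUC, and all numbers are illustrative assumptions, not the study's exact MATLAB pipeline:

```python
import numpy as np

def cmap_count_and_auc(emg, fs, threshold):
    """Count threshold crossings of the rectified EMG (a simple CMAP
    proxy) and integrate the rectified signal (area under the curve)."""
    rect = np.abs(emg)
    above = rect > threshold
    cmaps = int(np.sum(above[1:] & ~above[:-1]))  # rising edges = onsets
    auc = rect.sum() / fs                          # rectangle-rule integral
    return cmaps, auc

# Synthetic one-second EMG trace: baseline noise plus ten evoked responses.
rng = np.random.default_rng(4)
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
emg = 0.05 * rng.normal(size=t.size)
emg[50::100] += 2.0                    # ten evoked responses per second
cmaps, auc = cmap_count_and_auc(emg, fs, threshold=1.0)

# Paired t statistic across nine rats, e.g. AUC at 500 uA vs 50 uA
# (fabricated per-rat pairs; the study used a paired t-test).
auc_50 = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9, 1.1, 1.0])
auc_500 = auc_50 + np.array([0.5, 0.6, 0.4, 0.7, 0.5, 0.6, 0.5, 0.4, 0.6])
diff = auc_500 - auc_50
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(diff.size))
print(cmaps, t_stat)
```

Only the paired t statistic is computed here; converting it to a p-value, as in the reported results, would use the t distribution with n - 1 degrees of freedom.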
Figure 2: Single depiction of EMG signal processing shown in the case of 10 Hz and one 100 μA stimulation.
RESULTS The difference in CMAP count between 50 μA and 500 μA is significant (paired t-test, p = 4.25e-22, α = 0.05), as represented in Figure 3. The Hedges' g value indicates that the effect of the stimulation increase is large (g = 1.8863). This shows that the EUS muscle has a higher response and more muscle contraction due to an increased stimulation of the pelvic nerve. There is no significant difference between stimulation pulse widths of 150 μs and 300 μs, as depicted in Figure 4. The EMG response is significantly different between an intact and a transected pelvic nerve (p = 6.11e-18), as shown in Figure 5. This highlights the importance of the pelvic nerve in contracting both the bladder muscle and the EUS muscle simultaneously for proper voiding. These experiments were designed to observe how the closed-loop system behaves in neurological injury cases, such as spinal cord injury.
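Hedges' g, reported above as the effect size, is the pooled-variance standardized mean difference with a small-sample bias correction. A sketch with fabricated CMAP counts (not the study's data, which gave g = 1.8863):

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: bias-corrected standardized mean difference."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    d = (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)  # Cohen's d
    correction = 1 - 3 / (4 * (nx + ny) - 9)  # small-sample bias correction
    return d * correction

# Illustrative CMAP counts at 500 uA vs 50 uA (fabricated values, nine rats)
high = np.array([12.0, 14, 13, 15, 12, 14, 13, 12, 15])
low = np.array([5.0, 6, 7, 5, 6, 7, 6, 5, 6])
g = hedges_g(high, low)
print(g)
```

By convention, |g| around 0.8 or above is considered a large effect, so the reported g = 1.8863 supports the claim of a large stimulation-amplitude effect.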
Figure 3: CMAP versus stimulation amplitude for every rat; 1 Hz stimulation frequency.
Figure 5: AUC versus pelvic nerve transection; 10 Hz and 500 μA stimulation.
DISCUSSION This study provided essential information on the parameters necessary to stimulate the pelvic nerve, as well as the effects of altering those parameters. Closed-loop neuromodulation is a promising alternative to SARS devices, offering patient-specific adaptation and wireless capabilities. The goal is to develop a system for efficient bladder management. This study provided important information on the ideal stimulation amplitude and pulse width, and on the effect of pelvic nerve transection, for closed-loop neuromodulation with respect to SCI. The development of a new commercially available treatment for SCI patients with bladder sphincter dyssynergia is vital for healthy management of voiding. REFERENCES 1. Peh, W. Y. X. et al. "Closed-loop stimulation of the pelvic nerve for optimal micturition." J. Neural Eng. 2018. Vol 15. 066009.
2. Raczkowska, M. N., Peh, W. Y. X. et al. (2019), "Closed-Loop Bladder Neuromodulation Therapy in Spinal Cord Injury Rat Model," conference paper.
Figure 4: AUC versus stimulation pulse width for every rat; 1 Hz and 500 μA stimulation.
ACKNOWLEDGEMENTS Research funding scholarship graciously provided by the Swanson School of Engineering, University of Pittsburgh. Thank you to Prof. Thakor for the use of his laboratory. Thank you to Marlena Raczkowska and Joseph Miller for their strong mentorship. Selection for the program was overseen by the Summer Engineering Research Internship for US Students (SERIUS) program.
A COST-EFFECTIVE CELLULAR & SATELLITE EQUIPPED WILDLIFE TRACKER Kevin Xu, Oliver Snyder, Joseph Gaspard, Mark Gartner Gartner Design Lab, Department of Bioengineering University of Pittsburgh, PA, USA kex8@pitt.edu INTRODUCTION Biotelemetry is crucial to a variety of wildlife and conservation-related assessments, which include evaluation of an ecosystem's health or preservation of an endangered species. However, biotelemetric hardware is expensive and frequently challenging to implement effectively. For example, one commercial system that is currently used to track manatees utilizes a Global Positioning System (GPS) unit, an Argos satellite transmitter, and a very high frequency (VHF) radio transmitter. This system costs $9000 per unit and, as a result, only the most at-risk manatees are tracked [1]. However, there are many lesser at-risk manatees that are of interest to wildlife biologists. Reliance on the Argos satellite system for data transmission requires favorable weather conditions and proper timing: a cloudy sky or the lack of a satellite overhead can prevent data from being transmitted and subsequently collected. These transmissions are infrequent even when conditions are favorable. In fact, often only a single satellite-based location point is collected per day, which contains very little information other than the GPS coordinate.
In response to these limitations of the current tracking systems, we have designed a lower-cost, higher-resolution biotelemetry solution for tracking animals in the wild. While we expect that this low-cost tracker will allow for more detailed assessments of a variety of animals, our focus in this work is application to manatees in Southwest Florida as well as more remote areas such as Belize, Angola, and Mali. METHODS This project was based on a previous proof-of-concept device. The first iteration of the tracker leveraged LabView, Google Maps, and a Particle Electron microcontroller in order to achieve a compact tracker. When testing this system, compatibility and licensing issues related to LabView complicated its continued use as a development platform. A goal of this iteration was to allow any wildlife biologist to access the tracking information via a simple web app. By using this cloud-based approach, we were able to integrate the original microcontroller with existing online mapping software that allowed data to be published in an Internet-connected web browser. In order to overcome the limitations of current tracking systems, we decided to focus on cellular communications, which would be more reliable and cost significantly less than a satellite system. The ability to use an external antenna in conjunction with any microcontroller was a requirement in order to improve the resolution of the device. Battery life was also important, as devices in the field typically last for months at a time before maintenance is required or the battery needs to be replaced.
Figure 1. Flowchart of the logic used by the manatee tracking device in order to transmit and receive coordinates from the device.
Another important design requirement for this iteration was that it had to include satellite capabilities to supplement the cellular connection. This requirement would retain tracking functionality if cellular signal was weak or if the tracker was deployed in areas without cellular coverage. To achieve the functional requirements, a design was developed that included a Particle Electron microcontroller with cellular capability, a GPS unit, and a satellite communication module. The GPS unit obtains a GPS satellite fix, and the coordinates are published to the cloud by the microcontroller via cellular communication. If a cellular signal is not present, the coordinate is transmitted through the Iridium satellite system. Both devices are linked to a single endpoint on the Losant IoT Platform, which allows the data to be easily viewed via an Internet-connected web browser. The cost of the prototype tracker includes $426 in components, $18/mo for services, and up to $60/mo in data and maintenance costs. Thus, the total price for usage of the tracker will be approximately $280 per year. To prolong the battery life of the tracker, the GPS unit and the microcontroller are only on for five minutes at specific time intervals. If cellular signal is present, the satellite communication module never turns on. At all other times, these components are in a deep sleep mode, which minimizes current consumption.
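The transmit logic summarized above and in Figure 1 (wake on a timer, get a GPS fix, prefer cellular, fall back to Iridium, otherwise sleep and retry) can be sketched as follows. The function and its interface are hypothetical stand-ins for the device firmware, not the actual Particle Electron code:

```python
# Sketch of the tracker's per-wake-cycle transmit decision (hypothetical
# interface; the real firmware talks to the cellular modem, the Iridium
# module, and the Losant endpoint).

def transmit_fix(has_cellular, has_satellite, coordinate):
    """Return which path carried the coordinate, or None if neither."""
    if has_cellular:
        return ("cellular", coordinate)   # publish to the cloud endpoint
    if has_satellite:
        return ("iridium", coordinate)    # fall back to satellite
    return None                           # go back to sleep; retry next wake

# One simulated wake cycle: cellular absent, satellite available.
result = transmit_fix(has_cellular=False, has_satellite=True,
                      coordinate=(26.03, -81.73))
print(result)
```

Keeping the satellite module off whenever cellular is available is what holds the recurring data cost down, since Iridium airtime is far more expensive than a cellular publish.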
RESULTS The prototype tracker was not able to be tested on a manatee due to the need for pending State and Federal approvals. However, benchtop testing and field testing were performed in the Rookery Bay reserve in Southwest Florida. During this field testing, the tracker reliably transmitted a GPS coordinate every two minutes, as demonstrated by the markers in Figure 2. The observer was also able to ping the tracker at will, waking the tracker from deep sleep and transmitting a GPS coordinate by request. GPS resolution testing demonstrated that the tracker was accurate to within a 10 ft (3 m) radius. Based on power considerations as well as spatial constraints within the typical manatee tracker buoy, we determined that our tracker conservatively draws 128 mAh per day. Based on these estimates, a 6600 mAh lithium-ion polymer battery would power the tracker for approximately five months before battery pack replacement is necessary.
Figure 2. GPS coordinates (denoted by the green markers) transmitted from the tracker every two minutes during a test in the Rookery Bay reserve in Southwest Florida.
DISCUSSION In comparison to previous iterations of the tracker, this tracker offers a lower-cost, higher-accuracy, and potentially more reliable system than the trackers currently used by wildlife biologists. Despite a much lower cost, our tracker provides an accurate and inexpensive solution to a spectrum of potential biotelemetry applications. By using off-the-shelf, mostly open-source hardware, the tracker is customizable to fit the user's requirements and offers an experience close to that of current manatee tracking devices. Furthermore, by including satellite capabilities, we enable low-cost manatee tracking in remote areas such as Belize where cellular coverage is not present. We expect to deploy one of our trackers on a manatee in the near future to confirm performance and compare function to current tracking tags. REFERENCES 1. Thomas, B. Wildlife Research (2011). 38, 653-663.
ACKNOWLEDGEMENTS Thanks to the Pittsburgh Zoo & PPG Aquarium for the support throughout this project. Funding was provided jointly by Dr. Gartner, the Swanson School of Engineering and the Office of the Provost.
INTENSIFYING OXIDATIVE DEHYDROGENATION VIA NANOSTRUCTURED CATALYSTS Eyram Akabua, Yahui Yang, and Goetz Veser Catalytic Reaction Engineering Group, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: eya4@pitt.edu, Web: https://www.crelab.org INTRODUCTION Ethylene, the most manufactured organic compound in the world, is used as a fundamental component for a large percentage of synthetic materials. Steam cracking has been an adequate process for ethylene production for over a century, but concerns have been raised over its energy use and emissions [1, 2]. As such, oxidative dehydrogenation (ODH) has been researched as an alternative.
Steam cracking: C2H6 → C2H4 + H2
Hydrogen oxidation: H2 + 1/2 O2 → H2O
ODH (net): C2H6 + 1/2 O2 → C2H4 + H2O
Figure 1: Chemical reaction equations for steam cracking, the oxidation of the formed hydrogen, and oxidative dehydrogenation.
In ODH, ethane is co-fed with gaseous oxygen in the presence of a catalyst, and the hydrogen formed during the cracking reaction is oxidized. This exothermic reaction both decreases energy consumption and shifts the thermodynamic equilibrium in favor of more ethylene production. However, ODH also exhibits undesirable consequences, such as the formation of the greenhouse gas CO2 due to interactions between C2 species and gaseous oxygen [1]. Therefore, an optimal nanocatalyst serving as the oxygen source for the process would allow selective oxidation solely of the hydrogen formed in the reaction. Our research group had previously investigated the synthesis, formation, and reactivity of iron silica nanoparticles for use in ODH. Both conventional Fe/SiO2 nanoparticles and Fe@SiO2 nanoparticles with a core-shell structure were studied. The core-shell configuration consists of a solid iron oxide core, which is sufficiently separated from carbon sources by a porous silica shell; the
pores allow access of hydrogen species to the lattice oxygen. The current investigation focuses on the synthesis, characterization, and reactivity of similarly structured nickel oxide nanoparticles, Ni/SiO2 and Ni@SiO2, to conclusively determine the effect of the silica shell on the products of the ODH process. Since nickel displays higher activity and lower selectivity than iron, the change in performance due to the silica shell should be more evident. METHODS Core-shell nanoparticle synthesis: both the Ni@SiO2 and Fe@SiO2 samples were synthesized using a one-pot water-in-oil sol-gel method. For the Fe@SiO2, surfactant Brij® C20 (Acros Organics) and cyclohexane (50 ml, Aldrich) were constantly stirred and heated in a three-neck flask at 50 °C. Aqueous iron(III) nitrate (1.5 ml, 1.3 M) was added to the mixture dropwise and stirred until uniform to form reverse micelles. Reducing agent hydrazine hydrate (50-60%, Aldrich) was then added, also dropwise, using a syringe to reduce the Fe3+ ions to elemental Fe. After overnight aging, ammonium hydroxide solution (3 ml, 28.0-30% NH3) was added to the mixture to catalyze TEOS condensation. Then TEOS (tetraethyl orthosilicate, 99%), the silica precursor, was added to the microemulsion and allowed to age. The sample was then precipitated and washed with 2-propanol and calcined at 500 °C for 2 hours under a 0.500 SLM air stream. The synthesis of Ni@SiO2 was the same, except that aqueous nickel nitrate (1.5 ml, 1.0 M) replaced the iron nitrate and hydrazine was not used for reduction. Also, after the initial calcination in air, the sample was calcined under a 0.020 SLM H2 stream at 600 °C. Conventional nanoparticle synthesis: Fe/SiO2 was synthesized via wet impregnation on a silica support; Ni/SiO2 was synthesized via deposition-precipitation. Ethane oxidative dehydrogenation pulse chemisorption: consecutive ethane pulses of volume
0.11 mL each were injected at 800 °C into the four samples at a total flow rate of 20 sccm. RESULTS Figure 2 shows the results of the pulse chemisorption experiments for all four nanoparticle samples over several pulses. Each pulse corresponds to the specified volume of ethane passing through the sample, and each color corresponds to a different mass spectrometry signal from the species present during the reaction.
Figure 2: Ethane pulse chemisorption results of a) conventional Fe/SiO2, b) core-shell Fe@SiO2, c) conventional Ni/SiO2, and d) core-shell Ni@SiO2
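Pulse-chemisorption traces like these are often reduced to a carbon-based selectivity once the product signals have been integrated over a pulse; a minimal sketch of that bookkeeping (the function and its inputs are illustrative, not part of the original analysis):

```python
def ethylene_selectivity(mol_c2h4, mol_ch4, mol_co2):
    """Carbon-based selectivity to ethylene from the integrated product
    amounts of one ethane pulse (names are illustrative)."""
    # each C2H4 molecule carries 2 carbons; CH4 and CO2 carry 1 each
    carbon_in_products = 2 * mol_c2h4 + mol_ch4 + mol_co2
    return 2 * mol_c2h4 / carbon_in_products
```

A formulation that suppresses CO2 (as the silica shell is intended to) raises this ratio toward 1.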
DISCUSSION Firstly, the difference in magnitude of the hydrogen signals for the iron samples suggested a higher rate of hydrogen oxidation and, therefore, higher relative selectivity. Secondly, the nickel materials displayed an initial spike in hydrogen production in pulse 1, indicating a strong cracking reaction. There is also a high initial CO2 peak in pulse 1 due to the high availability of the oxygen source, which decreases in subsequent pulses due to coking. Based on the high CO2 and H2 peaks, the nickel materials display low selectivity. Overall, the conventional materials showed a higher amount of persistent hydrogen than their core-shell counterparts, as well as a higher amount of CO2. This can be explained by the function of the silica shell enveloping the metal oxide core of the core-shell samples: the porous shell allows selective diffusion and oxidation of H2 while preventing access of the carbon sources to the oxide core. The selective combustion of H2 pushes the reaction toward more product, and therefore a higher ethylene yield. With this information, it can be concluded that the silica shell improves the oxidizing capabilities of the nanoparticles overall. To construct the optimal material for ODH, in addition to a selective metal oxide like Fe2O3, a shell material more stable than SiO2 in the presence of water could be pursued. REFERENCES 1. T. Ren, M. Patel, and K. Blok, "Olefins from conventional and heavy feedstocks: Energy use in steam cracking and alternative processes," Energy, vol. 31, no. 4, pp. 425-451, 2006. 2. L. M. Neal, S. Yusuf, J. A. Sofranko, and F. Li, "Oxidative Dehydrogenation of Ethane: A Chemical Looping Approach," Energy Technology, vol. 4, no. 10, pp. 1200-1208, 2016. ACKNOWLEDGEMENTS Thank you to Yahui Yang, Dr. 
Goetz Veser, the Swanson School of Engineering, and the Office of the Provost for guidance and funding throughout this project.
REPLACING PLASTIC PACKAGING WITH BIODEGRADABLE CALCIUM ALGINATE Samantha P. Bunke, Dr. Susan Fullerton, Dr. Eric Beckman Nanoionics and Electronics Laboratory, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: spb52@pitt.edu NOTE: a patent disclosure has not been submitted for this project and you should therefore regard the contents to be confidential and for internal Pitt use only. Thank you! INTRODUCTION Roughly 8.3 billion metric tons of plastic have been manufactured since its invention in 1903 [1]. Less than a fifth of all plastic is currently recycled, and more than 18 billion pounds enter the ocean each year [2]. The most frequently used plastics are derived from petrochemicals and are not biodegradable, and the only methods of permanent disposal, combustion and pyrolysis, release highly toxic pollutants [1]. While most organic materials undergo biodegradation, in which bacteria break the material down into smaller, useful compounds, the only "decomposition" experienced by petrochemical-based plastics is fracture into microplastic beads [1]. Despite these drawbacks, the demand for plastic continues to increase. A replacement material is needed that has similar properties to petrochemical-based plastics but will not accumulate in the environment. Biopolymers, such as sodium alginate, can be modified to possess similar properties to synthetic plastics while maintaining their ability to degrade in nature. Sodium alginate is a natural polysaccharide derived from brown algae that can be crosslinked using calcium ions [3]. The calcium ions displace the sodium ions and electrostatically crosslink the polyalginate chains, forming a gel with stronger mechanical properties [3]. Unlike plastics, calcium alginate decomposes easily: when exposed to a solution with sodium ions (i.e., salt water), the crosslinks are disrupted and the solid dissolves. METHODS Three methods were used to crosslink the alginate.
Method one introduces a calcium chloride (CaCl2) solution to the alginate, which forms a hydrogel through crosslinking that can then be dried into a solid. This mechanism is illustrated in Figure 1.
In method two, the source of calcium ions is calcium carbonate (CaCO3), which is insoluble in water. CaCO3 particles were distributed throughout the sodium alginate as evenly as possible, and then glucono-δ-lactone (GDL) was added to slowly lower the pH, causing the calcium ions to be released into the solution [5]. This mechanism is shown in Figure 2.
Method three introduces calcium ions as a soluble calcium salt bound to a complexing agent, ethylenediaminetetraacetate (EDTA) [6]. The pH of the solution is then lowered by the slowly hydrolyzing GDL, which forces the calcium ions to be released at a controlled rate [6]. This mechanism is shown in Figure 3.
The homogeneity of the resulting hydrogels was evaluated by visual observation. The degradation speed of the hydrogels was evaluated by placing samples in deionized water and 3 wt% NaCl solutions. The samples' masses were recorded before and after the dissolution tests to
monitor the amount of water absorbed by the hydrogels. RESULTS Method one resulted in calcium alginate samples with concentric rings radiating outward from the point of CaCl2 solution deposition. Method two produced more uniform samples than method one, but with CaCO3 particles distributed throughout. Method three resulted in homogeneous hydrogels with uniform crosslinking throughout the entire sample. Figure 4 shows calcium alginate samples made from each method.
On average, samples from all three methods began swelling from water absorption within thirty minutes. Samples in NaCl solutions would eventually dissolve, while samples in deionized water would only swell. However, agitation was needed to fully break apart the alginate materials.
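The water-uptake measurement described above reduces to a simple mass ratio; a minimal sketch (the function name and percent convention are ours, since the abstract only describes the before/after weighing):

```python
def percent_swelling(initial_mass_g, swollen_mass_g):
    """Water absorbed by a hydrogel sample, expressed as a percent
    of its initial mass (illustrative helper, not the authors' code)."""
    return 100.0 * (swollen_mass_g - initial_mass_g) / initial_mass_g
```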
DISCUSSION The concentric rings found using method one are the result of an uncontrolled rate of release of calcium ions. As the calcium ions diffuse outward from the center and crosslink the alginate, their concentration steadily decreases. This forces the degree of crosslinking to decrease from the center deposition location to the sample edge, causing the mechanical integrity to decrease with increasing radius.
In method two, the release of calcium ions is more controlled since it relies on the slow hydrolysis of GDL. However, since the release of calcium ions is due to the equilibrium between CaCO3 and carbonic acid, there will always be CaCO3 particles remaining in the sample. The degree of crosslinking is also not perfectly uniform throughout the material since the CaCO3 particles tend to settle to the bottom before the calcium ions are released. This causes the degree of crosslinking to be higher in the areas surrounding the settled CaCO3.
In method three, the calcium ions are evenly distributed throughout the polyalginate as part of the water-soluble EDTA complex before crosslinking even begins. As the pH is slowly lowered by the hydrolyzing GDL, the calcium ions are released at a controlled rate. This allows the degree of crosslinking to be uniform throughout the entire sample, resulting in homogeneous materials. This method will be used going forward since it achieves the highest level of homogeneity.
For the degradation of calcium alginate, NaCl solution can be used to dissolve the material. Deionized water is not capable of dissolving calcium alginate since sodium ions are needed to displace the calcium ions and disrupt the crosslinking. With an abundance of sodium ions, the calcium alginate will eventually dissolve completely, but agitation accelerates the process. Environments such as oceans are ideal for breaking down this material due to their unlimited supply of sodium ions and natural convection.
REFERENCES [1] R. Geyer et al., "Production, use, and fate of all plastics ever made," Science Advances, 3, 2017. [2] Parker, L., "Fast facts about plastic pollution," National Geographic, 20 Dec. 2018. Accessed 12 Feb. 2019. [3] K. Lee et al., "Alginate: properties and biomedical applications," Progress in Polymer Science, 37(1), 106-126, 2012. [4] A. Waldman et al., Journal of Chemical Education, 75(11), 1430-1431, 1998. [5] K. Draget et al., "Homogeneous Alginate Gels: A Technical Approach," Carbohydrate Polymers, 14, 159-178, 1991. [6] Toft, K., "Interactions between Pectins and Alginates," Prog Food Nutr Sci, 6, 89-96, 1982.
ACKNOWLEDGEMENTS Funding was provided by the Swanson School of Engineering, the Office of the Provost, and the Ellen MacArthur Foundation.
A CONTROLLED RELEASE RETINOIC ACID DELIVERY SYSTEM TO ENHANCE CILIOGENESIS OF THE SINONASAL EPITHELIUM Adam R. Carcella, Andrea Schilling, Steven R. Little Little Lab, Department of Chemical Engineering University of Pittsburgh, PA, USA INTRODUCTION Chronic rhinosinusitis (CRS) is a condition of inflammation in the paranasal sinuses that occurs in 10% of the U.S. population [1]. Treatment of CRS is challenging due to the inherent isolation of the sinus cavities as well as the complex etiology of the disease. Often, infectious microbes and pathogens are present within the paranasal sinuses and can contribute to the increased inflammation associated with CRS. If this burden of foreign pathogens could be reduced, the infection and inflammation associated with CRS would likely improve [2].
It is often observed that CRS patients have compromised mucociliary clearance (MCC), which is the natural defense mechanism of the paranasal sinuses responsible for the removal of debris and foreign pathogens in the upper airways [2]. Cilia are cylindrical projections lining the upper respiratory epithelium that drive MCC [3]. Both cilia dysfunction and cilia loss are commonly seen in patients with CRS due to the repeated cycles of infection and inflammation in the sinuses [3]. Therefore, improving cilia function and promoting ciliogenesis in patients with CRS has the potential to improve MCC and reduce the burden of potential disease-inducing microbes.
Cilia regeneration of the paranasal sinus mucosa has been shown to improve with topical treatment of Vitamin A and its derivative retinoic acid (RA), both in an animal model [4] and in CRS patients [5]. However, the repetitive weekly dosing that was necessary for a clinical effect is time consuming and inconvenient for both the patient and physician. If a sustained release system were available, treatment could be better controlled and patient compliance would likely improve.
We propose that a controlled release delivery system comprised of biodegradable RA-loaded microspheres (MS) suspended in a thermoresponsive hydrogel would provide a local, sustained concentration of RA. Ultimately, local, sustained delivery of RA from the MS-hydrogel system has the potential to enhance ciliogenesis, thus improving MCC and aiding in the clearance of pathogens that contribute to the cycles of infection and inflammation associated with CRS. We hypothesize that the MS-hydrogel formulation can be fabricated to extend the topical release of active RA for at least 30 days.
METHODS RA-loaded MSs were prepared using the single emulsion-evaporation technique [6]. Poly(lactic-co-glycolic) acid (PLGA) and RA were dissolved in dichloromethane (oil phase). The oil phase was dispersed in a polyvinyl alcohol (PVA) solution using a homogenizer at a speed of 3000 rpm. After the dichloromethane evaporated, the MS were centrifuged and washed in deionized water. The MS were then resuspended in deionized water, flash frozen with liquid nitrogen, and lyophilized for 48 hours with an aluminum foil cover to reduce exposure to UV light, which can degrade RA. The following variables during MS fabrication were tested to extend the RA release profile: the molecular weight of the PLGA polymer, the lactide:glycolide ratio of the PLGA polymer, and the RA loading.
In vitro RA release assays were conducted by incubating a known amount of RA-loaded or blank MS in a set amount of release medium containing 2% sodium deoxycholate on a roto-shaker at 37 °C. At regular time intervals, the release medium was collected and diluted in methanol (3:1 v/v of MeOH/supernatant) to dissolve the hydrophobic RA. The amount of RA released was analyzed via UV-Vis spectrophotometry at 338 nm. The release profiles of multiple MS batches were compared to determine the effect of the aforementioned variables on extending the release of RA.
RESULTS Previous MSs fabricated with 0.4mg of RA per 200mg of RG 504H PLGA polymer achieved RA release for 14 days. In an effort to extend the RA release, we explored polymers with varying lactide:glycolide ratios and molecular weights. Both PLGA characteristics have been shown to alter the release profile [7]. Specifically, lactide-rich polymers are less hydrophilic and absorb less water, which decreases the degradation of the MSs compared to glycolide-rich polymers [7]. The four PLGA polymers tested were RG 756, RG 755, Lactel B6001-1 and RG 504H (Table 1).
Table 1. PLGA polymer characteristics

PLGA Polymer     Molecular Weight (kDa)   Lactide:Glycolide Ratio
RG 756           76-115                   75:25
RG 755           44-70                    75:25
Lactel B6001-1   50-75                    65:35
RG 504H          38-54                    50:50
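The cumulative-release percentages compared in this section follow from simple bookkeeping of the sampled medium; a minimal sketch (the full replacement of the release medium at each time point is our assumption, inferred from the collection step described in the methods):

```python
import numpy as np

def cumulative_release_percent(released_ug_per_interval, total_loaded_ug):
    """Cumulative RA release (%) from per-interval amounts measured by
    UV-Vis. Assumes the full release medium is collected and replaced at
    each time point, so per-interval masses simply accumulate."""
    released = np.asarray(released_ug_per_interval, dtype=float)
    return 100.0 * np.cumsum(released) / total_loaded_ug
```

For example, intervals releasing 10, 10, and 5 µg of a 100 µg load give a 10%, 20%, 25% cumulative curve.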
The RA release profiles of RG 755 and RG 756 PLGA polymers were identical. This was unexpected because RG 756 has twice the molecular weight of RG 755 and increased molecular weight has been directly linked to a slower drug release rate [7]. Instead, the lactide:glycolide ratio had the greatest impact on RA release. Specifically, increasing the lactide ratio resulted in a much slower RA release, which is consistent with literature.
The RA release profiles of RG 504H, Lactel B60011 and RG 755 exhibit different release rates (Figure 1). As expected, increasing the lactic acid ratio results in a slower drug release rate.
The amount of RA loaded in the PLGA MSs was increased to achieve the desired dosing over 30 days to be efficacious in promoting ciliary growth in vivo. Previous studies utilized a rabbit model and indicated RA efficacy with a total dose of 0.086mg over the course of 30 days [4]. A human equivalent dose would equate to 0.51mg, which was determined using a rabbit to human conversion factor [8]. With limitations to the amount of MSs suspended in hydrogel as well as the amount of hydrogel applied within the paranasal sinuses, higher RA loading in the PLGA MSs is desirable. However, too high of loading has resulted in an undesirable burst release of over half of the encapsulated drug (data not shown).
Figure 1. Cumulative release profiles of MSs fabricated with varied lactide:glycolide PLGA polymers.
Figure 2. Cumulative release profile of MSs fabricated with 2mg of RA per 200mg of RG 755 PLGA polymer (error bars too small to be seen).
DISCUSSION The goal of characterizing various MS formulations was to achieve a sustained release profile for 4 weeks. The RA loading was increased for the most promising PLGA polymer to achieve the necessary RA dosing based on previous studies in a rabbit model [4]. The RA release profile of RG 755 with an RA loading of 2mg per 200mg of polymer achieved sustained release for 30 days.
The most desired microsphere formulation that slowly released RA for 30 days was 2mg of RA per 200mg of RG 755 PLGA polymer (Figure 2). This formulation slowly released 50% of the encapsulated RA within 4 weeks, while minimizing the initial burst release. In future work, the RA release profile could be fine-tuned to achieve an increased release rate towards the end of the 30 days. Additionally, future work should explore an in vitro RA bioactivity assay and ex vivo ciliary studies to evaluate RA efficacy post-delivery.
REFERENCES [1] Hulse et al. Allergy Asthma Rep. 16, 1-8, 2016. [2] Stevens et al. J. Allergy Clin. Immunol. 136, 1442-1453, 2015. [3] Gudis et al. Am. J. Rhinol. Allergy. 26, 1-6, 2012. [4] Hwang et al. Laryngoscope. 116, 1080-5, 2006. [5] Fang et al. Am. J. Rhinol. Allergy. 29, 430-4, 2015. [6] Sanchez et al. International Journal of Pharmaceutics. 99, 263-273, 1993. [7] Makadia et al. Polymers (Basel). 3, 1377-1397, 2011. [8] Nair et al. Journal of Basic and Clinical Pharmacy. 7, 27-31, 2016.
ACKNOWLEDGMENTS Funding was provided by the Swanson School of Engineering and the Office of the Provost.
THREE-DIMENSIONAL NICKEL FOAM AND GRAPHENE ELECTRODE IN MICROBIAL FUEL CELL APPLICATION: STUDY OF BIOFILM COMPATIBILITY Claire P. Chouinard, Felipe Sanhueza Gómez, Natalia Padilla Gálvez, Hernán Valle Zapata, R.V. Mangalaraja, and Homero Urrutia Laboratorio Biopelículas y Microbiología Ambiental, Departamento de Microbiología Laboratorio de Cerámicos Avanzados y Nanotecnología, Departamento de Ingeniería de Materiales Universidad de Concepción, Concepción, Chile Email: cpc26@pitt.edu INTRODUCTION Industrial and domestic water treatment is an energy-intensive process with a global impact, and water treatment was identified as an Engineering Grand Challenge in 2008 [1]. Microbial fuel cells (MFCs), a type of bioelectrochemical system (BES), offer an alternative to traditional methods of wastewater treatment and have the potential to significantly reduce energy consumption. Despite prior work, MFCs still suffer from a wide variety of inefficiencies and are not yet practical at an industrial scale. The objective of this project is to develop more effective anode materials for implementation in an MFC system and to analyze the compatibility of these anode materials with bacterial communities. Zhao et al. identified a layer-by-layer synthesis method for three-dimensional nickel foam (NF) electrodes coated with graphene layers for use in oxygen evolution reactions; the electrical properties of the synthesized electrodes were comparable to those of state-of-the-art materials, including Ir/C and Ru/C electrodes, demonstrating a current density of 10 mA cm-2 [2]. In MFC systems, carbon materials, including three-dimensional carbon foams, have routinely been explored as potential anode materials, and Chen et al. found that reticulated carbon foam derived from a sponge-like natural product performed relatively well in an MFC application, with a current density of over 4.0 mA cm-2 [3].
However, the Zhao procedure offers an economical three-dimensional anode material with superior electrical properties, and a biocompatibility analysis is required due to the known biotoxicity of nickel, as explained by Hausinger et al. [4]. METHODS NF and graphene electrode samples of 1 cm by 1 cm were created by first submerging NF in acetone for 20 minutes, rinsing with deionized water,
submerging in 0.10 M HCl for 20 minutes, and rinsing again with deionized water. After cleaning the NF base, graphite oxide (GO) layers were cyclically developed on the surface of the NF. Electrodes were submerged in a 6.25 mg/mL solution of poly(ethyleneimine) (PEI) at a pH of 10 for 20 minutes. Following a rinse with deionized water, electrodes were then submerged in a graphite oxide (GO) suspension (GO-325) of 1.5 mg/mL at a pH > 10 for 20 minutes. GO was prepared using a modified version of Hummers' method. Electrodes were removed from the solution and air dried. These steps were repeated until scanning electron microscope (SEM) imaging revealed a uniform graphene layer, between five and seven times depending on the trial. The GO coating was reduced to graphene with functionalized nickel nanoparticles using a mix of 20 mL of 10 mM Ni(NO3)2·6H2O (58.2 mg/20 mL Milli-Q water) and 4 mL of L-ascorbic acid (LAA) at 120 mg/mL. Electrodes were left in the solution for 4 hours at 80 °C under gentle magnetic stirring. All chemicals used in this procedure were purchased from Sigma-Aldrich. Following UV radiation for the purpose of sterilization, two NF and graphene electrodes were introduced into each of six bacterial cultures: Tryptic Soy Broth (TSB) only, Streptomyces sp., Pectobacterium sp., wastewater, sludge, and solid waste. A TSB control without NF/graphene electrodes was included as well. All environmental samples were collected from the ESSBIO wastewater treatment plant facility in Concepción, Chile. Planktonic cell counts were collected at day 2 and at day 11 using an Olympus BX51 Epifluorescence Microscope at a magnification of 1000x, and biofilm images were collected at day 11 using an LSM780 NLO Zeiss Spectral Confocal Microscope at magnifications of 25x and 40x. Additionally, planktonic cell counts were collected at day 4 for
plastic and carbon cloth controls. All samples were stained using a LIVE/DEAD® BacLight™ Bacterial Viability Kit. DATA PROCESSING Images collected via confocal microscopy were analyzed using image processing software provided by the Centro de Microscopía Avanzada at the Universidad de Concepción. The percent volumes of live and dead cells relative to the total electrode volume were determined using manually set controls that were held constant across samples, except for Pectobacterium sp. RESULTS Bacterial growth was absent from the TSB controls, and filaments present in the Streptomyces sp. culture prevented data collection for all biofilm supports; these data points are not included. Planktonic cell concentrations for day 2 and day 11 on the NF/graphene electrodes and for day 4 on the plastic and carbon cloth supports are presented in Table 1.

Table 1: Planktonic cell concentrations for live growth (all values in cell/mL)

Culture              NF/Graphene Day 2   NF/Graphene Day 11   Plastic Day 4   Carbon Cloth Day 4
Pectobacterium sp.   3.51×10^9           3.45×10^9            —               —
Wastewater           1.90×10^9           3.42×10^9            5.28×10^9       5.20×10^9
Sludge               2.27×10^9           3.00×10^9            5.72×10^9       5.17×10^9
Solid waste          3.46×10^9           1.36×10^9            7.94×10^9       6.50×10^9
Confocal microscopy images are included in Figure 1 for both live and dead cells. Quantitative results for percent volume of live and dead cells are shown in Table 2.
Table 2: Calculated biofilm growth on NF/graphene electrodes by percent total volume

Culture              Live (% volume)   Dead (% volume)
Pectobacterium sp.   10.58             11.08
Wastewater           28.49             7.48
Sludge               31.62             13.70
Solid waste          15.70             5.56
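The percent-volume values in Table 2 amount to counting voxels above a manually set intensity cutoff in each confocal z-stack; a minimal sketch of that calculation (the array layout and thresholding are our assumptions about the image-processing software, which is not described in detail):

```python
import numpy as np

def percent_volume(stack, threshold):
    """Percent of the imaged electrode volume occupied by stained cells
    in a confocal z-stack of shape (z, y, x). `threshold` stands in for
    the manually set intensity cutoff held constant across samples."""
    return 100.0 * np.count_nonzero(stack > threshold) / stack.size
```

Applying this separately to the live-stain and dead-stain channels yields the two columns of Table 2.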
DISCUSSION The planktonic cell counts of live bacterial growth were not significantly different across bacterial samples in the NF and graphene electrodes; the differences between the day 2 and day 11 samples were negligible as well. However, the plastic and carbon cloth supports exhibited cell concentrations almost double that of the NF/graphene electrodes. The confocal microscopy analysis supports the likelihood of biotoxicity in the NF/graphene electrodes, and after two weeks of growth, the observed biofilms were less prevalent than expected. Biofilms colonized around 20% to 50% of the total sample volume, including both live and dead cells. As supported by literature, the environmental samples seemed to fare better than the pure strain Pectobacterium sp. due to interspecies cooperation and greater adaptability within a multispecies biofilm [5]. The biotoxicity of the NF/graphene electrodes could be due to the NF or graphene, as graphene has been shown to cause membrane and oxidative stress in bacterial cells [6]. It is recommended that the antibacterial effects of the synthesized materials be further explored with more extensive controls and larger sample sizes, with a focus on environmental bacterial cultures. REFERENCES 1. Perry et al. NAE Grand Challenges for Engineering. https://www.nae.edu/187212/NAEGrand-Challenges-for-Engineering 2. Zhao et al. J of Mater Chem A 5, 1201-1210, 2017. 3. Chen et al. J Mater Chem 22, 18609-18613, 2012. 4. Hausinger et al. Metallomics 3, 1153-1162, 2011. 5. Flemming et al. Nature Reviews Microbiology 14, 563-575, 2016. 6. Chen et al. J of Power Sources 290, 80-86, 2015. ACKNOWLEDGEMENTS This work was funded by the NSF REU Project #1757529 and National Commission for Scientific and Technological Research FONDEF IDEA No. ID15i10500, Concepción, Chile.
ATTEMPTING TO CHARACTERIZE HYDROXYL GROUP ROTATION ON GRAPHANOL Ruby DeMaio, Abhishek Bagusetty, and Dr. J. Karl Johnson Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: rid23@pitt.edu
INTRODUCTION Proton exchange membrane (PEM) fuel cells are currently unable to operate at temperatures higher than 80 °C due to the use of polyelectrolyte polymers for the membrane. To increase the rate of proton transport (PT), PEM fuel cells must operate at higher temperatures. Therefore, it is necessary to examine materials that can conduct protons anhydrously. Hydroxygraphane, or graphanol, is being investigated as a material for this purpose [1].
Previously, PT on graphanol was studied using density functional theory (DFT), where it exhibited the ability to conduct fast and robust proton transport [1]. To examine PT on a larger system of graphanol than allowable by DFT, a force field needs to be developed. While using the adaptive force matching (AFM) method of developing force fields described by [2], abnormal behavior was observed in the hydroxyl groups that make up the hydroxygraphane. Capturing this behavior accurately is important because PT occurs on the hydroxyl groups. It is therefore imperative to develop a method of characterizing the librational motion.
METHODS It was determined that the autocorrelation function (ACF) would need to be calculated from the AIMD simulation data at different temperatures. Due to the type of motion exhibited, it was determined that it would be best to compute the angular velocity ACF (AVACF) in order to quantify the motion:

⟨ω(0)·ω(t)⟩ = (1 / (Nt·Nb)) Σ_{k=1}^{Nt} Σ_{i=1}^{Nb} ω_i(k)·ω_i(k + t)

The equation for the AVACF taken over multiple time origins is shown above. Here Nt is the number of time origins and Nb is the number of bond pairs (i.e., the number of hydroxyl groups). While the AVACF is the preferred method of quantifying the motion of the hydroxyl groups, the angular autocorrelation function (AACF) and the velocity autocorrelation function (VACF), which is commonly used in MD analysis, were also examined for quantifying this motion. To compute the ACF, multiple Python codes were developed, differing in the types of trajectory files they can read and in the physical properties they compute.
RESULTS Properly computing the ensemble average proved to be the most challenging part of developing an appropriate Python code. Figure 1 shows the original results of the AVACF as a function of time at 400 K (red) and 800 K (green). These are taken over a single time origin, and instead of using the angular velocity for the ensemble average, we used cos(Θ), Θ being the angle needed to compute the angular velocity. As shown by the fact that neither of the functions starts at one, they are not properly normalized. Instead of experiencing decay, both functions fluctuate around a fixed value: at 400 K, the ACF fluctuates close to 0.96; at 800 K, around 0.935.
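The multiple-time-origin AVACF average can be written in a few lines of NumPy; the sketch below assumes the AIMD angular velocities have already been extracted into an array of shape (n_steps, n_hydroxyls), which is our assumed layout rather than the authors' actual file format:

```python
import numpy as np

def avacf(omega, max_lag):
    """Normalized angular velocity autocorrelation function, averaged
    over all available time origins k and all hydroxyl groups i:
    C(t) ~ sum_k sum_i w_i(k) * w_i(k + t)."""
    n_steps, n_bonds = omega.shape
    acf = np.empty(max_lag)
    for t in range(max_lag):
        n_origins = n_steps - t
        # average w_i(k) * w_i(k + t) over origins and hydroxyl groups
        acf[t] = np.sum(omega[:n_origins] * omega[t:t + n_origins]) / (n_origins * n_bonds)
    return acf / acf[0]  # a properly normalized ACF starts at 1
```

Normalizing by the zero-lag value guarantees the curve starts at one, which is exactly the property the single-origin, cos(Θ)-based attempts failed to reproduce.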
When looking at Figure 1, it was determined that this method of computing the ACF was incorrect due to possible issues with how the cos(Θ) value was calculated and errors in the summation over the length of time. When looking at what went wrong with Figure 2, it was determined that the main issue was an improper translation of the method described in the book, which presents a code in Fortran, into Python. Figure 1. Angular velocity autocorrelation function at 400 K (red) and 800 K (green).
There were many other attempts to properly calculate the ACF, whether the AVACF, the AACF, or the VACF. In the final attempts, book sources with codes and methods were consulted to determine the proper way of calculating the ACF. Figure 2 shows the results of attempting the method outlined in Computer Simulation of Liquids [3]. The graphs clearly produced unexpected results. Again, the function is not normalized. Also, while the function experiences decay, the decay is sharp and then levels out, as opposed to a more gradual decay.
Figure 2. Angular velocity autocorrelation function at 1000 K.
DISCUSSION/FUTURE WORK At this point, the proper method of computing the ACF has not been coded.
There are still unexplored methods that need to be looked into in the future. The goal is to properly compute the AVACF and determine if that is a good method for quantifying the librational motion. Once the librational motion is properly quantified, the process of developing a force field for graphanol will be resumed and this time the proper rotation of the hydroxyl groups will not only be tested visually, but also graphically by computing ACFs based on trajectory data from the classical simulations being used to develop the force field. A code that reads and extracts the necessary data from the classical simulation files has already been developed. Once an accurate force field has been developed, it will be validated and then used to examine proton transport. REFERENCES 1. Bagusetty et al. Phys Rev Lett 118. 186101, 2017. 2. Akin-Ojo et al. J Chem Phys 129. 064108, 2008. 3. M.P. Allen and D.J. Tildesley. Computer Simulations of Liquids. Oxford University Press, New York. ACKNOWLEDGEMENTS Thank you to Dr. Johnson, the Swanson School of Engineering, and the Office of the Provost for funding this project.
DEVELOPING AUTOMATED METHODS TO EXTRACT ATOMIC DYNAMICS FROM IN SITU HRTEM MOVIES
Michael Gresh-Sill1, Meng Li1, Judith C. Yang1 Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: mag344@pitt.edu
INTRODUCTION Transmission Electron Microscopy (TEM) is a powerful and versatile characterization tool widely used in the physical, chemical, and biological sciences to acquire direct structural and chemical information with nanometer to atomic resolution. Static characterization (e.g., before-and-after imaging) loses all information between the static images and is inadequate for studying dynamic processes of material structure evolution and chemical reactions. In situ TEM is a family of techniques that enables the observation of these dynamic systems in real time and in the relevant reaction/functional environments (e.g., in gas or liquid, while heated) (1). The critical insights into multi-scale dynamics provided by in situ TEM are often difficult or impossible to achieve with other techniques, which increases demand for this approach (2). Greater spatial and temporal resolution for in situ TEM experiments leads to greater quantities of data (gigabytes to terabytes per minute), typically in the form of digital video. Researchers now acquire information faster than they can fully analyze it. If meaningfully analyzed, this information-rich data can be a great boon to understanding complex material dynamics. However, existing software solutions either focus on static characterization and are insufficient to adequately analyze massive in situ datasets, or are only available for the highest-end (up to ~$1 million) cameras (3,4). Hence, the aim of this work is to explore potential avenues for automating in situ High-Resolution TEM (HRTEM) movie processing with more accessible tools. METHODS In this work, examples of in situ TEM movies of copper oxide (Cu2O) growth during thermal oxidation of copper (Cu) were used to develop codes for automated processing.
Using the open-source programming language Python, we developed an automated in situ HRTEM data processing module that contains the following methods: 1. Data selection: removal of unusable frames such as blurry frames, blank frames, etc. 2. Movie alignment: frame-by-frame alignment of HRTEM movies to keep the region of interest (ROI) stable. 3. Data processing: filtration to denoise each frame, extraction of quantitative data from the processed movie, annotation (with timestamp and scale bar), acceleration, and output to publishable formats.
DATA PROCESSING Data selection The movies were loaded into the module as three-dimensional arrays, with each frame represented as a two-dimensional (x, y) array of gray values and the third dimension being time. Frames were considered blank and removed if a random line profile of gray values summed to less than an arbitrary threshold. The variance of the Laplacian was used to determine each frame's blurriness, as described by Pech-Pacheco et al. (5). A frame was considered blurry and removed if the variance of its Laplacian was lower than a threshold determined by the median of the corresponding values of the neighboring frames. Once all the bad frames were removed, the movie was ready for alignment of the time-sequence images (i.e., removal of jitter and drift).
Movie alignment Scale-Invariant Feature Transform (SIFT) plugins were used to accomplish frame-by-frame alignment. SIFT is an algorithm that matches and aligns frames utilizing frame-by-frame propagation, so each aligned frame is used as a template for the next frame in the sequence. Alignment by propagation does not rely on unchanging features, which differentiates SIFT from other methods (6). In situ movies rarely contain unchanging features due to the dynamic nature of the environment, making SIFT a valuable tool. Proper movie alignment is critical for accurate and automated analysis.
Data processing Automated data processing methods, such as Bragg filtering or atom position determination, are being explored.
RESULTS Data selection Preprocessing codes now allow for automated cropping, detection of blank and blurry frames, frame reorientation, and acceleration.
Movie alignment SIFT and template matching techniques work reliably for a wide variety of in situ TEM movies, as Figure 1 demonstrates below with a kymograph of a single layer. However, a few movies exhibit drift that continues to defy correction by our current methods and still require some degree of manual correction.
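As a minimal illustration of translational frame registration (a simple stand-in for the SIFT and template-matching pipelines described above; the function below is ours, not part of the module), phase correlation recovers the integer shift between two frames related by a pure circular translation:

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) translation taking `ref` onto `frame`
    via phase correlation. The cross-power spectrum is normalized to keep
    only phase information, so the inverse FFT is a sharp peak at the shift.
    """
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power)
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shift = []
    for p, n in zip(peak, corr.shape):
        s = (-p) % n
        if s > n // 2:  # wrap into the signed range [-n/2, n/2)
            s -= n
        shift.append(int(s))
    return tuple(shift)
```

Real HRTEM drift also includes rotation and non-rigid distortion, which is why feature-based methods such as SIFT are used in practice; this sketch only handles the translational component.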
Figure 1: Kymograph taken of the red-outlined region from an aligned HRTEM movie. The vertical lines show the frames are well-aligned through time.
Data processing For the aligned examples, codes for automated analysis were developed, such as Bragg filtering, which emphasizes lattice structures. Atomap, a Python module, provides functions for identifying atom positions; however, its current implementation has difficulty with borders between islands and vacuum (open space) (7).
DISCUSSION The developed methods for processing in situ TEM movies have proven successful in many applications, such as Cu2O thermal oxidation, reduction, and CuO nanowire growth. The preprocessing methods quickly provide a functional format for the data to be aligned. Alignment via both template matching and SIFT techniques is able to correct the thermal drifting of subjects, an unavoidable issue for experiments at elevated temperatures. The ability to reliably process these movies lends itself to more efficient analysis. Image analysis that does not require identifying a specific section of an image can be more easily adapted from single-image techniques to movie analysis. On the other hand, analysis like that possible with Atomap requires that the computer somehow "see" what is in each image, frequently a difficult task with this kind of data (7). A trained expert can often easily distinguish between the vacuum and the subject, for example, but the volume of data makes this approach impractical. Overall, the new methods developed constitute a further step towards automated analysis. Automation for extracting atomic-scale information such as atom positions and growth rates is being investigated. REFERENCES 1. Zhang, X. F. Springer Berlin Heidelberg, 2014. 2. Taheri, M. L. et al. Ultramicroscopy 170, 86-95, 2016. 3. Hussaini, Z. et al. Ultramicroscopy 186, 139-145, 2018. 4. Somnath, S. et al. Adv. Struct. Chem. Imaging 4, 2018. 5. Pech-Pacheco, J. L. et al. Proceedings 15th Int. Conf. on Pattern Recog. 3, 314-317, 2000. 6. Thévenaz, P. et al. IEEE Transactions on Image Processing 7, 27-41, 1998. 7. Nord et al. Adv. Struct. and Chem. Imaging 3, 9, 2017. ACKNOWLEDGEMENTS Microscopy was performed at the Nanoscale Fabrication and Characterization Facility (NFCF) in the Petersen Institute of Nanoscience and Engineering (PINSE). Funding for this project was provided by Dr. Judith Yang (NSF-DMR grants No. 1410055 and 1508417), the Swanson School of Engineering, and the Office of the Provost.
CHARACTERIZATION OF REDOX FLOW BATTERY KINETICS USING A FLOW CHANNEL ANALYTICAL PLATFORM Thomas J. Henry and James R. McKone McKone Group Laboratory, Department of Chemical and Petroleum Engineering University of Pittsburgh, PA, USA Email: tjh89@pitt.edu, Web: https://mckonelab.pitt.edu/ INTRODUCTION Redox flow batteries (RFBs) are a promising industrial technology capable of providing large-scale electricity storage [1]. Modern energy production is trending toward enhanced sustainability and reduced carbon emissions. Renewable power sources such as hydro, wind, and solar are clean alternatives to fossil fuels; however, they produce inconsistent supplies of energy that vary with external conditions. To combat this issue, RFB technology can be implemented to store vast amounts of energy that can be released as needed, improving the operational efficiency of the power grid even as more renewables come online. RFBs operate through flow-based redox chemistry by manipulating the oxidation states of the active electrolyte species [2]. While fundamentally similar to a solid-state rechargeable battery, the RFB holds particular potential for its ability to be scaled up at very low cost using large electrolyte storage tanks. However, a concern with this technology is the inconsistency of prior reports on the reaction kinetics (and hence energy efficiencies) of the redox reactions within flow batteries, which makes further development difficult. Previous work in the McKone Lab has shown that as the concentration of electrolyte is increased, a clear kinetic slowdown is observed [3]. This trend is relevant to real RFBs, because very high concentrations are necessary for industrial systems. Our goal is to establish an electroanalytical platform that consistently returns valid kinetic characteristics of traditional electrolyte species used in industrial RFBs.
To accomplish this, we are using a 3D-printed, lab-scale flow cell that houses a microscopic Pt electrode to directly analyze the reaction kinetics. As an initial test of the system, we are using Fe-based RFB electrolyte over a range of concentrations to observe the dependence of kinetics on concentration.
METHODS Stock solutions of 0.5 M and 2 M HCl were used as supporting electrolyte. The necessary amounts of FeCl2⋅4H2O and FeCl3⋅6H2O were weighed and added to the 0.5 M HCl stock solution to produce 50 mL batches of electrolyte with total Fe concentrations of 10 mM, 20 mM, 40 mM, and 100 mM. Additionally, 100 mM and 1000 mM Fe electrolytes were prepared with the 2 M HCl stock solution. A platinum microelectrode (10 µm diameter) was polished for 1 minute each on alumina powder of sizes 1, 0.3, and 0.05 µm, slurried with deionized (DI) water on polishing pads, using circular motions. This was followed by sonicating for 30 seconds in DI water between each polishing step to remove any alumina residue. The electrode was then submerged in DI water for 15 minutes prior to experimentation. Graphite was used as the counter electrode and Ag/AgCl as the reference electrode. To set up the flow channel, O-rings were attached to the tip of each electrode and threaded plugs were slid onto the electrodes. These plugs were then wrapped with Teflon tape to provide a tighter seal before screwing into the 3D-printed flow channel, where the O-rings provided an internal seal. Two lengths of MasterFlex pump tubing (1.6 mm ID) were connected to the inflow and outflow of the channel using plastic fittings, as depicted in Figure 1.
Figure 1: Flow channel experimental setup with components installed.
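As a sanity check on the electrolyte batches described in METHODS, the required salt masses follow directly from the molar masses. The helper below and the even Fe2+/Fe3+ split in the example are illustrative assumptions, not part of the original protocol (approximate molar masses: FeCl2·4H2O ≈ 198.8 g/mol, FeCl3·6H2O ≈ 270.3 g/mol):

```python
def salt_mass_g(molar_mass_g_mol, conc_mM, volume_mL):
    """Grams of salt required for `volume_mL` of solution at `conc_mM`."""
    moles = (conc_mM / 1000.0) * (volume_mL / 1000.0)  # mol of salt
    return molar_mass_g_mol * moles

# Illustration: 50 mL of 100 mM total Fe, assuming an even Fe2+/Fe3+ split
m_fe2 = salt_mass_g(198.81, 50.0, 50.0)  # g of FeCl2.4H2O for 50 mM Fe2+
m_fe3 = salt_mass_g(270.30, 50.0, 50.0)  # g of FeCl3.6H2O for 50 mM Fe3+
```

For the 1000 mM batch the same helper scales linearly, which is convenient when stepping the concentration over two orders of magnitude.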
A peristaltic pump (MasterFlex) was used to flow the electrolyte through the cell, and a 100 mL beaker was used as the electrolyte reservoir. The flow cell was
cleaned by flowing DI water through the channel for 15 minutes immediately prior to experimentation. To perform the experiments, a Gamry Interface 1000E potentiostat was used. After DI water cleaning, a separate 100 mL beaker was filled with Fe electrolyte. The water in the system was drained, the two open ends of tubing were transferred to the electrolyte reservoir, and the pump was set to produce a 10 mL/min flow rate. Cyclic voltammetry experiments were performed to collect kinetic data. A potential range of 0 V to 0.8 V vs. Ag/AgCl was used at a scan rate of 30 mV/s. Ten cycles were recorded for each trial; however, only the first cycle was used for kinetic analysis. DATA PROCESSING Raw data obtained from the potentiostat was converted to current density (mA/cm2) by dividing by the active area of the electrode (7.85E-07 cm2). A mathematical model was used to fit current density as a function of overpotential. J0, the exchange current density, is a parameter that describes kinetic activity around the equilibrium potential. The Butler-Volmer equation (Eq. 1) and a least-squares analysis were used to determine J0 and the electron transfer coefficients (αa/c). Kinetic behavior can also be expressed by k0, the heterogeneous electron transfer rate constant, found using the relation in Eq. 2:

j = j0 · [exp(αa·z·F·η / (R·T)) − exp(−αc·z·F·η / (R·T))]   (1)

j0 = n·F·C·k0   (2)
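A minimal numpy sketch of Eq. 1, together with a micropolarization estimate of J0 from the slope near the equilibrium potential and the Eq. 2 conversion to k0. The parameter values below are illustrative; the actual analysis used a full least-squares fit:

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15  # C/mol, J/(mol K), K

def butler_volmer(eta, j0, alpha_a, alpha_c, z=1):
    """Eq. 1: current density as a function of overpotential eta (V)."""
    f = z * F / (R * T)
    return j0 * (np.exp(alpha_a * f * eta) - np.exp(-alpha_c * f * eta))

def j0_from_slope(eta, j, z=1, alpha_a=0.5, alpha_c=0.5):
    """Near equilibrium, j ~ j0*(alpha_a + alpha_c)*z*F*eta/(R*T), so J0
    follows from the slope of j vs. eta at small overpotentials."""
    slope = np.polyfit(eta, j, 1)[0]
    return slope * R * T / ((alpha_a + alpha_c) * z * F)

def k0_from_j0(j0_A_cm2, n, conc_mol_cm3):
    """Eq. 2 rearranged: k0 = j0 / (n F C), giving cm/s for A/cm2 inputs."""
    return j0_A_cm2 / (n * F * conc_mol_cm3)
```

With n = 1, Eq. 2 applied to 5.55 mA/cm2 at 10 mM (1.0E-5 mol/cm3) gives roughly 5.8E-3 cm/s, consistent with the first row of Table 1.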
RESULTS Figure 2 shows electrolyte performance via concentration-normalized current density as a function of electric potential vs. Ag/AgCl. The raw data exhibited high levels of noise produced by the pulse-action of the peristaltic pump; other pumping methods need to be considered and examined. The left panel represents a smoothed plot of the data with outliers removed and moving averages applied.
Figure 2: Raw current density (right) and noise-reduced current density (left). Plots were normalized by dividing current density values by concentration.
In Figure 2, the slope of the curves at the equilibrium potential (zero current density) provides qualitative insight, as a steeper slope denotes faster kinetics: less overpotential is required to drive the reaction forward. Table 1 compiles the quantitative results calculated using Eq. 1 and 2. The values are expressed as averages over three trials per electrolyte concentration.
Table 1: Average J0 & k0 Kinetic Values (n=3)
[Fe] (mM)  [HCl] (M)  J0 (mA/cm2)    k0 (cm/s)
10         0.5        5.55 ± 0.591   5.75E-03
20         0.5        10.7 ± 0.852   5.54E-03
40         0.5        23.8 ± 0.884   6.17E-03
100        0.5        41.9 ± 1.76    4.34E-03
100        2          22.8 ± 1.33    2.36E-03
1000       2          255 ± 2.79     2.65E-03
DISCUSSION The kinetic results broadly agree with prior work indicating decreased rates as the concentration of electrolyte is increased [3]. Figure 2 captures this trend, as shallower slopes and lower normalized current density maxima are seen as concentration increases. The concentration of iron was raised 100-fold in these experiments, and a resulting 45.9-fold increase in J0 values was observed. This platform outperformed rotating disk electrodes (RDE), a common method of kinetic characterization, for which one literature report saw only a 15-fold increase after a 100-fold concentration boost [3]. This level of improvement is encouraging for continuing to develop and implement a flow channel analytical platform to characterize other commonly used RFB electrolytes. REFERENCES 1. Skyllas-Kazacos et al. J Electrochem Soc., 158(8), 1945-7111, 2011. 2. Weber et al. J Appl Electrochem 41, 1572-8838, 2011. 3. Sawant and McKone. ACS Appl. Energy Mater., 123(1), 4743-4753, 2018. ACKNOWLEDGEMENTS Project funding and support provided by Dr. James McKone, McKone Group Lab, the Swanson School of Engineering, and the Office of the Provost.
THE EFFECT OF THE DIFFUSION OF COMPLEX MOLECULES THROUGH MOFs FOR THE CAPTURE AND DESTRUCTION OF TOXIC CHEMICALS Allie Schmidt Johnson Group, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: ams614@pitt.edu, Web: http://puccini.che.pitt.edu/ INTRODUCTION The release of toxic chemicals into the environment poses substantial health concerns to humans. For this reason, metal-organic frameworks (MOFs) are under active investigation for the capture and destruction of these chemicals. MOFs are made of inorganic and organic units strongly bonded together [1]. They are very promising materials because they are highly tailorable, have robust crystalline pores, and show good thermal and chemical stability [1]. The application of interest to this study is the use of functionalized MOFs for the capture and destruction of chemical warfare agents (CWAs). The performance of MOFs for CWA mitigation may be subject to transport limitations, i.e., limited by the diffusion of the CWA into the pores of the MOF. The goal of this work is to develop a formalism for computing the diffusivity of a fluid through the UiO-66 MOF. The initial calculations use acetone as a model for a polar agent molecule. The self-diffusion coefficients for acetone in UiO-66 at each temperature are presented and compared with the bulk diffusivity in liquid acetone. METHODS Simulations were conducted using the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) simulation package [2]. Diffusion coefficients were computed from analysis of mean square displacement (MSD) data. Simulations of acetone in UiO-66 were performed at 425 and 325 K. The systems were first equilibrated in the NVT (constant number, volume, and temperature) ensemble for 10^6 time steps, with a time step of 0.5 femtoseconds. MSD data were collected from an NVE (constant number, volume, and energy) ensemble run of 5×10^7 time steps.
A total of 100 independent simulations were performed for statistical accuracy.
In order to determine the accuracy of the diffusion of acetone within UiO-66, the self-diffusion of bulk liquid acetone was simulated at 298 K using the same NVT and NVE parameters as the acetone-in-UiO-66 simulations, at a density of 0.775 g/cm3. For accuracy, 10 independent runs were conducted. DATA PROCESSING The simulations were analyzed in several ways for both the acetone in UiO-66 and bulk liquid acetone. The mean-squared displacements (MSD) were calculated in the x, y, and z directions for each trial, plotted, and then averaged using multiple time origins.
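The multiple-time-origin MSD calculation and the Einstein-relation slope can be sketched in generic numpy. This is illustrative, not the actual analysis script:

```python
import numpy as np

def msd_multiple_origins(pos, max_lag):
    """MSD(lag) averaged over particles and all valid time origins.

    pos: unwrapped particle positions, shape (T, N, 3).
    Returns an array of length max_lag for lags 1..max_lag.
    """
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = pos[lag:] - pos[:-lag]                # (T-lag, N, 3)
        msd[lag - 1] = (disp ** 2).sum(axis=-1).mean()
    return msd

def diffusion_coefficient(dt, msd):
    """Einstein relation in three dimensions: MSD(t) = 6 D t."""
    lags = dt * np.arange(1, len(msd) + 1)
    slope = np.polyfit(lags, msd, 1)[0]
    return slope / 6.0
```

For a 3-D random walk with unit-variance steps per component, MSD(lag) = 3·lag, so the recovered D is 0.5 in step units, which makes the sketch easy to validate.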
Figure 1: Acetone in UiO-66 Average MSD vs. Time at 325 K
Figure 2: Acetone in UiO-66 Average MSD vs. Time at 425 K
Figure 3: Bulk Liquid Acetone Average MSD vs. Time at 298 K
The diffusion coefficients for each trial, as well as for the average of all runs, were calculated by taking the slope of the plotted multiple-time-origin data; the results are shown in Table 1. These values were compared to the diffusion coefficient equation presented by F. A. L. Dullien and H. Ertl:
D = D0·exp(−E/(R·T))   (Equation 1)
where D0 is the prefactor, E is the activation energy, and R is the universal gas constant [3]. RESULTS Using the method described above, the diffusion coefficients of bulk liquid acetone and of acetone within UiO-66 were calculated from the average of all independent runs. The standard deviation was calculated from the diffusion coefficients of the individual trials. These values are reported in Table 1.
Table 1: Comparison of Average Diffusion Coefficients in the Liquid and in UiO-66
Temp (K)                   Simulated D (m2/s)  Std. Dev. (m2/s)  D from Equation 1 (m2/s)
298 (bulk liquid acetone)  4.28×10-9           3.11×10-10        4.64×10-9
325 (in UiO-66)            5.44×10-11          1.07×10-10        —
425 (in UiO-66)            9.05×10-11          1.14×10-10        —
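Equation 1 can be evaluated directly; note that the prefactor and activation energy in the sketch below are placeholders, not the fitted Dullien-Ertl parameters for acetone:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_diffusivity(D0, E, T):
    """Equation 1: D = D0 * exp(-E / (R T)).

    D0: prefactor (same units as D), E: activation energy (J/mol),
    T: temperature (K).
    """
    return D0 * math.exp(-E / (R * T))
```

For any positive activation energy the predicted diffusivity increases with temperature, the qualitative behavior expected between the 325 K and 425 K runs.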
For bulk liquid acetone, the value determined by the simulation and the value determined by Equation 1 are similar, which indicates the simulation is accurate. This equation is not used for the trials within the MOF, as it is derived for bulk liquid acetone only. The behavior of the diffusion coefficients within the MOF is as expected, since the values are smaller than in bulk liquid acetone. However, there is a great deal of variance among the individual runs, creating a large standard deviation. DISCUSSION The graphs, calculations, and analysis of the simulations have provided several main insights. The first, shown by the graphs, is the randomness of the diffusion of acetone within UiO-66: the trend of one direction (x, y, or z) dominating while the other two lag is present in all trials, even within averages. The second insight concerns the bulk liquid acetone. The results above show only a small difference between the simulated diffusion coefficient and the one calculated from Equation 1, but only once the box size is reduced to the exact size needed for 250 molecules. This shows that the diffusion coefficient is heavily dependent on density. To continue investigating these topics, it is recommended to add more runs to confirm these observations. REFERENCES 1. W. Guo et al. Mechanism and Kinetics for Reaction of the Chemical Simulant DMMP(g) with Zirconium(IV) MOFs: An Ultrahigh-Vacuum and DFT Study, 2017. 2. S. Plimpton. Fast Parallel Algorithms for Short-Range Molecular Dynamics. J Comp Phys 117, 1-19, 1995. 3. H. Ertl and F. A. L. Dullien. Self-Diffusion and Viscosity of Some Liquids as a Function of Temperature, 1973. ACKNOWLEDGEMENTS Funding provided by the Swanson School of Engineering and the Office of the Provost. Thanks to Dr. Karl Johnson and Jon Ruffley.
ADVENTITIAL EXTRACELLULAR MATRIX FROM ANEURYSMAL AORTA EXHIBITS LESS PERICYTE CONTRACTILITY Kaitlyn Wintruba1, Bryant Fisher2, Jennifer C. Hill2, Tara D. Richards2, Marie Billaud2-4, Amadeus Stern4, Thomas G. Gleason2-4, Julie A. Phillippi2-4 1 Department of Chemical Engineering, 2Department of Cardiothoracic Surgery, 3Department of Bioengineering, 4McGowan Institute for Regenerative Medicine University of Pittsburgh, Pittsburgh, PA, USA INTRODUCTION Ascending thoracic aortic aneurysm (TAA) is a life-threatening condition lacking adequate diagnostics and risk adjudication for aortic dissection or rupture. Without surgical intervention, the mortality rate of aortic dissection increases by approximately 1% per hour [1]. Interactions between the cellular and extracellular matrix (ECM) components of the intimal, medial, and adventitial layers of the aorta mediate blood flow throughout the body. Currently, the complex cellular and molecular mechanisms that incite TAA and drive disease progression remain incompletely understood. Prior work in the Thoracic Aortic Disease Research Laboratory demonstrated a role for the adventitial microenvironment in the pathogenesis of TAA. Growth factors within adventitial ECM, such as FGF2, PDGF, and VEGF-A, that can influence vasa vasorum-associated cells were found to be downregulated in human aneurysmal aortic specimens, and porcine-derived adventitial (pAdv) ECM exhibited FGF2-mediated angiogenic activity in vitro and in vivo [2]. The vasa vasorum exhibited microvascular remodeling and lower density, suggestive of reduced neovascularization, in aortic specimens from TAA patients [3]. Furthermore, several populations of vasa vasorum-associated progenitor cells were shown to reside within the adventitia using analytical flow cytometry and immunohistochemistry to identify CD146+/CD34±/CD31- pericytes [4].
Pericytes serve as perivascular progenitor cells within capillaries and microvessels and, in ex vivo culture, exhibited unique spheroid formation and sprouting on Matrigel substrates [4]. In the present study, we hypothesized that adventitial ECM from normal aorta increases pericyte contractility through a growth factor-mediated mechanism that is deficient in aneurysm-derived aortic adventitia. METHODS
Human ascending aortic adventitial tissue specimens were collected from patients undergoing ascending aortic and/or aortic valve replacement, or heart transplantation, with Institutional Review Board approval and using an informed consent process. Pericytes were isolated from the adventitia of non-aneurysmal human aorta, culture-expanded as previously described [4], and then immortalized using a lentiviral vector to deliver HPV-E6/E7 to avoid senescence and phenotypic changes at high passage. A pericyte marker profile (CD146+/CD31-) was routinely enriched with magnetic bead separation. Adventitial ECMs were prepared from decellularized and lyophilized adventitial specimens from human and porcine aorta. These ECMs were previously shown to contain minimal DNA content while retaining numerous bioactive growth factors [2]. Pericytes were cultured within 2 mg/mL bovine Type I collagen gels in the presence or absence of normal or aneurysmal human adventitial (hAdv) ECMs (250 µg/mL) added within the gel. Parallel experiments cultured pericyte-embedded collagen gels in the presence or absence of pAdv ECM (250 µg/mL) added within the gel, or with the TGFβ-1R inhibitor SB431542 (100 nM) added to the culture medium, or DMSO as the vehicle control. Optical absorbance readings (405 nm) over 3 h of dry heat incubation (37˚C) were obtained to calculate gelation kinetics of collagen blended with hAdv and pAdv ECMs. Cell contraction was quantified by measuring the gel area initially and after 48 hours of culture. Statistical analysis was carried out using one-way ANOVA with Tukey's post-hoc test. RESULTS All ECM treatments increased pericyte contractility as evidenced by decreased gel area when compared with pericyte-embedded collagen gels lacking ECM treatment. Addition of normal hAdv ECM doubled the degree of pericyte contractility when compared with gels lacking any ECM treatment and with aneurysm-derived hAdv ECM [Figure 1A]. Aneurysm-derived
hAdv ECM failed to induce contractility above baseline (-ECM). Inhibition of TGFβ-1R decreased pAdv ECM-induced contractility when compared with cells treated with vehicle (DMSO) and pAdv ECM [Figure 1B]. Addition of hAdv ECM to Type I collagen accelerated gelation at 37˚C as evidenced by a higher optical density [Figure 3] and decreased time to 50% gelation [Table 1]. A. Pericyte Contractility of Human Adv ECMs
B. pAdv ECM-induced Pericyte Contractility ± TGFβ-1R Inhibitor
Figure 1: A. Contractility of collagen gels without pericytes, with pericytes, and with pericytes treated with 250µg/mL hAdv ECMs, respectively. B. Contractility of 250µg/mL pAdv ECM treated pericyte-embedded collagen gels with TGFβ1 inhibitor (SB431542 100nM) or vehicle (DMSO) added to culture medium and respective acellular and -ECM controls. Bars represent the mean contractility of four assay replicates ± standard error of the mean. * indicates p<0.001. All figures are representative data from one of two experiments utilizing pericyte cell lines from different patients.
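The contraction readout described in METHODS (gel area at the start of culture vs. after 48 hours) reduces to a one-line metric; the helper below is illustrative, not the lab's analysis code:

```python
def percent_contraction(area_initial, area_final):
    """Percent reduction in gel area between the start and end of culture.

    A larger value indicates stronger pericyte-mediated gel compaction.
    """
    return 100.0 * (area_initial - area_final) / area_initial
```

Under this metric, "doubled contractility" for normal hAdv ECM corresponds to twice the percent area reduction of the untreated pericyte-embedded gels.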
Figure 3: Lines depict normalized absorbance readings of pericyte-embedded collagen (2 mg/mL) blended with normal and aneurysmal hAdv and pAdv ECMs (250 µg/mL) over 100 minutes (panel title: Collagen Gelation ± Adv ECMs; conditions: No Cell, Cell, Normal hAdv, Aneurysmal hAdv, pAdv).
Table 1: Gelation rate (optical density divided by time) and time required for 50% gelation (T1/2) of pericyte-embedded collagen blended with Adv ECMs. Data represent the mean of four assay replicates ± standard deviation. * indicates p<0.05 compared to acellular collagen gel.
Gelation Kinetics: Collagen Blended with Adv ECMs
                 Rate (OD/min)     T1/2 (min)
No Cell          0.0659 ± 0.0041   54.76 ± 1.11
Cell             0.0616 ± 0.0024   50.84 ± 1.37
Normal hAdv      0.0656 ± 0.0035   47.11 ± 2.16
Aneurysmal hAdv  0.0675 ± 0.0039   *41.10 ± 0.78
pAdv             0.0563 ± 0.0029   63.14 ± 1.20
DISCUSSION
Less pericyte contractility with aneurysm-derived hAdv ECM compared with normal hAdv ECM treatment demonstrates that growth factor deficiencies adversely affect pericyte function. Porcine Adv ECM-induced pericyte contractility and its SB431542-mediated inhibition suggest that Adv ECM influences pericyte function via a TGFβ1-mediated mechanism. ECM hydrogels represent a candidate biomaterial option for non-invasive treatment in the setting of TAA due to their potential to replenish growth factors and restore the adventitial microenvironment. Pericytes make up a subpopulation of cells isolated from the adventitia that localize to the vasa vasorum and serve as progenitor support cells. The therapeutic potential of an ECM hydrogel likely lies in its ability to promote pericyte function.
Knowledge of the effects of adventitial ECM on vasa vasorum-associated cells might assist in developing a novel therapeutic intervention for patients at risk for TAA or aortic dissection. This work supports the use of porcine-derived ECM hydrogels to improve the function of vasa vasorum-associated cells as a potential therapeutic biomaterial for microvascular regeneration in human aortic disease.
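The T1/2 values reported in Table 1 can be extracted from a normalized absorbance trace as the first crossing of 50% of the plateau value. A minimal numpy sketch, assuming the trace has plateaued by its final sample (this is a generic implementation, not the lab's analysis code):

```python
import numpy as np

def t_half(times, absorbance):
    """Time of first crossing of 50% of the plateau absorbance,
    with linear interpolation between the bracketing samples."""
    t = np.asarray(times, dtype=float)
    a = np.asarray(absorbance, dtype=float)
    target = 0.5 * a[-1]                 # plateau assumed at the last sample
    i = int(np.argmax(a >= target))      # first index at or above the target
    if i == 0:
        return t[0]
    frac = (target - a[i - 1]) / (a[i] - a[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])
```

Applied to each curve in Figure 3, this reproduces the qualitative ordering in Table 1: the aneurysmal hAdv blend crosses 50% earliest, and pAdv latest.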
REFERENCES 1. Masuda et al. Prognosis of patients with medically treated aortic dissections. Circulation. 1991;84:117113. 2. Fercana et al. Perivascular extracellular matrix hydrogels mimic native matrix microarchitecture and promote angiogenesis via basic fibroblast growth factor. Biomaterials. 2017;123:142–54. 3. Billaud et al. Medial Hypoxia and Adventitial Vasa Vasorum Remodeling in Human Ascending Aortic Aneurysm. Front. Cardiovasc. Med. 2018;5:124. 4. Billaud et al. Classification and functional characterization of vasa vasorum-associated perivascular progenitor cells in human aorta. Stem Cell Reports. 2017;9(1):292-303. ACKNOWLEDGEMENTS This study was supported by the National Institutes of Health under award #HL131632 (JAP), #HL127214 (JAP) and #HL109132 (TGG), and the UPMC Health System Competitive Medicine Research Fund (JAP).
Wetting Transparency of Monolayer Graphene on 4 wt% Agarose Gel Yingqi Yi Email: yiy54@pitt.edu INTRODUCTION Graphene, a one-atom-thick layer of carbon atoms arranged in a hexagonal lattice, has been dubbed a "supermaterial". It is thin, light, highly conductive, incredibly strong, and nearly impenetrable to most gases and liquids, and it is considered to hold potential for many important applications; scientists around the world are working to unearth more of graphene's properties [1]. The aim of this project is to understand a surface property of graphene, namely the wetting transparency of monolayer graphene suspended on 4 wt% agarose gel (hydrogel) in water. METHODS The project started with preparing graphene samples. Monolayer graphene was first grown via chemical vapor deposition (CVD) on copper, and the rectangular graphene/copper sample was cut into smaller square pieces, with a thin layer of 20 wt% polymethyl methacrylate (PMMA) applied to each of the four edges to prevent the graphene from wrinkling. To dry the PMMA faster on the graphene surface, the graphene/copper sample with PMMA on it was heated in an oven for 3-5 minutes, depending on the oven temperature. The 4 wt% agarose gel was made by mixing water and agarose powder in a 25:1 weight ratio. The solution was heated in a microwave to blend completely and then poured into a rectangular container built specifically for forming the agarose gel. While the solution was gradually becoming a gel, the graphene/copper sample with dried PMMA on it was "pre-etched" in 0.2 M ammonium persulfate (APS) for 20 minutes to etch away the graphene grown on the bottom face and some of the underlying copper, minimizing the duration of the later etching process. The "pre-etched" graphene/copper sample was then adhered to the surface of fresh 4 wt% hydrogel sized to fit the sample.
Finally, the graphene/copper sample with the 4 wt% hydrogel underneath was placed on a tray inside a glass cylindrical petri dish, and the 0.2 M APS used in the pre-etch was reused for the subsequent etching process.
After approximately 14-18 hours, the APS solution was exchanged for water to rinse the samples and remove the copper persulfate formed between the graphene and the hydrogel, which could easily break the graphene with even a small vibration. To minimize vibration during solution changes, a siphoning system was used: the glass petri dish was placed on a bucket, a tube was inserted into the dish, and the other end of the tube was placed well below the bottom of the dish. Liquid inside the dish could therefore be drained automatically, with nearly no vibration, by starting the siphon at the lower end of the tube. New liquid was then added to the dish with a polypropylene lab wash bottle to minimize the speed of the incoming liquid. Normally, a 5-6 hour water bath followed the APS etching, after which fresh APS solution replaced the water to continue etching the copper between the graphene and the hydrogel. One APS etch plus one water bath was called one round, and at least three rounds were needed to completely etch out all of the copper underneath the graphene in 0.2 M ammonium persulfate. Once all of the copper was etched out, samples were removed from the petri dish and taken for water contact angle measurement. The contact angle measurement instrument consisted of a humidifier that controlled the amount of vapor produced and a tube connected to its opening. Vapor exiting the tube was directed onto the surface of the graphene or hydrogel, gradually forming small water droplets with diameters between 100 µm and 300 µm, small enough to greatly reduce the impact of gravity.
10x optical images were taken of the samples before and after contact angle measurement (CAM). Only three optical microscopy images were taken, at random positions, before CAM because the light beam from the microscope lens can damage the graphene surface by melting the hydrogel underneath. After CAM, optical images of the entire graphene area were taken.
Cracks were observed on the graphene after CAM; they most likely result from wrinkles in the CVD graphene [2]. Figure 1 below displays an optical microscopy image of graphene on 4% agarose gel after contact angle measurement with water.
DATA PROCESSING Video was recorded during the contact angle measurement of each sample. Frames showing the water droplets were extracted from the video, and the raw contact angle data (left angle and right angle) were measured using ImageJ, which has a contact angle measurement plugin. All water contact angle data were collected in an Excel sheet, and the left and right angles of each droplet were averaged. The mean and standard deviation of these per-droplet averages were then calculated. The optical microscopy images taken after CAM were aligned and combined into a complete image of the graphene surface after the contact angle measurement.
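The averaging step described above can be sketched in a few lines; the angle values here are hypothetical examples, since the actual ImageJ measurements are not listed in the text:

```python
import statistics

# Per-droplet (left, right) contact angles in degrees, as read from the
# ImageJ contact-angle plugin; these values are hypothetical examples.
measurements = [(48.2, 51.7), (49.5, 50.3), (52.1, 47.9)]

# Average the left and right angle of each droplet...
per_droplet = [(left + right) / 2 for left, right in measurements]

# ...then take the mean and standard deviation across all droplets.
mean_angle = statistics.mean(per_droplet)
std_angle = statistics.stdev(per_droplet)
print(f"contact angle = {mean_angle:.1f} +/- {std_angle:.2f} degrees")
```

The same computation applies to any number of droplets collected from the video frames.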
Figure 1: An optical microscopy image of the graphene suspended on a 4% agarose hydrogel after the contact angle measurement with water. Area surrounded by a red rectangle shows cracks formed after the deposition of water droplets on the graphene surface.
RESULTS According to the optical microscopy images taken before CAM, there was roughly a 50% chance of producing a large graphene surface, i.e., a surface with no apparent cracks or holes. This does not mean the whole surface was perfect, but rather that an area was defect-free and large enough for contact angle measurement.
To fully recover the mechanism, in our future research we plan to create smaller water droplets so that they sit entirely on the “perfect” regions of the graphene.
The average water contact angle of fresh 4 wt% agarose gel is approximately 10 degrees, as is that of fresh 20 wt% agarose gel. The average water contact angle of a fresh (newly made) graphene sample is approximately 50 degrees, and that of an old graphene sample is also approximately 50 degrees. This result suggests that monolayer graphene is nearly wetting transparent. DISCUSSION One uncertainty is that cracks were found on the graphene after the contact angle measurement. It is possible that the observed “wetting transparency” is simply due to leakage of the water droplets through these cracks. Belyaeva et al. previously reported similar results: although they concluded that monolayer graphene is nearly wetting transparent, they also found cracks on the graphene after CAM [2].
REFERENCES 1. Nicol, Will. “What is Graphene.” Digital Trends, 15 Nov. 2018, https://www.digitaltrends.com/cool-tech/what-is-graphene/ 2. Belyaeva, L. A., van Deursen, P. M., Barbetsea, K. I., & Schneider, G. F. Hydrophilicity of Graphene in Water through Transparency to Polar and Dispersive Interactions. Advanced Materials, 2018. https://onlinelibrary.wiley.com/doi/full/10.1002/adma.201703274 ACKNOWLEDGEMENTS I would like to thank Dr. Lei Li, the Swanson School of Engineering, and the Office of the Provost for funding my summer research internship. I also want to thank Dr. Haitao Liu and Ms. Mina Kim for their guidance in the research.
THE ROLE OF OXYGEN FUNCTIONAL GROUPS IN GRAPHENE OXIDE MODIFIED GLASSY CARBON ELECTRODES FOR ELECTROCHEMICAL SENSING OF NADH Ananya Mukherjee, Yan Wang and Leanne Gilbertson Environmental Engineering Laboratory, Department of Civil and Environmental Engineering University of Pittsburgh, PA 15260, USA Email: anm297@pitt.edu INTRODUCTION The oxidation of NADH to NAD+ plays an important role in cellular metabolizing processes that generate energy in living cells. NADH is essential for mitochondrial respiration to produce ATP, as well as for over 200 other enzymatic reactions.[1] As a result, there is significant interest in developing sensors to better monitor NADH levels, as this is indicative of cellular health, which has been related to critical human diseases.[2][3] Electrochemical sensing is of particular interest to researchers because it is more cost effective, user friendly, and works with smaller sampling volumes than other sensing methods (e.g., fluorescence, colorimetric).[4] Existing research on electrochemical sensors used to detect NADH demonstrates electrode fouling caused by adsorption of the NAD+ oxidation product at high overpotentials. One method to reduce fouling and decrease the high oxidation potential is to modify the electrode surface. Graphene oxide is a popular nanomaterial often used for electrode modification because of its unique electrochemical properties.[5] Graphene’s honeycomb lattice of carbon atoms has delocalized pi orbitals that allow electrons to move freely above and below the plane, making it extremely conductive.[6] The abundant oxygen groups in graphene oxide provide electron transfer sites that help reduce oxidation potentials.[7] It has been suggested by Zhang et al. 
that oxidation of NADH is mediated by abundant oxygen groups at edge planes in graphene oxide.[8] However, there has yet to be research focused on identifying which oxygen group(s) are responsible for the enhanced oxidation at graphene oxide modified GCEs, which would inform material design to further enhance sensing performance. NADH oxidation has been reported to be mediated by quinone moieties on carbon nanotube modified electrodes [9], so it is reasonable to assume that quinones, which are abundant on the graphene oxide surface, will similarly mediate NADH oxidation. The primary
goal of this project is to contribute to understanding the role of different oxygen groups in the electrochemical oxidation of NADH. METHODS Graphene ink dispersions were prepared using 2 mg of either as-received graphene oxide powder (ARGO) or thermally reduced graphene oxide powder (TGO) annealed at 600 °C under helium flow. 1.2 mL of DI water was added to the powder, followed by 792 µL of isopropanol to aid dispersion and 8 µL of Nafion perfluorinated resin to act as a binding agent to the electrode surface, resulting in a final concentration of 1 mg GO/mL. Dispersions were probe sonicated at 10% power three times in 5-minute segments and cooled in an ice bath for 2-3 minutes between rounds. Afterwards, samples were bath sonicated for 1 hour. Glassy carbon electrodes (GCEs) were polished by rubbing 0.05 µm alumina slurry on a microfiber cloth in a circular motion for 1-2 minutes. Electrodes were then cleaned with isopropanol and then DI water. 10 µL of each dispersion was drop-cast onto a GCE and protected while left to air dry. A phosphate buffer solution (PBS, 0.1 M, pH 7) electrolyte was prepared by combining 0.0696 M K2HPO4 with 0.0304 M KH2PO4. 76 mL of the prepared PBS solution was used in the electrochemical cell and purged with nitrogen gas for about 20 minutes prior to starting the experiment to remove dissolved oxygen. The working electrode was cleaned by conducting a cyclic voltammogram (CV) scan from -0.4 V to 0.9 V. The electrochemical experiments were conducted using a standard three-electrode system with an Ag/AgCl reference electrode, Pt wire counter electrode, and ARGO/TGO modified GCE working electrode. 4 mL of 14.2 g/L NADH stock was added to the 76 mL PBS electrolyte to give a 1 mM
concentration. CV scans of the 1 mM NADH cell were carried out at a 10 mV/s scan rate over a potential range of -0.4 V to 0.9 V.
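As a sanity check on the dilution above, the final NADH concentration can be verified in a few lines. The molar mass below assumes the disodium salt form of NADH (about 709.4 g/mol), which is an assumption since the exact form is not stated in the text:

```python
# Check that 4 mL of 14.2 g/L NADH stock added to 76 mL PBS gives ~1 mM.
MW_NADH = 709.4                     # g/mol, disodium salt (assumed form)
stock_mM = 14.2 / MW_NADH * 1000    # stock concentration, ~20 mM
final_mM = stock_mM * 4 / (4 + 76)  # diluted into 80 mL total volume
print(f"final NADH concentration = {final_mM:.2f} mM")
```

Under this assumption the stock is roughly 20 mM, and the 4 mL in 80 mL dilution lands very close to the stated 1 mM.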
DATA ANALYSIS Three separate overlapping CV scan measurements were used for the analysis after the system stabilized. The CVs showed one nonreversible oxidation reaction resulting in a single anodic peak as the voltage increased from -0.4 V to 0.9 V. In order to quantify the potentials at which the current peaked, the derivative graph of the anodic scan was analyzed. The derivative graphs were obtained by plotting the change in current (I) divided by the change in potential (V) as the y value against the corresponding midpoint potential as the x value. The peak potential was taken as the potential at which the derivative touched or crossed the x-axis after reaching a maximum. RESULTS Figure 1 shows our ARGO and TGO dispersions. ARGO had a high oxygen content, evident from its brown color and well-dispersed solution. TGO had a lower oxygen content, which made it gray and poorly dispersed.
Figure 1: Pictures of ARGO (left) and TGO (right).
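The derivative-based peak-potential extraction described in the Data Analysis section can be sketched as follows; the scan here is a synthetic Gaussian peak standing in for the measured anodic data, not the actual CVs:

```python
import numpy as np

# Synthetic anodic scan: a Gaussian oxidation peak on a flat baseline.
V = np.linspace(-0.4, 0.9, 131)
I = np.exp(-((V - 0.5) ** 2) / 0.01)

# Derivative dI/dV plotted against the midpoint potential of each interval.
dIdV = np.diff(I) / np.diff(V)
V_mid = (V[:-1] + V[1:]) / 2

# Peak potential: first point after the derivative's maximum where
# dI/dV touches or crosses zero (i.e. the current stops rising).
i_max = np.argmax(dIdV)
i_zero = i_max + np.argmax(dIdV[i_max:] <= 0)
print(f"peak potential = {V_mid[i_zero]:.3f} V")
```

For this synthetic peak centered at 0.5 V, the procedure recovers a peak potential within one midpoint step of 0.5 V.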
Wang et al. used x-ray photoelectron spectroscopy (XPS) to obtain C:O ratios and oxygen group distributions for materials similar to the ones used in our experiment (Figure 2).[10]
Figure 2: Bar graphs depicting percent oxygen group distributions (left) and C:O ratio (right) of ARGO vs TGO.
Figure 3 shows CV curves of ARGO and TGO modified electrodes. The peak potential for the ARGO modified GCE was found to be 0.492 V, more negative than the TGO modified GCE which had a peak potential of 0.612 V.
Figure 3: Cyclic voltammograms of ARGO (red) and TGO (blue) modified GCE in the presence of 1 mM NADH in 0.1 M PBS from -0.4 V to 0.9 V with a scan rate of 10 mV/s.
DISCUSSION The lower peak potential at ARGO compared to TGO indicates faster electron transfer at the ARGO surface. Since ARGO exhibits better electrocatalytic properties and mainly differs from TGO by its higher oxygen content (lower C:O ratio), the improved NADH oxidation at ARGO can be ascribed to the abundance of oxygen groups at the surface. Both samples have a similar percent distribution of C=O and COOH groups; the biggest difference is the large amount of C-O groups, which consist of epoxides and hydroxyl groups, on ARGO compared to TGO. This indicates that these epoxide and/or hydroxyl groups may be the most electrochemically active during oxidation of NADH. Although ARGO was found to have a lower peak potential, its current peak was less defined and lower than that of TGO. Additional experiments would be necessary to determine whether the lower current of ARGO can be ascribed to adsorption of NAD+ at the electrode surface. REFERENCES 1. Fricker et al. Int J Tryptophan Res 11, 1-11, 2018. 2. Braidy et al. Brain Res 1537, 267-272, 2013. 3. Canto et al. Cell Metab 22.1, 31-53, 2015. 4. Kumar et al. Sensors 8, 739-766, 2008. 5. Liu et al. Chem Soc Rev 41, 2283-2307, 2012. 6. Sarkar et al. Mater Today 15.6, 276-285, 2012. 7. Shao et al. Electroanalysis 22.10, 1027-1036, 2010. 8. Yang et al. Biosens Bioelectron 25.4, 733-738, 2009. 9. Zhang et al. Int J Electrochem Sci 6, 819-829, 2011. 10. Wang et al. Green Chem 19, 2826-2838, 2017. ACKNOWLEDGEMENT This research was conducted under the supervision of Dr. Gilbertson and the mentorship of Yan Wang. This research was funded by Dr. Gilbertson, the Swanson School of Engineering, and the Office of the Provost.
NESTED EVENT REPRESENTATION FOR AUTOMATED ASSEMBLY OF CELL SIGNALING NETWORK MODELS Evan W. Becker, Kara N. Bocan and Natasa Miskov-Zivanov MeLoDy Laboratory, Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: ewb12@pitt.edu, Web: https://www.nmzlab.pitt.edu/ INTRODUCTION The rate at which biological literature is published far outpaces the current capabilities of modeling experts. In order to facilitate the automation of model assembly, we improve upon methods for converting machine reading output obtained from papers studying intracellular networks into discrete element rule-based models. In this work, we introduce a graph representation that can capture the complicated semantics found in machine reading output. Specifically, we focus on extracting change-of-rate information available when network elements are found to inhibit or catalyze other interactions (nested events). We demonstrate the viability of this approach by measuring the prevalence of these nested events in cancer literature, as well as the success rates of two machine readers in capturing them. Finally, we show how machine reading output can be mapped to the new graph structure. BACKGROUND Discrete element rule-based models have been shown to efficiently simulate biological systems, without the need for a complex parameterization process [1][2]. This highly canonical representation lends itself well to automated assembly, as demonstrated in [3]. In this previous work, BioRECIPES was proposed as an intermediate representation between machine reading output and the final model. BioRECIPES represents influences with signed edges, which imply either an increase or decrease in value of the regulated node (illustrated with an arrow and bar, respectively). Edges are always directed toward a single child node but can have multiple parent nodes joined by logical operators. 
Representation format of machine reading output varies widely across readers, and even from application to application. Popular methods for representing knowledge about natural language include first order logic, production rules, semantic
networks, and Bayesian networks [4]. The two readers analyzed in this work, TRIPS [5] and REACH [6], both produce outputs that can be considered semantic networks. They mainly consist of event nodes (a type of change over time) and entity nodes (objects from text). A set of predefined relations (semantic roles) links the nodes together. METHODS To accurately model nested events, we extend the BioRECIPES graphical representation with an intermediate node and a new type of influence termed fixed regulation. The intermediate node represents the biological event being modified and is regulated by this event’s agent. Fixed regulation means that the child node’s value directly tracks the parent node’s value without any delay. Additionally, the event is also positively or negatively regulated by the controlling event’s agent. The intermediate node can then regulate other elements in the same manner as any other node in the graph. In the intermediate node template in Fig. 1, element B regulates the intermediate node in a fixed manner, meaning that when B’s value is high, so is the intermediate node’s value.
Figure 1: Semantic network of a nested event from machine reading (left) converted to BioRECIPES graph (right)
The proposed structure provides key advantages over previous representations. The intermediate node is compatible with all operations allowed by the BioRECIPES format, such as the method presented in [2], and is highly amenable to extension. Special cases of nested events often occur when the regulated event is underspecified. Using intermediate nodes allows additional information about the interaction to be added at a later time.
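As a minimal sketch of this construction, the mapping from a nested event to the extended graph can be written as a small function. The element names and edge encoding here are illustrative, not the actual BioRECIPES schema:

```python
# Nested event: A negatively regulates the event "B activates C".
# The regulated event becomes an intermediate node: its agent B
# fixed-regulates it (the node's value tracks B), the controller A
# regulates it with the controlling sign, and the node then
# regulates the target C like any other element.
nested_event = {"controller": "A", "sign": "-",
                "event": {"agent": "B", "target": "C", "sign": "+"}}

def to_extended_graph(ev):
    inner = ev["event"]
    node = f"{inner['agent']}_{inner['target']}_event"  # intermediate node
    return [
        (inner["agent"], node, "fixed"),       # value tracks the agent
        (ev["controller"], node, ev["sign"]),  # controlling event's agent
        (node, inner["target"], inner["sign"]),
    ]

for src, dst, kind in to_extended_graph(nested_event):
    print(f"{src} -[{kind}]-> {dst}")
```

If the inner event is later found to be underspecified, additional edges can simply be attached to the intermediate node without rebuilding the graph.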
RESULTS We selected eight highly cited papers from PubMed studying cancer cells and reviewed them manually to find nested events. This count was then compared to the number of nested event extractions produced by REACH and TRIPS. The selected papers provided more than 1000 sentences for classification. The prevalence of nested events in each of these papers is presented in Fig. 2 (left). The prevalence of nested events varies widely across papers; on average, however, it was 32.7%, with the lowest occurrence at 11%. This indicates that a significant amount of information is available in the form of nested events.
Figure 2: (Left) Prevalence of nested events in cancer literature as determined by manual count; (right) precision and recall of TRIPS and REACH on two of the cancer papers.
Both readers had high precision when simply tasked with classifying sentences containing nested events (REACH had 82.4% and TRIPS had 87.3%). The following precision (true positive/all predicted positive) and recall (true positive/condition positive) metrics illustrated in Fig. 2 (right) address whether each reader can also accurately capture the nested event in its extractions. We conducted further analysis of the reading output accuracy from the first paper (PMC1403772) and provide an example of mapping machine reading output to the BioRECIPES format in Fig. 3.
Figure 3: (a) A sentence taken from PMC1403772; (b) the associated semantic network from TRIPS; (c) the BioRECIPES mapping of the sentence.
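The precision and recall definitions above follow directly from extraction counts; a minimal computation, with illustrative tallies rather than the paper's actual counts:

```python
# Precision = TP / all predicted positive; recall = TP / condition positive.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical tallies for one paper's nested-event extractions.
p, r = precision_recall(tp=28, fp=6, fn=12)
print(f"precision = {p:.3f}, recall = {r:.3f}")
```

The same two formulas underlie both the sentence-level classification numbers and the full-extraction metrics in Fig. 2 (right).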
DISCUSSION TRIPS was observed to pick up on complicated syntax better and generally had better recall, especially when simply identifying text which contained nested events. However, both systems suffered from variable precision when capturing the full information from the nested events. REACH performed best in situations where nested events took on a typical inhibit/activate phosphorylation paradigm, which explains why it performed well on the second paper (whose focus was a drug inhibiting the phosphorylation of a cancer related protein). In all cases where the NLP readers correctly extracted nested events, the reading output was converted successfully into the extended BioRECIPES format. Most of the incorrect extractions observed occurred when information was missing from the nested event, meaning conversion to the BioRECIPES format would not introduce false information into the model. This observation was supported by both readers’ high precision in classifying nested events. In other words, it is unlikely that a sentence not containing a nested event type would be extracted as one. REFERENCES 1. Albert, R., Wang, R.-S. Methods in Enzymology. 281–306, 2009. 2.Sayed et al. 40th Annual International Conference of the IEEE EMBC, 2018. 3. Sayed et al. LNCS Machine Learning, Optimization, and Big Data. 1–15, 2017. 4.Cambria et al. IEEE Computational Intelligence Magazine. 9, 48–57, 2014. 5.Allen et al. AAAI Workshop on Construction Grammars. 2017. 6. Valenzuela-Escárcega et al. Database. 2018. ACKNOWLEDGEMENTS This work was partially supported by DARPA award W911NF-17-1-0135, and by the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh. The authors would like to thank Dr. Cheryl Telmer of the CMU Department of Biological Sciences for her constructive feedback.
EVALUATING DECARBONIZATION STRATEGIES FOR THE UNIVERSITY OF PITTSBURGH Eli Brock and Sabrina Nguyen Department of Electrical Engineering University of Pittsburgh, PA, USA Email: etb28@pitt.edu and san86@pitt.edu INTRODUCTION Two ambitious goals constitute the University of Pittsburgh’s clean energy vision. The first mandates that 50% of the university’s electricity be renewably sourced by 2030. The second calls for a 50% reduction in greenhouse gas emissions by 2030 from the 2008 baseline [1]. These objectives, while superficially similar, incentivize different actions and will produce different outcomes. This analysis will focus on the latter goal because it more comprehensively addresses the challenge of decarbonization, which should be the priority of energy-based sustainability initiatives. The university, like any climate-conscious economic actor, must weigh its benevolence against financial constraints and responsibilities. Therefore, the fundamental challenge of decarbonization is to evaluate the cost efficiency of different strategies. Such a comparison is the purpose of this study. Specifically, the decarbonization potential of the university’s purchasing power – a power shared by most energy consumers – will be weighed against the prospects for distributed solar on campus. The driving question of this study should be of great interest to other institutions. The University of Pittsburgh is a dense (many students per square kilometer), urban school located in a city which is among the very cloudiest in the US [2]. It is not the only university whose conditions are unfavorable to distributed generation; therefore, a successful decarbonization effort at the University of Pittsburgh would be a model for similar campuses to emulate. METHODS This abstract is a summary of the first three months of a project which will continue over the course of at least another semester, so the process of investigation remains incomplete. 
However, some interesting results have been produced. To prepare for the analysis, the two student researchers completed an online course in Geographic Information Systems (GIS) through the
University of California at Davis and Coursera. The students completed this course, which provided an extended introduction into analysis with ESRI’s ArcMap software, at the suggestion of advisors who understood the relevancy of geographic analysis to the study of distributed energy generation. After completing the specialization, the researchers wrote a brief literature review on the frontiers of modern distribution systems and the importance of GIS and modelling software to the evolution of these systems. The purpose of this review was to understand the shifting landscape of distributed energy resources, which include distributed generation and energy storage, and to understand how different software tools fit into the field. Following the literature review, the researchers moved to an analysis of campus solar information acquired in a previous project conducted during the prior summer. This project used HelioScope, a solar generation modeling software, to simulate a year of generation from buildings around campus, assuming each roof was installed to capacity with panels. Several of the data fields from these simulations were used to rank over 70 campus buildings from most to least cost efficient in terms of kWh per year per dollar. Using this ranking to inform the order of the buildings, the balance associated with the solar panels can be given as a function of time and initial investment. DATA PROCESSING The fields of interest in the HelioScope data were the annual energy production in kWh, the type of panel used for the simulation, and the cell unit capacity for each roof. Several important assumptions were made to make the analysis possible. First, it is unlikely that the university would use the panels from the HelioScope simulation; the software requires a panel to be entered, so the default was used. Each building’s wattage installation capacity, however, is
assumed to be constant. The market price of solar panels was estimated to be $2/installed watt [3][4]. The time frame of the analysis was 25 years, as this is the approximate lifetime of one panel [3]. Cleaning and maintenance costs, as well as the decrease in efficiency over time, were assumed to be negligible. The University’s agreement with Duquesne Light, the local electricity distributor, is such that prices will increase by no more than 3% per year [4]. In terms of the payback from a solar installation, higher electricity costs are favorable, so the analysis was based on the conservative assumption that prices would not rise at all over the next 25 years. Therefore, the university’s current electricity price of $0.081/kWh is used to relate the amount of solar power generated to the money saved by the university [4]. To order the buildings by investment quality, an economic efficiency metric was derived: the annual production divided by the price as calculated earlier using the $2/watt scaling factor. The resulting figure is the energy produced by the building per invested dollar per year. Solar units should obviously be installed on the more efficient buildings first. RESULTS Figure 1 is a map of Oakland, Pittsburgh’s academic neighborhood, which includes most (but not all) of the buildings included in this analysis. The map was created with ArcMap. The buildings are symbolized by the economic efficiency metric discussed earlier, with green buildings being more efficient and red buildings less efficient. The best buildings are about 28% more efficient than the least efficient buildings.
Figure 1: Pitt Buildings by potential solar efficiency
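Under the stated assumptions ($2 per installed watt, $0.081/kWh, flat prices, 25-year lifetime, no maintenance cost), the efficiency metric and payback period reduce to a few lines. The building figures below are hypothetical placeholders, chosen to be roughly consistent with the ~20-year payback the analysis found:

```python
# Assumptions from the analysis: $2 per installed watt, $0.081/kWh,
# flat electricity prices over the 25-year panel lifetime, and
# negligible cleaning/maintenance costs.
PRICE_PER_WATT = 2.0   # $ per installed W
PRICE_PER_KWH = 0.081  # $ per kWh saved

def solar_economics(capacity_w, annual_kwh):
    cost = capacity_w * PRICE_PER_WATT
    annual_savings = annual_kwh * PRICE_PER_KWH
    efficiency = annual_kwh / cost   # kWh per year per invested dollar
    payback_years = cost / annual_savings
    return payback_years, efficiency

# Hypothetical building: 100 kW of roof capacity, 123,000 kWh/year.
years, eff = solar_economics(capacity_w=100_000, annual_kwh=123_000)
print(f"payback = {years:.1f} years, efficiency = {eff:.3f} kWh/yr/$")
```

Because cost scales linearly with capacity under these assumptions, the payback period depends only on the kWh-per-installed-watt yield of each roof, which is why the efficiency ordering drives the installation order.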
According to the analysis of the university’s balance over time (using Python and Matlab), the investment yields a payback of 19-21 years regardless of the number of buildings with units installed. The cost for a full-campus installation would be approximately $20 million. DISCUSSION The 28% discrepancy between the most and least efficient buildings is a testament to the value of the efficiency metric for decision-making. However, the 20-year payback is likely too long to be a wise investment. As this analysis continues, these numbers will be compared to the cost of purchasing Renewable Energy Credits (RECs), entering into power purchase agreements with low-carbon energy suppliers, and exercising the university’s energy supplier choice through Duquesne Light. Because of the long payoff, such an analysis is likely to suggest that such actions are cheaper and safer avenues for decarbonization. Additionally, this ongoing project will reconcile the efficiency ordering with the roof replacement schedules of the buildings and investigate possible advantages of a campus-wide microgrid based on solar generation and energy storage. REFERENCES [1] “2018 Pitt Sustainability Plan.” University of Pittsburgh. 12.5.2017. Accessed 8.20.2019. https://issuu.com/upitt/docs/2018_pittsustainabilityplan_final [2] Miaschi, John. “The Cloudiest Cities in the United States.” WorldAtlas. 5.9.2018. Accessed 8.20.2019. https://www.worldatlas.com/articles/the-cloudiest-cities-in-the-united-states.html [3] Conversation with a representative from Tesla. 7.21.2019. [4] Conversation with Dr. Aurora Sharrard. 7.23.2019. ACKNOWLEDGEMENTS Funding was provided through the SSOE Summer Research Internship with advisors Dr. Robert Kerestes and Dr. Katrina Kelly. We would also like to thank Dr. Aurora Sharrard, who met with us to go over the specifics of the University’s decarbonization landscape.
Emerging Memory Devices: Silk Based RRAM and 2D Material Based Synaptic Analog Memory Austin Champion Mentors: Mohammad Taghi Sharbati and Professor Feng Xiong Microfabrication Laboratory, Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: amc336@pitt.edu Progress in the world of computing demands more efficient memory devices in both biological applications and neuromorphic computing. These works investigate silk RRAM devices and electrochemical graphene synapses. SILK RRAM DEVICES INTRODUCTION Biomaterials such as silk fibroin protein offer a bio-compatible solution for electronics with a low environmental impact compared to current materials. RRAM (Resistive Random-Access Memory) devices store memory in the form of a resistance that can be switched from high to low and back again with the application of positive or negative voltages. Yong et al. observed that the silk used in this experiment, harvested from the Bombyx mori silkworm, exhibits switching behavior worth investigating when placed between an active electrode and an inert electrode [1]. Sharbati et al. have shown that changing the sheet structure of the silk allows for lower switching voltages and a higher endurance for the states [2]. This work seeks to show that the switching is due to electron hopping in the silk. The alternative theory is that ions from the active metals create a conductive path through the silk. To prove that the silk is responsible for the mechanism, we study the process using an inert metal (Pt) for both electrodes, as opposed to previous studies which used active metals such as gold (Au).
METHOD We measure our devices with a Keithley 4200A-SCS Parameter Analyzer. We chose a device out of the grid system by selecting the bottom and top electrodes for the corresponding device, as shown in Figure 1a. By sweeping positive and then negative voltages, we were able to observe the switching of the device to a set state of low resistance and back to a reset state of high resistance. RESULTS We saw the same switching behavior in our devices as has been reported for previous devices. Figure 1b shows a set and reset cycle that we recorded, with a low voltage at both instances. The switch from a high resistance to a low resistance and back again is easy to see and indicates the same mechanism observed in devices with active metal electrodes.
Figure 1: Silk RRAM Devices. (a) Crossbar structure of device with silk placed as a sheet between electrodes. (b) Switching results with set and reset cycles graphed together.
DISCUSSION This work suggests that the switching mechanism is based on electron hopping and not on a process by which active metal
electrodes disperse ions into the silk to create a conductive path. The switching behavior of the silk has been recorded before, but we believe the results of this work bring a better understanding of the resistive switching mechanism. The next objective should be to further improve the programming voltage and the endurance of the device, as well as to study the effect of temperature on endurance.
Electrodes were placed over the suitable flakes so that two large reference electrodes could intercalate Li ions into the graphene while several other electrodes could read the graphene state at several different positions along the flake. A finished electrode design for five flakes is shown below in Figure 3.
2D Material Based Synapses INTRODUCTION Brains demonstrate an unparalleled level of efficiency, and the ability to emulate a brain in the computing world shows great potential. Brain-inspired computing is hampered by the digital structure of current computers. Unlike the two-state metal-oxide-semiconductor devices which make up modern computers, synapses in brains allow for gradual changes in synaptic weight by means of a process called synaptic plasticity. We seek to create an electrochemical 2D material synapse capable of sustaining neuromorphic computing. Our device is 2D, which makes it well suited for intercalation of Li ions; intercalation is the main process by which we write to the device, lowering its resistance.
Figure 3: Electrode designs of synapses. (a) Design for 5 flakes on one chip with probing pads. (b) Graphene flake with electrode placement.
DISCUSSION This experiment is ongoing; while we constructed many new devices for testing, the testing procedure and system are still being built. We hope to measure the change in our devices induced by the injection of very fast, low-power signals, and we are creating a system that can measure it accurately. We hope to show the change in resistance caused by voltage pulses that intercalate Li ions into the 2D materials (i.e., graphene or MoS2). REFERENCES
Figure 2: Electrochemical synapse. (a) Biological synapse. (b) A model of the electrochemical synapse showing the reference intercalating Li ions to the graphene when a pulse is applied.
METHOD The process began with a silicon wafer that was cleaned and then annealed in a CVD furnace to remove impurities. Graphene was then mechanically exfoliated onto the wafer. Flakes that were both thin and uniform in shape were chosen for further processing.
1. Yong et al. Scientific Reports 7, 14731, 2017. 2. Sharbati et al. Biofriendly Electronics: Helix-Rich Silk Fibroin Films for Biocompatible Memory Devices, Phoenix, Arizona, 2019. ACKNOWLEDGEMENTS Funding was provided by the Swanson School of Engineering and the Office of the Provost at the University of Pittsburgh.
The Speedy Doctor Ryan Estatico, Department of Electrical Engineering University of Pittsburgh, PA, USA Email: rme20@pitt.edu INTRODUCTION It has long been the job of a doctor to deliver the news of discovery quickly and honestly to a patient. However, the average wait time to schedule an appointment with a healthcare professional is 24 days, according to Bruce Japsen, a healthcare business writer at Forbes [1], creating the dreaded fear of not knowing for the patient. I believe neural networks would be an adequate solution for giving quick and accurate medical feedback. My goal for the summer was to collect metrics and benchmarking data on a Convolutional Neural Network (CNN) across multiple platforms and possibly produce an operating demo of the application running on a Movidius Myriad X VPU neural compute stick. The model would train on a data set of 10,000 images. The application of neural networks and benchmarking of programs in fields of study like oncology could help doctors analyze the severity and characteristics of disease better than before. I was especially interested in this CNN because of its relationship with oncology, and I know from personal experience that accurate diagnostics and time are two very important factors when facing a health problem. As a freshman student, I spent the beginning portion of my research familiarizing myself with the new programs, languages, and data involved with the equipment I would be working with. METHODS I was able to find a Keras/TensorFlow model that classifies skin lesions. The model was trained using the MobileNet CNN. Using a MobileNet version was important because it was previously shown to be compatible with the Movidius architecture. Using the Movidius device was a main interest point of my research. I was intrigued by the new technology and the computing results promised at such low power compared to the CPU and GPU. 
The compute stick enables rapid prototyping, validation, and deployment of Neural Networks. The low-power
processing unit is designed to run at high speed and low power without sacrificing accuracy [2]. Using the HAM10000 dataset, the "biggest collection of dermatoscopic images of common pigmented skin lesions," as the training set, the program ran with 86% accuracy across all three platforms, proving that the Movidius Neural Compute Stick did not sacrifice accuracy. The program breaks these images down into 7 different classes of skin lesions [3]: Actinic Keratoses and Intraepithelial Carcinoma (AKIEC), Basal Cell Carcinoma (BCC), Benign Keratosis (BKL), Dermatofibroma (DF), Melanocytic Nevi (NV), Melanoma (MEL), and Vascular lesions (VASC) [3]. RESULTS
Figure 1: Latency results side by side, with the gray bar being the Movidius Neural Compute Stick, orange the Intel 8th-generation Core i7-8650U, and blue the Nvidia GTX 1050.
Figure 1 displays that the Compute Stick, when using MobileNet, has a lower latency than both the CPU and the GPU. The exact numbers for the Neural Compute Stick, Nvidia GTX, and Intel i7 were 77.81, 88.82, and 154.93 milliseconds, respectively.
Figure 2: Duration results side by side, with the gray bar being the Movidius Neural Compute Stick, orange the Intel 8th-generation Core i7-8650U, and blue the Nvidia GTX 1050.
Figure 2 displays the Compute Stick completing the run much more quickly than the other devices; it outperforms the GPU by a factor of about 5. The Movidius device was able to identify and classify the image in about 6 seconds.
Figure 3: Throughput results side by side, with the gray bar being the Movidius Neural Compute Stick, orange the Intel 8th-generation Core i7-8650U, and blue the Nvidia GTX 1050.
Figure 3 once again collects data in favor of the Compute stick. The exact results for throughput in FPS for the Neural Compute stick, Nvidia GTX, and Intel i7 were 51.34, 21.82, and 12.09 respectively.
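The latency and throughput figures above come from timing repeated inference calls. A minimal benchmarking harness of this kind might look like the following sketch; the timing loop, warm-up count, and stand-in inference function are assumptions, not the actual benchmark script used in this work:

```python
import time

def benchmark(infer, inputs, warmup=5):
    """Measure average latency (ms) and throughput (FPS) of an
    inference callable over a list of preprocessed inputs."""
    for x in inputs[:warmup]:            # warm-up runs are discarded
        infer(x)
    start = time.perf_counter()
    for x in inputs:
        infer(x)
    elapsed = time.perf_counter() - start
    latency_ms = 1000.0 * elapsed / len(inputs)
    fps = len(inputs) / elapsed
    return latency_ms, fps

# Stand-in for a real model call on any of the three platforms:
latency, fps = benchmark(lambda img: sum(img), [[1, 2, 3]] * 100)
print(f"{latency:.4f} ms/image, {fps:.1f} FPS")
```

With one image processed per call, throughput is simply the reciprocal of the per-image latency; batched execution would decouple the two, which is one reason latency and FPS are reported separately above.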
Figure 4: Latency/Power (sec/W) side by side, with the gray bar being the Movidius Neural Compute Stick, orange the Intel 8th-generation Core i7-8650U, and blue the Nvidia GTX 1050.
Figure 4 investigates the relationship between latency and power for each device. I chose to use the published maximum power values of each device, so it is possible that the Compute Stick produced these results at an even lower power. The Compute Stick uses 30 times less power than the GPU system. CONCLUSION During the 3088 iterations recorded by the program, the Neural Compute Stick outperformed both the CPU and GPU in every category. This could have been due to the optimization process of using the OpenVINO toolkit. The fear of not knowing while waiting for the results of an urgent medical examination is a major cause of stress. By using a CNN, feedback can be given very quickly. Furthermore, the Compute Stick's low power use could be harnessed in a small mobile device acting like a medical handbook. Having access to immediate medical assessments could lead to faster recovery. Other networks may be trained to diagnose injuries; this particular network works specifically for skin cancer. An on-the-go device for situations where medical attention and cell service are not available could prevent an injury from worsening. REFERENCES 1. B. Japsen. "Doctor Wait Times Soar 30% In Major U.S. Cities." Forbes. Accessed 08.10.2019. https://www.forbes.com/sites/brucejapsen/2017/03/19/doctor-wait-times-soar-amid-trumpcare-debate/#39232af62e74 2. Intel Software. "Intel Neural Compute Stick." Intel. Accessed 08.10.2019. https://software.intel.com/en-us/neural-compute-stick 3. P. Tschandl. "The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions." Harvard Dataverse. 2018. Accessed 06.20.2019. https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T ACKNOWLEDGEMENTS Thank you to Dr. George for allowing me to use his lab and resources over the summer, and to Nik Salvatore for teaching me how to use them! Hopefully more projects to come.
DESIGN OF USB TO UART PCB Richard Gall and Noah Perryman NSF SHREC Center, Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: rtg16@pitt.edu, Web: https://nsf-shrec.org INTRODUCTION High-performance and resilient computing is the core goal of SHREC, with a focus on space computing. To achieve this, SHREC utilizes field-programmable gate arrays (FPGAs) to create reconfigurable hardware for supercomputing. The primary aim of this project is to create a four-layer printed circuit board (PCB) that can be used to control and run commands on the CSP, the CHREC Space Processor. In addition, it needs to interface with the CSP Active Evaluation Board, the PCB designed to test each CSP for functionality. This board is required to convert the serial data from USB into the parallel data required by the CSP; this serial-to-parallel conversion is achieved using a UART transmitter/receiver. A goal of my project is to minimize the size of the board and the routing distance of each signal. METHODS The study began with the completion of an Altium Designer tutorial, as I had no prior PCB design experience. The tutorial's purpose was to design a 555 timer PCB on a two-layer board. This consisted of creating a schematic, a PCB footprint for the 555 timer, and routing the PCB. The PCB was then converted into a four-layer board to show how to utilize ground and power planes. Figure 1 shows the 555 timer created on the four-layer board.
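The serial-to-parallel conversion a UART performs can be illustrated in a few lines. This sketch assumes the common 8N1 framing (one start bit, eight data bits LSB-first, one stop bit); the FT232's actual configuration may differ:

```python
def uart_frame(byte):
    """Serialize one byte into a 10-bit 8N1 UART frame:
    start bit (0), eight data bits LSB-first, stop bit (1)."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

def uart_deframe(bits):
    """Reassemble the parallel byte from a received frame."""
    assert len(bits) == 10 and bits[0] == 0 and bits[9] == 1, "bad frame"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = uart_frame(0x41)             # ASCII 'A'
print(uart_deframe(frame) == 0x41)   # round trip recovers the byte: True
```

On the board itself this framing and deframing happens in the FT232 silicon; the sketch only shows what "serial in, parallel out" means at the bit level.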
functionality, which made part selection not entirely critical. The FT232 UART transceiver was selected to act as the serial-to-parallel data converter. This chip was selected based on its ability to be powered by the USB bus. The USB connector and switches were chosen because they were used on the older revision of the board. The board-to-board connector was predetermined, as the Active Evaluation Board had already been designed, so the male version was selected to ensure the two boards can interface. Once the chips were selected, the schematic was created, based on the initial layout shown in figure 2. When creating the schematic, each datasheet was read over again to get the recommended layout for each part, for example, where to put the coupling capacitors and what values to use. The schematic, seen in figure 3, underwent a design review until it was deemed correct and ready to move on to the PCB. The PCB was created by first making each PCB footprint. This requires using the datasheet to determine the dimensions and pin layout of each chip. It is important to get the PCB footprint correct; otherwise, the routing will not line up and the board won't work. Once the PCB footprints were constructed, the only step left was to route the board.
After understanding the core of Altium Designer, a basic layout of how the PCB would function was created. This layout served as the basis of the design. At its core, the design needed to consist of a UART transmitter/receiver, a connector to interface with the Active Evaluation Board, and a USB port; the chosen layout, and how it interfaces with the Active Evaluation Board, can be seen in figure 2.
A goal of the project was to route the board in a way that minimizes the size of the board and the routing distance. To achieve this, the board was made four layers to utilize ground and power planes. Routing presented several problems, one being figuring out the most effective way to position the chips and how to shape the board. The board seen in figure 4 was the initial revision of the UART board. However, following a design review, several changes were made to improve the design.
Once the layout was finalized, the integrated circuit chips to be used on the board were researched. There are many parts with the same
Revision two, as seen in figure 5, corrected some inefficient routing, for example, in the first revision the diodes, D1 and D2, were crossing each other. The
final design fixed this and removed the need for an extra via. Revision two changed the shape of the board to remove a stress point created by a corner in the initial design. It also included text for the voltage selection header to aid those unfamiliar with the design. RESULTS Overall, the design of the PCB meets all requirements set initially. It works as a USB to UART converter and interfaces directly with the Active Evaluation Board. The board size and routing were also minimized effectively. The board itself has not yet been fabricated, so no data points could be collected to prove the design.
DISCUSSION The board itself still has improvements to be made in the routing and layout. Mounting holes should be added to better support the board when it is connected to the Active Evaluation Board. Routing distance can be minimized further by rotating the FT232 chip. Fabricating and soldering the board is the next step in finishing the research; the UART board can then be tested by interfacing it with the Active Evaluation Board, and data points can be collected to prove the concept. ACKNOWLEDGEMENTS I would like to thank Noah Perryman for mentoring me this summer, along with Dr. Alan George for letting me work with the SHREC center.
Figure 1: 555 timer created for Altium Designer Tutorial
Figure 4: Revision one of UART Board
Figure 2: Design layout used as basis for design
Figure 3: Schematic created in Altium Designer
Figure 5: Revision two of UART board
FEASIBILITY STUDY OF KINETIC, THERMOELECTRIC AND RF ENERGY HARVESTING POWERED SENSOR SYSTEM Jiangyin Huang, Keting Zhao, Hongye Xu Department of Electrical Engineering University of Pittsburgh, PA Email: jih105@pitt.edu INTRODUCTION Wearable devices today still use lithium batteries as power sources. However, devices that monitor vital body signals need more sustainable power sources. This research investigates the feasibility of using energy harvesters (EHs) to power a wearable system. The system is divided into four parts: EHs, an energy storage and management system, an embedded microcontroller unit (MCU), and peripherals. Three different EHs are used in this research. In this abstract, the feasibility of the thermoelectric energy harvester (TEG) and the design of the whole system are discussed. METHODS A thermoelectric energy harvester (TEG) uses thermoelectric technology: it generates a voltage when its two sides are held at different temperatures. There are multiple TEGs on the market with similar designs, differing only in size (Figure 1), and all products' efficiencies are highly correlated with their sizes.
designed for ultra-low voltage input, can step up inputs as low as 20mV to 3.3V [1], which pairs perfectly with a TEG. The output of the TEG is connected to the LTC-3108, and four 1 F supercapacitors are connected in parallel at the output of the LTC-3108 as storage. The MSP430G2ET (an ultra-low-power MCU designed by Texas Instruments) is used; its working voltage ranges from 1.8V to 3.6V [2]. A few tests demonstrated that the MSP430G2ET starts up when its input voltage reaches 2.4V. A few high-power peripherals, such as the antenna, only work at 3.2V and above. Therefore, the output of the power management system is connected to 0.9kΩ and 2.4kΩ resistors in series. The input of the MCU is connected between the two resistors, and the input of the GPIOs is connected directly to the power management system. This design starts up the MCU only when the input voltages of all peripherals (the input voltage of the GPIOs) have reached their working voltages. Considering that multiple EHs may be used in real life, a power ORing architecture [3] is used in this research to give the system a more stable power source (Figure 2).
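The start-up threshold circuit described above can be checked with the standard resistive-divider formula. A quick sketch, assuming the divider is unloaded (i.e., the MCU input draws negligible current):

```python
def divider_out(v_supply, r_top, r_bottom):
    """Unloaded resistive divider: Vout = Vin * R_bottom / (R_top + R_bottom)."""
    return v_supply * r_bottom / (r_top + r_bottom)

# 0.9 kOhm over 2.4 kOhm as described above: when the peripheral rail
# reaches 3.3 V, the MCU pin sees exactly its 2.4 V start-up threshold.
v_mcu = divider_out(3.3, 900, 2400)
print(v_mcu)  # 2.4 V
```

This is why the MCU (which starts at 2.4V) does not wake up until the direct GPIO rail has reached the 3.3V that the high-power peripherals require.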
Figure1. Sizes of three TEGs are compared with one dollar coin. Top left is EHA-PA1AN1. Right is CP2,127,06. Middle left is OT20-66-F0-1211-11-W2.25. Figure 2. Power ORing architecture
The efficiency of a TEG is higher when either the fluid speed around it is faster or the temperature difference between its two sides is larger. The different energy harvesters were placed on arms, on metal under the sun, in a room, and in wind to test their harvesting efficiency (Figure 3). Thermoelectrics are stable but small in scale. The LTC-3108, a power management IC,
DATA PROCESSING Different thermal energy harvesters of different sizes were tested. After experimenting with several TEGs, the EHA-PA1AN1, which generates considerable power at coin size, was chosen for this experiment.
This TEG stably generates 25mV at 3.5mA when placed on an arm, according to the experiment. To simulate the charging process, 25mV at 3mA was connected directly to the input of the LTC-3108. After 2 hours of charging, the voltage at the input of the MCU stabilized at 2.4V and the input of the GPIOs stabilized at 3.3V. A DS18B20 (a 4.5mW thermal sensor) [4] was connected to the MCU as a test peripheral. With a stable 25mV input, the sensor worked for half an hour. The power source was then disconnected. The supercapacitors started discharging, and the MCU remained active for 90s before switching to LPM (sleep mode). The MCU could stay in LPM for another 400s before shutting down completely. If the TEG or other EHs start generating power again before shutdown, the MCU switches back to active mode after the supercapacitors are charged to 3.3V again. RESULTS A wearable device using a thermal energy harvester as its only power source is feasible when the peripherals have low power consumption. When high-power peripherals, such as an antenna, are connected to the MCU, the thermal energy harvester needs to charge for 15 minutes for the MCU to work for 5 seconds. When multiple EHs are used, a system with high-power peripherals is feasible. DISCUSSION There are several problems underlying this TEG system. First, the TEG takes approximately 2 hours to charge the four 1 F capacitors to 3.3V when the system is placed on
human body. Even with smaller and fewer capacitors, the low input voltage means it still takes a long time to charge the capacitors to 3.3V. The solution taken in this experiment was to charge the capacitors to 3.3V first and then use the TEG to stabilize the voltage. Second, because the generated voltage depends on the temperature difference, the hot side of the TEG (the side on the skin) gets colder as time goes on. Third, the efficiency of thermoelectrics is only about 10% of that of photovoltaics; the TEG is very inefficient considering the amount of energy it absorbs. REFERENCES [1] Linear Technology, "Ultralow Voltage Step-Up Converter and Power Manager", Mar 2019. [2] Texas Instruments, "Mixed Signal Microcontroller", MSP430G2x53, May 2013. [3] Estrada-López JJ, et al., "Multiple Input Energy Harvesting Systems for Autonomous IoT End-Nodes", 2018. [4] Maxim Integrated, "Programmable Resolution 1-Wire Digital Thermometer", 197487; Rev 6; Jun 2019. ACKNOWLEDGEMENTS Funding for this project was provided by the Swanson School of Engineering and the Office of the Provost. Special thanks to Dr. Hu and his PhD student, Yawen Wu, for providing the devices and for their help with design and testing.
| Condition | EHA-PA1AN1 | CP2,127,06 | OT20-66-F0-1211-11-W2.25 |
|---|---|---|---|
| Metal under sun (stable) | 120mV / 11mA | 180mV / 45mA | 35mV / 2mA |
| Human body, room temp (first touch) | 85mV / 8.8mA | 125mV / 38mA | 25mV / 100uA |
| Human body, room temp (stable) | 25mV / 3.2mA | 25mV / 5.1mA | 3mV / 70uA |
| 85°C metal with room temp | 330mV | 400mV | 80mV |
| Voltage under wind vs. no wind | 130% ↑ | 120% ↑ | Not significant |
| Size (mm) | 22 x 20 x 26.4 | 62 x 62 x 4.6 | 12.3 x 14.4 x 2.2 |
Figure 3: Three TEGs' efficiency under different circumstances
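As a rough sanity check on the choice of the EHA-PA1AN1, the stable on-body readings in Figure 3 can be converted to output power (P = V * I) and power per footprint area. The areas are computed from the listed dimensions; treating the first two listed dimensions as the footprint is an assumption:

```python
# Stable on-body readings (voltage, current) and footprint areas
# taken from the Figure 3 comparison above.
tegs = {
    "EHA-PA1AN1":               {"v": 25e-3, "i": 3.2e-3, "area_mm2": 22 * 20},
    "CP2,127,06":               {"v": 25e-3, "i": 5.1e-3, "area_mm2": 62 * 62},
    "OT20-66-F0-1211-11-W2.25": {"v": 3e-3,  "i": 70e-6,  "area_mm2": 12.3 * 14.4},
}
for name, t in tegs.items():
    t["uW"] = t["v"] * t["i"] * 1e6          # output power in microwatts
    t["uW_per_mm2"] = t["uW"] / t["area_mm2"]

best = max(tegs, key=lambda n: tegs[n]["uW_per_mm2"])
print(best)  # EHA-PA1AN1
```

The coin-sized EHA-PA1AN1 delivers the highest power per unit area on the body, which is consistent with it being selected for the experiment.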
ANALYZING RENEWABLE GENERATION FOR THE UNIVERSITY OF PITTSBURGH Eli Brock and Sabrina Nguyen Department of Electrical Engineering University of Pittsburgh, PA, USA Email: etb28@pitt.edu and san86@pitt.edu INTRODUCTION The University of Pittsburgh intends to transition to renewable and low-carbon energy. The current University goal is to reach 50% renewable energy and a 50% carbon reduction, based on a 2008 baseline, by 2030. As of 2018, the University was at approximately 11% renewable, with the intention of increasing to approximately 14% by FY19. One of the larger investments is in hydroelectricity, to be integrated by 1/1/23, which has the capability to decrease the University's CO2/kWh by 25%, but other factors could change that projection sooner. This project expands on data from existing energy reports as well as research collected by other students and faculty. It specifically uses data regarding the location and placement of solar panels on campus and further considers the established timeline for installing the panels as well as the economic versus sustainable payback of the installation. In addition, we explore net metering and whether it is a possibility for the University of Pittsburgh as a subset of our investigation into solar potential. Much of our approach is based on a literature review completed during this project to better understand the current distribution system, its distributed energy resources, and the new technologies that can help develop the future distribution system. METHODS To understand how to improve the University's carbon consumption, we first needed to understand the different technologies involved and the basics of the current distribution network. In addition, we wanted to investigate the different software that would be at our disposal for the University's development.
After a short literature review, it was easier to understand the basic functioning of the distribution network and look at what components
can be further developed. Three software packages (ArcGIS, HOMER, and GridLAB-D) also seemed advantageous for furthering our understanding of the distribution network and organizing the information given to us. Enrolling in Coursera's course on ArcGIS increased my understanding of the software's capabilities and of how to perform basic functions to manipulate the given data. The purpose of taking this course was to be able to use data given by the University to map out the current distribution patterns, such as loads, generation, lines, etc., to get a better understanding of the network and, in doing so, analyze the integration of renewable energy and model its effects. DATA PROCESSING One of the datasets used a software package called HelioScope, which simulates solar installation and provides peak power generation, yearly output, tilt angle, and rooftop wattage capacity. By putting this data into ArcMap, the output showed the locations of each building and made it possible to prioritize which buildings should receive solar panels first based on expected energy output. Unfortunately, while there were hopes of using data from the University this summer, there were difficulties obtaining the data. A new goal is to continue this research by developing our capabilities with the software and working with another dataset. Our analysis and processing method will depend on the data we receive. RESULTS The data collected through HelioScope was reanalyzed in ArcMap, which produced a map that could be compared with a given roof-replacement schedule for the University. This could be used to optimize the timing of said replacements. For the literature review, I researched the current distribution system and the potential to upgrade the existing system with new technologies and improved methods. With the
development of new software, there are discussions of how to plan where newer devices should go, how to route power lines, and how to better perform different distribution analyses. One of those analyses is spatial load forecasting, one of the challenges of distribution planning. This challenge arises because it opens the idea of having energy enter the system at the generation end as well as the load end. This shifts not only where generation occurs, but also how utilities need to estimate the amount of generation needed. Spatial load forecasting can be used on Pittsburgh's campus to identify the locations where generators are needed and to better understand where most of the load is needed versus critical loads. DISCUSSION Distribution systems are undergoing a generational transformation as demand for a cleaner, more efficient grid incentivizes the development of new technologies. Such a transformation derives part of its feasibility from parallel developments in the systems used to visualize and model it. The literature review analyzes structural changes to the distribution grid and the respective developments in visualization and modeling software that accompany them. These developments include the increasing prevalence of Geographic Information Systems (GIS) for spatially visualizing distribution systems and the transition to agent-based simulation software such as GridLAB-D to model them. These software tools are in turn used to analyze the on-campus distribution network and how the addition of solar panels will impact the campus' carbon emissions. In order to complete our analysis, some assumptions were made. One assumption that should be noted is that the type of panel used for analysis was a default panel chosen by the software; however, the cost and capacity of all panels are similar. Overall, if the panels chosen to be installed throughout campus cost around $2/watt, it would be a $20 million investment for the University.
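The $2/watt and $20 million figures imply roughly 10 MW of installed capacity, from which a simple payback can be sketched. The capacity factor and electricity price below are illustrative assumptions, not values from the University's data:

```python
# Rough payback sketch for the ~$20M, $2/W installation discussed above.
cost_per_watt = 2.0          # $/W (from the analysis above)
total_cost = 20e6            # $, implies ~10 MW of panels
capacity_w = total_cost / cost_per_watt

capacity_factor = 0.14       # ASSUMED for Pittsburgh-area rooftop solar
price_per_kwh = 0.07         # ASSUMED blended electricity price, $/kWh

annual_kwh = capacity_w * capacity_factor * 8760 / 1000
annual_savings = annual_kwh * price_per_kwh
payback_years = total_cost / annual_savings
print(f"{capacity_w / 1e6:.0f} MW, ~{payback_years:.0f}-year simple payback")
```

A multi-decade simple payback under these assumptions is consistent with the point made below: the return on such investments is not necessarily monetary or immediate.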
To complement the installation of solar panels, large-scale energy storage, another addition that can help decrease the campus' consumption of carbon-based energy, must also be implemented. Depending on the rate structure of Duquesne Light Co. and the service bill, energy storage can be used to shift demand during certain hours as an indirect method of peak shaving. By shaving peak load, the University could be placed in a different
demand category, thus reducing the overall service bill. Although this is not as effective as the typical peak-shaving method, it is a potential investment that needs further investigation. Smaller energy storage can be more effective in buildings that need backup energy in the event of a blackout. Current fail-safes are most likely carbon-based generators, and using energy storage based on renewable energy can eliminate them. Some locations most likely to benefit from this installation include, but are not limited to, Scaife Hall, Wesley Posvar Hall, and the Petersen Events Center. Although many of these decisions are not 100% economically based, it is important to understand that not all sustainable decisions can be economic. In order to continue on a renewable path and integrate newer technologies, upfront investment is needed, and the payback is not necessarily going to be monetary or immediate. REFERENCES [1] G.T. Heydt, "The Next Generation of Power Distribution Systems," IEEE Transactions on Smart Grid, vol. 1, no. 3, pp. 225-235, Dec. 2010. [2] R. F. Arritt, R. C. Dugan, "Distribution System Analysis and the Future Smart Grid," IEEE Transactions on Industry Applications, vol. 47, no. 6, pp. 2343-2350, Nov/Dec 2011. [3] N. Rezaee, et al., "Role of GIS in Power Distribution Systems," World Academy of Science, Engineering and Technology, vol. 60, pp. 902-906, 2009. [4] D. P. Chassin, J. C. Fuller, and N. Djilali, "GridLAB-D: An Agent-Based Simulation Framework for Smart Grids," Hindawi Publishing Corporation: Journal of Applied Mathematics, vol. 2014, pp. 1-12, 2014. [5] "Advanced Metering Infrastructure and Customer Systems: Results from the Smart Grid Investment Grant Program," U.S. Department of Energy: Office of Electricity Delivery and Energy Reliability, Sept. 2016.
https://www.energy.gov/sites/prod/files/2016/12/f34 /AMI%20Summary%20Report_09-26-16.pdf ACKNOWLEDGEMENTS Funding was provided through the SSOE Summer Research Internship with advisors Dr. Robert Kerestes and Dr. Katrina Kelly. We would also like to thank Dr. Aurora Sharrard for working with us and meeting to further discuss many of these points.
EPITAXIAL GROWTH OF WO3 ON LaAlO3 Matthew Reilly, Qingzhou Wan, and Feng Xiong Department of Computer and Electrical Engineering University of Pittsburgh, PA 15261, USA Email: mtr34@pitt.edu INTRODUCTION Artificial intelligence is at the forefront of newly developing technology. Its wide application in fields such as finance, engineering, and health care has been responsible for this explosion. The primary aim of this project is to develop a nonvolatile memory (NVM) device in which resistance switching from intercalation emulates synaptic weight in an artificial synapse. More specifically, the aim is to fabricate a three-terminal device in which the synaptic weight is represented by the conductivity of the channel between the drain and source electrodes.
The thin-film quality post-deposition was verified via x-ray diffraction (XRD), Raman spectroscopy (RS), and atomic force microscopy (AFM).
Tungsten oxide (WO3) was examined and determined to be an appropriate channel and gate material for the device. The absence of A-site cations provides sufficient interstitial space for ion intercalation [1]. WO3 which is typically an insulator can become metallic via intercalation. This unique characteristic is what allows for the opportunity to emulate the human synapse by varying conductivity [1].
DATA PROCESSING XRD, Raman spectroscopy, and AFM all produced data used to characterize the film. Gaussian deconvolution in the OriginLab data analysis software was applied to the XRD data to separate the main peak into distinct LAO and WO3 peaks. Peak locations, intensities, and full width at half maximum (FWHM) values were then obtained from these deconvoluted peaks. Raman spectroscopy and AFM data were processed internally by the instruments. The characterization data was then compared to published results to assess film quality [2].
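The FWHM that OriginLab reports from a fitted Gaussian peak follows the standard relation FWHM = 2*sqrt(2 ln 2)*sigma. A small sketch checks that reading the half-maximum width off sampled peak data recovers the same value; the synthetic peak here is generated from the 0.03697 degree FWHM reported in the results and is not the actual XRD data:

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fwhm_numeric(x, y):
    """Estimate FWHM directly from sampled peak data by finding the
    outermost samples at or above half of the maximum."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return x[above[-1]] - x[above[0]]

# sigma corresponding to the reported FWHM via FWHM = 2*sqrt(2 ln 2)*sigma
sigma = 0.03697 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
x = np.linspace(-0.2, 0.2, 20001)
y = gaussian(x, 1.0, 0.0, sigma)
print(fwhm_numeric(x, y))  # ~0.037 degrees
```

For overlapping LAO and WO3 peaks this direct width measurement would not work, which is why the deconvolution step above is needed before per-peak FWHMs can be extracted.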
METHODS High quality WO3 films were prepared on LaAlO3 (LAO) substrates using radio frequency (RF) magnetron sputtering technique. LAO was used as the substrate because its lattice mismatch is fairly low with WO3. Pre-deposition, the substrate, substrate holder, and substrate clips were all cleaned in an ultrasonic bath. Two five-minute baths in acetone and isopropyl alcohol (IPA) were used for this, respectively. A shadow mask was designed for the RF process. Using this mask, growth of WO3 in specific gate and channel dimensions on LAO were achievable. The RF recipe parameters were altered until high quality WO3 film was grown. This high-quality film is essential for good device functionality.
After film characterization, electron beam evaporation (EBM) was used to deposit metal electrodes onto the device. Another shadow mask was used for this so the electrodes would be aligned appropriately with the gate and channel. 5nm of titanium was first evaporated to act as a binding layer to the substrate. An additional 80nm of gold was evaporated on the titanium to act as the electrodes.
RESULTS Overall, good film quality was obtained, and the conductivity between drain and source does vary with intercalation. It took six different depositions to obtain the right recipe. See Figure 1 below, where Raman spectroscopy was first used to verify that WO3 was on the substrate by the additional peak at 780 cm-1 in the red graph. Figure 2 below shows the deconvoluted WO3 (green) and LAO (red) peaks from the XRD data. Good crystallinity is reported to have <0.1° FWHM [1]. OriginLab Gaussian deconvolution shows the WO3 peak having 0.03697° FWHM.
Atomically flat films must have a surface roughness of <1nm [2]. See Figure 3 below, which verifies the surface roughness of the WO3 thin film to be 0.337nm using AFM. These results show that crystal matching between substrate and film is good and that the film is atomically flat (<1nm) [2]. DISCUSSION Further electrical testing needs to be done on devices of varying channel length. It is not known whether WO3 is the best film for the application, but thus far it has produced results that exhibit characteristics of an artificial synapse.
Figure 1: Raman spectroscopy of LAO substrate (black) and WO3 on LAO substrate (red).
Slight error is to be expected when working at the nanoscale with shadow masks, in areas such as alignment and gate/channel dimensions; however, other methods, such as photolithography, are less time- and cost-efficient. Device benchmarking will be done with the NeuroSim software to test the learning accuracy of the device. This software takes device characteristics and benchmarks them using the MNIST database. REFERENCES 1. Leng, X., et al. (2015). "Epitaxial growth of high quality WO3 thin films." APL Materials 3(9): 096102. 2. Yang, J.-T., et al. (2018). "Artificial Synapses Emulated by an Electrolyte-Gated Tungsten-Oxide Transistor." Advanced Materials 30(34): 1801548.
Figure 2: XRD deconvolution of 28nm thick film on LAO. WO3 green peak FWHM 0.03697°.
ACKNOWLEDGEMENTS Funding was partly provided by the Swanson School of Engineering at the University of Pittsburgh. Figure 3: Surface height (1 x 1 um) AFM image of 28nm thick WO3 film on LAO.
TRADE STUDY OF SHREC CENTER NEXT GENERATION SPACE CO-PROCESSOR Brendan Schuster, Noah Perryman, Eric Shea, and Dr. Alan George NSF Center for Space, High Performance, and Resilient Computing (SHREC), Department of Electrical Engineering University of Pittsburgh, PA, USA Email: bjs111@pitt.edu INTRODUCTION Space computing presents many challenges due to the unique environmental conditions faced, such as the absence of atmosphere, extreme temperature fluctuations, and radiation exposure. The purpose of this research is to perform a review and trade study of one of the next-generation space processors being developed by the University of Pittsburgh's NSF SHREC center. This is completed through a review of each sub-circuit on the printed circuit board (PCB). There has recently been a push in the field of high-performance computing toward efficiently handling compute-intensive processes, such as image processing. CPU-based processor systems with additional hardware accelerators as coprocessors are showing great promise as an avenue to increase performance-per-watt as applications constantly need more computing power. Field-Programmable Gate Arrays (FPGAs), the central component of many coprocessors including the SSP+, show great promise working in conjunction with a CPU to achieve high computational power and efficiency, which is paramount in space supercomputing [1]. FPGAs can also handle a wider variety of tasks by being reprogrammed into different hardware configurations, which is highly desirable for space missions that only get sent to space once. METHODS Before the trade study of the SSP+ began, most of the architecture and component selection had been completed in Altium Designer, a circuit design tool. No major integrated circuits (ICs) were replaced throughout the study (no major changes to the architecture). Each sub-circuit of the SSP+ was reviewed to ensure correct component selection.
The review included verifying the pinout of major components in the schematic (FPGA and DDR3 memory), routing between parts, and adding/changing passive components (resistors,
capacitors, etc.). The second element of each sub-circuit review included verifying the correct values of the resistors, capacitors, and inductors already laid out. Proper review of each circuit required a fundamental understanding of the electrical characteristics of each component and the overall functionality of the sub-circuits. Datasheet recommendations and FPGA evaluation board schematics provided example layouts of selected ICs, while LTSPICE and the Renesas iSim4.1 Power Management tool verified hand calculations. Figure 1 shows the top-level architecture of the SSP+ with each of the sub-circuits in a separate block.
Figure 1: SSP+ Top Level Architecture being connected in parallel with another SSP+ and/or SSP/CSP.
Once the review of the architecture was completed, an analysis of the power consumption of the system was performed, starting with the FPGA. The original resource-utilization parameters were developed by a graduate student for another FPGA used on the STP-H7 SSIVP (Spacecraft Supercomputing for Image and Video Processing) mission, the most recent SHREC project flying in space. The parameters were updated to better match the new coprocessor FPGA and its resources, while keeping the environmental constraints the same (70 °C with no airflow). The Xilinx Power Estimator (XPE) tool was used for this estimation.
Table 1: Resource Utilization of the SSP+ FPGA
Once the power consumption of the FPGA was estimated, each FPGA regulator's power consumption was estimated using LTSPICE and Renesas iSim4. Since the regulator models did not include temperature data, the calculated regulator values were multiplied by the ratio of the estimated FPGA power consumption at 70 °C with no airflow to that at room temperature:

P70 = P40 × (XPE70 / XPE40) (Eq. 1)

where P70 is the estimated power of a regulator at 70 °C, P40 is the room-temperature SPICE simulation, and XPE70 and XPE40 represent the XPE output power at 70 °C with no airflow and at room temperature, respectively.
RESULTS At 70 °C with no airflow the power consumption is estimated at 6.0 W (XPE70), and at room-temperature conditions the power consumption is 5.4 W (XPE40). Figure 2 shows two graphs from the XPE70 simulation; the first shows how FPGA power varies with temperature, while the second shows the static currents that can be used to estimate regulator power consumption.
Figure 2: FPGA Power vs. Temperature (Top), Static Current for each voltage rail (Bottom)
The power consumption of all the FPGA regulators is shown in Figure 3. In the original design, Linear Dropout (LDO) voltage regulators powered the VCCINT, VCCAUX, and VCCO voltage rails, while switching buck regulators powered the transceiver voltage rails MGTVCC, MGTVTT, and MGTVCCAUX.
Figure 3: Power consumption of the voltage regulators for each FPGA voltage rail.
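The temperature scaling of Eq. 1 reduces to a one-line calculation. A minimal sketch in Python, using the XPE totals quoted in the Results (the 1.0 W regulator value is hypothetical):

```python
def regulator_power_70c(p_room, xpe_70, xpe_room):
    """Eq. 1: scale a room-temperature SPICE power estimate to the
    70 degC no-airflow condition using the ratio of XPE outputs."""
    return p_room * (xpe_70 / xpe_room)

# XPE totals from the Results: 6.0 W at 70 degC, 5.4 W at room temperature,
# applied to a hypothetical 1.0 W regulator simulation
p70 = regulator_power_70c(1.0, 6.0, 5.4)   # scales up by about 11%
```

Because the scale factor is just XPE70/XPE40, every regulator estimate is adjusted by the same ratio regardless of its absolute power.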
DISCUSSION Various minor errors were found throughout the architecture review which would have only slightly hindered performance. After the power analysis, it was determined that the types of regulators powering each rail should be swapped. A switching buck regulator is better suited for high-current, high-efficiency needs, while LDO regulators are better suited for low-ripple outputs such as the transceiver rails on the FPGA, which should see less than 10 mV peak-to-peak fluctuation [2]. Figure 3 shows the reduced power consumption once the two types of regulators were swapped. The power consumption on the MGT voltage rails was not simulated with the switching regulators. Swapping the regulator types between the MGT and core FPGA supply lines saves about 15 W of power. This may not seem like much, but the previous SHREC mission on the space station (STP-H6 SSIVP) typically consumed about 14.75 W at load, so in high-performance space computing this is a significant amount [3]. REFERENCES 1. Gali, Ravindra. PCB Design Considerations for FPGA Accelerator Cards. 1, 2017. 2. UltraScale Architecture GTH Transceivers User Guide. 327-329, 2018. 3. Sebastian Sabogal, Patrick Gauvin, Brad Shea, Antony Gillette, Christopher Wilson, Ansel Barchowsky, Alan D. George. SSIVP: Spacecraft Supercomputing Experiment for STP-H6. 10, 2017. ACKNOWLEDGEMENTS Funding was provided by a grant from the University of Pittsburgh SSoE.
CONVOLUTIONAL NEURAL NETWORKS FOR DEVICE COMPACT MODELING Gouri Vinod Engineering Department of the National University of Singapore National University of Singapore, Singapore Email: gov1@pitt.edu INTRODUCTION As rapidly developing nanotechnology drives demand for smaller MOSFETs, and thus smaller silicon components for those devices, even the most space-efficient MOSFETs are reaching a limit to how small they can be made. This limit stems from the logistics of acquiring effective but minuscule materials and of manufacturing them. The shortcomings of MOSFETs have made alternative devices that perform the same tasks increasingly popular [1]. These alternatives include carbon nanotube (CNT) devices, high-k dielectrics, and multi-gate devices. Although high-k dielectrics and multi-gate devices are both effective at reducing size, they do so only by a small fraction compared to devices fabricated with CNTs. CNTs are cylindrical sheets with multiple layers of carbon atoms and have been shown to have semiconducting properties [2]. Single-layer carbon nanotubes have the most active electrical properties and have been shown to be promising for reducing the slack of other nonelectronic materials when used either as active elements or as interconnects. As a result, the popularity of carbon nanotube transistors has skyrocketed. These devices are more effective due to their significant carrier mobility compared with other materials, and when configured correctly, CNTs exhibit these beneficial properties every time they are utilized. Because CNTs cannot be configured for mass circuit simulations, as their computational accuracy deteriorates [3], Convolutional Neural Networks (CNNs) can instead be used to estimate a nonlinear function, with the capability to be trained on databases of simulation results.
Another benefit of CNNs is that they can be reused after being trained once: a modeling problem can be solved multiple times, with variations, by training the CNN once. METHODS
Figure 1: Basic structure of a four-layer CNN
This paper utilized a four-layer convolutional neural network to analyze data generated in MATLAB. The network had three inputs and one output, with two hidden layers in between, as seen in Figure 1. Every neural network consists of neurons and the connections among them. Each connection and neuron has a weight coefficient associated with it, which is used to generate the value of the output neuron using the following equation, where w is the connection weight, l is the input carried through the link, b is the neuron bias, and A is the neuron activation function:

Output = A(Σ(w × l) + b)

The three inputs represent the carbon nanotube's drain-source voltage, gate-source voltage, and gate length: Vds, Vgs, and Lg, respectively. The activation function used for the hidden layers was the tangent sigmoid, due to its nonlinear response for both positive and negative neuron values [4]. A linear output function, as supported by the literature, was used primarily to decrease the network's computational time. The output represents the drain-source current (Ids) of the CNT and was later used to generate the Ids-Vds and Ids-Vgs curves shown in Figures 2 and 3. The maximum number of epochs for the network was set to 100 for maximum efficiency, and the error was calculated using mean square error (MSE) with the minimum allowable error set to 10^-13 [5].
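The neuron equation above can be sketched as a forward pass. This is an illustrative Python sketch, not the trained MATLAB network; the weights and input values are hypothetical:

```python
import math

def neuron(inputs, weights, bias, activation):
    # Output = A(sum(w * l) + b), the neuron equation above
    return activation(sum(w * l for w, l in zip(inputs, weights)) + bias)

def tansig(x):
    # Tangent-sigmoid activation used for the hidden layers
    return math.tanh(x)

def linear(x):
    # Linear activation used at the output layer
    return x

# Hypothetical weights: 3 inputs (Vds, Vgs, Lg) -> 2 hidden neurons -> 1 output (Ids)
x = [0.10, 0.12, 32e-9]                       # illustrative Vds, Vgs, Lg values
hidden = [neuron(x, [1.0, 2.0, 0.0], 0.1, tansig),
          neuron(x, [-0.5, 1.5, 0.0], -0.2, tansig)]
ids = neuron(hidden, [0.8, 0.3], 0.0, linear)  # estimated drain-source current
```

During training, the weights and biases would be adjusted (e.g., by Levenberg-Marquardt, per reference [5]) to minimize the MSE against the simulated Ids data.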
DATA PROCESSING AND DISCUSSION To train the network and obtain the desired Ids-Vds and Ids-Vgs curves, both Vds and Vgs were swept from 0 to 0.18 V with a step size of 0.02 V. The Ids-Vds and Ids-Vgs curves were obtained using MATLAB; the code used to obtain the Ids-Vgs curve is shown in Figure 4.
RESULTS
This was then done multiple times for various Cox and Vth values as the network was rerun. The curves obtained were as expected and similar to previous literature.
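The authors' CNT current model is not given here; as an illustration of how such parametric Ids curves can be generated in code for different Cox and Vth values, a simple square-law sketch (hypothetical, not the model used in this work) is:

```python
def ids_square_law(vgs, vds, cox, vth, k=1.0):
    """Hypothetical long-channel transistor model for generating Ids
    curves; illustrative only, not the authors' CNT model."""
    vov = vgs - vth
    if vov <= 0:
        return 0.0                         # cutoff: no channel
    if vds < vov:                          # triode region
        return k * cox * (vov * vds - vds ** 2 / 2)
    return 0.5 * k * cox * vov ** 2        # saturation

# An Ids-Vds sweep in 0.02 V steps with Cox = 3 and Vth = 0.7,
# matching the parameter values labeled in Figures 2 and 3
curve = [ids_square_law(vgs=1.0, vds=0.02 * i, cox=3, vth=0.7) for i in range(10)]
```

A training database is built by repeating such sweeps over a grid of (Vgs, Vds, Lg) points, which is what the network then learns to interpolate.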
Figure 2: Ids-Vds with Cox = 3 and Vth = 0.7
Figure 3: Ids-Vgs with Cox = 3 and Vth = 0.7
REFERENCES 1. D. J. Frank, R. H. Dennard, E. Nowak, P. M. Solomon, Y. Taur and Hon-Sum Philip Wong, "Device scaling limits of Si MOSFETs and their application dependencies," Proceedings of the IEEE, vol. 89, no. 3, pp. 259-288, March 2001. 2. P. Avouris, J. Appenzeller, R. Martel and S. J. Wind, "Carbon nanotube electronics," Proceedings of the IEEE, vol. 91, no. 11, pp. 1772-1784, Nov. 2003. 3. Jing Guo, S. Datta and M. Lundstrom, "A numerical study of scaling issues for Schottky-barrier carbon nanotube transistors," IEEE Transactions on Electron Devices, vol. 51, no. 2, pp. 172-177, Feb. 2004. 4. Yonaba, H., Anctil, F., & Fortin, V. (2010). Comparing Sigmoid Transfer Functions for Neural Network Multistep Ahead Streamflow Forecasting. Journal of Hydrologic Engineering, 15. 10.1061/(ASCE)HE.1943-5584.0000188. 5. Shi Jian Zhao and Yong Mao Xu, "Levenberg-Marquardt algorithm for nonlinear principal component analysis neural network through inputs training," Fifth World Congress on Intelligent Control and Automation (IEEE Cat. No. 04EX788), Hangzhou, China, 2004, pp. 3278-3281, Vol. 4. ACKNOWLEDGEMENT All work was done at the National University of Singapore over an 8-week period under the counsel of Dr. Kelvin Fong of its Department of Electrical and Computer Engineering, as part of the 2019 SERIUS research scheme for American and Canadian undergraduate students.
Figure 4: Code to obtain the Ids-Vgs curve
ELECTROCARDIOGRAM SYSTEM ON THE SMARTPHONE Chen Wang, Tianyi Li and Kabi Balasubramni Dr. Liang Zhan, Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: chw133@pitt.edu INTRODUCTION According to the Centers for Disease Control and Prevention, about 610,000 people die of heart disease in the United States every year; that is 1 in every 4 deaths. How to monitor and keep track of the heart to prevent people from dying of heart disease has been studied by researchers. The purpose of this study is to develop a portable device that can measure the human electrocardiogram (ECG) and process the data immediately. At the same time, the device sends out a warning signal if the user's heartbeat is abnormal.
Figure 1: ECG wave
METHODS The study consisted of three young, healthy adults (3F; mean age 20 ± 3 years) with no history of heart disease. Participants' physical health was not affected, since the experiment only sent microcurrents through their bodies. The experiment used an Arduino UNO board (Arduino, LLC) connected to an electrocardiogram chip (SparkFun AD8232) and a Bluetooth module (Kedsum HC-06). The electrocardiogram chip has three sensor pads that connect to the participant's body. To get a better result, we placed the three pads as close to each other as possible: on the chest near the arms and on the lower right abdomen. Details of the connection: for the ECG chip, we connected it to the sensor cable with electrode pads, attached the biomedical sensor pads, and then connected the power to 3.3 V. We used 3.3 V instead of 5 V to keep the chip from getting burned; we then connected the output to A0, LO+ to pin 10, and LO- to pin 11. By coding the Arduino board, these pins send and receive the voltage. For the Bluetooth module, we connected the power to 5 V and connected the RX and TX pins to the board; this way, once the board runs the Serial print command, the Bluetooth module also sends the values out for the smartphone to receive.
Figure 2: Hardware connection
Phone application UI design: the initial page (Figure 3, phone screen) has 6 buttons. By clicking the first button, the application searches for the Bluetooth signal and starts printing the ECG on the bottom half of the page; by clicking the rec button, the application stores the values in local storage; by clicking the restart button, the application clears all the data and re-receives it; by clicking the filters button, the application applies different types of filters to the ECG signal; and by clicking the open button, the user can open the ECG stored on the phone and examine the ECG graph there (Figures 4 and 5). The initial page also shows the user's current heartbeat per minute. After setting up the smartphone application and writing the code for the Arduino UNO board, we ran a few tests to view the ECG waveform on a computer screen; the waveforms had the shape we expected. Next we started testing with the application. We recorded each participant's electrocardiogram for three minutes, sent all the data through the Bluetooth module to the phone application, and stored each participant's data in local storage for later examination. A total of three participants' data was collected.
Figure 3: Collecting data
DATA PROCESSING Since the solder joints on the electrocardiogram board were not good enough, the received data contained noise. To solve this problem, we chose to do noise cancellation in the phone application. The algorithm in the application can remove noise with a 35 Hz or 40 Hz low-pass filter, a 0.15 Hz, 0.25 Hz, or 0.5 Hz high-pass filter, and 50 Hz/60 Hz rejection. After the application receives the data, the user can open the text file where the data is stored and then use the filters to clear out the extra noise. Once the electrocardiogram is cleaned, the user can start to process the data or check their heart condition with it. The application also calculates the user's current heartbeat per minute from the real-time sensor readings.
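A minimal sketch of the kind of filtering described, using a single-pole IIR low-pass with a high-pass built as its complement. This is a crude stand-in (the app's actual filter algorithm is not specified), shown on a synthetic signal with a DC baseline and 60 Hz interference:

```python
import math

def lowpass(signal, fc, fs):
    """Single-pole IIR low-pass with cutoff fc (Hz) at sample rate fs (Hz)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)    # exponential smoothing step
        out.append(y)
    return out

def highpass(signal, fc, fs):
    """High-pass as input minus its low-passed baseline (removes drift/offset)."""
    return [x - l for x, l in zip(signal, lowpass(signal, fc, fs))]

# Synthetic test: 1.0 V baseline offset plus 60 Hz interference at fs = 250 Hz
fs = 250
raw = [1.0 + 0.5 * math.sin(2 * math.pi * 60 * n / fs) for n in range(1000)]

# 35 Hz low-pass to attenuate the interference, then 0.5 Hz high-pass
# to strip the baseline, mirroring the cutoffs listed above
cleaned = highpass(lowpass(raw, 35, fs), 0.5, fs)
```

A real notch filter would suppress the 50/60 Hz component far more sharply than this single-pole stage; the sketch only shows the cascade structure.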
Figure 4: Original data without filter
RESULTS Overall, the electrocardiogram system works well. The hardware receives the correct output, verified by checking the data printed on the computer screen, and the software receives the correct values and analyzes the data as expected. The initial data received from the participants, with noise, looks like Figure 4; we picked the data of one participant, from 3.41 s to 7.21 s. After applying the 60 Hz plus 25 Hz low-pass filter, the electrocardiogram looks much clearer than before, and the noise has been successfully cancelled (Figure 5).
Figure 5: Result data/ECG graph with filter
DISCUSSION During the experiment, we found some deviation between the application readings and counting by hand. One possible reason is an unstable hardware connection: the LED on the AD8232 chip should remain on once it powers up, but during testing the LED sometimes went off, or blinked instead of remaining on. A second main reason is that the digital filter may remove some points on the QRS complex, which produces small deviations in the results. Since we are working with people's hearts, all results should be exactly correct; otherwise, in real use it could cause unwanted problems. The next step is making our own PCB that includes the Arduino microprocessor, the AD8232 chip, and the HC-06 chip, which should help give better results. We also want to make the whole system wireless, including the sensor pads attached to the human body.
REFERENCES 1. AD8232 Heart Rate Monitor Hookup Guide, learn.sparkfun.com. 2. "Digital Filter." Wikipedia, Wikimedia Foundation, 26 July 2019, en.wikipedia.org/wiki/Digital_filter. 3. Becker, Daniel E. "Fundamentals of Electrocardiography Interpretation." Anesthesia Progress, American Dental Society of Anesthesiology, 2006. 4. https://www.cdc.gov/heartdisease/facts.htm
ACKNOWLEDGEMENTS Thank you to the Department of Electrical and Computer Engineering for giving me the opportunity to do this research, and thank you to Dr. Liang Zhan for your incredible mentorship and your time.
FEASIBILITY STUDY OF KINETIC, THERMOELECTRIC AND RF ENERGY HARVESTING POWERED SENSOR SYSTEM Hongye Xu, Jiangyin Huang and Keting Zhao Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: Hox10@pitt.edu 1. INTRODUCTION As the lifetime limitation of batteries has become a shortcoming for IoT devices, the need for self-powered IoT devices is growing stronger. This research is aimed at studying the feasibility of integrating kinetic, thermoelectric, and RF energy harvesting to power a sensor system for wearable IoT devices. This abstract focuses on kinetic energy harvesting, the integration of the three types of energy harvesting, and the MCU design. 2.1 PIEZOELECTRIC EH The piezoelectric energy harvesting subsystem is composed of three parts: a piezoelectric energy harvester, a Zener-diode voltage regulator, and a piezoelectric energy-harvester breakout board. The piezoelectric EH module used in this research is the PPA-1011 manufactured by Mide. The PPA-1011 outputs AC with a 23.2 V maximum output voltage. A Zener-diode voltage regulator is required to limit the output of the PPA-1011 to 16 V maximum. A piezoelectric EH breakout board, SparkFun BOB-09946, is used to produce a stable 3.3 V DC from the PPA-1011. Under ideal conditions, this system produces 200 μA DC at 3.3 V. 2.2 ELECTROMAGNETIC EH The electromagnetic energy harvesting subsystem is composed of two parts: an electromagnetic energy harvester (EMEH) and a rectifying circuit. The design for the EMEH is inspired by the article "Electromagnetic energy harvester with repulsively stacked multilayer magnets for low frequency vibrations" [1] and takes into consideration Faraday's law of electromagnetic induction. The EMEH has three parts: a steel rod; a core composed of a neodymium magnet ring, steel washer, and spring; and a plastic case. A rectifying circuit is connected at the end. Two sets of output data can be presented: under 1.5 Hz frequency and 5 cm amplitude, the circuit's output is 1.8 mA and 4.6 V; under 3.1 Hz frequency and 5 cm amplitude, the circuit's output is 2.3 mA and 6.1 V.
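The two reported operating points correspond to DC output powers of P = V × I; as simple arithmetic on the figures above:

```python
def dc_power_mw(v_volts, i_milliamps):
    """DC output power in milliwatts, from volts and milliamps."""
    return v_volts * i_milliamps

# Operating points reported for the EMEH rectifier output
p_1p5hz = dc_power_mw(4.6, 1.8)   # 1.5 Hz, 5 cm amplitude: about 8.3 mW
p_3p1hz = dc_power_mw(6.1, 2.3)   # 3.1 Hz, 5 cm amplitude: about 14.0 mW
```

Both figures are well above the roughly 0.6 mW the MCU draws in active mode (3.3 V at 200 μA scale), which is why the EMEH can charge the supercapacitor quickly.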
Figure 1.1: EMEH. Figure 1.2: EMEH & multi-input EHs
2.3 MULTI-INPUT EHs DESIGN A multi-input energy harvester design is used in this research to allow all three types of energy harvesters to coexist and power the system under different circumstances. Figure 2 shows the power ORing architecture used in the design [2]. There are multiple branches for the different energy harvesters; each branch contains an energy harvesting circuit, a capacitor, and a diode to prevent back-charging. The branches are connected in parallel to charge the supercapacitor. When fully charged, the supercapacitor provides energy for the MCU unit through a DC-DC boost converter. Figure 3 shows the schematic of the circuit built based on the power ORing architecture. When a capacitor within a branch is charged to the highest voltage, it begins to charge the supercapacitor. The supercapacitor has a capacitance of 0.22 F and a maximum voltage of 3.5 V, and is used to power the Texas Instruments MSP430G2553 MCU board. A magnetometer sensor (MAG3110), a radio communication module (A1101R9), and a UART-to-USB converter (FT232) are connected to the MCU board for collecting and transmitting data.
Figure 2. Power ORing Architecture
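The ORing behavior described above can be sketched as follows. This is a simplified model assuming ideal diodes with a fixed 0.7 V drop (the cut-off voltage mentioned for the diodes in this design); the branch voltages are hypothetical instantaneous values:

```python
def oring_node_voltage(branch_voltages, diode_drop=0.7):
    """Power ORing: only the highest-voltage branch forward-biases its
    diode and charges the shared supercapacitor node; the other diodes
    are reverse-biased, which prevents back-charging between harvesters."""
    return max(v - diode_drop for v in branch_voltages)

# Piezo, EMEH, and RF branches at illustrative instantaneous voltages:
# the 4.6 V EMEH branch wins and charges the supercapacitor
v_supercap_in = oring_node_voltage([3.3, 4.6, 2.1])
```

The diode drop also caps the supercapacitor voltage below each harvester's open-circuit output, which helps keep it within its 3.5 V rating.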
Figure 3. Schematic of System Design
2.4 DUTY CYCLE The goal for powering the MCU is that when the supercapacitor is fully charged, the MCU wakes up and begins to collect and transmit data. Once the energy in the supercapacitor drops to a certain threshold, the MCU goes into low-power mode to stand by and wait for the supercapacitor to be fully charged again. To achieve this goal, a software duty cycle was implemented. The code used in the research is modified from source code for course CSE466 at the University of Washington [3] and from Texas Instruments E2E support [4]. An I/O pin on the MCU, P1.4, is attached to a voltage divider connected to the supercapacitor. An if statement placed at the end of the main loop contains the code for setting up the interrupt and putting the MCU into low-power mode. When P1.4 reads a digital 0, the code inside the if statement is executed: an interrupt is set up listening to P1.4, and the MCU enters Low Power Mode 0 (LPM0). The system stands by under low energy consumption until the supercapacitor is fully recharged. After charging completes, P1.4 reads a digital 1 and triggers the interrupt; the code responsible for exiting LPM0 is placed inside the interrupt handler. The MCU then continues to collect and transmit data until the supercapacitor reaches its low threshold. To make P1.4's reading respond to the state of the supercapacitor, a voltage divider is implemented. Tests indicate that 2.4 V at the pin is expected when the supercapacitor is fully charged, for P1.4 to change from 0 to 1. Resistors with relatively large resistance minimize the power loss of the voltage divider; thus, a 0.9 kΩ and a 2.4 kΩ resistor are connected to the supercapacitor as the voltage divider, as shown in Figure 3. 3. RESULTS As a result, both the piezoelectric EH and the EMEH can supply enough energy for the MCU within a short amount of time. Once the supercapacitor is fully charged, it can supply the MCU in active mode for 90 seconds and then allow the MCU to stay in low-power mode for another 400 seconds.
4. DISCUSSION A few issues with the design still need future improvement. The current EMEH design is relatively big for a wearable device, and it generates much more energy than the system actually needs; a much smaller EMEH could still give promising results. The current duty cycle is flawed for two reasons. First, the digital read on the MCU I/O pin is inconsistent while the supercapacitor is charging and discharging, which makes it vague when the MCU is supposed to enter low-power mode. Second, the power loss in the voltage divider is unnecessary. To solve these issues, a software ADC interrupt or a hardware duty cycle could replace the current duty-cycle design. A larger supercapacitor or a battery will be needed in the future when adding a much more energy-consuming radio communication module. All in all, the results show that it is feasible to integrate kinetic, thermoelectric, and RF energy harvesting to power a low-energy-consumption sensor system. REFERENCES 1. Soon-Duck K., "Electromagnetic energy harvester with repulsively stacked multilayer magnets for low frequency vibrations," March 2013. 2. Estrada-López JJ, Multiple Input Energy Harvesting Systems for Autonomous IoT End-Nodes, 2018. 3. Hemingway B. (2015, Fall), Lecture 4: MSP430 Interrupt. 4. Texas Instruments, "MSP430FR69989 Watchdog Example," August 2015. ACKNOWLEDGEMENTS Funding for this research is provided by the Swanson School of Engineering and the Office of the Provost. A special thanks to Dr. Hu and his PhD student, Yawen Wu, for supervising the team and providing research devices.
FEASIBILITY STUDY OF KINETIC, THERMOELECTRIC AND RF ENERGY HARVESTING POWERED SENSOR SYSTEM Keting Zhao, Jiangyin Huang and Hongye Xu Department of Electrical and Computer Engineering University of Pittsburgh, PA, USA Email: kez19@pitt.edu 1. INTRODUCTION Batteryless, wireless energy harvesting systems are critical to the internet of things (IoT) vision as well as to the sustainability of long-lived, untethered systems. Such a system can be divided into three levels: energy harvesting subsystems (EH), an embedded microcontroller unit (MCU), and peripherals (sensors, radios, etc.) [1]. This research provides a feasibility study of integrating all of the levels mentioned above. Three different types of EHs are used in this research. This abstract focuses on the radio frequency energy harvester (RFEH) and on designing the whole system to be compatible with the other EHs and able to handle frequent power shortages during operation. 2. METHODS 2.1 Equipment Setup for RFEH Radio frequency energy harvesting is a technique that harvests energy from the electromagnetic field in the air and converts it into voltage and current. To do this, an RFEH is made of an antenna, a rectifying circuit, and a matching circuit that gives a DC output. The RFEH modules used in this research come from PowerCast. The TX91501 transmitter broadcasts on the unlicensed 915 MHz ISM radio band and can deliver power and data up to 40-50 ft. The P1110B receiver can rectify a 915 MHz center frequency at a maximum of 23 dBm input power, with a maximum output of 4.3 V/100 mA. The beam pattern of the transmitter is 60x60 ft, and a directional patch antenna with a 122x68 ft energy pattern is used to capture power from the transmitter.
Figure 1. Schematic of System Design
2.2 Multi-Input EHs Design The system design is based on the power ORing architecture, a modular design that allows multiple EH sources to be connected in parallel through diodes [2]. A supercapacitor with a capacitance of 0.22 F and a maximum voltage of 3.5 V smooths the raw output voltage from the EHs. The modified system schematic, based on this architecture with the testing results, is seen in Figure 1. The diode connected to the RFEH has a cut-off voltage of 0.7 V. The MCU used in this research is the MSP430G2553. A magnetometer sensor (MAG3110), a radio communication module, and a UART-to-USB converter are connected to the MCU. The idea is to put the MCU in stasis while the EHs are charging the supercapacitor. Once the supercapacitor is charged up to the operating voltage, 3.3 V, the MCU switches to its active mode, reading and transmitting data repeatedly until the supply power is below threshold, causing a reversion to stasis. 2.3 Software Duty Cycle Control An interruptible computation is integrated to complete the duty cycle control, based on material from the University of Washington [3]. In the MSP430 architecture, there are several types of interrupts: timer, port, and ADC interrupts. The MSP430G2553 has four different low-power modes (LPM), but only LPM0 is used. During LPM0, the CPU and the MCLK are disabled, while the ACLK and SMCLK remain active. Additionally, the MSP430 uses a watchdog timer to reset itself after a certain amount of time in order to avoid a counting overflow; the watchdog is put on hold to avoid an unexpected reset during an interrupt. The interrupt service routine (ISR) saves the state and redirects the stack pointer to the interrupt functionality. After an interrupt, the ISR continues sensing the input from pin P1.4. When the input voltage is under its minimum operating requirement, P1.4 reads as digital 0, and vice versa. On a digital 1, the MCU exits LPM0 and resumes the main program's next state. The clock will resume, the
MCU will execute the main program and read sensors repeatedly until the next interrupt. 3. RESULTS AND RE-DESIGN 3.1 RFEH Modified with DC-DC Booster The measured output voltage and current at different distances between the RF transmitter and RF receiver are shown in Figure 2. The results show that the maximum output power of the RF energy harvester is 1.037 mW, and that it decreases rapidly as the distance from the source increases. The output power is only 0.04 mW on average at 150 cm, while the minimum requirement for the MCU to execute the program is 3.3 V and 190 μA. This clearly does not meet the minimum MCU supply.
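A quick check of the numbers above shows the size of the shortfall (simple arithmetic on the figures quoted in the text):

```python
# Minimum MCU supply quoted above: 3.3 V at 190 uA
p_mcu_min_mw = 3.3 * 0.190          # about 0.63 mW

# Average raw RFEH output at 150 cm, from Figure 2
p_rfeh_150cm_mw = 0.04

# The raw harvester delivers far less than the MCU needs at this distance,
# motivating the DC-DC booster re-design below
shortfall_mw = p_mcu_min_mw - p_rfeh_150cm_mw
```

The raw output covers only about 6% of the MCU's minimum budget at 150 cm, which is why energy must be accumulated in the supercapacitor and boosted rather than supplied directly.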
Figure 2. Raw Output Current, Voltage and Power vs Distance
Figure 3. Output Current vs Distance with DC-DC Booster
The DC-DC booster module BQ25570 is connected to the RFEH to boost the power. Retesting shows that the output voltage from the receiving end is stable at 4.1 V up to 487 cm from the source. The relationship between output current and distance is shown in Figure 3; the current drops below the threshold, to 0.159 mA, at 7 ft. Therefore, the RFEH can now power the system from up to 200 cm away. The modified RFEH is connected to the diode, which has a cut-off voltage of 0.7 V; thus, the maximum voltage over the supercapacitor when using the RFEH alone will be 3.5 V, within its safe range. 3.2 Voltage Divider Implementation The operating voltage range of the MSP430 is 1.8 V to 3.6 V; however, testing shows that pin P1.4 reads as digital 0 if the voltage is lower than 2.3 V. A voltage divider is implemented so that when the input source voltage reaches 3.3 V, the voltage
across pin P1.4 will be 2.4 V. In order to minimize the loss of input current, resistors with large resistance values, 0.9 kΩ and 2.4 kΩ, are used. After re-designing the components, the system functions properly while the RFEH generates power. When the power supply stops, the supercapacitor starts discharging, and the MCU can still be active for up to 90 s before switching to LPM, which can persist for another 410 s before completely shutting down. 4. DISCUSSION The results show a success in powering the MCU and sensor module by RFEH. Results with the other EHs are discussed in the other team members' abstracts. However, a few issues remain with this design. The RFEH is easily affected by kinetic motion along the energy collection path; people walking or cellphone radio waves may lift the voltage unexpectedly. Introducing the DC-DC booster eliminates issues at distances under 200 cm, but they persist otherwise. The software duty-cycle control functions only while the MCU is not completely shut down, which means a power shortage cannot last longer than 8.3 minutes. This can be a challenge in some cases; a possible solution is to implement a duty cycle with logic ICs in the hardware design. 5. CONCLUSION The results show it is feasible to integrate all three levels of the system together with commercial products on a relatively large scale. For future experiments, a hardware duty cycle should be tested, as mentioned in Section 4, as well as making a PCB based on this design in order to scale down the system. The radio module is still in design and has not yet been calibrated; once it is connected into the system, the power consumption will increase, and a supercapacitor with a larger capacitance might be needed in the modified design. 6. REFERENCES 1. J. Hester and J. Sorber, Flicker, 2017. 2. Estrada-López JJ, Multiple Input Energy Harvesting Systems for Autonomous IoT End-Nodes, 2018. 3. Hemingway B.
(2015, Fall), Lecture 4: MSP430 Interrupt. 7. ACKNOWLEDGEMENTS Funding for this project was provided by the Swanson School of Engineering and the Office of the Provost. Special thanks to Dr. Hu and his PhD student, Yawen Wu, for providing the devices and the help we needed on this project.
LASER-INDUCED NANOCARBON FORMATION FOR TUNING SURFACE PROPERTIES OF COMMERCIAL POLYMERS Angela J. McComb, Moataz M. Abdulhafez and Dr. Mostafa Bedewy NanoProduct Lab, Department of Industrial Engineering University of Pittsburgh, PA, USA Email: ajm288@pitt.edu, mbedewy@pitt.edu, Web: http://nanoproductlab.org/ INTRODUCTION Although the applications of commodity polymers are ubiquitous, controlling their surface properties in a scalable and affordable process is still challenging. Recently, it has been shown that nanoscale graphitic structures can be synthesized through the lasing of polyimide, a popular commodity polymer, with commercially available low-cost IR lasers [1]. These laser-induced nanocarbon (LINC) structures have been shown to exhibit excellent electrical and thermal conductivity as well as high surface area. Accordingly, wide-reaching potential applications of LINC such as anti-fouling [2], anti-icing [3], and antimicrobial [4] surfaces have been investigated. Recent research has shown that LINC surface properties depend on lasing parameters such as laser power and speed [5]-[7]. For example, it was shown that by changing the lasing environment, namely the process gas, the contact angle between water droplets and LINC surfaces can be switched between superhydrophobic (>150°) and superhydrophilic. Here, we generate nanostructured carbon directly on the surfaces of commercial Kapton films by surface lasing, with the objective of tuning surface properties over a large range of contact angles by changing laser power, speed, and line-to-line gap in the LINC process.
METHODS Sample preparation. The experiment was conducted by laser scribing on polyimide (PI) tape, also referred to as Kapton (Grainger, Cat. No. 15C616, thickness: 3.5 mil), which was applied to a glass slide with a rubber roller to eliminate air bubbles. A chemical wash of acetone and isopropyl alcohol was applied and then quickly dried off with compressed air to clean the PI surface. LINC processing. The laser scribing was conducted using a CO2 laser system (Full Spectrum Laser ProSeries 20x12, 45 W, 1.5" focus lens) with 10.6 µm wavelength and 45 W laser power. The system allows tuning the power by controlling the laser current. The beam diameter (1/e2) was measured to be 110 µm. The samples were placed on an XYZ stage with a maximum horizontal rastering speed of 500 mm/sec.
Figure 1: (a) Image of laser head and polyimide with LINC. (b) Schematic of experiment slide with LINC patches at different orientations.
Experiment design. As shown in Figure 1a, each slide has four square patches (0.5" x 0.5") that are lased at different conditions. A single LINC patch is formed by lasing in the positive x-direction over a distance (L), shifting by an increment (g) in the y-direction, and then lasing in the negative x-direction over the same distance (L) (Fig. 1b). This sequence is repeated until the square is completed. The laser machine input design file, representing a single patch, is generated by a MATLAB script that draws lines separated by a set gap of pixels. The study was conducted by varying power (28 W and 12 W), speed (v; 500 mm/sec and 350 mm/sec), and the line-to-line gap (g; 0, 3, 5, or 7
pixels), where one pixel represents a shift of 0.002" (Figure 1). Finally, the patches were rotated 90° for a total of two orientations in order to investigate contact angle anisotropy (Fig. 1b). Contact angle measurements. Quantitative results were obtained by measuring the Young-Laplace contact angle using a Biolin Scientific Optical Tensiometer. The process included ejecting four 5 µl water droplets on each sample and recording the results at 6.9 frames per second for 20 seconds. Afterwards, the program OneAttention was used to analyze each droplet by fitting the Young-Laplace equation to the droplet profile, yielding a contact angle for each of the 140 frames. The average of the 140 contact angles was then recorded as the contact angle for the droplet. RESULTS AND DISCUSSION The control sample had an average contact angle of about 88°, close to that of a perfect hemisphere (90°). As shown in Figure 2, the parallel orientation almost always had a higher contact angle than its perpendicular counterpart. The lowest contact angle measured was 0°, at power = 28 W, speed = 350 mm/sec, g = 0 pixels, for both orientations (Fig. 2c). The overlap at these power and speed settings resulted in a patch with a cotton-candy-like structure that exhibited wicking behavior. When wicking occurs, the water is absorbed spontaneously into the LINC patch, driven by capillary forces. The highest contact
angle measured was 111°, also at power = 28 W and speed = 350 mm/sec, but this time at g = 7 pixels and the parallel orientation (Fig. 2d, inset). CONCLUSION Our results demonstrate that varying the gap between lines has a significant impact on the contact angle of water on LINC. The wide range of contact angles achieved highlights the versatility of our technique, which is promising for tuning the surface properties of Kapton for various applications, such as self-cleaning, anti-fouling, and water-repellent surfaces. REFERENCES 1. J. Lin et al. Nat. Commun. 5, 1–8, 2014. 2. S. P. Singh, Y. Li, J. Zhang, J. M. Tour, and C. J. Arnusch ACS Nano 12, 289–297, 2018. 3. D. X. Luong et al. ACS Nano 13, 2019. 4. S. P. Singh, Y. Li, A. Be'er, Y. Oren, J. M. Tour, and C. J. Arnusch ACS Appl. Mater. Interfaces 9, 18238–18247, 2017. 5. L. Andrea et al. Nanotechnology 28, 174002, 2017. 6. E. Vasile, S. M. Iordache, C. Ceaus, I. Stamatin, and A. Tiliakos J. Anal. Appl. Pyrolysis 121, 275–286, 2016. 7. Y. Li et al. Adv. Mater. 29, 1–8, 2017. ACKNOWLEDGEMENTS This research was funded by the Swanson School of Engineering and the Office of the Provost. Many thanks to the members of the NanoProduct Lab.
Figure 2: Box plot of the contact angle measurements at different laser powers, speeds, and line gaps. Insets show contact angle images at a wicking condition and a hydrophobic condition.
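The boustrophedon patch rastering described in the methods (lase a line in +x, shift by the gap g in y, lase back in -x) can be sketched as below. This is an illustrative Python version; the authors generated the actual design files with a MATLAB script, so the function name and units here are assumptions.

```python
def linc_patch_lines(length, gap, n_lines):
    """Endpoint pairs for a boustrophedon raster of one LINC patch:
    lase a line of the given length in +x, shift +y by the line-to-line
    gap g, lase back in -x, and repeat until the patch is covered.
    In the study, one pixel of gap corresponds to a 0.002 inch shift."""
    pts = []
    x0, x1, y = 0.0, length, 0.0
    for i in range(n_lines):
        if i % 2 == 0:
            pts.append(((x0, y), (x1, y)))  # lase in the +x direction
        else:
            pts.append(((x1, y), (x0, y)))  # lase back in the -x direction
        y += gap  # line-to-line shift g
    return pts
```

For example, a 0.5 in square patch with g = 3 pixels (0.006 in) would use roughly `linc_patch_lines(0.5, 0.006, 84)` to produce the alternating-direction line list for the design file.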
DURABLE, ANTI-SOILING, SELF-CLEANING, AND ANTIREFLECTIVE SOLAR GLASS Sooraj Sharma, Paul W. Leu Laboratory for Advanced Materials at Pittsburgh University of Pittsburgh, PA, USA Email: sps49@pitt.edu, Web: http://lamp.pitt.edu/ INTRODUCTION Glass layers used for encapsulation in solar devices are highly reflective and are vulnerable to particulate-based soiling due to environmental effects. In the former case, incident light is partially reflected at a wide range of wavelengths and incidence angles, whilst in the latter, layers of dirt, dust, and/or soil greatly occlude incident light from transmitting into the cell's internal components [1][2]. Therefore, a glass surface with antireflective and self-cleaning properties is ideal for this application. In the literature, nanopatterning and fluorination techniques have been utilized to create nanotextured surfaces with near-perfect transmission and omniphobicity [3]. However, many of these fabrication methods are high-cost and low-throughput because they cannot be cheaply scaled up for industrial-level usage. Sol-gel processing, which is low-cost and highly scalable, can be used to deposit a textured nanoporous thin film that, when fluorinated, yields desirable antireflective and self-cleaning properties.
EXPERIMENTAL METHODOLOGY Two sol-gel solutions were used for spin coating in this experiment; the recipes were obtained from [4]. A slight modification was made to the production of the nanoporous sol, which involved stirring at 60 °C for 4 hours to ensure more rapid aging. These gels served as the first (homogeneous sol) and second (nanoporous sol) coating layers, deposited through single-sided spin coating (Laurell WS-650). For the deposition of both layers, both the spin RPM and time were controlled, along with the annealing temperature and time of the sample upon deposition of the nanoporous layer. These six experimental parameters were derived initially from experimental intuition, followed by random generation using SigOpt, a Bayesian optimization algorithm. Samples of 500-micron-thick fused silica wafers were cut with a diamond scribe and, alongside pre-cut pieces of low-iron tempered glass (2 x 2 in., 3.2 mm thickness), were sonicated in acetone and isopropanol to remove contaminants. After rinsing with DI water, the samples were dried with a nitrogen gun. First, the homogeneous layer was spin coated at the desired RPM and time on the samples. These were then prebaked in a convection muffle oven at 80 °C for 10 minutes. The samples were then spin coated with the nanoporous sol-gel at the desired parameters and subsequently annealed in a furnace. The final samples were then exposed to a nitrogen gun to remove any excess dust particles from the furnace internals. Silane treatment was carried out in a vacuum desiccator along with a glass slide. Approximately 2 to 3 drops of 0.05 µl Trimethoxy(1H,1H,2H,2H-heptadecafluorodecyl)silane were placed on the slide, and a vacuum pump was used to seal the chamber completely. After 16 hours, the samples were removed from the desiccator. The transmission data of the samples was collected using a Perkin Elmer Lambda 750 2D detector module. Self-cleaning tests were conducted with water, ethylene glycol, olive oil, and hexadecane using an angled stage and pollutant powder (Arizona road dust). Durability testing was conducted using a Taber linear abrader model 5750.
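The parameter search described in the methodology pairs six process knobs with a measured objective (transmission). A minimal random-search sketch of that loop is shown below; the parameter ranges and function names are illustrative assumptions, and SigOpt's actual Bayesian API is not reproduced here.

```python
import random

# The six process parameters tuned in the study; these ranges are
# illustrative guesses, not the authors' actual search space.
PARAM_RANGES = {
    "layer1_rpm": (500, 4000),
    "layer1_spin_s": (10, 60),
    "layer2_rpm": (500, 4000),
    "layer2_spin_s": (10, 60),
    "anneal_temp_c": (300, 600),
    "anneal_time_min": (10, 120),
}

def sample_params(rng):
    """Draw one candidate recipe uniformly from the ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def optimize(measure_transmission, n_trials=20, seed=0):
    """Keep the recipe with the highest measured objective.
    `measure_transmission` stands in for fabricating and measuring a sample;
    a Bayesian optimizer such as SigOpt replaces the uniform sampling."""
    rng = random.Random(seed)
    best_params, best_t = None, float("-inf")
    for _ in range(n_trials):
        p = sample_params(rng)
        t = measure_transmission(p)
        if t > best_t:
            best_params, best_t = p, t
    return best_params, best_t
```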
RESULTS AND DISCUSSION
The transmission data was collected from the fused silica samples and characterized over a wavelength range of 250 to 1200 nm. In order to assess the influence of multilayering the sol-gel coatings, the transmission was measured for both a single-layer coating of the HCl-based nonporous sol and the double-layer final sample. Bare fused silica exhibits a direct transmission of ~92.2% at 550 nm. With the first layer coating, this increased to 93.1%, and with both coatings, a maximum of 96.4% transmission was observed at 550 nm. This value is likely near the practical maximum for this experimental setup, given the roughly 3-4% reflectance-based loss at the back interface of a solar cell.
Figure 1: Transmission data for (a) single-layer and (b) double-layer substrates.
Self-cleaning properties were assessed by testing the repellency of several liquids on the fluorinated tempered glass substrates. Water, ethylene glycol, olive oil, and hexadecane did not display high contact angles on the substrate, but experienced low adhesion (hysteresis) to the surface. To further verify this, the samples were polluted with the powder and placed on the angled stage, and a water jet was shot at both silane-free and silane-treated surfaces. The silane-treated sample effectively removed the dirt from its surface through droplet-assisted cleaning, while the untreated sample collected dirty water near its edge, removing it only under the pressure of the water jet.
Abrasion tests were carried out using cotton swabs dabbed in 2-propanol and ring weights in increments of 50 and 125 grams. With the 50-gram weight, the transmission loss after 150 cycles of abrasion was negligible; after 500 cycles, only ~0.24% transmission at the normal was lost. For the 125-gram weight, after 1000 cycles, around 1.8% transmission was lost. These values are indicative of the durability of the sol-gel coating on the substrate and can likely be attributed to the improved mechanical properties inherent to the HCl-catalyzed sols used in both layers.
CONCLUSION In this study we demonstrated how the sol-gel process can be used in tandem with Bayesian optimization to create surfaces that are highly antireflective and, with fluorination, self-cleaning. The best samples displayed a direct transmission of 93.14% and 96.36% at the normal for the single- and dual-layer coated samples, respectively. With fluorination, the samples demonstrated adequate water and oil repellency alongside visible self-cleaning properties during testing. The mechanical durability of the samples was confirmed with linear abrasion testing; the samples incurred only a minimal transmission loss. These results indicate that this process is viable for modifying the encapsulation layer to reduce efficiency losses in photovoltaics.
REFERENCES [1] Said et al. Solar Energy 107, 328-337, 2014. [2] Scheydecker et al. Solar Energy 53, 171-176, 1994. [3] Yao et al. Progress in Materials Science 61, 94-143, 2014. [4] Zou et al. Langmuir 30, 10481-10486, 2014. ACKNOWLEDGEMENTS Funding for this project was provided by the University of Pittsburgh Mascaro Center for Sustainable Innovation (MCSI). The authors would also like to acknowledge the SSOE Office of Diversity.
THE EFFECTS OF APPLYING DAILY STRENGTH AND CONDITIONING RECORDS AND TECHNICAL TESTING DATA TO ATHLETE INJURY PREDICTION MODELS Yiqi Tian Department of Industrial Engineering Email: yit30@pitt.edu, Phone: 412-427-0335 INTRODUCTION The University of Pittsburgh Strength and Conditioning center (Pitt S&C) aims at maximizing athletes' performance [1]. However, injury is one of the most common obstacles to a successful athletic career, which underscores the importance of injury prevention. Currently, Pitt S&C uses force plates supplied by Sparta Science to measure a movement signature, which is a summary of the ground reaction force produced by the athlete when performing prescribed movements [2]. This project uses the S&C workout logs of Pitt athletes in conjunction with the Sparta movement signature to improve Pitt S&C's ability to identify injury risk and take measures to assess athletes and prevent injuries. METHODS The study consisted of 204 athletes across 8 teams (volleyball, men's soccer, women's soccer, diving, baseball, swimming, wrestling, and softball) who had S&C records at Pitt S&C within the most recent year. These athletes recorded 962 movement signatures, including Load, Explode, Drive (LED) scores, using the vertical jump movement. Due to the lack of real injury data, the injury risk score provided by each Sparta jump test is used as the response variable for each record. To record the S&C data electronically, a database was developed to store the Pitt S&C workouts of each athlete. This database was related to athlete demographics and the Sparta movement signature database using a unique key. Over 100,000 S&C records have been entered electronically, including the movement name, the weights, and the reps for each movement. In addition to the recorded data, several new features were introduced by classifying the 1000 unique movements along a range of dimensions,
which are listed in Table 1. The S&C data were associated with the movement signature following the workout. Therefore, movement signatures with no S&C record prior to the scan were removed, leaving 384 movement signatures in the
final dataset. To predict injury risk, three machine learning methods were used to train and test the accuracy of the improved prediction model with S&C-featured records and Sparta test data. LASSO, Regression Tree, and Boosting each provide a Root Mean Square Error (RMSE) as well as an RMSE Standard Deviation (RMSESD) as objective indications of model performance. RESULTS A number of models were developed to predict injury risk. In addition to the three machine learning algorithms, the models differ according to the data model used. Using only the featured LED test records, without the S&C part, the model performance is shown in Table 2. Specifically, "LED value" provides the baseline prediction model and uses only athlete demographic data along with the raw Load, Explode, and Drive values. Next, "Range" adds the difference between the minimum and maximum of LED. Lastly, "Diff from median" adds the difference between the maximum and median value among LED, as well as the difference between the median and the minimum.
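The three data models and the RMSE criterion can be sketched in a few lines of Python; the function and key names here are illustrative, not taken from the study's code.

```python
import math

def led_features(load, explode, drive):
    """Build the three candidate feature sets for one movement signature:
    the raw LED values, their range, and the differences from the median."""
    lo, mid, hi = sorted([load, explode, drive])
    return {
        "LED value": (load, explode, drive),       # baseline data model
        "Range": hi - lo,                          # max minus min of LED
        "Diff from median": (hi - mid, mid - lo),  # distances from the median
    }

def rmse(predictions, targets):
    """Root Mean Square Error used to compare the prediction models."""
    n = len(targets)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)
```

In the study, each feature set was fed to LASSO, Regression Tree, and Boosting models, and the cross-validated RMSE (with its standard deviation) was compared across the combinations.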
As shown in Table 2, based on the comparison among the three feature sets with 962 movement signatures, "Diff from median" has the smallest RMSE among the data models regardless of the algorithm. Moreover, among the machine learning algorithms, the results of Regression Tree and Boosting are much better than those of LASSO. Consequently, the "Diff from median" data model was selected as the better-performing model, and the LASSO algorithm was no longer used to test model performance. When the featured S&C variables are added, with only 384 eligible movement signatures, the results are shown in Table 3. The "Diff from median" model performs better when the model uses more data; the other data models with featured S&C data did not perform as expected. DISCUSSION Introducing a new variable that requires raw records to be removed, as the S&C workout data does, involves a trade-off, and it is hard to say which choice is better based on the current results. The "Diff from median" model works best among all models because it makes use of every available record. With limited records (only 384), adding S&C data does not increase model performance with the Regression Tree algorithm, while it yields a better result with the Gradient Boosting algorithm. In this case, the latter algorithm seems more suitable for future development.
The next step would be adding variables for the change in scores over time, increasing training volume and practicing volume, and trying other algorithms (e.g., Neural Networks). For future development, multiple dimensions of the S&C records, such as the workout volume calculated from weights and reps, will be tested in new data models. Instead of just counting the number of each type of classification, the effects of changes over time (i.e., velocity and intervals) would be considered further. The S&C data normalized by time would become new variables, such as the acute-to-chronic ratio [3], in a future model. REFERENCES [1] Pitt S&C Official Website, https://pittsburghpanthers.com/sports/2017/6/17/strengthconditioning-home-html.aspx, accessed August 21, 2019. [2] Sparta Science, January 15, 2019, Central Metric for evidence based training: The Movement Signature™, accessed August 21, 2019. [3] Billy et al. British Journal of Sports Medicine 50, 231-236, 2016. ACKNOWLEDGEMENTS This research was conducted with Dr. Luangkesorn, Yuezhe Wang, and Tharunkumar Amirthalingam from the Pitt Industrial Engineering department, as well as Tyler Carpenter, Mary Beth George, and Taylor Gossman from Pitt S&C. Funding was provided by the Swanson School of Engineering at the University of Pittsburgh, the Office of the Provost, and the Pitt Innovation Institute.
BRANCH AND BOUND FOR THE UNRESTRICTED CONTAINER RELOCATION PROBLEM Fan Zhang, Bo Zeng Department of Industrial Engineering University of Pittsburgh, PA, USA Email: faz38@pitt.edu INTRODUCTION The relocation of containers is a fundamental operational issue in modern port management. The containers in one bay must be retrieved one by one according to a priority sequence. If the target container is not at the top of its stack, then all the containers above it must be relocated to other stacks, where the height constraint cannot be violated. In reality, these containers are large and heavy, resulting in a significant time and energy cost when they are moved by handling equipment. Therefore, an efficient move sequence needs to be determined so as to retrieve all the items while completing the task with the least number of relocations. There are two major versions of the container relocation problem (also called the block relocation problem, BRP) in the existing literature: the restricted variant and the unrestricted variant. In the restricted variant, relocations may only be executed on the stack that contains the target container, while the unrestricted variant allows the relocation of any container in the bay. Much research has focused on the restricted variant, while numerous examples have demonstrated the efficacy of unrestricted operations, which in most cases lead to a smaller number of relocations. In this light, the unrestricted case is the main focus of this study, and the following method is based on the unrestricted framework. METHODS In this study, a branch and bound methodology is developed to find the minimum number of relocations and the corresponding path leading to the solution. The lower bound is derived using the strong framework suggested by Chao Lu et al. [1]. Firstly, we define several types of positions and moves.
For a given layout, block b is called a badly placed (BP) block if there exists, in the same stack, a block with higher priority piled under block b. Otherwise, b is a well-placed (WP) block [2]. Additionally, a Bad-Bad (BB) move relocates a BP block to a stack where it remains badly placed. A Bad-Good (BG) move relocates a BP block to a stack where it becomes well placed. The other two moves, i.e., the GB move and the GG move, are defined likewise. The method to calculate the lower bound for a given layout is as follows. Firstly, since at least one BG move has to be implemented on each BP block, the number of BP blocks is the first part of the lower bound. Secondly,
a block is picked from each stack to form a virtual layer. If there exists a block piled below the virtual layer whose priority is higher than the highest priority among the blocks in the virtual layer, and the highest priority of the BP blocks in the virtual layer is lower than the lowest priority of all stacks after removing the blocks above the virtual layer, then at least one non-BG move has to be implemented on the blocks of this virtual layer. For two virtual layers such that both satisfy the property in [1] and share exactly one WP block, if the priority of the shared WP block is equal to the lowest priority of all stacks after removing all blocks above the upper layer, then either 2 non-BG moves will be implemented on blocks in the two virtual layers, or at least 1 GB and 1 BG move will be implemented on the shared WP block. Two-virtual-layer scenarios are considered before one-virtual-layer scenarios, since for them two relocations can be derived from fewer blocks. If such layers can be found, the corresponding relocations are recorded as the second part of the lower bound. Thirdly, an experiment is performed by relocating each block above the target block once, without considering the stack height limit; if some of the relocated blocks cannot become WP blocks, then at least one non-BG move must be implemented on one of the relocated blocks [1]. This yields the last part of the lower bound. In branch-and-bound, the upper bound is updated for every tree node through a heuristic. Firstly, a BG move is sought for the topmost block on the target stack. If more than one such move exists, we choose the destination stack with the highest priority. Furthermore, if the relocated block is not the second smallest and the highest-priority stack has only one free slot, then the stack with the second-highest priority is chosen. Secondly, a GG move on the topmost block of a non-target stack that enables a BG move on the target stack is sought.
To illustrate, in Figure 1, stack 3 is the target stack since block 1 is in that stack. Block 2 is well placed in stack 2. Now we relocate block 2 to stack 1, which is a GG move for a topmost block in a non-target stack (see Figure 2 for the resulting configuration). This relocation enables block 4 to become well placed in stack 2 (see Figure 3 for the resulting configuration). Thirdly, a BG move on a non-target stack is considered. Note that if more than one destination stack exists, the one with the highest priority is chosen. Finally, if none of the foregoing scenario requirements is satisfied, a forced BB move is performed on the topmost block of the target stack, and the destination stack with the lowest priority is preferred. The last rule ensures that all the blocks can be retrieved by following the heuristic, and altogether this gives a strong upper bound. Note that if the target block is on the top of a stack, it is retrieved immediately, and such a retrieval is not counted as a relocation.
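The overall search can be sketched compactly in Python. This is an illustrative reimplementation, not the authors' appendix code: it uses only the BP-count part of the lower bound and plain depth-first branching, so it is far weaker than the full three-part bound and heuristic upper bound described above.

```python
import sys

def retrieve(layout):
    """Pop the current target (the smallest-numbered, i.e. highest-priority,
    block) whenever it sits on top of a stack; retrievals are free moves."""
    blocks = [b for s in layout for b in s]
    while blocks:
        target = min(blocks)
        for i, s in enumerate(layout):
            if s and s[-1] == target:
                layout = layout[:i] + (s[:-1],) + layout[i + 1:]
                blocks.remove(target)
                break
        else:
            break  # target is buried; a relocation is needed
    return layout

def lower_bound(layout):
    """Count badly placed (BP) blocks: each needs at least one relocation."""
    return sum(1 for s in layout for i, b in enumerate(s)
               if any(u < b for u in s[:i]))

def min_relocations(layout, max_height):
    """Branch and bound over all unrestricted relocations."""
    best = [sys.maxsize]
    seen = {}

    def dfs(lay, moves):
        lay = retrieve(lay)
        if not any(lay):
            best[0] = min(best[0], moves)  # all blocks retrieved
            return
        if moves + lower_bound(lay) >= best[0]:
            return  # prune: this node cannot beat the incumbent upper bound
        if seen.get(lay, sys.maxsize) <= moves:
            return  # configuration already reached with fewer moves
        seen[lay] = moves
        for i, s in enumerate(lay):       # move the top of any stack...
            if not s:
                continue
            for j, d in enumerate(lay):   # ...to any other non-full stack
                if i != j and len(d) < max_height:
                    nxt = list(lay)
                    nxt[i] = s[:-1]
                    nxt[j] = d + (s[-1],)
                    dfs(tuple(nxt), moves + 1)

    dfs(tuple(tuple(s) for s in layout), 0)
    return best[0]
```

Stacks are listed bottom-to-top; for example, `min_relocations(((1, 3, 2), (), ()), 3)` must move both blocks above the target before retrieving it.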
TREE BRANCH PROCEDURE A given layout is considered the parent node. Its lower bound and initial upper bound can be calculated using the methods mentioned above. The children nodes are formed by performing all possible relocations on the parent configuration. The upper and lower bounds are calculated when each new node is added to the tree. If a new node has an upper bound lower than the best upper bound found so far, the global upper bound is updated. For each newly generated child node, if its lower bound is greater than the global upper bound, the node is terminated and the tree branching stops at that node, since it surely will not lead to an optimal solution. Otherwise, the node is put into the parent node list and children nodes are generated based on its configuration. Note that the nodes generated with one relocation from the initial layout constitute level one; levels two, three, etc. are defined likewise. The tree branching starts from the initial layout. Then all the nodes in level one are branched at the same time. For the nodes not terminated, branching continues to form level-two nodes. The configuration of each branched node is recorded to prevent unnecessary repetition, and the path leading to every configuration is stored to track an optimal relocation sequence. The tree branching stops when it finds a node whose upper bound equals the global lower bound, which is then the optimal solution. RESULTS AND DISCUSSION The branch and bound was implemented in Python; the code is in the appendix. Further improvements include completing the bound calculation code: a loop needs to be developed to allow bound calculations for each possible configuration. Secondly, the heuristic rule can be improved with more leading scenarios and assumptions. Thirdly, the lower bound can be strengthened by considering scenarios with more than two virtual layers.
REFERENCES
[1] C. Lu, B. Zeng and S. Liu, "A Study on Block Relocation Problem: Lower Bound Derivations and Fast Solution Methods," https://arxiv.org/abs/1904.03347 [math.OC].
[2] F. Forster and A. Bortfeldt, "A tree search procedure for the container relocation problem," Comput. Oper. Res., vol. 39, no. 2, pp. 299–309, 2012.
[3] K. H. Kim and G. P. Hong, "A heuristic rule for relocating blocks," Comput. Oper. Res., vol. 33, no. 4, pp. 940–954, 2006.
[4] H. Zhang, S. Guo, W. Zhu, A. Lim, and B. Cheang, "An investigation of IDA* algorithms for the container relocation problem," in Int. Conf. Ind. Eng. Appl. Appl. Intell. Syst., Springer, Berlin, Heidelberg, 2010, pp. 31–40.
[5] S. Tanaka and K. Takii, "A faster branch-and-bound algorithm for the block relocation problem," IEEE Trans. Autom. Sci. Eng., vol. 13, no. 1, pp. 181–190, 2016.
[6] C. Expósito-Izquierdo, B. Melián-Batista, and J. M. Moreno-Vega, "A domain-specific knowledge-based heuristic for the blocks relocation problem," Adv. Eng. Inform., vol. 28, no. 4, pp. 327–343, 2014.
ACKNOWLEDGEMENTS This award was funded by the Swanson School of Engineering and the Office of the Provost.
GEMINI XPROJECT: TANKER WATER METERING Megan Black Pitt Makerspaces, Innovation and Entrepreneurship Program University of Pittsburgh, PA, USA Email: meb255@pitt.edu INTRODUCTION Hands-on design projects prepare engineering students for real-world problem solving. The XProject program allows student teams in the Swanson School of Engineering to work with companies and researchers to develop solutions. XProjects are completed over a period of about six weeks, during which students utilize the makerspaces and the Swanson Center for Product Innovation in Benedum Hall while regularly checking in with advisors Brandon Barber and Dr. William Clark to fabricate a prototype device or system. Emphasis is placed on clear communication during the project: between team members, with the client, and between the team and Makerspace mentors, faculty, and staff. The foundation of the XProject program was laid by the 2018 XProject Summer Research Internship team.
This summer a new team of interns -- Megan Black, Sara Kron, Maya Roman, and Sarah Snavely -- led five XProjects. All four interns worked on the Gemini project, in addition to leading their own projects. Megan led the Major League Baseball Moment of Inertia project, Sara led the Major League Baseball Physical Therapy project, Maya led the Internet of Things Security project, and Sarah led the Hot Top project. This abstract focuses on the Gemini project.
GEMINI Together, the team of interns worked on the Gemini project to design and test a system that measures the volume of water in a tank. Gemini Shale Solutions sought out the XProject program as a way to further automate water tracking in the oil and gas industry. Currently, water is tracked through paper tickets filled out at each load station, and the volume of water filled is measured using a sight gauge at the back of the tank or simply estimated as full. The issue with using a sight gauge is its inaccuracy due to the inclined fill stations. Often, sight gauges read full despite the tank being only partially filled. By automating the process of measuring water volume, Gemini can save oil and gas companies millions of dollars.
After an initial meeting with the client, the team identified the major design requirements. To approach the problem at hand, the team divided into four sub-teams: frame, electronics, algorithm, and guided wave radar. The frame sub-team, led by Sarah Snavely and Maya Roman, designed and analyzed a frame structure to incline the water tank at various pitch and roll angles. The team designed and fabricated the frame with help from the Swanson Center for Product Innovation.
The guided wave radar sub-team, led by Sara Kron, learned how to set up and calibrate the Rosemount 5300 level transmitter and the various probes provided by the client. The guided wave radar required more in-depth understanding and research into how to incorporate it into the whole system, whereas the other sensors were more intuitive. Initially, the system was designed using a rigid probe; however, the polyethylene material of the tank interfered with the readings from the rigid probe. To fix this issue, the probe was placed inside a concentric metal tube, creating a coaxial probe.
The electronics sub-team, led by Megan Black, wrote a program that enabled the electronic system, built around a Raspberry Pi, to take measurements and make calculations. The Raspberry Pi took the guided wave radar reading and two inclinometer readings as input values. An analog-to-digital converter was required because Raspberry Pis cannot directly read analog inputs. The inputs were measured as voltages and then translated into inches and degrees, respectively. The device took ten readings of each of these three inputs and averaged each set to determine the average values. These
averaged values were then sent to the algorithm to determine the volume of water in the tank. The following figure demonstrates how each sensor communicated with the Raspberry Pi:
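The read-average-convert step can be sketched as follows. The calibration slopes and offsets are hypothetical placeholders: the real values depend on the ADC range and the sensor datasheets, and the function names are not from the team's program.

```python
def average_reading(samples):
    """Average repeated ADC samples (ten per sensor in the text) to reduce noise."""
    return sum(samples) / len(samples)

def volts_to_inches(v, slope=24.0, offset=0.0):
    """Hypothetical linear calibration from ADC volts to probe level in inches."""
    return slope * v + offset

def volts_to_degrees(v, slope=15.0, offset=-37.5):
    """Hypothetical linear calibration from ADC volts to inclinometer degrees."""
    return slope * v + offset

def sensor_snapshot(read_level, read_pitch, read_roll, n=10):
    """Take n readings of each input and return the averaged, converted values
    that get passed on to the volume algorithm."""
    level = volts_to_inches(average_reading([read_level() for _ in range(n)]))
    pitch = volts_to_degrees(average_reading([read_pitch() for _ in range(n)]))
    roll = volts_to_degrees(average_reading([read_roll() for _ in range(n)]))
    return level, pitch, roll
```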
The algorithm sub-team, led by Megan Black, developed the code for calculating the volume of water in the tank. To start, this sub-team defined six cases to represent the volume in the tank, referring to a resource from LMNO Engineering (1).
Case 0: the water level is below the capabilities of the probe. The volume output in this case is the maximum volume possible; in reality, the volume is less than this value, but that calculation is outside the capabilities of the given guided wave radar.
Case 1: the left end of the tank is full, and the right end is empty.
Case 2: the left end is partially full, and the right end is empty.
Case 3: the left end is full, and the right end is partially full.
Case 4: both ends are partially full.
Case 5: the water level is above the capabilities of the probe. The volume output in this case is the minimum volume possible; in reality, the volume is greater than this value.
The volume in each end was calculated by subtracting the empty space under the dome from the volume of the projected cylinder extended to meet the dome (2). The algorithm was tested against models created in Fusion 360 to check the accuracy of the calculation.
REFERENCES
1. Inclined Cylinder Volume Calculation for Tanks and Pipes. www.lmnoeng.com/Volume/InclinedCyl.php.
2. "Volume and Wetted Area of Partially Filled Horizontal Vessels." Neutrium, www.neutrium.net/equipment/volume-and-wetted-area-of-partially-filled-vessels/.
ACKNOWLEDGEMENTS
The Swanson School of Engineering and the Office of the Provost for jointly funding these projects. Brandon Barber and Dr. Clark for guidance in these projects. All participants in the summer XProjects program. The Swanson Center for Product Innovation for guidance and fabrication.
FACILE PREPARATION OF BISMUTH VANADATE PHOTOANODES WITH DUAL OXYGEN EVOLUTION CATALYST LAYERS Joseph Damian Dr. Jung-Kun Lee Research Group, Department of Materials Science and Engineering University of Pittsburgh, PA, USA Email: jjd79@pitt.edu INTRODUCTION Bismuth vanadate (BVO) has been identified as a promising material for use in photoelectrochemical (PEC) cells. BVO has a relatively small band gap compared to other materials commonly used for photoanodes in a PEC cell, which allows it to absorb a larger portion of the solar spectrum. One major weakness of BVO is that it corrodes during prolonged use in a PEC cell. Previous research uses oxygen evolution catalysts (OEC) to suppress corrosion of the BVO photoanode [1]. There have been many published procedures for BVO photoanode preparation, such as sol-gel deposition and electrodeposition [2]. This project used sol-gel deposition via spin coating. The spin coating conditions, annealing durations, and precursor solution compositions were varied to find an optimal procedure for the fabrication of BVO photoanodes. The photoanodes were loaded with iron oxyhydroxide (FeOOH) and nickel oxyhydroxide (NiOOH) OEC layers by electrodeposition. METHODS AND PROCEDURE This project uses a previously published recipe for the precursor solution, wherein the bismuth precursor is dissolved in acetic acid and the vanadium precursor is dissolved in acetylacetone. The control solution uses a 0.2 M solution in acetic acid mixed with a 0.03 M solution in acetylacetone in a 1:1 Bi to V molar ratio [2]. This solution was the base recipe from which the recipes for the higher-concentration solutions and the varied Bi:V ratio solutions were derived. Solutions twice (2xC) and four times (4xC) as concentrated as the control solution were tested to produce thicker films. Additionally, solutions with Bi:V molar ratios of 1:1.03 and 1.03:1 were tested to control film quality.
The precursor solution was deposited on a fluorine-doped tin oxide (FTO) glass substrate. The FTO was scored and broken into 2 × 1.5 cm² pieces and cleaned in acetone, ethanol, and deionized water for 10 minutes each in a sonic bath. 100 µL of the precursor solution was placed evenly on the FTO, then the substrate was spun at various speeds and durations: spinning speeds of 500 and 1000 rotations per minute (rpm) and durations of 10 and 30 seconds were tested. After the solution was spin coated, the FTO was placed on a hot plate at 500 ˚C for varied times; intermediate annealing times of 30, 5, and 2 minutes were tested. Once all the cycles were complete, the films were annealed at 500 ˚C in a box furnace for 1, 2, or 3 hours. The FeOOH and NiOOH OEC layers were electrodeposited using an undivided three-electrode cell. The cell consisted of a BVO photoanode working electrode, a platinum plate counter electrode, and an Ag/AgCl (3 M NaCl) reference electrode. The electrodes were submerged in a 0.1 M solution of the corresponding iron or nickel salt at 70 ˚C under gentle stirring. A voltage of 1.2 V vs. Ag/AgCl was applied for 20 minutes for both OEC layers. A similar three-electrode cell was used for PEC testing. The electrolyte was a 0.5 M phosphate buffer consisting of 15.375 mL of dibasic potassium phosphate, 9.625 mL of monobasic potassium phosphate, and 25 mL of deionized water. RESULTS AND DISCUSSION The first parameter tested was the spin coating conditions. The goal of changing the spin coating conditions was to efficiently prepare uniform films with good PEC performance. Two spin speeds and two spinning durations were tested: 1000 and 500 rpm for 10 and 30 seconds. Figure 1 shows the PEC performance and uniformity of each spin coating condition. There is no clear effect of spin
coating condition on PEC performance. Additionally, there is no clear improvement in film uniformity between a shorter and a longer spinning duration. Based on these findings, 1000 rpm for 10 seconds is the most efficient spin coating condition to produce uniform BVO films. After the optimal spin coating conditions were found, different solutions, cycle numbers, and intermediate and final annealing durations were tested. An intermediate annealing duration of 30 minutes did not produce films with high enough current densities, and it took too long to produce samples. To shorten the fabrication process, intermediate annealing times of 2 and 5 minutes were tested. A shorter intermediate annealing duration of 2 minutes resulted in a much better illuminated current density at 1.23 V vs. RHE compared with the 5- and 30-minute samples. Additionally, final annealing durations of 1, 2, and 3 hours were tested. Samples annealed for 1 hour were very unstable and dissolved quickly in the electrolyte. The samples annealed for 3 hours showed slightly better and more consistent PEC performance than samples annealed for 2 hours.
Aside from annealing durations, several variations of the base precursor solution were tested. Bi:V molar ratios of 1:1, 1:1.03, and 1.03:1 were tested, and there was no clear relationship between the tested ratios and PEC performance. Higher concentrations of the precursor solution were also tested to try to produce thicker films. The 4xC solution did not fully dissolve, which resulted in an uneven film with poor performance. The 2xC solution resulted in slightly higher performance than the base precursor solution. The highest performing photoanodes used the 2xC precursor solution annealed for 2 minutes per cycle with a final anneal of 3 hours. Finally, OEC layers were loaded via electrodeposition. The OEC layers improved the light current density from around 0.05 mA/cm² to 0.2 mA/cm² at 1.23 V vs. RHE. Additionally, the OEC layers resulted in a much earlier onset potential compared with bare BVO. REFERENCES 1. Lee et al. Nature 3, 53-60, 2018. 2. Hilliard et al. ChemPhotoChem 1, 273-280, 2017. ACKNOWLEDGEMENTS This research was funded by the Swanson School of Engineering and the Office of the Provost. Thank you to Dr. Jung-Kun Lee, Dr. Salim Çalışkan, and Matthew Duff for training and guidance throughout the project.
Development of Mg – Ca – Y Based Alloys For Engineering/Biomedical Applications Christine K. Determan Materials Science Laboratory, Department of Mechanical Engineering National University of Singapore Email: ckd14@pitt.edu
INTRODUCTION The desire to develop light magnesium alloys comes largely from the automotive industry's push to cut greenhouse gas emissions. To date, steel is the primary metal used in this industry for various car parts, including the frame, body, and engine. However, magnesium is one quarter the density of steel, so substituting it for steel would significantly decrease the weight of vehicles. Lowering the weight would increase fuel efficiency and decrease fuel costs, creating a large economic incentive for the development of these alloys [1]. In addition, magnesium is the sixth most abundant element in the Earth's crust, making it a plentiful resource [1]. A second driving force for developing these alloys is the biomedical industry. The NIH recommends a daily magnesium intake of 320 mg for women and 420 mg for men [2]. Using a biodegradable material such as magnesium has many benefits, including avoiding revision surgery to remove implants from prior procedures and avoiding a permanent implant that can cause comfort and mobility issues. The addition of Ca is beneficial because it is also required by the body, particularly in the bones where the implant could be utilized. Finally, the human body can withstand a low quantity of rare earth metals, such as yttrium, easing concerns about poisoning [3]. Alloying with small quantities of yttrium does not adversely affect cell viability and proliferation [3]. In this study, samples of magnesium alloyed with calcium and yttrium were examined to determine the alloying additions' impact on mechanical, thermal, physical, and corrosion properties. These elements were chosen primarily for their impact on biological, mechanical, ignition, and corrosion response. Calcium provides an increase in ignition temperature and oxidation resistance [4]. Calcium is among the most abundant elements in the Earth's crust, making it a widely available alloying addition, and is non-toxic to humans [5].
Yttrium also offers oxidation resistance
[5]. Both elements form oxide films (CaO and Y2O3) on the surface that help reduce or eliminate oxidation and corrosion [5]. In this experiment, the calcium content was kept constant between samples and the yttrium content was varied to determine the minimum amount of yttrium required to achieve the desired properties. The baseline was Mg-1Ca, against which the samples with Y additions were compared. It has been shown that Mg-1Ca has good biocompatibility and low corrosion rates, while higher amounts of calcium adversely impact these qualities [3]. The compositions of the remaining samples were Mg-1Ca-0.5Y, Mg-1Ca-1Y, and Mg-1Ca-2Y. The yttrium content was limited because of economic constraints that would limit manufacture on a larger scale, its negative impact on corrosion and biocompatibility, and toxicity concerns for biomedical purposes [3]. METHODS All samples were cast, soaked at 400 ℃ for one hour, then hot-extruded at 350 ℃. Damping capacity, Young's modulus, and density tests were performed to determine the physical characteristics of the four samples. Young's modulus and damping capacity were measured by resonant frequency and damping analysis on specimens 7-7.5 mm in diameter and 60 mm in length. Density measurements were performed using the Archimedes principle on specimens 5-8 mm long and 7-7.5 mm in diameter. Mechanical tests included Vickers microhardness and compression. Vickers hardness was determined using a Shimadzu microhardness tester with specimens 7-8 mm long and 7-7.5 mm in diameter. Compression testing was carried out on an MTS 810 universal testing machine with 647 Hydraulic Wedge Grips using specimens 7-7.5 mm in diameter with an L/D ratio of 1.5. Corrosion tests were performed in NaCl solution for environmental applications and in HBSS solution, which mimics body fluid, for biomedical purposes. The post-extrusion microstructure was observed using SEM.
DATA PROCESSING Averages for each test were taken and a standard deviation was calculated. Extreme outliers were removed from the data set. Values for the new alloys were compared to magnesium to determine whether alloying enhanced or hindered properties. RESULTS The most important characteristics studied were ignition temperature and microstructure. Since the microstructure was not characterized due to time constraints, the XRD results will be discussed here. XRD was performed cross-sectionally and longitudinally. There was a decrease in peak intensity longitudinally. Some peaks were identified using previously published XRD data and are labelled in Figure 1.
Figure 1. A) cross-sectional and B) longitudinal XRD results. Phases were identified from past XRD data if possible.
Table 1 lists the ignition temperature values from the samples studied. An ignition temperature could not be identified for 0.5Y due to errors that occurred while testing. Multiple peaks were identified on its temperature versus time graph, indicating the machine was disturbed while testing. Testing must be repeated in order to find its ignition temperature. Table 1. TGA results showing ignition temperature.
Alloy    Mg-1Ca   Mg-1Ca-0.5Y   Mg-1Ca-1Y   Mg-1Ca-2Y
T (°C)   675      —             693         888
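The rising trend in Table 1 can be quantified with a simple fit; the sketch below is illustrative only (not part of the study), fitting the three measured points with yttrium content in weight percent as the independent variable:

```python
# Measured ignition temperatures from Table 1 (Mg-1Ca-0.5Y still missing)
points = [(0.0, 675.0), (1.0, 693.0), (2.0, 888.0)]  # (wt% Y, T in °C)

# Ordinary least-squares line: T = slope * wt%Y + intercept
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(t for _, t in points) / n
slope = sum((x - mean_x) * (t - mean_y) for x, t in points) / \
        sum((x - mean_x) ** 2 for x, _ in points)
intercept = mean_y - slope * mean_x
predicted_05 = slope * 0.5 + intercept  # rough interpolation for Mg-1Ca-0.5Y
```

With only three points the fit is crude, and the jump from 1Y to 2Y suggests the trend may be superlinear, so the missing 0.5Y measurement is needed before committing to a functional form.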
DISCUSSION The change in peak intensity indicates a change in texture. This is expected because magnesium has an HCP structure, which makes it anisotropic. The peaks labeled unknown could not be identified using past XRD data; they most likely correspond to yttrium-containing phases in the matrix. More investigation into the microstructure must be conducted in this respect. Though the ignition temperature of Mg-1Ca-0.5Y is missing, there appears to be an increasing trend in ignition temperature with increasing yttrium content. An equation for the ignition temperature could be developed once the value for 0.5Y is determined. Further testing needs to be performed to determine whether a pattern exists. REFERENCES 1. Gupta, M., & Sharon, N. M. (2011). Magnesium, magnesium alloys, and magnesium composites. Hoboken, NJ: Wiley. 2. National Institutes of Health. (2018, September 26). Office of Dietary Supplements - Magnesium. Retrieved June 27, 2019, from https://ods.od.nih.gov/factsheets/Magnesium-HealthProfessional/ 3. Zeng, R., Qi, W., Cui, H., & Zheng, Y. (2014). In vitro corrosion of Mg–1.21Li–1.12Ca–1Y alloy. Progress in Natural Science: Materials International, 24(5), 492-499. doi:10.1016/j.pnsc.2014.08.005 4. Kumar, N. R., Blandin, J., Suéry, M., & Grosjean, E. (2003). Effect of alloying elements on the ignition resistance of magnesium alloys. Scripta Materialia, 49(3), 225-230. doi:10.1016/s1359-6462(03)00263-x 5. Fan, J., Chen, Z., Yang, W., Fang, S., & Xu, B. (2012). Effect of yttrium, calcium and zirconium on ignition-proof principle and mechanical properties of magnesium alloys. Journal of Rare Earths, 30(1), 74-78. doi:10.1016/s1002-0721(10)60642-4 ACKNOWLEDGEMENTS Participation in research was funded by the Swanson School of Engineering, the Office of the Provost, and the Office of Engineering International Programs. The project was supervised by A/P Manoj Gupta at the National University of Singapore.
ACCOUNTING FOR LOCAL SOLVENT EFFECTS AND PROTONATION STATES IN MACROCYCLIC BINDING Brian M. Gentry, William S. Belfield, and John A. Keith Department of Chemical Engineering University of Pittsburgh, PA, USA Email: jakeith@pitt.edu INTRODUCTION Chelating agents, or chelants, are organic compounds that bind to metal ions in solution, trapping them and preventing interactions with the surrounding environment. The strong interactions between chelants and metal ions give chelants a wide range of applications. For example, they are used in medical treatments as a means of removing toxic heavy metals from the body [1]. They can also be used in industry, for example as a method of water softening or as a supplement in fertilizers to supply plants with vital metals [2]. Our work here focuses on a specific chelating agent in the cryptand class, 2.2.2-cryptand ([2.2.2]), whose structure is depicted in Figure 1. Cryptands are macrocyclic chelating agents that capture metal ions inside the cage formed by the branches of the cryptand, coordinating the electronegative atoms of the cryptand with the metal cation.
Computational determination of binding energies of [2.2.2] to arbitrary metal ions has proven elusive. Su and Burnette carried out a computational study investigating the binding affinity of [2.2.2] to metal ions in H2O, MeOH, and MeCN solvents. While they captured the qualitative trend in binding energies, they found systematic errors in the binding energy on the order of 20-30 kcal/mol [3]. We seek to develop a generalizable method of calculating the binding affinity of [2.2.2] with any given metal ion. We approach the problem using both molecular dynamics (MD) and quantum chemistry (QC). We employ two QC methods: one that uses an implicit solvation model to describe the solvent environment, and a second, novel method that treats nearby water molecules explicitly and treats the bulk solvent using the same implicit model as before. We expect that QC methods, when combined with careful treatment of the local solvent environment, will yield more accurate binding energies for the ion-[2.2.2] complexes.
Figure 1: Structure of [2.2.2]. Cations are trapped inside the cage formed by the ether branches.
METHODS MD simulations were carried out in Tinker 8.6. [2.2.2] was parameterized using the OPLS all-atom force field (OPLS-AA). Partial charges for each atom were taken from [4]. For each analyzed cation present in the OPLS-AA force field (alkali and alkaline earth metals), parallel MD simulations of the complex in solvent water were carried out using a free-energy perturbation (FEP) technique. In FEP, electrostatic and van der Waals interactions are slowly tuned out, allowing the change in free energy between the initial and final states to be calculated using the Bennett acceptance ratio. We term this method of calculating the binding energy Scheme 1. QC calculations were carried out in ORCA v.4.1.0. For the implicit solvation method, the free energies of the reactants (metal ion and [2.2.2]) and products (ion-[2.2.2] complex) were calculated in the solution phase to determine the change in free energy, using density functional theory (DFT) with the wB97X-D3 functional and the def2-TZVPP basis set. This implicit solvation approach is Scheme 2. For the microsolvation approach, snapshots of MD trajectories were used as starting geometries with the closest 16 water molecules included in the calculation, and the free energies of the reactants and products were calculated in the same way as above. This microsolvation approach is Scheme 3.
RESULTS Calculated binding energies using Scheme 1 were within 3 kcal/mol of experiment for the monovalent complexes but diverged significantly for the divalent complexes. With Scheme 2, binding energies diverged significantly in every case, sometimes by more than 100 kcal/mol, far outside the realm of chemical accuracy. However, Scheme 3 yielded values that were off from experimental values by an average of only 3.7±1.6 kcal/mol. All experimental and calculated binding energies are summarized in Table 1.
DISCUSSION The failure of Scheme 1 to accurately calculate the binding energy of divalent cations to [2.2.2] can be attributed to the significant charge diffusion that occurs with polyvalent cations in solution. This charge diffusion is accounted for in polarizable force fields, but not in the OPLS-AA force field. Scheme 2 fails similarly because continuum solvent models do not perform well with large charge densities near the boundary of the pertinent geometry, as with metal ions and the ion-[2.2.2] complexes. However, Scheme 3 performs significantly better, as local solvent effects, including hydrogen bonding networks and water molecules close to the [2.2.2] cage, are explicitly accounted for.
The systematic error in Scheme 3 suggests that another factor is not accounted for in this calculation. Since the pKa of [2.2.2] is 9.7 [5], the molecule must be deprotonated before chelation can occur: a positively charged proton and a cation in close proximity within the cage would produce a highly unfavorable interaction. At neutral pH, the energy cost of deprotonating a molecule with a pKa of 9.7 is 3.7 kcal/mol. Adding this to each binding energy yields values that are off from experiment by an average of only 1.6±1.8 kcal/mol.
While this study takes into account the arrangement of solvent water molecules around the ion-[2.2.2] complex, it is unlikely that the structures used to calculate the binding energies represent global minima on the potential energy surface; they are more likely local minima. Future research should explore the possible configurations of water molecules around the complex to better identify the lowest-energy structures. This would likely further reduce the discrepancies between experimental and computed binding energies.
REFERENCES 1. Williams, D. R.; Halstead, B. W. Clin Toxicol 1982, 19, 1081-1115. 2. Sekhon, B. S. Resonance 2003, 8, 46–53. 3. Su, J. W.; Burnette, R. R. ChemPhysChem 2008, 9, 1989–1996. 4. Wipff, G.; Auffinger, P. JACS 1991, 113, 5976–5988. 5. Anderegg, G. Helv Chim Acta 1975, 58, 1218–1225. 6. Arnaud-Neu, F.; Spiess, B.; Schwing-Weill, M. Helv Chim Acta 1977, 60, 2633–2643.
ACKNOWLEDGEMENTS Computational resources came from the Center for Research Computing at the University of Pittsburgh. This work has been funded by the National Science Foundation (NSF-CBET 1705592). Acknowledgements are made to the University of Pittsburgh's Swanson School of Engineering, the University of Pittsburgh's Office of the Provost, and Loughborough University's Department of Chemical Engineering for their financial contributions enabling the completion of this project.
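The deprotonation penalty invoked in the discussion is simple arithmetic to reproduce; the sketch below is an illustration only, using the pKa of 9.7 from [5] and the Scheme 3 value for Na+ from Table 1:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.15    # temperature, K

def deprotonation_cost(pka, ph):
    """Free-energy cost (kcal/mol) of deprotonating a site with the
    given pKa at the given pH: dG = RT * ln(10) * (pKa - pH)."""
    return R * T * math.log(10) * (pka - ph)

correction = deprotonation_cost(9.7, 7.0)  # ~3.7 kcal/mol at neutral pH
corrected_na = -9.9 + correction           # Scheme 3 value for Na+
```

Adding the roughly 3.7 kcal/mol penalty to the Scheme 3 value of -9.9 kcal/mol recovers, within rounding, the -6.2 kcal/mol entry in the "with pKa correction" row of Table 1.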
Table 1: Experimental and calculated binding energies in kcal/mol. a [5]. b [6]: binding between Zn2+ and [2.2.2] was reportedly too weak to accurately measure the binding constant.

Ion     Experimental   Scheme 1   Scheme 2   Scheme 3   With pKa correction
Na+     -5.6 a         -7.9       -17.9      -9.9       -6.2
K+      -7.6 a         -9.3       -24.6      -12.2      -8.5
Rb+     -5.5 a         -4.6       -13.4      -5.8       -2.1
Ca2+    -6.2 a         29.7       -98.2      -9.4       -5.7
Sr2+    -11.3 a        15.8       -69.1      -17.4      -13.7
Zn2+    -3.5 b         —          -204.5     2.4        6.1
Pb2+    -16.9 a        —          -93.7      -22.3      -18.6
THE EFFECTS OF SURFACE DEFORMATION ON ALUMINA SCALE ESTABLISHMENT ON THE Ni-BASED ALLOY HAYNES-224™ Samuel Gershanok and Dr. B. Gleeson Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: sag141@pitt.edu INTRODUCTION Nickel-based superalloys are specially designed for high-temperature strength and are therefore used in applications such as gas turbines, heat exchangers, and fuel cells [1]. Alumina (Al2O3) and chromia (Cr2O3) are generally the most protective oxide scales. The composition of the alloy, the oxidation temperature, and the surface finish can all affect which of these oxide scales will form and to what degree of homogeneity. Processes such as abrading or grit-blasting can be used to alter the alloy surface, changing the selection of scale formation.
With surface deformation, alumina-rich scaling has been produced using controlled heating profiles in the lab by Yihong Kang, a PhD student in Dr. Gleeson's group [2]. That research set the stage for further exploration of alternative alumina-forming routes, specifically at lower oxidation temperatures and in combination with surface deformation. The goal of this project is to conduct systematic experiments, building on Kang's findings, to elucidate the interrelationships between oxidation temperature, heating profile, and surface deformation and ultimately promote alumina-scale formation on superalloys. The initial focus is on HAYNES-224™, a wrought Ni-based superalloy whose composition is borderline between being an alumina- or chromia-scale former. METHODS A sample of 3.2 mm thick wrought HAYNES-224™ was obtained from Haynes International. The samples were cut into 9.8 × 9.8 mm² coupons using a CNC laser cutter. The coupon samples were divided into three groups: as-received, abraded, and vapor honed. No surface deformation was administered to the as-received samples prior to heating.
For the abraded samples, each was subject to grinding on all faces using a standard rotating grinding wheel. All grinding was done with a similar force in multiple orientations to ensure the entire surface was ground. Samples were ground to either a 320- or 500-grit finish. For vapor-honed samples, a VAPOR BLAST™ Liquid Honing machine filled with 20% 300 BSVB glass beads was used. Each coupon was clamped and held under the running vapor hone stream for 30 s on both large faces and 10 s on all edges. The "close", "medium" and "far" vapor-honed samples were held 50, 300, and 500 mm away from the vapor hone nozzle, respectively. A bi-thermal heating procedure was used for the oxidation exposures. In the initiation step, all samples were placed into the hot zone of a tube oxidation furnace at 800°C for five minutes. In the oxidation heating step, the samples were then placed in the hot zone of the tube furnace set at 1150°C for one minute for short-term exposure or 30 minutes for long-term exposure. Topological images of each oxidized sample were taken using an Apreo field emission scanning electron microscope (FESEM). Concurrently, chemical analysis was conducted via energy dispersive spectroscopy (EDS). The samples were then prepared for cross-sectional analysis. Each sample was mounted in epoxy and ground using 320-, 500-, 1200-, and 2000-grit pads, followed by polishing with 3, 1, and 0.25 µm diamond suspensions. Cross-sections were then imaged using the Apreo FESEM. DATA PROCESSING All images were exported and compared at identical magnifications. The average amount of internal oxidation was determined semi-quantitatively using the image scale bar.
RESULTS For samples exposed at 800°C for 5 min followed by 1150°C for 1 min, a thermal profile in accordance with Kang's thesis [3], significant internal oxidation was seen in all as-received samples, while all surface-deformed samples exhibited significantly less internal oxidation. Under the extended oxidation conditions of 800°C for 5 min followed by 1150°C for 30 min, all samples exhibited some level of internal oxidation, as depicted in Figure 2.
Figure 2: Cross-sectional images after the extended heating exposure of 800°C for 5 min followed by 1150°C for 30 min. The top grey layer is comprised of non-protective Ni-containing oxides (NiO and Ni(Cr,Al)2O4). Directly below this top layer are columnar α-alumina precipitates and chromia deposits. The large dark precipitates deeper in the alloy's subsurface are pockets of internal aluminum oxide.
Some areas of the samples exhibited protective mixed alumina and chromia scale growth, with coverage percentages that varied between samples, as indicated in Table 1.

Table 1: Approximate penetration depth of internal oxides for each sample after extended oxidation conditions, and the percentage of the sample surface exhibiting protective scale.

Surface condition    Internal oxidation depth    Protective scale formation
Vapor hone: Close    12 µm                       14.5%
Vapor hone: Far      26 µm                       0%
Abrasion: 320g       25 µm                       8.0%
Abrasion: 500g       13 µm                       0%
As received          16 µm                       0%
DISCUSSION Replication of the bi-thermal exposure reported by Kang indicated, in the current study, that it is possible to form an alumina-rich scale on surface-worked HAYNES-224™; however, exclusively protective scaling was not achieved on any given sample. Even though a continuous protective scale did not form,
some areas did exhibit protective scaling for extended times. These protective areas, found on both the "close" vapor-honed and 320-grit surface-deformed samples, showed no sign of internal oxidation. All modes of surface treatment induced deformation and, hence, dislocations in the subsurface of the alloy. During the initial thermal step, the yielded subsurface was prone to recovery and recrystallization, with the driving force to reduce strain energy. The size and orientation of the recrystallized grains depend on the extent of deformation in the alloy subsurface. Specifically, a higher amount of deformation leads to a smaller recrystallized grain size for a given time and temperature. The finer the recrystallized grains are at the surface, the more diffusion paths there are for aluminum and chromium, which in turn increases the flux of these elements to the surface [4]. The "close" vapor hone condition produced the most favorable scale, the least internal oxidation, and the highest surface percentage of protective scale formation. These results correlate with the understanding that the "close" vapor-honed sample experienced the greatest amount of surface deformation, leading to the finest grain sizes on recrystallization. Having identified "close" vapor honing as the most effective surface deformation technique, focus can turn to finding ideal initiation heating conditions. The next step of this research is to use etching techniques to determine the size and orientation of the recrystallized grains that led to protective scale formation. After characterizing the necessary recrystallized grains, further tests should be conducted to determine the temperature and length of the initiation step that best produce favorable recrystallization conditions. REFERENCES [1] H.K.D.H. Bhadeshia, Nickel-Based Superalloys. [2] Gleeson, Brian; Zhao, Wei; Kang, Yihong, Kinetics Affecting Al2O3-Scale Establishment. [3] Y. Kang, IN-SITU AND EX-SITU STUDIES. [4] N. Birks, G.H. Meier, F.S. Pettit, Introduction.
ACKNOWLEDGEMENTS Sincere thanks to PhD students Matt Kovalchuk and Patrick Brennan in Dr. Gleeson's group.
Numerically Resolved Radiation View Factors Via Hybridized CPU-GPU Computing Asher J. Hancock, Matthew M. Barry, Ph.D. Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: ajh172@pitt.edu INTRODUCTION Thermoelectric generators (TEGs) are devices that can convert thermal energy directly into electrical energy via an applied temperature gradient. Constructed of n- and p-type doped semiconductors connected electrically in series by highly conductive interconnectors, TEGs operate thermally in parallel, i.e., between a heat source and sink, to develop an output voltage via the Seebeck effect.
Figure 1: Example geometry of a thermoelectric device (left) from [5] and tessellated emitting and receiving surfaces with blocking geometry (right).
To achieve high efficiency values, TEGs typically operate under large temperature differences. Under these conditions, radiative heat transfer can dominate over conduction and convection, and thus must be properly analyzed. Numerical modeling is often used to analyze a TEG's radiative heat transfer. However, due to the computationally intensive process of characterizing the proportion of radiation emitted by one surface and absorbed by another, known as the view factor, Fij, this analysis is often oversimplified or computed with low accuracy. Vujičić et al. implemented a Monte Carlo and finite element technique to resolve radiative view factors, but the tradeoff between low mesh resolutions and computation time resulted in erroneous values [1]. Maruyama developed a parallelized ray-tracing program to determine the radiation heat transfer between arbitrary surfaces, but its linear programming model resulted in excessively long computation times [2]. In this work, a robust Java program was created to rapidly characterize a TEG's view factors based upon a variety of geometrical parameters. To decrease computation time and increase accuracy, calculations were parallelized across multiple GPUs. The program was validated against analytical values for many geometrical configurations. Finally, a study varying TEG geometrical parameters for view factor analysis was completed. METHODS The radiative view factor between surfaces i and j is calculated via the double-area integral Fij = (1/Ai) ∫Ai ∫Aj (cos θi cos θj)/(π S²) dAj dAi, where S is the length of the ray connecting the two differential areas and θ is the angle between that ray and the corresponding surface normal.
To numerically resolve the view factors within a TEG, each surface was tessellated into differential areas based upon its geometry and a desired refinement level, using the meshing approach of [3]. Each differential area of the emitter surface casts a ray to each receiver differential area, and each ray contributes individually to the view factor calculation.
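As an illustration of this differential-area summation (a simplified sketch, not the project's Java implementation, and without blocking geometry), the view factor between two coaxial parallel unit squares can be approximated by midpoint quadrature:

```python
import math

def view_factor_parallel_squares(d, n=20):
    """Approximate F_ij between two coaxial parallel unit squares a
    distance d apart, summing cos(th_i)*cos(th_j)/(pi*S^2) dA_j dA_i
    over n x n cell centers on each surface (A_i = 1)."""
    centers = [(i + 0.5) / n for i in range(n)]
    dA = 1.0 / n**2
    total = 0.0
    for xi in centers:
        for yi in centers:
            for xj in centers:
                for yj in centers:
                    s2 = (xj - xi)**2 + (yj - yi)**2 + d**2
                    # for parallel plates, cos(th_i) = cos(th_j) = d/S
                    total += (d * d) / (math.pi * s2 * s2) * dA * dA
    return total
```

For well-separated plates (d = 10) the result approaches the point-source limit 1/(π d²), the same kind of analytical check used for the curves in Figure 3.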
Figure 2: Example of two tessellated surfaces with one ray cast between two differential areas [5].
Within a TEG, not every emitted ray will reach every receiving tessellation, due to possible intersection with the TEG's semiconducting legs or interconnectors (see Figure 1). This interference, known as the shadow effect, is accounted for via the Möller-Trumbore ray-triangle intersection algorithm [4]. The Möller-Trumbore algorithm is computed for every emitted ray. As the refinement value increases, the number of required calculations grows at an exponential rate. To resolve a large number of Fij and Möller-Trumbore computations effectively, hybrid CPU-GPU computing was utilized. Parallel computing is desirable due to the large core counts intrinsic to GPUs, which can operate simultaneously to drastically reduce runtime.
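A minimal version of the Möller-Trumbore ray-triangle test used for the shadow check might look like the following (a sketch in Python rather than the project's Java, with hypothetical helper names):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return the ray parameter t if the ray
    origin + t*direction (t > 0) passes through triangle v0-v1-v2,
    else None. Used to test whether a blocking facet shadows a ray."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None
```

Each view-factor ray is tested against the triangles of the semiconducting legs and interconnectors; a hit at a distance shorter than the ray length means the ray is blocked and contributes nothing.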
PROGRAM VALIDATION Analytical solutions for the view factors of known geometries were computed to validate the code. Parallel and perpendicular geometries were considered to showcase code robustness. Figure 3 depicts the analytical curves for numerous cases of coaxial and perpendicular plates; the calculated view factors for specific configurations are plotted as points among the curves. A grid independence study, shown in Table 1, demonstrates the convergent nature of the code with an increasing number of tessellations.
Figure 3: Analytical graphs for various rj/L values for coaxial parallel plates and various Y/X values for perpendicular plates separated on the normal through the corner.

Table 1: Residuals for selected perpendicular plates aligned along the normal through the corner.

                    Number of Tessellations Per Surface
Z/X   Y/X   65,536    131,072   262,144   524,288   1,048,576
0.1   1.0   9.6E-5    4.9E-5    2.4E-5    1.2E-5    6.0E-6
2.0   2.0   2.8E-6    1.4E-6    7.2E-7    3.6E-7    1.8E-7
4.0   4.0   1.6E-6    8.0E-7    4.0E-7    2.0E-7    1.0E-7

THERMOELECTRIC ANALYSIS Following analytical validation, numerous thermoelectric geometries were tessellated and run through the program. Interconnector thicknesses and height-per-width (H/W) ratios were varied. The packing density, the ratio of the area of the semiconductors to the area of the entire TEG, was held constant. Each emitter and receiver plate was tessellated to a refinement of at least four million triangles. Results are presented in Figure 4.
Figure 4: View factors vs. interconnector thickness for various H/W ratios.
DISCUSSION It is seen that as the H/W ratio increases, radiative view factors decrease. This is due to the increased shadow effect from the internal TEG geometry disrupting the radiative transfer between ceramic plates. Similar logic is applied when increasing interconnect thickness. These results agree with the trends presented in [5]. In comparison to the geometrical intersection algorithm presented in [5], the Möller-Trumbore algorithm offered up to 15% runtime gains. This speedup results from the decreased number of logical checks per view factor calculation, ultimately reserving more memory for simultaneous intersection checks. REFERENCES 1. Vujičić et al. Communications in Numerical Methods in Engineering 22, 197-203, 2005. 2. Maruyama S. Numerical Heat Transfer, Part A: Applications 24, 181-196, 1993. 3. Persson et al. SIAM review 2, 329-345, 2004. 4. Möller et al. Journal of Graphics Tools 2, 21-28, 1997. 5. Barry et al. Energy 102, 427-435, 2016. ACKNOWLEDGEMENTS I’d like to thank Dr. Barry for his help and guidance throughout this project. I’d also like to thank the Swanson School of Engineering for funding this research.
MANUAL BINDER JET PRINTER Jeffrey Martin, Katerina Kimes, Erica Stevens, Markus Chmielus Chmielus AM3 Laboratory, Department of Mechanical Engineering University of Pittsburgh, PA, USA Email: JLM438@pitt.edu, Web: http://chmieluslab.org/ INTRODUCTION Ni-Mn-Ga is a magnetic shape memory alloy that changes shape in a magnetic field and has been studied extensively for use in strain sensors and actuators [1]. Ni-Mn-Ga single crystals are currently used for these applications, but they are time-consuming and costly to manufacture. It is therefore of interest to fabricate Ni-Mn-Ga using a different method such as binder jet 3D printing (BJ3DP). BJ3DP is a method of additive manufacturing in which a layer of metal powder is spread flat and then selectively bound with a polymer binder. The binder is cured and the process repeats, building layer upon layer until the part is complete [2]. To print Ni-Mn-Ga from small amounts of powder, it was necessary to develop a multi-material manual binder jet printer along with a technique for additively manufacturing Ni-Mn-Ga while retaining its magnetic shape memory effect. The procedure and device developed to print Ni-Mn-Ga must satisfy the physical properties of this metal while also accommodating the retention of its functional characteristics. PRINTER COMPONENTS
Figure 1 Component schematic captured from SolidWorks
The frame (Figure 1a) is the foundation of the printer. All the other components of the printer interface with the frame, so it is important that it is rigid enough to support the other components and resist motion caused by the user's interactions. For this reason, 8020 struts and a 6061 aluminum plate were our materials of choice for the frame's construction. All the machine work on the aluminum pieces was done with the vertical knee mill except for the central square cut for the build surface; this cut was made with the wire EDM because radiused corners were not acceptable, as we wanted a perfectly square hole. To drive the build surface, we chose to use a micrometer (Figure 1d). A micrometer provides a mechanically simple, accurate, and cost-effective way to reliably lower the print bed while achieving our desired resolution. This eliminated the need for a complex drive system or a layer measurement system. The build shaft, the piece atop the micrometer, is additively manufactured from an Onyx composite material with good strength and thermal resistance properties. To ensure a single point of contact between the build shaft and the micrometer, a ball bearing is situated between them. This limits frictional forces that might exert a torque on the shaft, allowing the system to translate smoothly without disturbing the powder. Lastly, retention springs ensure that there is no backlash in the precise micrometer adjustments. The springs also ensure that the build shaft's vertical motion is not inconsistent or obstructed by intersystem friction. The transfer assembly (Figure 1e) is used to transport the completed sample from the printer to a curing oven. It must be temperature resistant to 200 °C and provide an easy, repeatable procedure for extracting the sample without disturbing its fragile shape. To accomplish this, a transfer assembly was designed with a series of interconnected dovetails and additively manufactured from high-temperature resin. The assembly consists of the build plate, the holder, and the transfer box interlocked with the sample inside, which can be easily transferred to the furnace. The powder carriages (Figure 1c) were designed to hold powder and deliver it to the print bed without disturbing the previous layers.
The square hole in its center is used to store the powder and is of the same size as the print bed. The powder sweeper (Figure 1c) was designed to remove excess powder from the print bed surface and around the print bed. It utilizes a blade-like design to flatten the print area. To aid powder removal, both tools have felt placed along their bottom surfaces to direct powder into corresponding holes in the top aluminum block, used to capture unused powder for reuse.
A magnetic coil underneath the build plate was designed to produce a magnetic field of 5 mT (30 mm above the coil) used to align powder during printing. To achieve this, we used 20-gauge wire turned between 200 and 250 turns over a coil height of 25 mm and a diameter of 40 mm [3]. A coil holder was additively manufactured from high-temperature resin to shield the drive assembly from the heat of the coil.

The addition of powder catchers allowed us to recapture unused powder. They were designed as easily removable, locking cartridges that secure to the 8020 struts by taking advantage of the multipurpose groove. This part took countless prototypes and redesigns to conquer the problem of stress risers [4].

Lastly, the heat lamp is used to dry each layer of binder after deposition. Obstacles concerning heat transfer and space restrictions were faced when creating it: it must hold the lamp at a specific height above the print bed without interfering with the print bed or falling off the top aluminum block. The current iteration uses additively manufactured stand legs that feature thicker walls and higher infills to combat warping, and narrower feet to make placement on top of the printer easier.

PRINTER OPERATION The print starts with the micrometer at its top position. The micrometer is lowered and the powder carriage with the desired material is dragged over the print bed, depositing powder. Then the sweeper is dragged over the print area, leveling the surface. At this point the magnetic coil is turned on. Next, binder is deposited and the layer is dried with the lamp. The magnetic coil is turned off, the lamp is removed, and the micrometer is lowered once more to begin the next layer.

RESULTS The printer has been functional for the past several months and has produced tens of samples for research in our lab (Figure 2). Though it is operational, the printer is not finished; it will continue to evolve as it is used. Printer refinement has been driven by feedback from the graduate students who use it.

Figure 2 Samples produced by the manual printer.

DISCUSSION The endeavor of magnetically aligning the powder particles during printing was especially challenging. There is a delicate balance between wire thickness, amperage, number of turns in the coil, and heat. Calculations were done to approximate the number of turns for a coil of a specific wire gauge that would generate our target field magnitude, but matters were further complicated when off-axis shifts, heat, and resistance accumulation over time were considered.

Developing a method for binder deposition proved taxing as well. Starting with a setup that sprayed vertically downward onto the print surface, we found that a constant mist was hard to provide. The answer was a dosed sprayer that could dispense a mist horizontally. Aided by a spray holder enforcing a constant height and distance, we achieved a desirable mist.

REFERENCES 1. Hobza et al. Sensors and Actuators A: Physical 269, 137-144, 2018. 2. Mostafaei et al. Materials and Design 162, 375-383, 2018. 3. Walker. Fundamentals of Physics, Halliday & Resnick, 2011. 4. Snap-fit Design Manual, BASF, 2007. ACKNOWLEDGEMENTS Partial funding was provided by NSF #1727676. Thanks to the community within the Chmielus AM3 lab at the University of Pittsburgh Swanson School of Engineering and to Jeff Speakman and Andy Holmes in the Student Center for Product Innovation.
EFFECT OF PRINT PARAMETERS ON DIMENSIONAL ACCURACY AND SINTERING BEHAVIOR OF BINDER JET 3D PRINTED WATER AND GAS ATOMIZED INCONEL 625 Lorenzo Monteil, Ruby Jiang and Markus Chmielus AM3 Laboratory, Department of Materials Science and Engineering University of Pittsburgh, PA, USA Email: ljm101@pitt.edu INTRODUCTION Binder jet printing (BJP) is one of the subsets of metal additive manufacturing (AM), one in which a liquid binder bonds a powdered material together [1]. Thin layers of powder material are deposited by a hopper and roller on the build platform, with a binder drawing the sections of the object on the powder, transforming it into a solid material. After each binder pass, the platform lowers and a new layer of powder is spread over the top of the object from a container on the side (Fig. 1) [2]. The part then goes through a sintering process to burn off the binder and densify the object. Creating objects via binder jet three-dimensional printing (BJ3DP, see schematic in Figure 1) has proven to be a very successful and adaptable process due to its potential to fabricate complex components with controlled microstructure, shorter lead times, and reduced material waste as a near-net-shape process, without the need for a melting process or control of the inner atmosphere during printing [3]. In order to control the green density (the ratio of metal powder volume to the external volume of the printed part), microstructure, and porosity distribution, we need to understand the effect of various printing parameters in the BJP process and of post-processing parameters.
Ni-based alloy Inconel 625 (IN625) has many applications within engineering due to its combination of yield strength, creep strength, fatigue strength, and excellent oxidation and corrosion resistance in aggressive environments [4, 5]. This work focuses on the effects various printing parameters have on the dimensional accuracy of printed metal parts of both water atomized (WA) and vacuum-melted gas atomized (GA) IN625, as well as an analysis of the effects of binder saturation on dimensional accuracy and porosity. The printing parameters explored include the oscillation speed, recoat speed, binder saturation, roller traverse speed, and roller rotation speed. METHODS AND MATERIALS In this study, the WA and GA IN625, both from Carpenter Technology Corporation, were printed using an ExOne Innovent binder jet 3D printer. The water-based binder from the ExOne company, with ethylene glycol monobutyl ether (10 vol.%, CAS # 111-76-2) and ethylene glycol (20 vol.%, CAS # 107-21-1), was used. Each set of print parameters was performed on both WA and GA powders as shown in Table 1.

Table 1 List of parameter combinations for printing.

Run    Oscillation   Recoat        Binder          Roller Traverse   Roller Rotation
Order  Speed (rpm)   Speed (mm/s)  Saturation (%)  Speed (mm/s)      Speed (rpm)
1      2800          150           100             30                700
2      2100          150           100             1                 1
3      2100          150           60              1                 700
4      2800          90            100             1                 700
5      2800          90            60              1                 1
6      2100          90            100             30                1

Figure 1 Schematic of a binder jet printing bed
In total, sixteen different parameter combinations were tested in order to determine which parameters most affected the dimensional accuracy, while a separate run using default parameters and varying binder saturations was conducted to test the effect binder saturation has on dimensional accuracy and porosity. After initial printing, the semi-finished "green parts" were cured at 180 ℃ in a JPW Design and Manufacturing furnace. After curing, the green parts were sintered in an Across International tube furnace, at 1270 ℃ for the water atomized parts and 1285 ℃ for the gas atomized parts, in an alumina powder bed with titanium sponge. Porosity data from the binder saturation tests was collected and processed via SEM micrographs and ImageJ image analysis software. DATA PROCESSING The density of the as-printed parts ("green density") was measured manually, finding both the volume and the mass of each individual metal part to obtain the density. The resultant finished metal parts were analyzed using an Archimedes principle setup with an OHAUS AX324 precision balance (0.1 mg accuracy). All density data was plotted using MATLAB. RESULTS The resultant green density versus print parameters is presented as a mean plot of the five differing parameters in Figure 2. Differing parameters affect the dimensional accuracy of the printed parts: WA printed parts are most affected by binder saturation and roller traverse speed, while GA printed parts are most affected by roller traverse speed and roller rotation speed.
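The two density measurements described above, the manual green density and the Archimedes bulk density, reduce to short formulas. A sketch with hypothetical balance readings follows (the solid density of IN625 is taken as ~8.44 g/cm³, an assumed reference value, and the helper names are invented):

```python
def green_density(mass_g, l_cm, w_cm, h_cm, rho_solid=8.44):
    """Green density as a percent of the solid material density, from a
    manually measured mass and external dimensions (8.44 g/cm^3 assumed
    for fully dense IN625)."""
    return 100.0 * (mass_g / (l_cm * w_cm * h_cm)) / rho_solid

def archimedes_density(m_air_g, m_sub_g, rho_fluid_g_cm3=0.9982):
    """Bulk density via Archimedes' principle: the mass difference between
    weighing in air and submerged gives the displaced fluid volume
    (water at ~20 C by default)."""
    return m_air_g * rho_fluid_g_cm3 / (m_air_g - m_sub_g)

# Hypothetical readings from the precision balance:
print(round(archimedes_density(8.00, 7.05), 3))  # → 8.406
```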
The results from the separate binder saturation testing exhibited a similar distribution of porosity across all saturations. DISCUSSION The effect of binder saturation within WA parts, and its lack of effect within GA parts, seems to stem from water atomized powders being less densely packed within the powder bed due to the WA powder particles' irregular morphology; thus any filling material introduced during the printing process affects the resultant density of the green parts. Lower binder saturations increased the green density, which follows logically, as less binder occupying space within a metal part increases specimen density and structural integrity. Binder saturation's lack of effect within gas atomized powders is most likely due to their already high packing density. Roller actuation's effect on density comes from its purpose of uniformly spreading the powder during the printing process. The roller's assistance in creating a more uniform spread within the print bed when activated contributes to higher density, though it is notable that occasional streaking and sticking of the powder also occurs and can create unevenness in printed parts. REFERENCES 1. Mostafaei et al. Data in Brief 10, 116-121, 2017. 2. Jacobs et al. Add Man 24, 200-209, 2018. 3. Mostafaei et al. Mater Des 162, 375-383, 2018. 4. Reed et al. Cambridge University Press, 2006. 5. Özgün et al. J Alloy Compd 546, 192-207, 2013.
ACKNOWLEDGEMENTS The authors gratefully appreciate support from ANSYS Additive Manufacturing Research Laboratory, hosted at University of Pittsburgh. This summer internship was funded by the Swanson School of Engineering and the Office of the Provost.
Figure 2 Mean plot of print parameters tested within the experiment for both WA (top) and GA (bottom)
CHARACTERIZATION OF HIERARCHICAL STRUCTURES IN REMELTED NI-MN-GA SUBSTRATES FOR DIRECTED ENERGY DEPOSITION MANUFACTURING OF SINGLE CRYSTALS Tyler W. Paplham, Jakub Toman, Markus Chmielus Advanced Manufacturing and Magnetic Materials Laboratory, Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: tyler.paplham@pitt.edu, Web: http://chmieluslab.org/ INTRODUCTION Magnetic shape-memory alloys (MSMAs) such as Ni2MnGa Heusler alloys are a unique class of metals which exhibit large reversible plastic deformations (up to 6-12% strain) upon a change of crystallographic orientation in the crystal, which may be induced by a changing external magnetic field or an applied mechanical stress. This is made possible by the presence of martensitic twin variants, which are alternating regions within the crystal structure with differing directions of easy magnetization. The lattices in each region are misaligned by approximately ninety degrees, such that the twin boundary defines a dislocation line. Upon the application of an external mechanical stress, these boundaries "travel" throughout the crystal with the help of a continuous series of dislocations, resulting in a macroscopic plastic deformation as the crystal structure transitions from one alignment to another and the direction of easy magnetization changes [1,2]. This phenomenon is called the inverse magnetoplastic effect (IMP) and makes these materials attractive for use in fatigue-resistant, energy-dense actuating devices.
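As a rough check on the quoted 6-12% strain range: for a tetragonal martensite, the maximum twinning strain is commonly estimated from the lattice tetragonality as e0 = 1 - c/a. This relation is a standard back-of-envelope estimate from the MSMA literature, not something stated in this abstract:

```python
def max_twinning_strain(c_over_a: float) -> float:
    """Maximum magnetically induced (twinning) strain of a tetragonal
    martensite, estimated as e0 = 1 - c/a."""
    return 1.0 - c_over_a

# A 10M-type Ni-Mn-Ga martensite with c/a ~ 0.94 gives ~6% strain,
# matching the low end of the range quoted above.
print(max_twinning_strain(0.94))
```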
However, non-negligible effects may only be generated from single crystals grown with traditional manufacturing (Bridgman crystal growth), which is time-intensive and often leads to undesired macrosegregation within the crystal. A novel approach for producing single crystals without this effect might be to use directed energy deposition (DED), a type of laser additive manufacturing. The purpose of this project was to analyze changes in the solidification behavior of a single-crystal Ni-Mn-Ga substrate after melting with a laser, as would be used in DED, at varying laser powers and travel velocities, in order to determine which parameter combination best preserves the single-crystal nature.

METHODS The substrate used was assumed to be an austenitic single crystal and had a nominal composition of Ni51Mn24.4Ga24.6. An Optomec laser engineered net shaping (LENS) 450 system was used to remelt the substrate. For each track, the nominal laser power and travel velocity were chosen, then the laser was positioned and started a distance away from the substrate so that the full velocity had been reached by the time the laser made contact with the substrate. The laser was then shut off a distance away from the substrate and repositioned for the next track. Eight parallel tracks were made on the substrate surface. The substrate was then sectioned along a plane parallel to the top surface, revealing undisturbed material on which a ninth track (350-10) was made. The laser power and travel velocity were varied, and the chosen combinations are listed in Table 1. Each track was cut perpendicular to the travel direction at approximately the halfway point in the track, mounted, polished, and etched to reveal the melt pool. The melt pools were then imaged on a digital microscope and analyzed in ImageJ.

Table 1: Parameter combinations

Laser Power [W]   Velocity [mm/s]
100               0.5, 1, 2.5
200               2.5
250               1, 2.5, 5
350               10, 10
RESULTS Melt pools were observed in six of the tracks and are shown in Fig. 1. Of great importance to the determination of appropriate parameters for single crystal remelting and deposition are the nature of the
hierarchical structures within the melt pool, namely the planar solidification region (PSR) and dendrites.
Figure 1: Resulting melt pools with parameters listed as power-velocity. The outline of each melt pool is shown in blue, while regions of different structure are outlined in yellow. All scale bars (lower right of each sub-image) read 200 µm.
The individual effects of laser power and laser travel velocity on the thickness of the PSR and the normalized depth of transition from [100] dendrites to [001] dendrites are plotted in Fig. 2 and Fig. 3, respectively, where the normalized depth of transition is defined as the ratio of the depth of the [100] dendrites to the total depth of the melt pool.
Figure 2: Average thickness of the planar solidification region (PSR) as a function of laser power (left) and travel velocity (right). Note that parameter combinations that did not result in melt pools are excluded.
Figure 3: Relative depth of transition from [100] to [001] dendrites as measured from the top surface of the melt pool as a function of laser power (left) and travel velocity (right).
A simple qualitative analysis was also performed on the preponderance of certain dendrite angles,
particularly diagonally oriented dendrites that do not match <100>. However, no trends could be determined with respect to laser power or travel velocity within this small sample set, so a more comprehensive study is needed to provide insight into these parameters' effects on the presence and relative quantity of diagonal dendrites. DISCUSSION The trend in the relative depth of transition between [100] and [001] dendrites is consistent with theory. A high-power, low-velocity combination results in a deep, abrupt melt pool, whereas low power and high velocity result in a more gradual, shallower melt pool. The solidification front advances with a velocity normal to itself, but the allowable directions of dendrite growth are limited to <100>. The dendrite growth must keep up with the solidification front, so the direction selected will be that which requires the lowest growth rate for the dendrites. Therefore, an abrupt, deep melt pool will have more dendrite growth in the [100] direction, whereas a shallow, gradual melt pool will have more dendrite growth in the [001] direction [3]. The observed behavior of the PSR is in disagreement with the Rosenthal solution of the thermal model used in Gäumann et al., which predicts that the average G^3.4/V ratio decreases with increasing power, and thus that PSR thickness should decrease with increasing power [4]. It is hypothesized that increasing the power results in a slower initial rate of decrease of G^3.4/V. Because of the shape of the G^3.4/V vs. z curve, this could maintain a thermal gradient sufficient for planar growth across a greater range of z while still showing a lower overall average value, which would result in the observed increase in PSR thickness with increasing power. REFERENCES 1. Chmielus, doctoral dissertation, Logos Verlag Berlin, 2010. 2. Carpenter et al. Boise State University Theses and Dissertations, 544, 2008. 3. Rappaz et al. Metallurgical Transactions A 20A, 1125-1138, 1989. 4.
Gäumann et al. Acta Materialia 49, 1051-1062, 2001. ACKNOWLEDGEMENTS This project was funded by NSF grant #1808082 and the Mascaro Center for Sustainable Innovation.
WIRELESS SIGNAL TRANSMISSION THROUGH HERMETIC WALLS IN NUCLEAR REACTORS Jerry Potts, Yuan Gao and Heng Ban Multiscale Thermophysics Laboratory, Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: jlp221@pitt.edu INTRODUCTION Wireless sensors can serve as a cheaper, more reliable option to the wired sensors which are currently used in nuclear reactors. More importantly, they are able to circumvent issues such as feedthroughs penetrating pressure barriers, corrosion, and other forms of cable degradation [1, 2]. However, these sensors are still exposed to a harsh radiative environment and must have a signal strong enough to penetrate stainless steel walls, cladding, and/or neutron moderators. To resolve the challenges of signal penetration, low frequency mutual inductance can be used as the wireless transmission mechanism. Mutual inductance allows the signal to penetrate stainless steel walls in the reactor due to the tight electromagnetic coupling of the inductor pair. In addition, by operating the sensor at a low frequency, the signal will decay less during transmission through the steel wall or cladding. This would allow the sensor to be sealed inside cladding to avoid direct contact with the coolant. The purpose of this project was to design a system to assess the validity and accuracy of signal transmission using this method before testing the system in reactor-like conditions. The system was designed around the use of a linear variable differential transformer (LVDT) as the sensing mechanism. The Halden Reactor project (HRP) designed and used LVDTs to measure fuel parameters in reactors [3]. Thus, using the LVDT sensor makes this wireless system more applicable to near-term applications in reactors. THEORY Mutual inductance occurs when an alternating voltage in a coil induces a voltage in adjacent coils via its changing magnetic field. This property can be used to achieve both wireless power and signal transfer.
Figure 1: The equivalent circuit showing the principle of mutual inductance. “1” and “2” refer to the source and load circuits. M is the mutual inductance between coils L1 and L2.
Wireless signal transfer can be modeled by the equivalent circuit shown in Fig. 1 [4]. Kirchhoff's voltage law can be applied to this circuit as shown in Eq. 1 and 2.

Vs = (jωL1 + R1)·I1 − jωM·I2   (1)
0 = jωM·I1 − (jωL2 + R2 + jωLm)·I2   (2)

These equations were used to derive expressions for the output voltage in both power and signal transfer applications and were the basis for the design of the wireless transfer system. METHODS A wireless signal transmission model was designed for this experiment based on the circuit diagram in Figure 2. The system has two components, each in separate housings which are wirelessly coupled by two pairs of inductor coils. One section houses the LVDT, while the other supplies power to the sensor and receives the output signal via the inductors.
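Equations (1) and (2) form a linear system in the coil currents I1 and I2, so they can be solved directly for the voltage developed across the measurement inductance Lm. A sketch of that calculation follows; all component values below are hypothetical, chosen only to exercise the model:

```python
import numpy as np

def output_voltage(Vs, w, L1, L2, Lm, M, R1, R2):
    """Solve Eqs. (1)-(2) for the coil currents and return the complex
    voltage across Lm. All arguments are in SI units (V, rad/s, H, ohm)."""
    j = 1j
    A = np.array([[j*w*L1 + R1, -j*w*M],
                  [j*w*M, -(j*w*L2 + R2 + j*w*Lm)]], dtype=complex)
    b = np.array([Vs, 0.0], dtype=complex)
    I1, I2 = np.linalg.solve(A, b)
    return j*w*Lm * I2

# Example: 1 kHz drive with loosely coupled mH-scale coils (values assumed).
V = output_voltage(Vs=1.0, w=2*np.pi*1e3, L1=10e-3, L2=10e-3,
                   Lm=5e-3, M=2e-3, R1=1.0, R2=1.0)
print(abs(V))
```

A sanity check on the model: with M = 0 (no coupling) the secondary current, and hence the output voltage, vanishes.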
Figure 2: A circuit diagram of the sensor. Power is supplied using a function generator and is transferred to an LVDT whose output is transmitted to a voltage meter.
A 3D model of this system is shown in Figure 3. The use of the LVDT allowed for a simple assessment of the data collected, as the voltage output of the sensor is directly related to the core position. This position was controlled using a fine pitch screw whose rotation corresponded to the linear position of the ferritic core.
Figure 3: A colored 3D model of the sensor system. Each color represents a different section of the system. Most notably, the pink and light blue sections are the inductor coil pairs, and the yellow section is the mechanism used to move the iron core.
Once the system was fully assembled, the output voltage of the system was recorded over the full range of the core displacement for different input voltages from 1 to 4 V. The voltage was recorded after every half rotation of the screw for three trials per voltage tested. Linear regressions were generated for each set of data and then divided by their input voltage. The coefficients of these equations were then averaged together to obtain a general equation for the output voltage of the system. The standard deviation of each data point was also calculated, the highest of which was used to calculate the uncertainty of the overall system. RESULTS The output voltage as a function of core displacement for the full system is shown in Figure 4. Each line represents a different input voltage that was supplied to the LVDT. The data represented are the average values across all the trials.
Figure 4: The voltage output of the full system as a function of core displacement for a selection of different voltages supplied to the LVDT.
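The regression-averaging procedure described above can be sketched as follows, using synthetic data consistent with the reported calibration (not the measured values; the function name is invented):

```python
import numpy as np

def general_coefficients(trials):
    """Average per-input-voltage linear fits into one normalized calibration.
    `trials` maps input voltage Vin -> (positions_mm, mean_output_V)."""
    slopes, intercepts = [], []
    for vin, (x, vout) in trials.items():
        m, b = np.polyfit(x, vout, 1)   # linear regression for this Vin
        slopes.append(m / vin)          # divide coefficients by Vin
        intercepts.append(b / vin)
    return float(np.mean(slopes)), float(np.mean(intercepts))

# Synthetic data following the reported calibration V = (0.0065X + 0.10)*Vin,
# sampled every half rotation over the 0-20 mm core travel.
x = np.linspace(0.0, 20.0, 41)
trials = {vin: (x, (0.0065 * x + 0.10) * vin) for vin in (1.0, 2.0, 3.0, 4.0)}
print(general_coefficients(trials))  # ≈ (0.0065, 0.10)
```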
A general equation is shown in Eq. 3, where the output voltage is a function of the input voltage in volts and the core displacement X in mm.

V = (0.0065X + 0.10)·Vin   (3)
Using this general equation to generate new linear regressions for each set of data results in R2 values ranging from 0.9901 to 0.9999. The maximum uncertainty of the output voltage was found to be 9.57x10^-4 V across all the input voltages. This corresponds to a type A uncertainty of 48 µm for the core displacement. The type B uncertainty from the methodology was found to be 26 µm, resulting in an overall uncertainty of 54 µm for this system. DISCUSSION From the uncertainty calculations it was found that there was no correlation between input voltage and the uncertainty of the system. The highest uncertainty value occurred at an input of 3 V, rather than 4 V, while all other values fluctuated with no apparent trend. Thus, there is no concern of a larger uncertainty if the input voltage is increased. In addition, the high linearity of the system indicates that these measurements are highly repeatable with very little noise interference. However, compared to the specified uncertainty of 32 µm for the commercial LVDT used here, this experimental setup significantly increases the uncertainty of the system. Additional precautions are therefore necessary to minimize the added uncertainty from all aspects of the design, including the inductor coils. Further testing is currently being performed to assess how the system would operate in reactor-like conditions and whether the technology would still be valid under those circumstances. REFERENCES 1. Nekoogar et al. Nuclear Technology 202, 1-10, 2018. 2. Hashemian et al. Nuclear Power - Control, Reliability and Human Factors, 49-66, 2011. 3. Solstad et al. Nuclear Technology 173, 78-85, 2011. 4. Suh et al. 2013 IEMDC, 234-240, 2013. ACKNOWLEDGEMENTS Funding for this project was provided by the U.S. Department of Energy.
EFFECTS OF PRINTING PARAMETERS ON DENSITY AND MECHANICAL PROPERTIES OF BINDER JET PRINTED WC-Co Pierangeli Rodríguez*, Danielle Brunetta, Katerina Kimes, Drew Elhassid and Markus Chmielus Department of Mechanical Engineering and Materials Science, University of Pittsburgh, Pittsburgh, PA, USA *Email: pir6@pitt.edu INTRODUCTION Tungsten carbide-cobalt (WC-Co), also known as cemented carbide, is a cermet (ceramic-metal) material widely known for its excellent mechanical properties and wear-resistant applications, including machining, cutting, and rolling, as well as mining and oil drilling tools requiring high hardness at elevated temperatures and thermal shock resistance. Morphologically, WC-Co is composed of hard, polygonal WC grains in a tough Co matrix. Co is a binder metal, as it allows metallic bonds to form between WC particles during sintering, reducing brittleness without greatly decreasing hardness [1]. Traditionally, WC-Co parts are manufactured by powder metallurgy, through which WC and Co particles are blended and ball-milled. Parts are formed through mechanical pressing or molding to obtain a green state. A small amount of paraffin is added to increase density and is removed through a de-waxing process, followed by sintering and hot isostatic pressing (HIP), resulting in the final, fully dense cermet part [2]. This technology is slow and expensive, requiring the production of molds (restricted to mass production), and limited in resolution, as fine details cannot be achieved through this method. As a result, additive manufacturing (AM) appeared as an option to create WC-Co shapes. The focus of this project is the AM technology of binder jet printing (BJP). BJP has the potential to produce WC-Co objects by selectively stacking alternating layers of powder and binder according to a computer design. It requires post-processing, curing, and sintering to obtain a fully dense part.
The objective of this project was to find the optimal set of printing parameters and characterize the microstructure of WC-Co parts resulting in similar properties to those obtained conventionally. This was achieved through two series of design of experiments (DoE) testing the most relevant printing settings.
METHODS Spray-dried, half-sintered WC-Co powder (12.5% Co) provided by General Carbide was characterized through scanning electron microscopy (SEM) and stereology. Cubes of 1x1x1 cm3 were BJP with an ExOne Lab, varying settings in two DoE studies. The first DoE study involved comparing roller speed (5 or 15 mm/s), layer thickness (50 or 100 µm), and build-to-feed ratio (1.5 or 2) to obtain the highest powder-bed packing density without binder. Packing densities of the resulting eight "prints" were calculated as mass (OHAUS AX324) per volume. The combination yielding the highest packing density was chosen: 100 µm layer thickness, 5 mm/s roller speed, and a 2:1 build-to-feed ratio. The second DoE stage used the previous results as constants and consisted in varying binder saturation (100%, 160%, or 220%) and drying time (30 s, 45 s, or 60 s). These iterations resulted in nine prints of 12 cubes each. Green density was measured as before. Green parts were sintered by General Carbide at 1440 ℃. One sample per print was cross-sectioned, cold mounted, ground, polished (Struers LaboPol 5), etched with Murakami reagent, and imaged (Zeiss Smartzoom5 optical microscope). Relative density was calculated with ImageJ® software [3]. To measure hardness and toughness, three indents were imprinted (Rockwell 724 Wilson) on each sintered sample with a Vickers diamond tip and a major load of 60 kgf. Vickers hardness was calculated by measuring the indent's diagonals (Keyence VH-Z500 digital microscope). Toughness was calculated by plugging the measured edge cracks into Shetty's formula (Equation 1) for brittle materials exhibiting Palmqvist cracking.

K1c = 0.0889 · √(H·P / ΣL)   (1)
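The hardness and toughness calculations reduce to two short formulas. A sketch follows; the indent diagonals below are hypothetical, and in Shetty's formula the hardness H, load P, and crack lengths L must be kept in mutually consistent units:

```python
import math

def vickers_hardness(load_kgf, d1_mm, d2_mm):
    """Vickers hardness HV = 1.8544 * F / d^2, with the load F in kgf and
    d the mean of the two measured indent diagonals in mm."""
    d = 0.5 * (d1_mm + d2_mm)
    return 1.8544 * load_kgf / d ** 2

def shetty_toughness(hardness, load, crack_lengths):
    """Shetty's Palmqvist formula, Eq. (1): K1c = 0.0889*sqrt(H*P / sum(L)),
    where sum(L) totals the measured edge crack lengths."""
    return 0.0889 * math.sqrt(hardness * load / sum(crack_lengths))

# 60 kgf major load as in the text; the diagonal values are hypothetical.
print(vickers_hardness(60.0, 0.294, 0.294))  # ≈ 1287 HV
```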
RESULTS SEM images showed that individual powder particles are made up of collections of fine rectangular particles grouped into nearly spherical agglomerates, as shown in Figure 1. Stereological analysis through the line intercept method [4] showed particles with an average diameter of 0.52 µm, forming agglomerates of about 21.3 µm.
Figure 1. Powder SEM showing fine particle agglomeration.
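The line-intercept estimate can be sketched as follows; the test-line length and intercept count below are hypothetical numbers chosen to land near the reported 0.52 µm, and the 1.5 factor is the standard stereological correction for random sections through spheres:

```python
def mean_intercept_length(total_line_length_um, n_intercepts):
    """Mean lineal intercept: total length of test lines divided by the
    number of particle intercepts counted along them."""
    return total_line_length_um / n_intercepts

def sphere_diameter_from_intercept(mean_intercept_um):
    """For random sections through spheres the mean intercept is 2/3 of the
    diameter, so d ~ 1.5 * mean intercept (standard stereology result)."""
    return 1.5 * mean_intercept_um

# Hypothetical count: 500 um of test line crossing 1443 particle sections
li = mean_intercept_length(500, 1443)
d = sphere_diameter_from_intercept(li)   # ~0.52 um, near the reported size
```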
After measuring relative green and bulk density, the sample with the highest density was chosen for characterization and testing. The print with 220% binder saturation and 45 s drying time yielded a green density of 21.9 ± 0.6% and an ImageJ relative density of 99%. Preliminary results showed a measured hardness of 1286.0 ± 6.0 HV and a toughness of 14.2 ± 0.1 MPa·m^0.5, similar to the industry values provided by General Carbide (1245-1633 HV, 13-14 MPa·m^0.5) [2]. The etched microstructure (Figure 2) showed the expected small WC grains (~1.72 µm) in a Co matrix. Some Co pools formed during sintering, and little porosity was observed. Potential layer separation is visible in the line at the bottom, which might be caused by a printing glitch.
Figure 2. SEM image of etched WC grains in Co matrix for the sample with 220% binder saturation and 45 s drying time.
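The ImageJ-style relative density measurement amounts to an area fraction on a thresholded cross-section; a minimal sketch, with a synthetic binary image standing in for the real micrograph:

```python
import numpy as np

def relative_density(binary_image):
    """Relative density from a thresholded cross-section: the fraction of
    pixels that are solid (True) rather than pore (False)."""
    return binary_image.mean()

# Synthetic 100x100 section with 1% of the pixels marked as pore
img = np.ones((100, 100), dtype=bool)
img[:10, :10] = False              # a 100-pixel pore cluster
rho = relative_density(img)        # ~0.99, i.e. 99% relative density
```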
DISCUSSION Spray-dried WC-Co powder was chosen for its uniform size distribution and the flowability required in BJP. However, WC-Co is an intrinsically complex powder: the grouping of small particles generates porous agglomerates. This powder porosity is the main reason binder saturation above 100% is needed, which in turn is the driving force behind the design of experiments, as default parameters will not work. The second DoE step showed that binder saturation is the variable with the greatest effect on the print's density, and the highest value (220%) was selected. This does not imply that more binder always gives better density: the ExOne solvent binder is C-based, and it is well known that excess C worsens WC-Co properties by promoting WC grain growth and η-phase formation [5]. Drying time is also a relevant variable. Short times result in part bulging, while too much time produces delamination, as each layer is over-dried and does not bind to the next. Intermediate drying (45 s) proved optimal. Almost full density was achieved. Hardness and toughness results are within the range expected for a microstructure with ~2 µm grains, showing that the properties of BJP WC-Co parts can be designed to match those of traditionally made parts. Continuing the study, more samples are being sintered at General Carbide, the effect of curing time on density and properties will be analyzed, and more complicated parts will be attempted. REFERENCES [1] L. Fu, et al. Two-step synthesis of nanostructured WC-Co powders, Scr. Mater. 44 (2001). [2] General Carbide, The Designer's Guide to Tungsten Carbide, (2008). [3] C.T. Rueden, et al. ImageJ2: ImageJ for the next generation of scientific image data, (2017). [4] J.C. Russ, R.T. Dehoff, Practical Stereology, 2nd ed., Plenum Press, 2000. [5] A. Formisano, et al. Influence of Eta-Phase on Wear Behavior of WC-Co Carbides, Adv. Tribol. (2016).
ACKNOWLEDGEMENTS Thanks to the Swanson School of Engineering and the Office of the Provost, as well as the entire Chmielus Lab group, PMFI, and General Carbide.
XPROJECT: TANGIBLE SECURITY FOR INTERNET OF THINGS DEVICES Maya Román Pitt Makerspaces, Innovation, Product Design, & Entrepreneurship Program University of Pittsburgh, PA, USA Email: mcr79@pitt.edu INTRODUCTION Hands-on design projects prepare engineering students for real world problem solving. The XProject program allows student teams in the Swanson School of Engineering to work with companies and researchers to develop solutions. XProjects are done over a period of about six weeks during which students utilize the makerspaces and Swanson Center for Product Innovation in Benedum while regularly checking in with advisors Brandon Barber and Dr. William Clark to fabricate a prototype device or system. Emphasis is placed on clear communication during the project, including between team members, with the client, and between the team and Makerspace mentors, faculty, and staff. The foundation of the XProject program was laid by the 2018 XProject Summer Research Internship team. This summer a new team of interns -- Megan Black, Sara Kron, Maya Román, and Sarah Snavely -- led five XProjects. All four interns worked on the Gemini project, in addition to leading their own projects. Megan led the Major League Baseball Moment of Inertia project, Sara led the Major League Baseball Physical Therapy project, Maya led the Internet of Things Security project, and Sarah led the Hot Top project. This abstract focuses on the Internet of Things Security project. PURPOSE As Internet of Things (IOT) devices are becoming more ubiquitous in everyday life, a wider gap is forming between people’s perceived privacy and their actual privacy. Perceived privacy describes how safe people think they are, in this case while using IOT connected devices, while actual privacy reflects how much privacy they are truly granted in the presence of IOT devices. 
Instances such as Google’s omission of the microphone present in their Nest Guard technical specifications highlight a potential difference between the two concepts of privacy [1].
Dr. Adam Lee and Dr. Rosta Farzan at the University of Pittsburgh's School of Computing and Information are researching that disparity in privacy and wanted mechanisms to use in their research to test the degree to which users might feel safer while using IOT devices. They wanted to see if physical privacy devices can give users a more tangible sense of security. The XProject team of four students led by Maya Román addressed this request by creating three different cover mechanisms that can be used with a Nest Camera, each of which offers a range of control methods and provides feedback to the user on the state of the camera. METHODS The project started with the team obtaining client input on design and prototype requirements. In the project kickoff meeting, Dr. Lee described his research into perceived versus actual privacy to the team, and specified that he wanted mechanisms providing tangible security, user feedback, and a variety of controls for IOT devices with cameras, but had no preference for what they looked like or how they worked. After the initial client meeting, the team began research into what kinds of tangible security options already exist for users of IOT devices with cameras, as well as into the specific product they would be working with, Google's Nest Cameras. At this stage Maya created a TeamGantt chart to organize the team's progress on the project, organized meetings with the client, and established a group chat for the team to communicate within. The team held multiple brainstorming sessions on potential mechanisms, with ideas ranging from a Faraday cage over the transmitter to an origami pinwheel that covers the camera. The ideas were grouped into categories such as apertures, changing covers, focus manipulation, rotating states, and power manipulation. The team discussed the categories and sketched some designs before they
decided on which cover mechanisms to pursue, ultimately picking from the rotating states, changing cover, and power manipulation categories. In the exploratory phase the team also investigated methods to give users feedback on the state of the camera. The team tested whether the electrical current supplied to the device could indicate that the camera was recording or that someone was watching the feed from the Nest app; however, current was not a reliable indicator. The team eventually defined their design space as three mechanisms, all with visual user feedback, each of which could be controlled by a range of methods. The final set of mechanisms delivered to the clients included i) a simple relay that completely cut power to the camera; ii) a cylindrical design that rotated different lenses in front of the camera, allowing for an open state, a closed state, and an obscured state in which the image is blurred; and iii) a paddle flipping mechanism with three states: open, closed, and a third in which the camera is covered and uncovered at regular intervals. Each device can be controlled by manual buttons or switches, by an IR remote, or through a Bluetooth app. Visual recognition of the state is provided through an LED ring on each device.
Figure 1: From left to right: the relay, flipping mechanism, and the cylindrical mechanism as they were given to the clients.
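None of the delivered firmware is reproduced here, but the three-state behavior of the cylindrical cover can be sketched as a small state machine. The class and state names, and the fail-closed default, are this sketch's assumptions, not the team's code:

```python
from enum import Enum

class CoverState(Enum):
    OPEN = "open"          # camera sees normally
    OBSCURED = "obscured"  # blurring lens rotated in front of the camera
    CLOSED = "closed"      # opaque cover, no image

class CylinderCover:
    """Sketch of the rotating-lens cover: any input source (button, IR remote,
    Bluetooth app) advances the state; an LED ring mirrors the current state."""
    ORDER = [CoverState.OPEN, CoverState.OBSCURED, CoverState.CLOSED]

    def __init__(self):
        self.state = CoverState.CLOSED   # assumed fail-closed default

    def advance(self):
        i = self.ORDER.index(self.state)
        self.state = self.ORDER[(i + 1) % len(self.ORDER)]
        self.update_led()
        return self.state

    def update_led(self):
        # placeholder for driving the LED ring that gives visual feedback
        pass

cover = CylinderCover()
cover.advance()   # CLOSED -> OPEN
```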
Maya created CAD models of the cylindrical mechanism in Fusion 360, realized through 3D printing and laser cutting. She made multiple iterations, improving on the design until it was satisfactory. At the same time, she oversaw group members going through the same process on the other designs. All three mechanisms required the use of microcontrollers, so the team began prototyping by using Arduino Unos and breadboards, with each team member contributing to the code for the mechanisms. However, the team wanted to make a
more permanent solution to hand off to the clients. To address this, Maya researched and tested custom printed circuit boards (PCBs) that acted as shields for the microcontrollers. She designed the boards in Eagle, then milled and soldered several iterations of the circuits, integrating button, infrared remote, and Bluetooth control. DISCUSSION Leading and participating in team projects with clients exemplified how crucial clear and open communication is to a team's success. Although the team had a group chat, it was difficult to organize meetings when group members did not respond. The project did finish successfully, but could have wrapped up sooner if communication had been clearer. A key trait that proved integral to Maya's time on these projects was the ability to learn on the job. Most of the skills she used were not taught in the classroom, and the ability to quickly pick up new skills on her own made the project run much more smoothly. At the beginning of the project, the team had difficulty defining the problem they were addressing based on the initial request made by the clients. The request was very open-ended, which allowed the team to go in any direction they wanted, but resulted in scattered, disorganized ideas. After brainstorming and designing without a clear path, the team sat down and defined their design space in a way that directed them for the remainder of the project. REFERENCES 1. D. Lee. "Google admits error over hidden microphone." BBC News. 02.20.2019. Accessed 08.19.2019. https://www.bbc.com/news/technology-47303077
ACKNOWLEDGEMENTS Dr. William Clark, Brandon Barber, and Daniel Yates advised this project. Joint funding for this project was provided by Dr. Clark, the Swanson School of Engineering, and the Office of the Provost. The National Science Foundation provided funding for Dr. Lee and Dr. Farzan’s research.
XPROJECT: FURNACE DESIGN FOR EVALUATION OF HOT TOP MATERIAL Sarah E. Snavely Pitt Makerspaces, I&E Program University of Pittsburgh, PA, USA Email: ses227@pitt.edu INTRODUCTION Hands-on design projects prepare engineering students for real world problem solving. The XProject program allows student teams in the Swanson School of Engineering to work with companies and researchers to develop solutions. XProjects are done over a period of about six weeks during which students utilize the makerspaces and Swanson Center for Product Innovation in Benedum while regularly checking in with advisors Brandon Barber and Dr. William Clark to fabricate a prototype device or system. Emphasis is placed on clear communication during the project, including between team members, with the client, and between the team and Makerspace mentors, faculty, and staff. The foundation of the XProject program was laid by the 2018 XProject Summer Research Internship team. This summer, a new team of interns -- Megan Black, Sara Kron, Maya Roman, and Sarah Snavely -- led five XProjects. All four interns worked on the Gemini project, in addition to leading their own projects. Megan led the Major League Baseball Moment of Inertia project, Sara led the Major League Baseball Physical Therapy project, Maya led the Internet of Things Security project, and Sarah led the Hot Top project. This abstract focuses on the Hot Top project. HOT TOP PROJECT PURPOSE A "hot top" is an insulating material used to line steel ingots to prevent molten steel from cooling too quickly, which causes air pockets and other deformities. Jacob Nery, a Pitt alumnus working for Ellwood Quality Steels (EQS), sought out Pitt XProjects to develop a way to compare different hot top materials' insulating properties, so the company could purchase the most effective hot top. Sarah Snavely led Leo Li, Joshua Dewald, and Dan Gunter,
and they worked to design a divided furnace that could heat the hot top to steel melting temperatures from one side and measure the temperature difference across the material. Testing of the hot top material would be conducted at 1400-1500 degrees Celsius for a period of two hours. METHODS Initial sketches were made with plans to modify different types of furnaces. Upon meeting with the client, a budget of $10,000 was established for construction of the instrumented furnace. Due to the high expense of purchasing a furnace capable of reaching temperatures of 1400-1500 degrees Celsius, the team began to look at vendors for parts to build a furnace instead of purchasing one. Sarah Snavely worked with the team to plan out how they would design a furnace within six weeks and created a Gantt chart for the project; tasks were then divided within the group. Weekly design check-ins with the advisors and weekly calls with the client were scheduled. The team researched insulating materials for the furnace walls. After speaking with various vendors, the team decided to use a combination of firebrick and ceramic fiber board to build the furnace. A lumped-parameter model of the furnace was used to calculate the heat transferred through the system. Using the required wall thickness of 15 inches, the team used foam core bricks to plan out the layout of the furnace and then moved to 3D modeling using Fusion 360. Multiple iterations of the design were made within Fusion 360 to accommodate other parts of the furnace.
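A lumped-parameter wall model of this kind reduces each layer to a thermal resistance in series. The sketch below uses illustrative layer thicknesses and conductivities, not the team's actual material data:

```python
def wall_heat_loss(t_hot_c, t_cold_c, area_m2, layers):
    """Steady 1-D conduction through a layered wall:
    Q = dT / sum(L_i / (k_i * A)), layers = [(thickness_m, k_W_per_mK), ...]"""
    resistance = sum(L / (k * area_m2) for L, k in layers)
    return (t_hot_c - t_cold_c) / resistance

# Hypothetical 15-inch wall: firebrick backed by ceramic fiber board
# (conductivities are illustrative round numbers, not vendor data)
layers = [(0.254, 0.30),   # 10 in firebrick
          (0.127, 0.10)]   # 5 in ceramic fiber board
q = wall_heat_loss(1400, 40, 1.0, layers)   # heat loss in W for 1 m^2 of wall
```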
To find a furnace temperature controller, the team discussed furnace design plans with vendors, and the engineers helped choose a controller and solid-state relay to purchase. The team also needed thermocouples that worked with the design and could withstand high temperatures. The controller would connect to a thermocouple placed in the furnace and maintain the temperature. A data acquisition device would be connected to multiple thermocouples placed inside and outside of the furnace to monitor and record temperatures throughout the test.
Figure 1: The final design with extractable components for furnace parts.
While heat calculations were being conducted, the team looked at different methods of heating the furnace. Research was done on molybdenum disilicide and silicon carbide heating elements. After communicating with vendors and comparing prices and feasibility, the team informed the client of their options, and silicon carbide heating elements were chosen. Based on the calculated heat transfer, electrical calculations were conducted to find the voltage and current required for the heating elements to output enough power to heat the work area to 1400 degrees Celsius. Due to the variable resistance of the heating elements, shown in Figure 2, the team designed an electrical circuit with resistors and a single-pole double-throw relay to prevent the heating elements from receiving too much current.
Figure 2: The resistive temperature characteristic curve of a Starbar heating element [1].
DISCUSSION At the end of six weeks, the team had fully designed the furnace for testing of the hot top material. The largest challenge was accommodating the client's needs within the budget. Another difficult challenge was having multiple calculations and design constraints dependent on each other and communicating changes as they arose. The XProject taught the students how to communicate effectively with vendors and explain design plans clearly to get the correct parts that would work with the whole system. Additionally, it was very important to maintain communication within the team and with the advisors to keep the calculations accurate. After presenting the design to the client, the company agreed to purchase all of the parts, and the total cost was only slightly over budget. When all of the parts arrive at Pitt, a second phase of the Hot Top project will be run in the fall semester to build the furnace designed in the first phase. REFERENCES 1. "Silicon Carbide Heating Elements." I Squared R Element Co. 2, 2014. ACKNOWLEDGEMENTS Dr. William Clark, the Swanson School of Engineering, and the Office of the Provost jointly funded these projects. Dr. Clark, Brandon Barber, and Daniel Yates organized and advised these projects. Associate Dean of Research, Dr. Schohn Shannon, assisted the Hot Top project. Jeffery Speakman and Andy Holmes collaborated and manufactured parts with XProject students.
THE APPLICATION OF GPU-COMPUTING TO NANOSCALE THERMAL TRANSPORT SIMULATIONS Benjamin Sumner1, Xun Li1, and Sangyeop Lee1,2 1 Department of Mechanical Engineering and Materials Science, University of Pittsburgh, PA, USA 2 Department of Physics and Astronomy, University of Pittsburgh, PA, USA Email: bss50@pitt.edu INTRODUCTION Understanding thermal transport at the nanoscale is essential to developing materials that will help to create more efficient thermoelectric devices, better batteries, and faster computers. The challenge for scientists is that studying nanoscale thermal phenomena can be computationally expensive; simulating interactions between hundreds or thousands of particles can take days and the hardware and energy required to run them is costly. One solution to this problem is GPU-computing. Graphics Processing Units, better known as GPUs, have intrinsically parallel structures. Whereas a typical CPU might have four processor cores or less, GPUs have hundreds of cores. These cores can process instructions in parallel to speed up code [1]. This has made GPUs useful in many fields. Some have sought to create faster programs to solve linear systems, and others have even used GPU computing to efficiently create electron micrographs. Such studies have seen speedups of 50x or more depending on the algorithm and the hardware employed [2,3]. This study sought to use GPU-computing to increase the speed of a deviational Monte Carlo simulation for the phonon Boltzmann transport equation, developed by Li and Lee [4]. This simulation was an ideal candidate because it had been previously tested for accuracy, and because it had great potential for parallelization; the majority of the computations within the simulation could be performed in parallel. METHODS This study utilized the CUDA language extension for FORTRAN as well as PGI compilers to enable the programming of a few different GPUs. 
An Nvidia GTX 1080 GPU was used for programming and debugging, and an Nvidia V100 GPU was used for final simulations.
The first step in this study was to determine the most time-consuming portions of the original simulation. To do this, the three main components of the simulation were profiled: the boundary emission (particle generation) phase, the advection phase, and the scattering phase. Attention was also paid to data transfers when choosing the appropriate sections to parallelize. Data transfer between the CPU and GPU is an expensive process and a major factor in GPU programming [5]. The remaining portion of code, which included file writing and other miscellaneous functions, was not considered for parallelization. Once the most time-consuming algorithms were identified, new algorithms were developed through incremental changes to functions and variables. The code was parallelized on a per-particle basis so that each thread (each independent set of program instructions) launched on the GPU would handle information for one phonon. All inputs (material, sample-size, and number of time steps) were held constant. Outputs, including thermal conductivity, the total number of phonons, and the heat flux were compared to determine if the new algorithm worked as intended. When the appropriate sections had been parallelized, the final simulations were run, and their overall simulation times compared. CPU usage was used as a control to ensure both the CPUs and GPUs were being used to their maximum potential and the data comparison would be fair. When the original code was run, CPU usage was maintained above 95%. It should be noted that parallelization of both scattering and advection was attempted to reduce data transfers between the CPU and GPU. Ultimately, the scattering process proved challenging to develop and was not implemented before the end of this study. Results were obtained from the simulation with and without the parallelized advection algorithm.
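The per-particle decomposition can be illustrated with a vectorized stand-in (numpy here, rather than the study's CUDA Fortran): every array element plays the role of one GPU thread updating one phonon, and the reflecting boundary is a placeholder for the real boundary handling:

```python
import numpy as np

def advect(positions, velocities, dt, length):
    """One advection step for all phonons at once: each element ('thread')
    moves its own particle; particles leaving [0, length] reflect back."""
    new_pos = positions + velocities * dt
    over = new_pos > length
    under = new_pos < 0.0
    new_pos[over] = 2 * length - new_pos[over]
    new_pos[under] = -new_pos[under]
    velocities[over | under] *= -1.0   # reverse reflected particles
    return new_pos, velocities

pos = np.array([0.1, 0.5, 0.95])
vel = np.array([1.0, -1.0, 1.0])
pos, vel = advect(pos, vel, 0.1, 1.0)   # last particle reflects off x = 1
```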
DATA PROCESSING Simulation times were captured by recording the CPU clock times before and after the sections of the code and subtracting them. When profiling the original simulation, the aggregate algorithm execution times, as well as the total simulation times, were recorded for three runs and then averaged. The simulation was run only three times due to the long duration (9.5 hours).
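The timing approach amounts to bracketing each section with clock reads and subtracting; a minimal Python harness of that pattern (not the study's FORTRAN instrumentation) looks like:

```python
import time

def timed(fn, *args):
    """Record the clock before and after a code section and subtract,
    as done when profiling the emission, advection, and scattering phases."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def average_runtime(fn, runs=3):
    """Average the total time over several runs (the study used 3 full runs)."""
    return sum(timed(fn)[1] for _ in range(runs)) / runs

result, elapsed = timed(sum, range(1000))
```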
To compare the new and original code, a simpler test case was implemented using a ballistic model and smaller input parameters. Again, the execution times of the major sections of code were recorded, as were the total simulation times, which were then averaged over 10 runs. RESULTS In the first stage of this study, it was determined that scattering was the most computationally intensive part of the original program. As shown in Figure 1, scattering constituted 68.78% of the simulation time. The next most intensive was the miscellaneous code (labelled as "Other") at 27.59%. Advection made up only 3.25%, and boundary emission was the least intensive at 0.38%.
Figure 1: Above is a breakdown of the percentage of the total simulation time for each of the profiled code sections averaged over three runs. The average total time to complete a simulation was 34,361 seconds or 9.54 hours.
The results displayed in Figure 2 show that the new advection algorithm was around 3.2 times faster than the original. However, the average time for boundary emission was 505 times greater in the parallel simulation, and the overall simulation was 14.5 times slower.
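Speedup bounds of this kind are commonly reasoned about with Amdahl's law; the sketch below shows the mechanics, using the measured 68.78% scattering fraction from Figure 1 as an input (the study's own 3.65 estimate rests on its own assumptions about which sections parallelize):

```python
def amdahl_speedup(parallel_fraction, section_speedup):
    """Amdahl's law: overall speedup when a fraction p of the runtime is
    accelerated by a factor s is 1 / ((1 - p) + p / s)."""
    p, s = parallel_fraction, section_speedup
    return 1.0 / ((1.0 - p) + p / s)

# If only the scattering phase (68.78% of runtime) were made arbitrarily
# fast, the whole-simulation speedup would be bounded near 1/(1-0.6878).
bound = amdahl_speedup(0.6878, 1e9)   # ~3.2
```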
Figure 2: Shown above is the average execution time in seconds for the boundary emission and advection code sections, as well as the total time to complete each simulation.
DISCUSSION It is easy to see in Figure 1 that scattering took the most time by a large margin. This makes sense, as it involved many calculations performed on large, dense matrices. Though the advection algorithm in the parallel case was faster, it incurred a time penalty because it required a large amount of data transfer. This is seen in the drastic increase in the boundary emission time, which grew by a factor of 505. Although this study failed to parallelize the scattering algorithm or increase simulation speed, it revealed potential for speedup. Work is currently being done to revise the scattering algorithm. Once it is implemented, Amdahl's Law estimates a speedup of 3.65 [5]. REFERENCES 1. Lippert, A. (2009). Nvidia GPU Architecture for General Purpose Programming [PowerPoint slides]. Retrieved from http://www.cs.wm.edu/~kemper/cs654/slides/nvidia.pdf 2. Hashemi et al. "Equal bi-Vectorized" (EbV) method to high performance on GPU, ISME 2012, 10.13140/RG.2.2.25953.56164. 3. Pryor et al. Adv Struct Chem Imag (2017) 3:15. 4. Lee et al. Phys. Rev. B 97, 094309 (2018). 5. Farber, R. (2012). First Programs and How to Think in CUDA. In CUDA Application Design and Development (pp. 13-18). Waltham, MA: Morgan Kaufmann. ACKNOWLEDGEMENTS This research was supported in part by the University of Pittsburgh CRC through the resources provided. Funding for this study was provided by the Swanson School of Engineering and the Office of the Provost.
ANALYTICAL MODEL VALIDATION FOR MELTING PROBE PERFORMANCE USING APPLIED COMPUTATIONAL FLUID DYNAMICS Michael Ullman, Michael Durka, Kevin Glunt, and Matthew Barry, Ph.D. Computational Fluid Dynamics Lab, Department of Mechanical Engineering and Materials Science University of Pittsburgh, PA, USA Email: mju8@pitt.edu INTRODUCTION About 400 million miles from Earth, the smallest of the Galilean moons, Europa, orbits its home planet of Jupiter. This satellite is notable for its fractured water-ice surface, which, according to observations from NASA’s Galileo spacecraft and Hubble Space Telescope, is a source of water vapor ejections [1]. Scientists have hypothesized that a water ocean lies beneath Europa’s surface, making the satellite a primary focus in the search for extraterrestrial life. Because of this promise, NASA is developing plans for its Europa Clipper mission, which will partly consist of a probe landing on and penetrating the moon’s surface to explore its subterranean oceans. The proposed method for penetrating the ice is a combination of drilling and melting—the latter of which will be caused by nuclear heat generation within the probe. NASA Jet Propulsion Laboratory (JPL) is developing an analytical model to determine the melting descent speed of the probe as a function of its internal heat generation and the characteristics of the ice environment. This model, however, does not fully characterize the underlying physical phenomena, and thus requires validation from numerical modeling and experimental testing. The purpose of the current project is to use numerical models within ANSYS-CFX to provide a framework for validating the analytical model and evaluating its shortcomings. These numerical models facilitate a more comprehensive understanding of the melting process, thereby providing insight into how the probe should be designed to optimize its performance. METHODS The foundation for the JPL analytical model was drawn from the work of H.W.C. Aamot [2]. 
This model assumes a cylindrical cavity in the ice with a constant radius and length, and calculates the heat required to produce a given descent speed in a given ice environment. Only the heat conducted into the ice surrounding the probe is considered—convection in
the water jacket that develops around the probe is neglected. The model assumes an infinitesimal water jacket thickness along the front of the probe, such that it is in contact with the phase change region. To allow debris to flow around the probe, a 6 mm gap is desired between the side of the probe and the ice cavity. Because the probe design is a cylinder of radius 11.5 cm and length 2.1 m, the Aamot model prescribes a 12.1 cm radius for the ice cavity. Another assumption of the model is that the melting process is at steady state. In this project, steady state was defined as the point at which the cavity profiles and heat fluxes no longer change with time. All of the models created for this project used an advecting ice scheme. Like a car in a wind tunnel, the probe was held stationary while ice was forced through the domain at the Aamot-prescribed descent speed of 37.5 cm/hr and temperature of 160 K. This allowed for greater computational speed, as a model with stationary ice and a moving probe would require the domain to be remeshed after each time step. The first numerical model developed for the Aamot comparison was an ice-only model, which used a cavity geometry with a 12.1 cm radius. The front and side walls of the cavity were prescribed to be at the melting temperature of the ice, 273.15 K, while the rear wall was prescribed to be adiabatic. This model served as the most direct analog for the Aamot model, as the cavity profile was explicitly imposed. The second model was a water-ice conduction model, which used the probe radius of 11.5 cm. According to the Aamot model, which calculates a heat flux distribution for the side of the ice cavity, the heat flux on the side of the probe can be found by scaling the side cavity flux by the ratio of the cavity and probe radii.
Thus, the scaled Aamot side flux was applied to the side wall of the probe, while the Aamot front end heat flux was applied to the front wall, and an adiabatic condition was applied to the rear wall.
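The radius scaling amounts to conserving the total side heat while passing it through the smaller cylindrical area of the probe surface; as a sketch (the flux value below is hypothetical, only the 12.1 cm and 11.5 cm radii come from the text):

```python
def probe_side_flux(cavity_side_flux, cavity_radius_cm, probe_radius_cm):
    """Map the Aamot side-cavity heat flux to the probe surface by the ratio
    of radii: the same total heat crosses a smaller cylindrical area."""
    return cavity_side_flux * cavity_radius_cm / probe_radius_cm

# Hypothetical 1000 W/m^2 cavity flux mapped from the 12.1 cm cavity wall
# to the 11.5 cm probe wall
q_probe = probe_side_flux(1000.0, 12.1, 11.5)   # ~1052 W/m^2
```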
In order to capture the pertinent physics that the Aamot model neglects, the second numerical model was modified to include convection by enabling the fluid momentum solver. In previous models, it was found that placing fillets on the corners of the probe allowed for more substantial water flow out from beneath the probe, preventing undesirable water pooling. Consequently, a 1 cm fillet was added to the front and rear corners of the probe in this model. RESULTS Once each of the models reached steady-state, the results were analyzed using CFD-Post and compared to the values calculated by the Aamot model. The most pertinent results, listed in Table 1, are the integrated heat fluxes into the side and front of the ice cavity and the average thickness of the ice cavity along the side and front of the probe.
DISCUSSION All three models have heat flux distributions similar to the Aamot model. The greatest differences occur at the front corner of the probe, suggesting that the Aamot model incorrectly describes the interaction between the radial and axial heat fluxes at this location. Correspondingly, the ice at the front corner of the conduction-only water-ice model is not fully melted. In the water-ice model with convection, however, water carrying energy out of the front cavity and into the side allowed the front corner fillet to fully melt and increased the average width of the side cavity to nearly 6 mm. Unlike in the Aamot model, though, this side cavity was not vertical. These results illustrate that the Aamot model is not an exhaustive description of the probe’s melt-driven descent. New probe boundary conditions are being investigated with the goal of creating a vertical 6-mm thick side cavity. Achieving this will create a direct numerical analog for the Aamot model, which can be used to identify correction factors for the analytical formulation. This corrected analytical model will then be used in a Monte Carlo simulation, which will determine the probe’s performance for a variety of Europan ice environments. To this end, this project provides key insights into understanding where the Aamot model fails and how the discrepancies with the desired melt cavity profile may be mitigated.
Figure 1: Heat fluxes into side of ice cavity along probe height.
Figure 2: Heat fluxes into front of ice cavity along probe radius.

Table 1: Comparisons between Aamot model and numerical models: heat power and cavity thicknesses.

Model                         Side Wall:           Side Wall: Mean Ice     Front End:           Front End: Mean Ice
                              Ice Conduction (W)   Cavity Thickness (mm)   Ice Conduction (W)   Cavity Thickness (mm)
Aamot                         4540.7               6.000                   1044.1               0
Ice-Only                      4521.6               -                       1121.5               -
Water-Ice: Conduction-Only    4452.3               2.841                   860.5                0.178
Water-Ice: With Convection    4908.3               5.811                   771.5                0.534

REFERENCES [1] (2019) Europa, NASA Science Solar System Exploration. https://solarsystem.nasa.gov/moons/jupiter-moons/europa/in-depth/. [2] H.W.C. Aamot, (1967) Heat transfer and performance analysis of a thermal probe for glaciers. Cold Regions Research & Engineering Laboratory.

ACKNOWLEDGEMENTS I'd like to thank Dr. Barry, the Swanson School of Engineering, and the Office of the Provost for sponsoring my SSOE Summer Research Internship. Computational resources were provided by the Center for Research Computing.
ELECTROSPINNING CRIMPED MICROFIBERS FOR ARTERIAL GRAFT PRESSURE SUPPORT Nikolas J. Vostal Swanson School of Engineering Dr. Velankar Laboratory, Department of Chemical Engineering University of Pittsburgh, PA, USA Email: njv17@pitt.edu INTRODUCTION Electrospinning is one of the simplest and most versatile methods of creating polymer microfibers. In its most basic setup, electrospinning can be thought of as a random spray of micron-sized fibers between an electrically-charged syringe needle and a nearby metallic collector of opposite charge [1]. By altering the shape and movement of the collector, the fibers can be produced with uniform alignment. These fibers can be used to create patterns and meshes for a number of advanced applications [2].
Pocivavsek et al. have examined the use of wrinkled surfaces for anti-thrombotic vascular grafts made of soft, biomedical-grade elastomer and gel [6]. While these grafts mimic the internal structure of arteries extremely well, they lack the strength and toughness of collagen-reinforced tissue. The purpose of this project is to apply uniaxially aligned, crimped fibers to the outside of a graft to increase the maximum pressure the graft can withstand while still allowing it to distend as desired at low pressures.
Figure 1: Diagram of a typical electrospinning setup [2]
METHODS The spinning solution and setup were based on the work of Akduman and Kumbasar [7], with 20% w/w thermoplastic polyurethane (Estane 58271, Lubrizol) dissolved in tetrahydrofuran (THF). Fibers were spun with a 3 cm gap from needle to collector at 15 kV. The syringe was connected to an automatic syringe pump and placed on a lab jack for even fiber distribution. The side of a thin metal plate was used as a collector.
Crimped fibers can also be created by applying external stresses during the electrospinning process. Crimped fibers have a sinusoidal curvature that allows for increased tensile strain as the fibers straighten before becoming taut [3]. Crimped microfibers are important for biomedical applications because they mimic collagen fibrils found within organic tissue. Crimping in the fibrils allows tissue such as skin and arteries to stretch easily up to a certain point but to strongly resist further stretching [4].
Figure 2: (Left) A micrograph of collagen fibrils [5] and (right) a micrograph of crimped TPU fibers
A hand drill suspended by metal scaffolding was placed above the collector with an acrylic rod in the chuck. The rod hung down between the needle and the collector, forcing spun fibers to deposit onto the graft instead of the collector. The constant rotation of the graft by the hand drill allowed for uniaxial alignment.
Figure 3: Electrospinning setup
In order to induce crimping, an acrylic rod with a larger diameter than the graft was used. Once fibers were spun onto the graft, the acrylic rod was removed, allowing the graft to contract to its original shape. This contraction is what causes the fibers to crimp. To protect the fibers from dislocation and clumping, a coating of cross-linkable silicone was rolled over the fibers. This soft coating ensures good bonding of the fibers to the graft without affecting fiber or graft expansion. A thin coat is applied by rolling a fiber-covered graft in uncured silicone on wax paper. Once completely encased, the graft is left to cure for 3 hours at 70°C. Pressure tests were performed on naked (no fiber) grafts, straight-fiber (uncrimped) grafts spun using a 3 mm rod, crimped-fiber grafts spun using a 4 mm rod, and crimped-fiber grafts spun using a 6 mm rod. To test the maximum pressure before an aneurysm forms, the grafts were attached directly to a syringe pump filled with water and a pressure sensor. As water was pumped at a constant rate, the pressure rose until an aneurysm formed in the graft and the test was stopped. The grafts were also lightly covered in glitter particles and video recorded to track expansion over time.

DATA PROCESSING The results of the pressure tests were recorded using the Logger software, which produced a pressure-versus-time record. The video recording was processed using the Blender software, which allows automatic tracking of individual glitter particles on the graft; from this, a strain-versus-time plot was created. By combining the two data sets in Excel, strain-pressure curves were interpolated, and grafts with different amounts of fiber and crimp could be compared.

RESULTS Early pressure testing trials showed that crimped microfibers are a viable means of improving the maximum allowable pressure and inhibiting distension beyond about 20% strain.
The 4 mm crimped-fiber grafts showed the best overall performance, with maximum pressures exceeding 900 mmHg and less gradual distension.
Figure 4: Pressure and strain data comparing crimped fibers, straight fibers, and “naked” grafts
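The merging of the pressure-time and strain-time records described under Data Processing can be sketched in a few lines of Python. The sample values below are hypothetical, and `numpy.interp` stands in for the interpolation done in Excel, resampling the tracked strain onto the pressure logger's time base:

```python
import numpy as np

# Hypothetical samples: pressure vs. time (from the pressure logger)
# and strain vs. time (from Blender particle tracking of the glitter)
t_pressure = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # s
pressure   = np.array([0.0, 150.0, 400.0, 700.0, 920.0])  # mmHg

t_strain = np.array([0.0, 0.5, 1.5, 2.5, 3.5])            # s
strain   = np.array([0.0, 0.05, 0.12, 0.18, 0.22])        # dimensionless

# Resample strain onto the pressure time base, then pair the columns
# to obtain the strain-pressure curve used for comparing grafts
strain_on_p = np.interp(t_pressure, t_strain, strain)
curve = np.column_stack([strain_on_p, pressure])
print(curve)
```

Outside the tracked time range, `numpy.interp` holds the endpoint values; trimming both records to their overlapping interval avoids that artifact.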
DISCUSSION AND FUTURE WORK While the early tests shown in the data above were promising, subsequent pressure tests were less conclusive. This was puzzling, as some samples behaved as expected, while others began to form aneurysms prematurely or swelled faster than anticipated. These results are most likely due to non-uniform fiber distribution and poor alignment. The current plan to solve these issues is to alter the electrospinning setup: instead of charging a collector behind the graft, the acrylic rod will be replaced with a metal rod and charged directly. The current setup cannot spin the graft fast enough to form straight fibers; the fiber jet deposits much faster than a simple hand drill can spin, resulting in a bending instability that causes the fibers to buckle and land randomly [8]. Once a device is found that can hold the rod and match the speed of the jet (about 2000 rpm), even, aligned fibers can be applied to the grafts reliably.

REFERENCES 1. Li and Xia. Adv Mater. 16, 1151-1170, 2004. 2. Teo et al. Sci. Technol. Adv. Mater. 12, 2011. 3. Liu et al. Adv Mater. 27, 2583-2588, 2015. 4. Franchi et al. Journal of Anatomy 210, 1-7, 2007. 5. A. Mescher. Junqueira's Basic Histology, 15th ed., 2018. 6. Pocivavsek et al. Biomaterials 192, 226-234, 2019. 7. Akduman and Kumbasar. IntechOpen: Aspects of Polyurethanes, chapter 2, 2017. 8. Arras et al. Sci. Technol. Adv. Mater. 13, 2012.

ACKNOWLEDGEMENTS Funding provided by the Swanson School of Engineering and the Office of the Provost. Thanks to Dr. Sachin Velankar for his guidance and insight.
Distribution of Blebs on Intracranial Aneurysm Walls Ji Zhou1, Juan R. Cebral, Ph.D.2, Spandan Maiti, Ph.D.1,3 and Anne M. Robertson, Ph.D.1 1Department of Mechanical Engineering, University of Pittsburgh, Pittsburgh, PA, USA, 2Department of Bioengineering, George Mason University, Fairfax, VA, USA, 3Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA. Email: jiz167@pitt.edu, Phone: 412-773-2523

INTRODUCTION Intracranial aneurysms (IAs) are pathological enlargements of cerebral arteries found in 3-5% of the adult population [1]. Rupture of an IA can lead to subarachnoid hemorrhage with associated risks of mortality and long-term disability, yet the physical factors leading to IA rupture are poorly understood. To improve the current clinical protocol for early management of IA disease, more non-invasive, clinical-imaging-derived morphological data on aneurysms are needed. A "bleb" is a secondary outpouching of the IA wall that, when present, can be readily identified in aneurysm images. The objective of this summer research project was to determine the distribution of blebs across the IA wall. It has been well established by Frösen et al. that blebs on aneurysms are associated with increased rupture risk [2]. If blebs preferentially form at particular locations on the IA wall, this information, combined with the properties of the local wall near the bleb, may help explain bleb formation and reveal more details about IA rupture.

METHODS 50 patient-specific luminal IA surface geometries were created from 3-D rotational angiography images using Mimics (Materialise, Leuven, Belgium). Only artery segments containing an IA with at least one bleb were trimmed and extracted. To prepare for the subsequent stress analysis, the trimmed sections were left unfilled, preserving the open-ended artery geometry. Aneurysm segments were meshed with triangular elements using Trelis (csimsoft, American Fork, UT).
To accommodate the full range of actual IA dimensions, mesh sizes between 0.01 and 0.05 were applied accordingly. After meshing, a custom finite element code was used to compute the wall stress distribution under 200 mmHg blood pressure.
A Cartesian coordinate system was used to locate the position of each bleb on the IA wall. First, the IA neck was identified by the ring of elevated stress relative to the parent aneurysm (Figure 1). The origin of the coordinate system was located at the centroid of the neck area, and the z-axis was defined normal to the neck plane (Figure 1). Blebs were similarly identified on the parent aneurysm, and the vertical location of each bleb neck on the z-axis was recorded. In addition, the maximum extent of the aneurysm along the z-axis was measured for normalization.
Figure 1. After the neck plane (xy) was identified, the z-axis was defined relative to this plane. The z-axis was normalized so values on the aneurysm dome ranged from 0 to 1.
DATA PROCESSING Overall, 57 blebs were identified on the 50 IA walls. The z-coordinate of each bleb was recorded and normalized by the highest possible elevation on the aneurysm wall. Each aneurysm was divided into three equal regions along the z-axis, each spanning 1/3 of the normalized height; these regions are called "Neck", "Middle", and "Top", starting from the aneurysm neck. The occurrence of blebs in each region was recorded, and the average z-coordinate was calculated for each region and for the overall dataset. By categorizing the bleb locations on the z-axis into these regions, a basic trend in the distribution could be obtained.

RESULTS The average bleb positions for each region are collected in Table 1. Blebs in the "Neck" region were found very close to the xy plane, while blebs in the "Top" region tended to sit at the far end of the parent aneurysm along the z-axis; for blebs in the "Middle" region, the average position was near the center of the aneurysm.

Table 1: Average normalized bleb position and occurrence by region.

Region | Average Z Coordinate (Normalized) | Occurrence
Neck | 0.09 | 10 (17%)
Middle | 0.47 | 15 (27%)
Top | 0.92 | 32 (56%)
Overall | 0.66 | 57 (100%)
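The normalization and region assignment described under Data Processing can be sketched in a few lines of Python; the bleb z-positions and maximum aneurysm height below are hypothetical illustrative values, not measurements from this study:

```python
def classify_blebs(z_values, z_max):
    """Normalize bleb z-coordinates by the maximum aneurysm height,
    then bin each into the Neck, Middle, or Top third of the sac."""
    counts = {"Neck": 0, "Middle": 0, "Top": 0}
    for z in z_values:
        z_norm = z / z_max          # 0 at the neck plane, 1 at the dome tip
        if z_norm < 1 / 3:
            counts["Neck"] += 1
        elif z_norm < 2 / 3:
            counts["Middle"] += 1
        else:
            counts["Top"] += 1
    return counts

# Hypothetical bleb z-positions (mm) on one aneurysm of height 7.0 mm
blebs_mm = [0.8, 2.1, 4.5, 5.2, 6.9]
print(classify_blebs(blebs_mm, z_max=7.0))
```

The same per-region averaging reported in Table 1 follows directly by grouping the normalized values before taking means.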
According to the occurrence results (Fig. 2), most blebs formed in the dome area of the aneurysm. Specifically, blebs were approximately three times more common in the top region of the dome (56%) than in the neck region (17%), and more than twice as common as in the middle region (27%).
Figure 2. Bleb distribution within the lower (neck), middle and top third of the IA sac. The fraction of blebs found in each third is shown.
DISCUSSION Blebs are most common in the top third of the dome, which is consistent with prior work showing that the likelihood of IA rupture similarly increases across the neck, middle, and top thirds of the IA sac [3,4]. As the existence of a bleb is correlated with rupture risk, it is vital to understand the origin of bleb formation. Studies are underway to expand this work to a larger data set and to understand the cause of this trend. In future studies, additional information will be obtained about the blebs, such as their size relative to the parent sac and the height-to-neck ratio of each bleb. Because a Cartesian coordinate system was adopted in this study, the z-axis had to be orthogonal to the xy plane, which contains the IA neck. However, not every aneurysm grows with its centerline orthogonal to the neck. A custom coordinate system, for instance one using the streamline as one of the coordinates, could yield more reliable correlations.

REFERENCES 1. Etminan et al. Nature Reviews Neurology 12, 699-713, 2016. 2. Frösen et al. Acta Neuropathol. 123, 773-786, 2012. 3. Crawford. J Neurol Neurosurg Psychiat. 22, 259-266, 1959. 4. Crompton MR. Br Med J 1(5496), 1138-1142, 1966.

ACKNOWLEDGEMENTS Ronald Fortunato is acknowledged for valuable guidance and support for JZ in the computational aspects of this work. The authors gratefully acknowledge support from the National Institute of Neurological Disorders and Stroke, grant 1R01NS09745701. JZ received a research fellowship from the Swanson School of Engineering at the University of Pittsburgh and support from the Office of the Provost. This work was previously reported in an abstract at BMES 2019, Philadelphia.