DYNA Edition 185 - June 2014


DYNA Journal of the Facultad de Minas, Universidad Nacional de Colombia - Medellin Campus

DYNA 81 (185), June, 2014 - ISSN 0012-7353 Tarifa Postal Reducida No. 2014-287 4-72 La Red Postal de Colombia, Vence 31 de Dic. 2014. FACULTAD DE MINAS


DYNA

http://dyna.medellin.unal.edu.co/

DYNA is an international journal published by the Facultad de Minas, Universidad Nacional de Colombia, Medellín Campus, since 1933. DYNA publishes peer-reviewed scientific articles covering all aspects of engineering. Our objective is the dissemination of original, useful and relevant research presenting new knowledge about theoretical or practical aspects of methodologies and methods used in engineering or leading to improvements in professional practice. All conclusions presented in the articles must be based on the current state of the art and supported by a rigorous analysis and a balanced appraisal. The journal publishes scientific and technological research articles, review articles and case studies. DYNA publishes articles in the following areas: Organizational Engineering; Civil Engineering; Mines Engineering; Geosciences and the Environment; Systems and Informatics; Chemistry and Petroleum; Mechatronics; Bio-engineering; and other areas related to engineering.


Publication Information
DYNA (ISSN 0012-7353, printed; ISSN 2346-2183, online) is published by the Facultad de Minas, Universidad Nacional de Colombia, with a bimonthly periodicity (February, April, June, August, October, and December). Circulation License Resolution 000584 of 1976 from the Ministry of the Government.

Contact information
Web page: http://dyna.unalmed.edu.co
E-mail: dyna@unal.edu.co
Mail address: Revista DYNA, Facultad de Minas, Universidad Nacional de Colombia - Medellín Campus, Carrera 80 No. 65-223, Bloque M9 - Of. 103
Telephone: (574) 4255068
Fax: (574) 4255343
Medellín - Colombia

© Copyright 2014. Universidad Nacional de Colombia. The complete or partial reproduction of texts for educational purposes is permitted, provided that the source is duly cited, unless indicated otherwise.


DYNA is indexed in: the National System of Indexation and Homologation of Specialized Journals CT+I - PUBLINDEX, Category A1; Science Citation Index Expanded; Journal Citation Reports - JCR; Science Direct; SCOPUS; Chemical Abstracts - CAS; Scientific Electronic Library Online - SciELO; GEOREF; the PERIÓDICA database; Latindex; Actualidad Iberoamericana; RedALyC - Scientific Information System; Directory of Open Access Journals - DOAJ; PASCAL; CAPES; and the UN Digital Library - SINAB.

Notice
All statements, methods, instructions and ideas are solely the responsibility of the authors and do not necessarily represent the view of the Universidad Nacional de Colombia. The publisher does not accept responsibility for any injury and/or damage arising from the use of the contents of this journal. The concepts and opinions expressed in the articles are the exclusive responsibility of the authors.

Publisher's Office
Juan David Velásquez Henao, Director
Mónica del Pilar Rada T., Editorial Coordinator
Catalina Cardona A., Editorial Assistant
Amilkar Álvarez C., Layout Designer
Byron Llano V., Editorial Assistant
Andrew Mark Bailey, English Style Editor
Alberto Gutiérrez, IT Assistant

Institutional Exchange Request
DYNA may be requested as an institutional exchange through the e-mail canjebib_med@unal.edu.co or at the postal address:

Biblioteca Central "Efe Gómez"
Universidad Nacional de Colombia, Sede Medellín
Calle 59A No. 63-20
Teléfono: (57+4) 430 97 86
Medellín - Colombia

Reduced Postal Fee
Tarifa Postal Reducida # 2014-287 4-72. La Red Postal de Colombia, expires Dec. 31st, 2014.



COUNCIL OF THE FACULTAD DE MINAS
Dean: John Willian Branch Bedoya, PhD
Vice-Dean: Pedro Nel Benjumea Hernández, PhD
Vice-Dean of Research and Extension: Santiago Arango Aramburo, PhD
Director of University Services: Jhon Jairo Blandón Valencia, PhD
Academic Secretary: Héctor Iván Velásquez Arredondo, PhD
Representative of the Curricular Area Directors: Javier Gustavo Herrera Murcia, PhD
Representative of the Basic Units of Academic-Administrative Management: Germán Alberto Sierra Gallego, PhD
Representative of the Basic Units of Academic-Administrative Management: Jorge Iván Gómez Gómez, MSc
Professor Representative: Jaime Ignacio Vélez Upegui, PhD
Delegate of the University Council: Pedro Ignacio Torres Trujillo, PhD
Undergraduate Student Representative: Rubén David Montoya Pérez

FACULTY EDITORIAL BOARD
Dean: John Willian Branch Bedoya, PhD
Vice-Dean of Research and Extension: Santiago Arango Aramburo, PhD
Members:
Hernán Darío Álvarez Zapata, PhD
Oscar Jaime Restrepo Baena, PhD
Juan David Velásquez Henao, PhD
Jaime Aguirre Cardona, PhD
Mónica del Pilar Rada Tobón, MSc

JOURNAL EDITORIAL BOARD
Editor-in-Chief
Juan David Velásquez Henao, PhD, Universidad Nacional de Colombia, Colombia
Editors
George Barbastathis, PhD, Massachusetts Institute of Technology, USA
Tim A. Osswald, PhD, University of Wisconsin, USA
Juan De Pablo, PhD, University of Wisconsin, USA
Hans Christian Öttinger, PhD, Swiss Federal Institute of Technology (ETH), Switzerland
Patrick D. Anderson, PhD, Eindhoven University of Technology, The Netherlands
Igor Emri, PhD, Associate Professor, University of Ljubljana, Slovenia
Dietmar Drummer, PhD, Institute of Polymer Technology, University Erlangen-Nürnberg, Germany
Ting-Chung Poon, PhD, Virginia Polytechnic Institute and State University, USA
Pierre Boulanger, PhD, University of Alberta, Canada
Jordi Payá Bernabeu, PhD, Instituto de Ciencia y Tecnología del Hormigón (ICITECH), Universitat Politècnica de València, Spain
Javier Belzunce Varela, PhD, Universidad de Oviedo, Spain
Luis Gonzaga Santos Sobral, PhD, Centro de Tecnología Mineral - CETEM, Brazil
Agustín Bueno, PhD, Universidad de Alicante, Spain
Henrique Lorenzo Cimadevila, PhD, Universidad de Vigo, Spain
Mauricio Trujillo, PhD, Universidad Nacional Autónoma de México, Mexico
Carlos Palacio, PhD, Universidad de Antioquia, Colombia
Jorge Garcia-Sucerquia, PhD, Universidad Nacional de Colombia, Colombia
Juan Pablo Hernández, PhD, Universidad Nacional de Colombia, Colombia
John Willian Branch Bedoya, PhD, Universidad Nacional de Colombia, Colombia
Enrique Posada, MSc, INDISA S.A., Colombia
Oscar Jaime Restrepo Baena, PhD, Universidad Nacional de Colombia, Colombia
Moisés Oswaldo Bustamante Rúa, PhD, Universidad Nacional de Colombia, Colombia
Hernán Darío Álvarez, PhD, Universidad Nacional de Colombia, Colombia
Jaime Aguirre Cardona, PhD, Universidad Nacional de Colombia, Colombia



DYNA http://dyna.medellin.unal.edu.co/

DYNA 81 (185), June, 2014. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

CONTENTS

Editorial: Correct citation in DYNA and anti-plagiarism editorial policy
Juan D. Velásquez (p. 9)

Letter to the Editor: Review Papers
Carlos Andrés Ramos Paja (p. 11)

Phase transformations in air plasma-sprayed yttria-stabilized zirconia thermal barrier coatings
Julián D. Osorio, Adrián Lopera-Valle, Alejandro Toro & Juan P. Hernández-Ortiz (p. 13)

Remote laboratory prototype for automation of industrial processes and communications tests
Sebastián Castrillón Ospina, Luz Inés Hincapie & Germán Zapata Madrigal (p. 19)

Potential for geologically active faults, Department of Antioquia, Colombia
Luis Hernán Sánchez-Arredondo & Orlando Giraldo-Bolivar (p. 24)

Static and dynamic task mapping onto network on chip multiprocessors
Freddy Bolaños Martínez, José Edison Aedo & Fredy Rivera Vélez (p. 28)

Effect of heating systems on litter quality in broiler facilities in winter conditions
Ricardo Brauer Vigoderis, Ilda de Fátima Ferreira Tinôco, Héliton Pandorfi, Marcelo Bastos Cordeiro, Jalmir Pinheiro de Souza Júnior & Maria Clara de Carvalho Guimarães (p. 36)

The dynamic model of a four control moment gyroscope system
Eugenio Yime-Rodríguez, César Augusto Peña-Cortés & William Mauricio Rojas-Contreras (p. 41)

Design and construction of a low-shear laminar transport system for the production of nixtamal corn dough
Jorge Alberto Ortega-Moody, Eduardo Morales-Sánchez, Evelina Berenice Mercado-Pedraza, José Gabriel Ríos-Moreno & Mario Trejo-Perea (p. 48)

Dynamic stability of slender columns with semi-rigid connections under periodic axial load: theory
Oliver Giraldo-Londoño & J. Darío Aristizábal-Ochoa (p. 56)

Dynamic stability of slender columns with semi-rigid connections under periodic axial load: verification and examples
Oliver Giraldo-Londoño & J. Darío Aristizábal-Ochoa (p. 66)

Polyhydroxyalkanoate production from unexplored sugar substrates
Alejandro Salazar, María Yepes, Guillermo Correa & Amanda Mora (p. 73)

Straight-line conventional transient pressure analysis for horizontal wells with isolated zones
Freddy Humberto Escobar, Alba Rolanda Meneses & Liliana Marcela Losada (p. 78)

Creative experience in engineering design: the island exercise
Vicente Chulvi, Javier Rivera & Rosario Vidal (p. 86)

Calibrating a photogrammetric digital frame sensor using a test field
Benjamín Arias-Pérez, Óscar Cuadrado-Méndez, Pilar Quintanilla, Javier Gómez-Lahoz & Diego González-Aguilera (p. 94)

Testing the efficiency market hypothesis for the Colombian stock market
Juan Benjamín Duarte-Duarte, Juan Manuel Mascareñas-Pérez-Iñigo & Katherine Julieth Sierra-Suárez (p. 100)

Modeling of a simultaneous saccharification and fermentation process for ethanol production from lignocellulosic wastes by Kluyveromyces marxianus
Juan Esteban Vásquez, Juan Carlos Quintero & Silvia Ochoa Cáceres (p. 107)

Simulation of a stand-alone renewable hydrogen system for residential supply
Martín Hervello, Víctor Alfonsín, Ángel Sánchez, Ángeles Cancela & Guillermo Rey (p. 116)

Conceptual framework language – CFL –
Sandro J. Bolaños Castro, Rubén González Crespo, Victor H. Medina García & Julio Barón Velandia (p. 124)

Flower wastes as a low-cost adsorbent for the removal of acid blue 9
Ana María Echavarria-Alvarez & Angelina Hormaza-Anaguano (p. 132)

Discrete Particle Swarm Optimization in the numerical solution of a system of linear Diophantine equations
Iván Amaya, Luis Gómez & Rodrigo Correa (p. 139)

Some recommendations for the construction of walls using adobe bricks
Miguel Ángel Rodríguez-Díaz, Belkis Saroza-Horta, Pedro Nolasco Ruiz-Sánchez, Ileana Julia Barroso-Valdés, Fernando Ariznavarreta-Fernández & Felipe González-Coto (p. 145)

Thermodynamic analysis of R134a in an Organic Rankine Cycle for power generation from low temperature sources
Fredy Vélez, Farid Chejne & Ana Quijano (p. 153)

Hydro-meteorological data analysis using OLAP techniques
Néstor Darío Duque-Méndez, Mauricio Orozco-Alzate & Jorge Julián Vélez (p. 160)

Monitoring and groundwater/gas sampling in sands densified with explosives
Carlos A. Vega-Posada, Edwin F. García-Aristizábal & David G. Zapata-Medina (p. 168)

Characterization of adherence for Ti6Al4V films RF magnetron sputter grown on stainless steels
Carlos Mario Garzón, José Edgar Alfonso & Edna Consuelo Corredor (p. 175)

UV-vis in situ spectrometry data mining through linear and non-linear analysis methods
Liliana López-Kleine & Andrés Torres (p. 182)

A refined protocol for calculating air flow rate of naturally-ventilated broiler barns based on CO2 mass balance
Luciano Barreto Mendes, Ilda de Fatima Ferreira Tinoco, Nico Ogink, Robinson Osorio Hernandez & Jairo Alexander Osorio Saraz (p. 189)

Use of a multi-objective teaching-learning algorithm for reduction of power losses in a power test system
Miguel A. Medina, Juan M. Ramirez, Carlos A. Coello & Swagatam Das (p. 196)

Bioethanol production by fermentation of hemicellulosic hydrolysates of African palm residues using an adapted strain of Scheffersomyces stipitis
Frank Carlos Herrera-Ruales & Mario Arias-Zabala (p. 204)

Making Use of Coastal Wave Energy: A proposal to improve oscillating water column converters
Ramón Borrás-Formoso, Ramón Ferreiro-García, Fernanda Miguélez-Pose & Cándido Fernández-Ameal (p. 211)

Enterprise Architecture as a Tool for Managing Operational Complexity in Organizations
Martín Dario Arango-Serna, John Willian Branch-Bedoya & Jesús Enrique Londoño-Salazar (p. 219)

Our cover
Image alluding to the article: Characterization of adherence for Ti6Al4V films RF magnetron sputter grown on stainless steels
Authors: Carlos Mario Garzón, José Edgar Alfonso & Edna Consuelo Corredor


DYNA http://dyna.medellin.unal.edu.co/

DYNA 81 (185), June, 2014. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

CONTENIDO

Editorial: Correct citation in DYNA and anti-plagiarism editorial policy
Juan D. Velásquez (p. 9)

Carta del Editor: Review Papers
Carlos Andrés Ramos Paja (p. 11)

Transformaciones de fase en recubrimientos de barrera térmica de zirconia estabilizada con yttria depositados mediante aspersión por plasma atmosférico
Julián D. Osorio, Adrián Lopera-Valle, Alejandro Toro & Juan P. Hernández-Ortiz (p. 13)

Prototipo de laboratorio remoto para prácticas de automatización de procesos y comunicaciones industriales
Sebastián Castrillón Ospina, Luz Inés Hincapie & Germán Zapata Madrigal (p. 19)

Potencial de fallas geológicas activas, Departamento de Antioquia, Colombia
Luis Hernán Sánchez-Arredondo & Orlando Giraldo-Bolivar (p. 24)

Mapeo estático y dinámico de tareas en sistemas multiprocesador, basados en redes en circuito integrado
Freddy Bolaños Martínez, José Edison Aedo & Fredy Rivera Vélez (p. 28)

Efecto del sistema de calefacción en la calidad de la cama de galpones de pollos de engorde en condiciones de invierno
Ricardo Brauer Vigoderis, Ilda de Fátima Ferreira Tinôco, Héliton Pandorfi, Marcelo Bastos Cordeiro, Jalmir Pinheiro de Souza Júnior & Maria Clara de Carvalho Guimarães (p. 36)

Modelo dinámico de un sistema de control de par por cuatro giróscopos
Eugenio Yime-Rodríguez, César Augusto Peña-Cortés & William Mauricio Rojas-Contreras (p. 41)

Diseño y construcción de un sistema de transporte laminar de bajo cizallamiento para la producción de masa de maíz nixtamalizada
Jorge Alberto Ortega-Moody, Eduardo Morales-Sánchez, Evelina Berenice Mercado-Pedraza, José Gabriel Ríos-Moreno & Mario Trejo-Perea (p. 48)

Estabilidad dinámica de columnas esbeltas con conexiones semirrígidas bajo carga axial periódica: teoría
Oliver Giraldo-Londoño & J. Darío Aristizábal-Ochoa (p. 56)

Estabilidad dinámica de columnas esbeltas con conexiones semirrígidas bajo carga axial periódica: verificación y ejemplos
Oliver Giraldo-Londoño & J. Darío Aristizábal-Ochoa (p. 66)

Producción de polihidroxialcanoatos a partir de sustratos azucarados inexplorados
Alejandro Salazar, María Yepes, Guillermo Correa & Amanda Mora (p. 73)

Análisis Convencional de Pruebas de Presión en Pozos Horizontales con Zonas Aisladas
Freddy Humberto Escobar, Alba Rolanda Meneses & Liliana Marcela Losada (p. 78)

Experiencia creativa en ingeniería en diseño: el ejercicio de la isla
Vicente Chulvi, Javier Rivera & Rosario Vidal (p. 86)

Calibración de una cámara digital matricial empleando un campo de pruebas
Benjamín Arias-Pérez, Óscar Cuadrado-Méndez, Pilar Quintanilla, Javier Gómez-Lahoz & Diego González-Aguilera (p. 94)

Comprobación de la hipótesis de eficiencia del mercado bursátil en Colombia
Juan Benjamín Duarte-Duarte, Juan Manuel Mascareñas-Pérez-Iñigo & Katherine Julieth Sierra-Suárez (p. 100)

Modelado de un proceso de sacarificación y fermentación simultánea para la producción de etanol a partir de residuos lignocelulósicos utilizando Kluyveromyces marxianus
Juan Esteban Vásquez, Juan Carlos Quintero & Silvia Ochoa Cáceres (p. 107)

Simulación de un sistema autónomo de hidrógeno renovable para uso residencial
Martín Hervello, Víctor Alfonsín, Ángel Sánchez, Ángeles Cancela & Guillermo Rey (p. 116)

Lenguaje de marcos conceptuales – LMC –
Sandro J. Bolaños Castro, Rubén González Crespo, Victor H. Medina García & Julio Barón Velandia (p. 124)

Residuos de flores como adsorbentes de bajo costo para la remoción de azul ácido 9
Ana María Echavarria-Alvarez & Angelina Hormaza-Anaguano (p. 132)

Optimización por Enjambre de Partículas Discreto en la Solución Numérica de un Sistema de Ecuaciones Diofánticas Lineales
Iván Amaya, Luis Gómez & Rodrigo Correa (p. 139)

Algunas recomendaciones para la construcción de muros de adobe
Miguel Ángel Rodríguez-Díaz, Belkis Saroza-Horta, Pedro Nolasco Ruiz-Sánchez, Ileana Julia Barroso-Valdés, Fernando Ariznavarreta-Fernández & Felipe González-Coto (p. 145)

Análisis termodinámico del R134a en un Ciclo Rankine Orgánico para la generación de energía a partir de fuentes de baja temperatura
Fredy Vélez, Farid Chejne & Ana Quijano (p. 153)

Análisis de datos hidroclimatológicos usando técnicas OLAP
Néstor Darío Duque-Méndez, Mauricio Orozco-Alzate & Jorge Julián Vélez (p. 160)

Monitoreo y muestreo de aguas subterráneas y gases en arenas densificadas con explosivos
Carlos A. Vega-Posada, Edwin F. García-Aristizábal & David G. Zapata-Medina (p. 168)

Caracterización de la adherencia para películas de Ti6Al4V depositadas por medio de pulverización catódica magnetrón RF sobre acero inoxidable
Carlos Mario Garzón, José Edgar Alfonso & Edna Consuelo Corredor (p. 175)

Minería de datos UV-vis in situ con métodos de análisis lineales y no lineales
Liliana López-Kleine & Andrés Torres (p. 182)

Un protocolo refinado basado en balance de masa de CO2 para el cálculo de la tasa de flujo de aire en instalaciones avícolas naturalmente ventiladas
Luciano Barreto Mendes, Ilda de Fatima Ferreira Tinoco, Nico Ogink, Robinson Osorio Hernandez & Jairo Alexander Osorio Saraz (p. 189)

Uso de un algoritmo de enseñanza-aprendizaje multi-objetivo para la reducción de pérdidas de energía en un sistema de potencia de prueba
Miguel A. Medina, Juan M. Ramirez, Carlos A. Coello & Swagatam Das (p. 196)

Producción de bioetanol por fermentación de hidrolizados hemicelulósicos de residuos de palma africana usando una cepa de Scheffersomyces stipitis adaptada
Frank Carlos Herrera-Ruales & Mario Arias-Zabala (p. 204)

Aprovechamiento Energético de las Olas Costeras: Una propuesta de mejora de los convertidores de columna de agua oscilante
Ramón Borrás-Formoso, Ramón Ferreiro-García, Fernanda Miguélez-Pose & Cándido Fernández-Ameal (p. 211)

Arquitectura Empresarial como Instrumento para Gestionar la Complejidad Operativa en las Organizaciones
Martín Dario Arango-Serna, John Willian Branch-Bedoya & Jesús Enrique Londoño-Salazar (p. 219)

Nuestra Carátula
Imagen alusiva al artículo: Caracterización de la adherencia para películas de Ti6Al4V depositadas por medio de pulverización catódica magnetrón RF sobre acero inoxidable
Autores: Carlos Mario Garzón, José Edgar Alfonso & Edna Consuelo Corredor


DYNA http://dyna.medellin.unal.edu.co/

Phase transformations in air plasma-sprayed yttria-stabilized zirconia thermal barrier coatings

Transformaciones de fase en recubrimientos de barrera térmica de zirconia estabilizada con yttria depositados mediante aspersión por plasma atmosférico

Julián D. Osorio a, Adrián Lopera-Valle b, Alejandro Toro c & Juan P. Hernández-Ortiz d

a Materials and Minerals Department, National University of Colombia, Medellín, Colombia, jdosorio@unal.edu.co
b Mechanical Engineering, National University of Colombia, Medellín, Colombia, adloperav@unal.edu.co
c Tribology and Surfaces Group, National University of Colombia, Medellín, Colombia, aotoro@unal.edu.co
d Materials and Minerals Department, National University of Colombia, Medellín, Colombia, jphernandezo@unal.edu.co

Received: September 24th, 2012. Received in revised form: January 23rd, 2014. Accepted: January 27th, 2014.

Abstract
Phase transformations in air plasma-sprayed thermal barrier coatings composed of ZrO2 – 8 wt.% Y2O3 (zirconia - 8 wt.% yttria) are studied using X-Ray diffraction and Rietveld refinement measurements. Samples of TBC deposited onto an Inconel 625 substrate were fabricated and heat treated under two different conditions: exposure to 1100ºC for up to 1000 hours, and exposure to temperatures between 700ºC and 1100ºC during 50 hours. According to the Rietveld refinement measurements, the content of the cubic phase in the top coat increases with time and temperature; it starts at 7.3 wt.% and reaches 15.7 wt.% after 1000 hours at 1100ºC. The presence of the cubic phase in high amounts is undesirable due to its lower mechanical properties compared with the tetragonal phase. After 800 hours of exposure to high temperature, the amount of Y2O3 in the tetragonal phase is reduced to 6.6 wt.% and a fraction of this phase transforms to a monoclinic structure during cooling. The monoclinic phase reached 18.0 wt.% after 1000 hours. This phase is also undesirable, not only due to its higher thermal conductivity, but also because the tetragonal-to-monoclinic transformation implies a volume change of circa 5%, which favors crack formation and propagation and compromises the coating integrity.

Keywords: Thermal Barrier Coating (TBC); Heat Treatment; Phase Transformation; Rietveld Analysis.

Resumen
En este trabajo, las transformaciones de fase en Recubrimientos de Barrera Térmica (TBC) constituidos por ZrO2 – 8 wt.% Y2O3 (zirconia - 8 wt.% yttria) fueron estudiadas a través de Difracción de Rayos X (XRD) y refinamiento Rietveld. Las muestras de TBC fueron depositadas mediante aspersión por plasma atmosférico sobre un sustrato tipo Inconel 625 y fueron tratadas térmicamente en dos condiciones diferentes: en la primera se utilizó una temperatura de 1100ºC con tiempos de exposición entre 1 hora y 1000 horas; en la segunda las muestras fueron sometidas a temperaturas entre 700ºC y 1100ºC durante 50 horas. De acuerdo con los resultados obtenidos mediante refinamiento Rietveld, el contenido de fase cúbica en el recubrimiento (TC) se incrementa con el tiempo y la temperatura, desde 7.3 wt.% hasta 15.7 wt.% después de 1000 horas a 1100ºC. La fase cúbica en grandes cantidades es indeseable debido a que presenta inferiores propiedades mecánicas cuando se compara con la fase tetragonal. Después de 800 horas de exposición a alta temperatura, el contenido de Y2O3 en la fase tetragonal se reduce hasta 6.6 wt.% y una fracción de la fase tetragonal transforma a monoclínica durante el enfriamiento. La fase monoclínica alcanza 18.0 wt.% después de 1000 horas. Esta fase es también indeseable porque, además de tener una mayor conductividad térmica, la transformación de tetragonal a monoclínica viene acompañada de un cambio volumétrico de alrededor de 5% que promueve la formación y propagación de grietas, las cuales comprometen la integridad del recubrimiento.

Palabras clave: Recubrimiento de Barrera Térmica (TBC); Tratamiento Térmico; Transformaciones de fase; Refinamiento Rietveld.

1. Introduction

Thermal barrier coatings (TBCs) are multilayered systems widely used in gas turbines to increase efficiency and durability [1-4]. These coatings consist of three layers deposited onto a Base Substrate (BS): the Top Coat (TC), the Bond Coat (BC) and the Thermally Grown Oxide (TGO)

layer (see Fig. 1). Base substrates are usually Ni-based superalloys that offer good mechanical strength and excellent corrosion, oxidation and erosion resistance at high temperatures [5-7]. They contain significant amounts of alloying elements such as Cr, Mo, Al, Ti, Fe and C, which favor the precipitation of intermetallic compounds [8,9]. Two types of BCs are commonly used: the Platinum-modified Nickel

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 13-18 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online


Figure 1. Schematic diagram of a Thermal Barrier Coating applied by Air Plasma Spray (APS). The TC faces the combustion gases. The BS is air-cooled to increase the temperature gradient and, therefore, the efficiency.

Figure 2. Portion of the phase diagram of the ZrO2 – Y2O3 system [23].

Aluminide (PtNiAl) and MCrAlY alloys, where M refers to one or more of the elements Co, Ni and Fe. The BC is a metallic layer that initially provides adherence between the TC and the substrate [10,11]. During operation at high temperatures, aluminum diffuses from the BC and reacts to form a barrier layer, known as the TGO. Once the TGO is formed, the BC serves as the anchoring layer between the TC and the TGO. The TGO provides the barrier to oxygen diffusion that prevents substrate oxidation; however, many of the failure mechanisms in TBCs are related to TGO formation and growth [12-16]. The TC is usually composed of Yttria (Y2O3) stabilized Zirconia (ZrO2) and serves as the main defense of gas turbines against erosion and corrosion. The TC has a low thermal conductivity that reduces the temperature at the bond coat by up to 500ºC across a thickness of a few hundred microns. It must be stabilized in order to maintain its tetragonal structure at room temperature and to keep its thermal properties (conductivity and thermal expansion coefficient) constant over the range of working temperatures. To accomplish this stabilization, elements such as Hafnium (Hf) and Yttrium (Y) are commonly added [17-20]. The Yttrium (Y3+) and Hafnium (Hf4+) ions replace zirconium (Zr4+) ions in the lattice, inducing changes in the crystal structure. These changes stabilize the tetragonal phase and decrease the thermal conductivity. In the first case, the Y3+ ions produce oxygen vacancies in the lattice [18], while the Hf4+ ions, which are chemically similar to and have a comparable ionic radius to Zr4+ ions, are almost twice as massive and generate lattice disorder. In both cases, thermal conductivity is reduced due to anharmonic scattering of the charge carriers in the ceramic at high temperature, i.e., the anharmonic phonon scattering phenomenon [19,21].

Zirconia without stabilizers can exist in three different phases [18,22]: cubic, tetragonal and monoclinic; see the phase diagram [23] in Fig. 2. The tetragonal-to-monoclinic transformation is undesirable because it is accompanied by a volumetric change of circa 5%, which damages the TBC through crack nucleation and propagation [24,25]. In addition, the tetragonal phase presents excellent mechanical and thermal properties compared with those of the monoclinic phase [2,19,26,27]. The tetragonal-to-monoclinic transformation is avoided with the addition of 6 to 8 wt.% of Yttria [18]. Two different processes are currently used to deposit Yttria-stabilized Zirconia: Electron Beam-Physical Vapor Deposition (EB-PVD) and Air Plasma Spray (APS). In both cases, the rapid cooling results in a metastable tetragonal phase (t') rather than the stable tetragonal phase [28,29]. According to some studies, during exposure to high temperature and during cyclic operation, Yttrium (Y) diffuses from the t' phase to stabilize the cubic phase [30,31]. Consequently, a monoclinic phase appears from the Y-depleted tetragonal phase during cooling. The presence of both the cubic phase in high amounts and the monoclinic phase is undesirable due to their lower mechanical properties compared with the tetragonal phase. Also, the volumetric changes associated with these phase transformations favor crack generation and propagation, which compromise the coating integrity. Therefore, understanding these transformations is essential to find alternatives that improve the TBC's lifetime.

In this work, phase transformations in an APS-deposited TC under two different sets of heat treatments are studied using X-Ray Diffraction (XRD) and semi-quantitative Rietveld refinement. The paper is organized as follows: in Section 2, the experimental procedure, methods and materials are presented. Section 3 summarizes the results: the effects of exposure time at 1100ºC are analyzed first, and then the phase dynamics for different exposure temperatures during 50 hours is discussed. The most important conclusions are summarized at the end.



2. Experimental procedure

The TBC samples are composed of a ZrO2 – 8 wt.% Y2O3 TC applied by APS onto a NiCrCoAlY BC, both layers having thicknesses around 300 µm. The BC layer was deposited onto a nickel-base substrate, namely Inconel 625. The TBC samples, measuring 10 cm x 10 cm, were extracted from plates of 30 cm x 30 cm and were cut with a precision saw operating at 4000 RPM. Thereafter, some samples were heated at a rate of 18 ºC/min and held at 1100ºC for between 1 and 1000 hours, while other samples were thermally treated for 50 hours at 700ºC, 800ºC, 900ºC and 1000ºC. In all cases, the samples were cooled in air. Sample preparation included grinding with No. 400 and No. 600 emery papers for 5 minutes, followed by polishing with cloths carrying abrasive diamond suspensions of 12 µm, 6 µm, 3 µm and 1 µm average particle size. The polishing time for the first three suspensions was 15 minutes, while 60 minutes were required for the 1 µm suspension. A complete characterization of this material under similar heat treatment conditions has been reported in previous works [32,33].

The phase characterization was carried out in a Panalytical X'Pert Pro MPD X-Ray Diffractometer with Cu Kα radiation (λ = 1.5406 Å) over the range 20º < 2θ < 100º with a step of 0.013º/s. Rietveld semi-quantitative measurements were made to quantify the phase changes at the different temperature conditions. To ensure the reliability of the results, two replicas of each sample were also measured. The software used to perform the Rietveld refinements was X'Pert HighScore Plus Version 2.2a by PANalytical B.V. It is well known that the tetragonal lattice parameters depend on the Y2O3 content. Therefore, Rietveld measurements were also performed to determine the tetragonal lattice parameters a and c, in order to determine the amount of Y2O3 in this phase. The amount of Y2O3 in the tetragonal phase, for each heat treatment condition, was determined using the following relation [22,34]:

$$\mathrm{YO_{1.5}\ (mol.\%)} = \frac{1.0225 - (c/a)}{0.001311}, \qquad (1)$$

where a and c are the tetragonal lattice parameters in nanometers. This expression was derived by H.G. Scott in 1975 [22], based on the change of the lattice parameters of Yttria-stabilized Zirconia powders with YO1.5 content; it was corrected empirically by Ilavsky and Stalick [34] to improve the fit throughout the annealing process and extend its use to a wide range of samples [35].
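To make eq. (1) easy to apply, the short Python sketch below implements the Scott relation together with an approximate conversion from mol.% YO1.5 to wt.% Y2O3 based on standard molar masses. This is an illustrative aid only: the helper names and the example lattice parameters are our own assumptions, not values or code from this study.

```python
# Applying eq. (1) (Scott's relation, with the Ilavsky-Stalick correction)
# to estimate the stabilizer content of the t' phase from Rietveld lattice
# parameters. Example inputs are illustrative, not data from this paper.

M_YO15 = 88.906 + 1.5 * 15.999   # molar mass of YO1.5 (Y2O3 per cation), g/mol
M_ZRO2 = 91.224 + 2.0 * 15.999   # molar mass of ZrO2, g/mol

def yo15_mol_pct(a_nm, c_nm):
    """Eq. (1): YO1.5 content (mol.%) from tetragonal lattice parameters (nm)."""
    return (1.0225 - c_nm / a_nm) / 0.001311

def y2o3_wt_pct(mol_pct_yo15):
    """Convert mol.% YO1.5 to wt.% Y2O3.

    YO1.5 is Y2O3 written on a single-cation basis, so its mass fraction in
    the (YO1.5)x(ZrO2)1-x solid solution equals the Y2O3 mass fraction.
    """
    x = mol_pct_yo15 / 100.0
    m_y = x * M_YO15
    return 100.0 * m_y / (m_y + (1.0 - x) * M_ZRO2)

if __name__ == "__main__":
    a, c = 0.5101, 0.5165        # illustrative lattice parameters, nm
    mol = yo15_mol_pct(a, c)
    print("c/a = %.5f -> %.2f mol.%% YO1.5 (~%.2f wt.%% Y2O3)"
          % (c / a, mol, y2o3_wt_pct(mol)))
```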

3. Results and analysis

In the as-sprayed condition, the TC was a mixture of 92.7 wt.% tetragonal and 7.3 wt.% cubic phases. In this condition, the Y2O3 content in the tetragonal phase was 7.53 wt.% (~7.53 mol% YO1.5), which is within the recommended range for which the tetragonal-to-monoclinic transformation is prevented after cooling [18]. Some researchers have reported a TC consisting exclusively of the tetragonal phase [31], while others have found considerable amounts of the monoclinic phase in the as-sprayed condition [34,36]. The differences in the initial compositions and microstructures depend not only on the stabilizers' content, but also on the feedstock [34] and on the presence of unmelted or partially melted particles [36].

3.1. Effect of exposure time at 1100ºC

Fig. 3 presents the X-ray diffractograms of the samples thermally treated at 1100ºC for 1000 hours and 400 hours, respectively. It can be observed that the monoclinic phase appears in the sample treated for 1000 hours.

Figure 3. X-ray diffractograms of the TC treated at 1100ºC for different exposure times: a) 1000 hours; b) 400 hours. Measurements performed at room temperature, after the heat treatments.

The evolution of the TC phases after exposure to 1100ºC, measured through Rietveld refinement, is shown in Fig. 4. The tetragonal phase decreases from 92.7 wt.% to 66.3 wt.% after 1000 hours, while the cubic phase increases from 7.3 wt.% to 15.7 wt.%. At 800 hours, the monoclinic phase starts to form rapidly and, after 1000 hours, it reaches 18.0 wt.%. This is observed both in Fig. 4 and in the X-ray diffractograms in Fig. 5. The uncertainty in the phase content, estimated from the standard deviation, was around 0.38 wt.%, with a maximum of 0.43 wt.%.

Figure 4. Tetragonal, cubic and monoclinic phase content in the APS-deposited TC as a function of exposure time at 1100ºC. Measurements performed at room temperature, after the heat treatments.

Figure 5. X-ray diffractograms of the TC treated at 1100ºC for different exposure times. The peak corresponding to the monoclinic phase first appears in samples treated for 800 hours. Measurements performed at room temperature, after the heat treatments.

Fig. 4 also shows that the cubic phase forms during the first 200 hours at the expense of the tetragonal phase; afterwards, the cubic phase grows at a slower rate. After 800 hours, the monoclinic phase increases quickly, surpassing the cubic content before 1000 hours.

Fig. 6 shows how the Yttrium content of the metastable t' phase at room temperature decreases with exposure time at 1100ºC. It is known that the t' phase decomposes into a mixture of stable tetragonal and cubic phases [37] as a consequence of Yttrium diffusion; however, for crystallographic purposes, both tetragonal structures can be analyzed as the same tetragonal polymorph of the zirconia solid solution [28,38]. As can be observed in Fig. 6, the Y2O3 content of the tetragonal (t') phase decreases steadily with exposure time and, after 800 hours, falls to 6.60 wt.%. The Y-depleted tetragonal phase then transforms to the monoclinic phase during cooling, the monoclinic phase having become more stable due to the Yttrium depletion of the tetragonal phase. Another factor that promotes the tetragonal-to-monoclinic transformation is the grain size [36,39,40]. From thermodynamic formulations, some researchers [39] have found that the surface energy of the tetragonal phase is lower than that of the monoclinic phase for grain sizes smaller than 200 nm, so that below this size the tetragonal phase is more stable. In addition, larger grain sizes favor the diffusion rate through the grain boundaries [36]. Therefore, as grains grow, Yttrium diffusion out of the tetragonal phase increases and so does the amount of monoclinic phase formed from the Yttrium-depleted tetragonal phase.

Figure 6. Y2O3 content in the tetragonal phase (t') at room temperature as a function of exposure time at 1100ºC.

3.2. Effect of the treatment temperature for a fixed exposure time

The results of the Rietveld measurements performed on the samples treated at different temperatures for 50 hours are presented in Fig. 7. The cubic phase increases slightly (by around 3 wt.%) from 700ºC to 1100ºC. The uncertainty was around 0.41 wt.%, with a maximum value of 0.44 wt.%. No monoclinic phase was detected after any of these treatments; therefore, the tetragonal phase decreases in the same proportion as the cubic phase increases. The cubic content increases with both temperature and exposure time. This behavior is in agreement with results reported in the literature, where the cubic content after 100 hours at 1200ºC is around 19.0 wt.% and reaches more than 40.0 wt.% after 100 hours at 1400ºC [31].

Figure 7. Tetragonal and cubic phase content in the APS-TC as a function of temperature after 50 hours of treatment. Measurements performed at room temperature, after the heat treatments.




The effect of temperature on the Y2O3 content in the tetragonal phase is shown in Fig. 8: a slight reduction from 7.5 wt.% to around 7.0 wt.% is observed. The slight increase in cubic content is probably caused by diffusion of Yttrium from the tetragonal (t') phase to stabilize the cubic phase. According to the results presented in Fig. 6, the monoclinic phase appears when the Y2O3 content falls to 6.6 wt.% or below; therefore, the monoclinic phase is not expected to form after 50 hours at any temperature equal to or below 1100ºC. The generality of the 6.6 wt.% Y2O3 value found in this work, at which the tetragonal phase destabilizes and transforms to the monoclinic phase during cooling, requires additional research, since other factors, such as grain size and stresses, can also favor monoclinic stabilization.

Figure 8. Y2O3 content in the tetragonal phase as a function of temperature after 50 hours of treatment. Measurements performed at room temperature, after the heat treatments.

4. Conclusions

The phase transformations in APS-deposited TCs composed of ZrO2 – 8 wt.% Y2O3 under different heat treatment conditions were studied through XRD and Rietveld semi-quantitative measurements. The tetragonal structure (t') generated by the APS deposition process became unstable at high temperatures. Increases in temperature and exposure time favored Yttrium diffusion from the tetragonal phase and promoted formation of the cubic phase, whose amount increased from 7.3 wt.% in the as-sprayed condition to 15.7 wt.% after 1000 hours at 1100ºC. After 800 hours at 1100ºC, the monoclinic phase started to form and the Y2O3 content in the tetragonal phase fell below 6.6 wt.%; the amount of the monoclinic phase then increased rapidly, reaching 18.0 wt.% after 1000 hours at this temperature.

Acknowledgements

The authors thank COLCIENCIAS and Empresas Públicas de Medellín (EPM) for funding this investigation through project No. 111845421942. The authors are also grateful to the Materials Characterization Laboratory at the National University of Colombia at Medellín for providing the characterization instruments.

References

[1] Boyce, P. M., Gas Turbine Engineering Handbook, Second Edition, Gulf Professional Publishing, 2002.
[2] Padture, N. P., et al., Thermal barrier coatings for gas-turbine engine applications, Science 296, 280, 2002.
[3] Trice, R. W., Su, Y. J., Mawdsley, J. R. and Faber, K. T., Effect of heat treatment on phase stability, microstructure, and thermal conductivity of plasma-sprayed YSZ, Journal of Materials Science 37, pp. 2359-2365, 2002.
[4] Sivakumar, R. and Mordike, B. L., High temperature coatings for gas turbine blades: a review, Surface and Coatings Technology 37, pp. 139-160, 1989.
[5] Davis, J. R., Heat Resistant Materials (ASM Specialty Handbook), ASM International, 1997.
[6] Rai, S. K., Kumar, A., Shankar, V., Jayakumar, T., et al., Characterization of microstructures in Inconel 625 using X-ray diffraction peak broadening and lattice parameter measurements, Scripta Materialia 51, pp. 59-63, 2004.
[7] González, A., López, E., Tamayo, A., Restrepo, E. and Hernández, F., Microstructure and phases analyses of zirconia-alumina (ZrO2 - Al2O3) coatings produced by thermal spray, DYNA 77 (162), pp. 151-160, 2010.
[8] Reed, R. C., The Superalloys: Fundamentals and Applications, Cambridge University Press, 2006.
[9] Zhao, J. C., Larsen, M. and Ravikumar, V., Phase precipitation and time-temperature transformation diagram of Hastelloy X, Materials Science and Engineering A293, pp. 112-119, 2000.
[10] Nicoll, A. R. and Wahl, G., The effect of alloying additions on MCrAlY systems: an experimental study, Thin Solid Films 95, pp. 21-34, 1982.
[11] Richard, C. S., Béranger, G., Lu, J. and Flavenot, J. F., The influences of heat treatments and interdiffusion on the adhesion of plasma-sprayed NiCrAlY coatings, Surface and Coatings Technology 82, pp. 99-109, 1996.
[12] Spitsberg, I. T., Mumm, D. R. and Evans, A. G., On the failure mechanisms of thermal barrier coatings with diffusion aluminide bond coatings, Materials Science and Engineering A 394, pp. 176-191, 2005.
[13] Seo, D., Ogawa, K., et al., Influence of high-temperature creep stress on growth of thermally grown oxide in thermal barrier coatings, Surface and Coatings Technology 203, pp. 1979-1983, 2009.
[14] Nychka, J. A., Xu, T., Clarke, D. R. and Evans, A. G., The stresses and distortions caused by formation of a thermally grown alumina: comparison between measurements and simulations, Acta Materialia 52, pp. 2561-2568, 2004.
[15] Osorio, J. D., Giraldo, J., Hernández, J. C., Toro, A. and Hernández-Ortiz, J. P., Diffusion-reaction of aluminum and oxygen in thermally grown Al2O3 oxide layers, Heat and Mass Transfer 50, pp. 483-492, 2014.
[16] Tolpygo, V. K. and Clarke, D. R., Surface rumpling of a (Ni, Pt)Al bond coat induced by cyclic oxidation, Acta Materialia 48, pp. 3283-3293, 2000.
[17] Clarke, D. R., Materials selection guidelines for low thermal conductivity thermal barrier coatings, Surface and Coatings Technology 163-164, pp. 67-74, 2003.
[18] Clarke, D. R. and Levi, C. G., Materials design for the next generation thermal barrier coatings, Annu. Rev. Mater. Res. 33, pp. 383-417, 2003.
[19] Winter, M. R. and Clarke, D. R., Oxide materials with low thermal conductivity, Journal of the American Ceramic Society 90, pp. 533-540, 2007.
[20] Zhu, D. and Miller, R. A., Sintering and creep behavior of plasma-sprayed zirconia- and hafnia-based thermal barrier coatings, Surface and Coatings Technology 108-109, pp. 114-120, 1998.
[21] Niranatlumpong, P., Ponton, C. B. and Evans, H. E., The failure of protective oxides on plasma-sprayed NiCrAlY overlay coatings, Oxidation of Metals 53 (3/4), 2000.
[22] Scott, H. G., Phase relationships in the zirconia-yttria system, Journal of Materials Science 10, pp. 1527-1535, 1975.
[23] Fabrichnaya, O., Wang, C., Zinkevich, M., Levi, C. G. and Aldinger, F., Phase equilibria and thermodynamic properties of the ZrO2-GdO1.5-YO1.5 system, Journal of Phase Equilibria 26 (6), pp. 591-604, 2005.
[24] VanValzah, J. R. and Eaton, H. E., Cooling rate effects on the tetragonal to monoclinic phase transformation in aged plasma-sprayed yttria partially stabilized zirconia, Surface and Coatings Technology 46, pp. 289-300, 1991.
[25] Xie, L., Jordan, E. H., Padture, N. P. and Gell, M., Phase and microstructural stability of solution precursor plasma sprayed thermal barrier coatings, Materials Science and Engineering A 381, pp. 189-195, 2004.
[26] Osorio, J. D., Maya, D., Barrios, A. C., Lopera, A., Jiménez, F., Meza, J. M., Hernández-Ortiz, J. P. and Toro, A., Correlations between microstructure and mechanical properties of air plasma-sprayed thermal barrier coatings exposed to a high temperature, Journal of the American Ceramic Society 96 (12), pp. 3901-3907, 2013.
[27] Busso, E. P., Qian, Z. Q., Taylor, M. P. and Evans, H. E., The influence of bond coat and topcoat mechanical properties on stress development in thermal barrier coating systems, Acta Materialia 57, pp. 2349-2361, 2009.
[28] Tsipas, S. A., Effect of dopants on the phase stability of zirconia-based plasma sprayed thermal barrier coatings, Journal of the European Ceramic Society 30, pp. 61-72, 2010.
[29] Ilavsky, J., Stalick, J. K. and Wallace, J., Thermal spray yttria-stabilized zirconia phase changes during annealing, Journal of Thermal Spray Technology 10 (3), 497, 2001.
[30] Trice, R. W., Su, Y. J., Mawdsley, J. R., Faber, K. T., Arellano-López, R., Wang, H. and Porter, W. D., Effect of heat treatment on phase stability, microstructure, and thermal conductivity of plasma-sprayed YSZ, Journal of Materials Science 37, pp. 2359-2365, 2002.
[31] Schulz, U., Phase transformation in EB-PVD yttria partially stabilized zirconia thermal barrier coatings during annealing, Journal of the American Ceramic Society 83 (4), pp. 904-910, 2000.
[32] Osorio, J. D., Hernández-Ortiz, J. P. and Toro, A., Microstructure characterization of thermal barrier coating systems after controlled exposure to a high temperature, Ceramics International 40, pp. 4663-4671, 2014.
[33] Osorio, J. D., Toro, A. and Hernández-Ortiz, J. P., Thermal barrier coatings for gas turbine applications: failure mechanisms and key microstructural features, DYNA 79 (176), pp. 149-158, 2012.
[34] Ilavsky, J. and Stalick, J. K., Phase composition and its changes during annealing of plasma-sprayed YSZ, Surface and Coatings Technology 127, pp. 120-129, 2000.
[35] Witz, G., Shklover, V. and Steurer, W., Phase evolution in yttria-stabilized zirconia thermal barrier coatings studied by Rietveld refinement of X-ray powder diffraction patterns, Journal of the American Ceramic Society 90 (9), pp. 2935-2940, 2007.
[36] Di Girolamo, G., Blasi, C., Pagnotta, L. and Schioppa, M., Phase evolution and thermophysical properties of plasma sprayed thick zirconia coatings after annealing, Ceramics International 36, pp. 2273-2280, 2010.
[37] Lughi, V. and Clarke, D. R., High temperature aging of YSZ coatings and subsequent transformation at low temperature, Surface and Coatings Technology 200, pp. 1287-1291, 2005.
[38] Sheu, T. S., Tien, T. Y. and Chen, I. W., Cubic-to-tetragonal (t') transformation in zirconia-containing systems, Journal of the American Ceramic Society 75, pp. 1108-1116, 1992.
[39] Suresh, A., Mayo, M. J., Porter, W. D. and Rawn, C. J., Crystallite and grain-size-dependent phase transformations in yttria-doped zirconia, Journal of the American Ceramic Society 86 (2), pp. 360-362, 2003.
[40] Huang, X., Zakurdaev, A. and Wang, D., Microstructure and phase transformation of zirconia-based ternary oxides for thermal barrier coating applications, Journal of Materials Science 43, pp. 2631-2641, 2008.

J.D. Osorio received a Mechanical Engineering degree in 2008 and a Master's degree in Engineering - Materials and Processes in 2012, both from the Universidad Nacional de Colombia in Medellín, where he worked from 2007 to 2010 as a research assistant on materials characterization, post-weld heat treatment of stainless steels, numerical simulation, and transport phenomena in thermal barrier coatings for gas turbine applications. He was awarded an honorary Mechanical Engineering degree in 2008 and an honors master's thesis in 2010 by the Universidad Nacional de Colombia. From 2011 to 2012 he worked as a professor at the Universidad Nacional de Colombia in Medellín, teaching the courses of Dynamics and Engineering Design. He is currently a doctoral candidate in Mechanical Engineering at Florida State University, USA. His research interests include thermal barrier coatings, sustainable energy, thermal energy storage, and heat transfer optimization of thermal systems.

A. Lopera-Valle obtained a Mechanical Engineering degree in 2012 from the Universidad Nacional de Colombia in Medellín. In 2010, he joined the Multi-Scale Modeling of Complex Systems: Biophysics and Structured Materials group and the Tribology and Surfaces research group. He is currently pursuing an MSc in Mechanical Engineering at the University of Alberta, Canada, where he is part of the Advanced Heat Transfer and Surface Technologies research group. His research interests include mechanical and thermal modeling of coating systems; biomedical applications of polymer, ceramic and composite materials; characterization and application of multilayer coating systems; and the application of polymers in energy and dissipation processes.

A. Toro obtained his BSc in Mechanical Engineering from the National University of Colombia in 1997 and a PhD degree from the University of São Paulo in 2001. From 2001 to 2002 he did a postdoc at the Institute for Metal Forming at Lehigh University, USA. He is currently an associate professor in the Department of Materials and Minerals at the Universidad Nacional de Colombia in Medellín; his main areas of interest are industrial tribology, surface analysis and wear-resistant materials.

J.P. Hernández-Ortiz received a Mechanical Engineering degree in 1998 from the Universidad Pontificia Bolivariana, where he worked as a research assistant at the Energy and Thermodynamics Institute until 2000. He obtained his PhD degree from the University of Wisconsin-Madison in 2004 in the Department of Mechanical Engineering, with a minor in Chemical Engineering. From 2004 to 2007 he did a postdoc in the Department of Chemical and Biological Engineering at the University of Wisconsin-Madison. He is currently a Full Professor in the Department of Materials, Facultad de Minas, at the Universidad Nacional de Colombia in Medellín. He has published more than 50 papers and holds honorary positions at the University of Wisconsin-Madison and the Institute for Molecular Engineering at the University of Chicago. His research interests center on multi-scale modeling of complex systems for biological and structured-materials applications.


DYNA http://dyna.medellin.unal.edu.co/

Remote laboratory prototype for automation of industrial processes and communications tests

Prototipo de laboratorio remoto para prácticas de automatización de procesos y comunicaciones industriales

Sebastián Castrillón-Ospina a, Luz Inés Hincapie b & Germán Zapata-Madrigal c

a Facultad de Minas, Universidad Nacional de Colombia, Colombia. scastri0@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Colombia. lhincap@unal.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Colombia. gdzapata@unal.edu.co

Received: December 5th, 2012. Received in revised form: February 20th, 2014. Accepted: May 12th, 2014.

Abstract
This paper presents the initial phase of the approach and implementation of a prototype virtual infrastructure for teaching and laboratory development in the areas of Industrial Automation and Industrial Communications. The proposed prototype allows remote lab practice using the Internet, VMware virtualization and management through Netlab. It allows easy and permanent access to the automation software and hardware necessary for an adequate student experience with real devices.

Keywords: Virtualization; Cloud Computing; Virtual machines; Remote laboratory; Web platform; Programmable Logic Controller; Industrial Ethernet; Industrial Communications.

Resumen
En este artículo se presenta la fase inicial del planteamiento e implementación de un prototipo de infraestructura virtual para la enseñanza y el desarrollo de laboratorios en las áreas de Automatización Industrial y Comunicaciones Industriales. El prototipo propuesto permite que las prácticas sean desarrolladas remotamente mediante el uso de Internet, la virtualización de VMware y la gestión a través de NetLab, lo que permite un acceso fácil y permanente al software de automatización y los materiales necesarios para una adecuada experiencia de los estudiantes con los dispositivos reales.

Palabras clave: Virtualización; Cloud Computing; Máquinas virtuales; Laboratorio remoto; Plataforma Web; PLC; Ethernet industrial; Comunicaciones industriales.

1. Introduction

The introduction of virtualization technologies and Cloud Computing has permitted a considerable reduction of the costs associated with the implementation and maintenance of IT systems. The adoption of these new technologies can be seen in enriched collaboration environments and improved process agility, and it ultimately translates into considerable productivity increases. The Internet and Cloud Computing have been integrated into the educational processes of many institutions. However, some knowledge areas have presented a degree of resistance to the integration of these new technologies within their learning processes: laboratories with electrical machines and automation are typical examples of subject areas where physical contact with the laboratory resources is almost indispensable. Given the potential of the Internet and the different virtualization technologies in existence, it is appropriate to

perform a study that enables the integration of these new technologies into laboratories that have physical resource restrictions and that normally require physical presence for access. In this work, the structure of a remote laboratory is presented as a support tool for the development of the practical component of the industrial automation courses offered to undergraduates in Electrical, Control, and Mechanical Engineering at the Universidad Nacional de Colombia – Sede Medellín (National University of Colombia – Medellín Campus). The article is structured in the following way: in section 2, a review of the state of the art and related works is performed. In section 3, the theoretical foundation of virtualization and Cloud Computing technologies is presented. Then, in section 4, the proposed methodology and the architecture for the development of a remote laboratory, applicable to industrial automation teaching, are described. Last, in section 5, the conclusions of the work are presented and future work is defined.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 19-23 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



2. State of the art and related works

Virtualization and Cloud Computing concepts are not new. In 1964, IBM developed the concept of virtualization in its IBM S/360 system. That system gave rise to the term "family architecture" and consisted of six computers that could all use the same software and the same peripherals. Remote computing came into existence through terminals connected to a server over a telephone line. The term Cloud Computing was introduced in the 1980s under the concept of Grid Computing, which emphasizes virtual servers. Later, it was adopted by large-scale Internet providers such as Google, Amazon AWS, Microsoft, and others that built their own infrastructure [1], [2].

Virtualization and Cloud Computing have evolved over time and have positively impacted areas such as the industrial and academic sectors. In the managerial sector, the virtualization of servers and storage has gained a prominent position, and other trends, such as Web application virtualization, email, and calendars, have gradually found their place. Infrastructure virtualization and management, the virtualization of jobs, and the management and automation of virtual environments also stand out. The development of virtual machines and applications such as VirtualBox, VMware, Qemu, Virtual PC, Google App Engine, Virtual Application Networks and others has contributed to the continued uptake of virtualization and Cloud Computing technologies, providing multiple options and alternatives according to the requirements of a specific sector or industry.

"VirtualBox", "VMware", "Qemu", and "Virtual PC" are operating system virtualization tools. These tools enable the creation of a virtual computer, or PC, within a real one, so that multiple operating systems can be installed and executed independently within the real operating system [3], [4]. "Google App Engine" is a platform that allows the development and execution of Web applications using Google's infrastructure [5]. "Virtual Application Networks" is a Hewlett-Packard Cloud functionality that provides a virtual network view (abstracted from the physical equipment) and transforms a physical enterprise network into a programmable, multiuser network that is able to recognize applications [6]. Some cloud infrastructure and virtualization solutions, including vCloud, vSphere, and the vCenter Operations Management Suite, also stand out; these solutions are owned by the well-known company VMware Inc. [7].

In the academic environment, the authors of [8] describe the design and implementation of a remote laboratory to teach Control and Automation courses at the Universidad Miguel Hernández de España (Miguel Hernández University, Spain). This work details the hardware and software architecture of the system and introduces the Web application developed to enable access to the different laboratory services.

Applications similar to the one proposed here are described in [9], [10] and [11]. ZUBÍA, J. and DE VELASCO refer to remote labs controlled from the Internet (WebLabs) and describe their definition, technological advantages, and common implementation strategies. ZERPA, S. and GIMÉNEZ, D. also cover remote labs over the Internet: they describe a remote laboratory that facilitates the development of the practical component of automation and process control courses. In that work, a hardware-software system is used to perform variable monitoring and control of an industrial process prototype located in the Industrial Automation Laboratory of the Electronic Engineering Department at the Universidad Politécnica "Antonio José de Sucre", UNEXPO, Barquisimeto, Venezuela ("Antonio José de Sucre" Polytechnic University). The purpose was to implement a Modbus/TCP network for virtual practice sessions using a Programmable Logic Controller, or PLC.

Other interesting studies concern the implementation of virtual labs for teaching automation and similar courses, such as [12] and [13]. In general terms, HERÍAS, M. and MORENO, J. describe the role that Information and Communication Technologies play within teaching processes in different subjects, such as Information Technology and Automation; their study describes multiple Internet-based resources that can be used to support learning. DANILLES, S. and CUSTODIO, A. show the design of a virtual environment for the remote programming of a Programmable Logic Controller (PLC) for the development of remote engineering exercises. The main motivation of this project was the large number of students enrolled in Electronics courses; an alternative to physical presence in the classroom was therefore created so that students could conduct different practical exercises, thereby facilitating their execution.
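To make the Modbus/TCP interaction mentioned above concrete, the following Python sketch builds and sends a raw "Read Holding Registers" (function 0x03) request over a plain TCP socket. It is a minimal illustration of the protocol framing only; the PLC address and register block are hypothetical, and a real remote practice would normally rely on a dedicated Modbus library or the vendor's tooling.

```python
import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    """Send one Modbus/TCP 'Read Holding Registers' (0x03) request.

    Frame layout (big-endian): MBAP header = transaction id (2 B),
    protocol id = 0 (2 B), length of remaining bytes (2 B), unit id (1 B);
    then the PDU = function code (1 B), start address (2 B), quantity (2 B).
    """
    pdu = struct.pack(">BHH", 0x03, start, count)
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=3.0) as sock:
        sock.sendall(mbap + pdu)
        # Single recv calls keep the sketch short; real code should loop.
        header = sock.recv(7)                      # reply MBAP header
        _, _, length, _ = struct.unpack(">HHHB", header)
        body = sock.recv(length - 1)               # function code + data
        if body[0] != 0x03:                        # 0x83 signals an exception
            raise IOError("Modbus exception response: 0x%02x" % body[0])
        n = body[1]                                # byte count of register data
        return list(struct.unpack(">%dH" % (n // 2), body[2:2 + n]))

if __name__ == "__main__":
    # Hypothetical PLC address and register block for a lab exercise.
    print(read_holding_registers("192.168.0.10", start=0, count=4))
```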

3. Theoretical Framework

3.1. Virtualization and Cloud Computing: definition

Various definitions have been assigned to the terms "virtualization" and "Cloud Computing". In some instances, the same meaning tends to be associated with both terms. However, it is important to note that the two terms have different meanings, though they are related. The term virtualization broadly describes the separation of a resource or request for a service from the underlying physical delivery of that service [15]. Virtualization separates the physical hardware and the operating system so as to provide higher utilization of IT resources and greater flexibility [16]. Smith and Nair [17] regard virtualization as the most significant advance since the introduction of the microprocessor for providing secure information and business systems. Lasso [18] defines Cloud Computing as "a model that provides on-demand access to a shared and configurable set of IT resources (networks, servers, storage, applications and others) in a convenient way and that can be quickly served with minimal effort from the resources provider".

3.2. Types of Virtualization

Different types of virtualization have been established. In general, three broad types can be distinguished: server virtualization, desktop virtualization, and data storage virtualization. Next, we present a brief description of these classifications and distinguish other types of virtualization within them.

3.2.1. Server virtualization

The architecture of today's x86 servers allows them to run only one operating system at a time. Server virtualization unlocks the traditional one-to-one architecture of x86 servers by abstracting the operating system and applications from the physical hardware, enabling a more cost-efficient, agile and simplified server environment. Using server virtualization, multiple operating systems can run on a single physical server as virtual machines, each with access to the underlying server's computing resources [20].

In operating system virtualization, there is a base or host operating system on which the virtualization software, called a hypervisor, is installed; it allows the installation of other operating systems that run on the main operating system. These operating systems are called guests and use a virtualization layer provided by the virtualization software (VMware Workstation, Virtual PC, Hyper-V, among others). In the case of hardware emulation, the virtualization software generates a software layer that emulates the hardware; that is, the installed operating system works as if it were running on its own dedicated computer. Finally, in paravirtualization there is no hardware emulation: the hypervisor coordinates the access of the guest operating systems to the physical computer resources [19].

3.2.2. Desktop virtualization

Desktop virtualization decouples the user's desktop environment from the physical machine on which it runs: desktops are executed as virtual machines on a central server and accessed remotely, which simplifies their administration, provisioning, and backup.

3.2.3. Netlab definition

NETLAB is a laboratory platform used by Cisco, VMware, and CompTIA academies. With this platform, teachers and students can remotely access a laboratory with real equipment, in an easy-to-manage environment, over the Internet.

3.2.4. Benefits

• It enables the development of remote exercises. NETLAB Academy Edition allows students to have additional time for their laboratory exercises and to make maximum use of the laboratory equipment. NETLAB allows access to the laboratory equipment twenty-four hours a day, seven days a week.
• It optimizes laboratory time inside the classroom. It enables instructors to spend more time teaching and less time managing laboratory equipment.
• Laboratory resources can be shared with other universities. Because access to Netlab is over the Internet, resources can be shared with other universities that need them.
• Students can work in groups. Students can organize themselves into workgroups to develop exercises inside and outside of the classroom, without the need for all group members to be in the same physical location.
• System activity log. The system saves all the actions of the student using the equipment during the laboratory activity, which enables the instructor to evaluate the performance of the student in the laboratory and to provide feedback.
• Scheduled access to the laboratory. A reservation is required for each individual's access to the laboratory. Thus, students schedule when they are going to develop the exercises according to their time availability, and instructors can reserve the laboratory for class times with demonstrations using real equipment.

4. Laboratory infrastructure

To provide true remote student access and local (inter-university) connectivity, we have an infrastructure that allows access by means of a Web service with the public domain http://netlab.unalmed.edu.co. This URL gives access to the Netlab graphic interface, which works as a bridge and manager for equipment access and for the automation and control software.

4.1. Netlab architecture

Netlab allows the management of physical laboratory space reservations, that is, interaction with a laboratory pod via the Web. We refer to a laboratory pod as a group of devices used by one student or group of students in a specific reservation. Scalability is one of Netlab's characteristics: newly acquired laboratory devices can easily be added in a new pod, and can then be used by multiple students simultaneously within different reservations. Netlab works as a proxy between connections, creating a bridge between the clients and the laboratory virtual machines. This simplifies access, because only a web browser with a Java plugin is required. The control switch enables the connections between the Netlab server, the laboratory topology computers, and the automation elements in use.


Figure 1: Netlab architecture

Virtual machines are provided by a dedicated virtualization server. Using a VMware ESXi 4.1 hypervisor, the virtual machines have access to the devices through Ethernet, USB, or serial ports, which enables the access and configuration of the different series of automation equipment implemented.

4.2. Pod for automation

The laboratory topology design allows different industrial cases, with equipment from different manufacturers, to be modeled. For that purpose, an industrial communication network with Rockwell and Siemens equipment is planned, each with communication ports that support protocols such as Profinet and Ethernet. Based on Fig. 2, each one of its components is described:

• 2 virtual machines, on which all the programs required to develop the course activities, as well as for configuration, monitoring, and simulation, are installed. Netlab has the ability to return each virtual machine to its default state at the end of the reservation.
• 2 web cameras that allow changes in the equipment LEDs to be observed.
• 2 Siemens S7-200 PLCs with Factory Ethernet port.
• 1 Siemens S7-300 PLC with Ethernet module.
• 2 Rockwell L23E-1769 PLCs with Ethernet port.

Figure 2: Automation laboratory model

Figure 3: Automation laboratory model

4.2.1. Input and Output

It is necessary for the individuals who interact with the system to be able to activate and deactivate the inputs and outputs of each one of the PLCs that are permitted by the process. This can be done with OPC technology, using Matrikon software from Matrikon Inc. or KepserverEx software from Kepware. The software also allows inputs and outputs to be activated and deactivated virtually; the result can be physically observed on the equipment using the Web cameras, or in the Manufacturing Execution System (MES).
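As a minimal illustration of this kind of OPC interaction, the following Python sketch uses the open-source OpenOPC library; the server ProgID and the tag names are hypothetical placeholders, not the actual configuration of the laboratory.

    import OpenOPC

    # Connect to an OPC DA server by its ProgID (a placeholder is shown;
    # a Matrikon or KepserverEx installation exposes its own ProgID).
    opc = OpenOPC.client()
    opc.connect('Matrikon.OPC.Simulation.1')

    # Read one PLC input and force one output (hypothetical tag names).
    value, quality, timestamp = opc.read('Line1.Bottling.Input0')
    print(value, quality, timestamp)
    opc.write(('Line1.Bottling.Output0', 1))

    opc.close()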

4.3. Case study

The Teleinformatics and Teleautomation research group of the Universidad Nacional de Colombia - Sede Medellín has been studying industrial communications; it is therefore appropriate to propose a case study in which the available communication media are integrated with Siemens S7 controllers and Rockwell L23E-1769 controllers. This case covers the laboratory topology necessary to simulate a Manufacturing Execution System (MES) with a topology that contains Industrial Ethernet communication protocols. The simulated manufacturing lines consist of beer bottling and packaging in ten-unit boxes. The simulation produces 1 bottle every 2 seconds in the inlet line (Bottling) and 3 boxes per minute in the outlet line (Packaging). In addition, failures in the production line are simulated, and good and bad bottles and boxes are counted. The Siemens equipment includes a Bottling controller (S7-1200), a Packaging controller (S7-1200), and a Master OPC (S7-300). When using the Rockwell equipment, one controller (L23E-1769 QB1B) was used for Bottling and Packaging, and another controller served as the Master OPC (L23E-1769 QB1B). Communication between the S7-1200 PLCs used the TCP protocol, and communication between the S7-1200 and the S7-300 used the S7 protocol (a TCP-based, Siemens proprietary protocol). With the Rockwell equipment, communication between the two L23E-1769 PLCs was performed in two ways: the first used Produced/Consumed tags, and the second used Message blocks.

4.3.1. Implemented software

Implementation of the proposed case study required the use of different applications to program the PLCs, such as TIA Portal Professional SP2 and Step 7 V5.5, both for the Siemens equipment, and RSLogix 5000 and RSLinx Lite for the Rockwell PLCs. To implement the manufacturing execution system, two programs were used: Manufacturing Execution System V4.0, made by Wonderware, and Ignition, made by Inductive Automation (DEMO version).

For communication between the different control equipment and the MES, OPC technology was required; KepserverEx v5.0 (DEMO version), made by Kepware, was used. The university has education agreements and full licenses for the implemented software by means of complete versions.

5. Conclusions and future work

• Netlab's scalability and flexibility allow many other laboratory topologies to be implemented for the control and automation systems of different manufacturers and for electric systems.
• Work is currently under way to consolidate the method that will be used to manage the system's inputs and outputs, both analog and digital; this will allow new characteristics to be added to this pod, or new pods to be implemented, developed, and adapted for other case studies.
• With this work, networking concepts, Cloud Computing, and industrial communications were integrated to achieve 24-hour availability of an industrial automation laboratory from any location.

References

[1] Espino, L. Virtualización de redes como elemento clave para Cloud Computing. Instituto Tecnológico de Costa Rica. pp. 1-12, 2009.
[2] Quan, D. From Cloud Computing to the New Enterprise Data Center. IBM Corporation, 2008.
[3] Serrano, J. Máquinas Virtuales - Taller de Software Libre. Oficina de Software Libre, Universidad de Granada. pp. 1-89, 2010.
[4] Andrade, J. Creación de una Máquina Virtual. Escuela Politécnica del Ejército. pp. 1-8.
[5] GETUG Colombia. Una Introducción a Google App Engine. TecnoParque. pp. 1-75, 2009.
[6] Soluciones cloud computing HP. Available at: http://www.hp.com/hpinfo/newsroom/press_kits/2012/convergedcloud2012/NA_VAN.pdf [Accessed May 5, 2013].
[7] Soluciones de virtualización VMware. Available at: http://www.vmware.com/latam/solutions/ [Accessed March 25, 2013].
[8] Azorín, M., Payá, L. Remote Laboratory for Automation Education. Departamento de Ingeniería de Sistemas Industriales, Universidad Miguel Hernández, España. pp. 1-5, 2004.
[9] Gómez, M., Uribe, G., Jiménez, J. Nueva perspectiva de los entornos virtuales de enseñanza y aprendizaje en ingeniería. Caso práctico: operaciones con sólidos. Dyna, año 76, no. 160, pp. 283-292, 2009.
[10] Zubía, J., De Velasco, J. Diseño de laboratorios remotos virtuales: WebLab. Dpto. Arquitectura de Computadores y Dpto. Ingeniería del Software, Facultad de Ingeniería, ESIDE, Universidad de Deusto. pp. 1-8.
[11] Zerpa, S., Giménez, D. Desarrollo de un Laboratorio Remoto de Automatización de Procesos vía Internet. Seventh LACCEI Latin American and Caribbean Conference for Engineering and Technology (LACCEI'2009), June 2-5, 2009, San Cristóbal, Venezuela.
[12] Herías, M., Moreno, J. Recursos Didácticos Basados en Internet para el Apoyo a la Enseñanza de Materias del Área de Ingeniería de Sistemas y Automática. Revista Iberoamericana de Automática e Informática Industrial, Vol. 2, no. 2, pp. 93-101, April 2005.
[13] Danilles, S., Custodio, A. Programación a Distancia del PLC Simatic S7-300 para Realizar Prácticas Virtuales en Ingeniería. Eighth LACCEI Latin American and Caribbean Conference for Engineering and Technology (LACCEI'2010), June 1-4, 2010, Arequipa, Perú.
[14] Official Netlab site. Available at: http://www.netdevgroup.com [Accessed May 2, 2013].
[15] Virtualization Overview. VMware White Paper. pp. 1-11.
[16] Jingxian, Z., Rodríguez, J., Trigo, D. The Applied Research on Virtualization of Server in Campus Network. International Symposium on Computer Science and Society, pp. 23-25, 2011.
[17] Smith, J., Nair, R. "The Architecture of Virtual Machines". Computer, vol. 38, no. 5, pp. 32-38, 2005.
[18] Lasso, G. Cloud Computing: Tendencias. Modelos. Posibilidades. pp. 1-25.
[19] Vásquez, J. Cloud Computing. Séptimo Congreso Internacional de Cómputo en Optimización y Software, Universidad Autónoma del Estado de México. pp. 1-9, 2009.
[20] Virtualización. Available at: http://www.vmware.com/virtualization [Accessed February 26, 2014].

S.C. Castrillón-Ospina, received his Bs. Eng in Control Engineering in 2008 from the Universidad Nacional de Colombia, Medellín, Colombia. He is an MS student in Telecommunications Engineering at the Universidad de Antioquia and works as a Research Engineer for the Research Group in Teleinformatics and Teleautomation of the Universidad Nacional de Colombia. Since 2008 he has worked for telecommunication research and consulting companies within the telecommunication sector. His research interests include: simulation and modeling of protocols and telecommunication systems; virtualization network models; and industrial protocols for automation.

L.I. Hincapié-Mesa, received her Bs. Eng in Control Engineering from the Universidad Nacional de Colombia - Campus Medellín in 2008, and her Sp degree in Teleinformatics: Data Networks and Distributed Systems from the Universidad EAFIT - Campus Medellín, Colombia in 2011. From 2008 to 2012, she worked as a Research Engineer and in project management support for the Research Group in Teleinformatics and Teleautomation of the Universidad Nacional de Colombia. Currently, she works in the areas of: management of information technology projects, telecommunications infrastructure management, and operational support. Her main research interests include: Next Generation Networks (NGN), networking security, industrial communications, network management tools, and Cloud Computing applications.

G.D. Zapata, received his Bs. Eng in Electrical Engineering from the Universidad Nacional de Colombia, Medellín, Colombia in 1991, his MS degree in Automation from the Universidad del Valle in 2004, and his PhD degree in Applied Science from the Universidad de los Andes in 2012. He has worked as an associate professor at the Universidad Nacional since 1992 and is currently the director of the Research Group in Teleinformatics and Teleautomation of the Universidad Nacional de Colombia. His research areas are: industrial automation, automation of power systems, discrete event systems, monitoring systems, and intelligent production systems.


DYNA http://dyna.medellin.unal.edu.co/

Potential for geologically active faults, Department of Antioquia, Colombia

Potencial de fallas geológicas activas, Departamento de Antioquia, Colombia

Luis Hernán Sánchez-Arredondo a & Orlando Giraldo-Bolivar b

a Departamento de Materiales y Minerales de la Facultad de Minas, Universidad Nacional de Colombia, Colombia. lhsanche@unal.edu.co
b Departamento de Ingeniería Civil de la Facultad de Minas, Universidad Nacional de Colombia, Colombia. ogiraldo@unal.edu.co

Received: November 7th, 2012. Received in revised form: March 13th, 2013. Accepted: April 11th, 2014.

Abstract
A geostatistics module (mg95) was determined based on geostatistical studies of global estimations of seismicity data reported by the National Seismological Network of Colombia [Red Sismológica Nacional de Colombia (RSNC)]. It enabled the level of activity of the cortical faulting in the Department of Antioquia (DA), Colombia to be categorized. The mg95 relates the values estimated with polygonal kriging, and their corresponding error, for each one of the municipalities of Antioquia through a Student parameter at 95% confidence that depends on the number of microseisms registered locally. The following categorization scale is proposed to determine the level of active faulting in each municipality: proved active fault, mg95 ∈ [0, 0.2]; probable active fault, mg95 ∈ (0.2, 0.3]; and possible active fault, mg95 > 0.3.
Keywords: seismology, geostatistics module, active faults, Colombia.

Resumen
Con base en estudios geoestadísticos de estimación global de los datos de sismicidad reportados por la Red Sismológica Nacional de Colombia (RSNC), se determinó un módulo geoestadístico (mg95) que permitió categorizar el nivel de actividad del fallamiento cortical en el Departamento de Antioquia (DA), Colombia. El mg95 relaciona los valores de estimación con krigeage poligonal y su error correspondiente para cada uno de los municipios antioqueños con un parámetro de Student al 95% de confianza, que depende del número de microsismos registrados localmente. Para determinar los niveles de fallamiento activo en cada municipio, se propone la siguiente escala de categorización: fallamiento activo probado mg95 ∈ [0, 0.2], fallamiento activo probable mg95 ∈ (0.2, 0.3] y fallamiento activo posible mg95 > 0.3.
Palabras clave: sismología, módulo geoestadístico, fallas activas, Colombia.

1. Introduction

The Department of Antioquia is located in the northwestern corner of South America. It is one of the world's least understood regions, because its geology is the result of the interaction of multiple geological processes and of the tectonic plates and microplates that meet there. The seisms generated in this region show drastic changes in their focal mechanisms [1]. In the last 130 years, cortical active faults have generated earthquakes such as the one in Turbo in 1882 [2] and the one in Murindó in 1992 [3].

2. Database

The seismicity analysis reported by the RSNC from June 1, 1993 to June 30, 2009 indicates that the Department of Antioquia has registered 2429 epicentres, of which 78%

(1897) corresponds to superficial activity (77% of the data is located at a depth of 3 km), with a local magnitude (ML) of 2.7, and 25% of the data lies in the interval 3 ≤ ML ≤ 5.3. The quantile-quantile chart (Q-Q plot) of Fig. 1 indicates the strong tendency of the superficial microseismic activity in the DA to behave statistically as a log-normal distribution of the local magnitude data.

Figure 1. Quantile-Quantile graph indicating that the statistical behaviour of the ML data of the Department of Antioquia is log-normal.

3. Structural Geostatistics

A semivariogram whose graph fits an exponential-type model (Fig. 2), with an influence range of 5 kilometres for the seismic data and a real variance (plateau) of 0.31, was developed as the main geostatistical tool to evaluate the spatial behaviour of the superficial seismic activity in the Department of Antioquia. The semivariogram model for the ML variable was validated regionally with 1725 seisms, a standardized error average of 0.005, and a standardized error variance of 0.88. The model rejected 0.4% of the data inside a 100 km radius. The equation representing the model is the following:

γ(h) = 0.31 · (1 − e^(−h/5))



Sánchez-Arredondo & Giraldo-Bolivar / DYNA 81 (185), pp. 24-27. June, 2014.

Figure 2. Experimental semivariogram (dotted line) and theoretical model (continuous line) of the local magnitude ML. The numbers above the points indicate the number of pairs considered.

The global estimate was based on the geostatistical technique known as "polygonal kriging". This procedure was designed to provide an estimated value of seismicity inside the areas delimiting the polygonal coverage of each one of the municipalities of the DA. Each municipality receives a unique global estimate value with its corresponding error. The number of seisms considered for the estimate inside each municipality is equivalent to the local epicentres, plus those located outside the municipal polygon but inside a 5 km outline, which corresponds to the influence range deduced from the semivariogram (Fig. 2).

4. Geostatistics Module and Categorization of Geologically Active Faults

The level of activity of geological faults, or of segments of geological faults belonging to a system, was defined with a confidence factor of 95% and labelled the "geostatistics module (mg95)", used to characterize the level of activity of geological faults:

mg95 = (Estimation Error × Student Factor (t)) / Estimated Value

where t depends on the number of seisms considered. The case of the municipality of Murindó (Fig. 3) will be used as an example:

Global estimation error = 0.11
Global estimated seismicity (ML) = 2.91
Number of seisms considered = 358
Student factor = 1.96 (see attachment)

mg95 = (0.11 × 1.96) / 2.91 = 0.07

If the number of epicentres increases, the estimation error and the Student factor decrease and, therefore, the level of threat by active faults increases. Thus, the mg95 is used to categorize the level of active faulting in the DA, based on the fact that the presence of cortical seismic activity implies the presence of active geological faults. The classification levels for geological fault activity are proposed considering the mg95 values obtained for each one of the municipalities of the DA (Table 1). Table 2 summarizes the results obtained for some municipalities in the DA.
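As a cross-check of the worked example above, the following Python sketch (our own illustration; the function names are not from the paper) computes mg95 and applies the categorization thresholds of Table 1, assuming the two-sided 95% Student factor with n − 1 degrees of freedom. For Murindó's 358 seisms the factor is approximately 1.96, reproducing mg95 ≈ 0.07 and a proved level of activity.

    from scipy import stats

    def mg95(estimation_error, estimated_value, n_seisms):
        # Two-sided 95% Student factor with n - 1 degrees of freedom;
        # for large n it approaches 1.96, the value used in the example.
        t = stats.t.ppf(0.975, df=n_seisms - 1)
        return estimation_error * t / estimated_value

    def categorize(mg):
        # Thresholds proposed in Table 1.
        if mg <= 0.2:
            return "Proved"
        if mg <= 0.3:
            return "Probable"
        return "Possible"

    m = mg95(0.11, 2.91, 358)            # Murindo example
    print(round(m, 2), categorize(m))    # 0.07 Proved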



Sánchez-Arredondo & Giraldo-Bolivar / DYNA 81 (185), pp. 24-27. June, 2014.

Table 1. Categorization of the level of activity of geological faults.

mg95      | Active Fault
[0-0.2]   | Proved
(0.2-0.3] | Probable
>0.3      | Possible

Table 2. Active fault potential in some municipalities of the DA.

Municipality | ML Estimated | Estimation Error | mg95 | Active Fault
Medellin     | 2.55         | 0.41             | 2.04 | Possible
Abriaquí     | 2.46         | 0.19             | 0.16 | Proved
Andes        | 2.61         | 0.20             | 0.16 | Proved
Barbosa      | 2.30         | 0.58             | 2.63 | Possible
Frontino     | 2.61         | 0.12             | 0.09 | Proved
Ituango      | 2.75         | 0.15             | 0.11 | Proved
Peñol        | 2.35         | 0.38             | 2.05 | Possible
Olaya        | 2.40         | 0.28             | 0.30 | Probable
Mutatá       | 2.71         | 0.18             | 0.13 | Proved

Figure 3. Intersection of epicentres with local magnitude ML, from the RSNC, and coverage of the DA. For this example, the global value for the Municipality of Murindó was estimated as ML = 2.91 with an error of 0.11; the number of seisms = 358 and the Student factor (t) = 1.96, hence mg95 = (0.11/2.91) × 1.96 = 0.07, which indicates that the geological faults that control Murindó have a proved level of activity.

5. Discussion

Regionally, the territory of the DA is affected by the triple junction formed by the convergence of the Caribe, Nazca and Suramericana tectonic plates. This contact zone is represented by two microplates called the bloque Andino and the bloque de Panamá Baudó [4]. As a consequence of this geotectonic activity, three geological fault systems control the seismological activity in the DA: the Cauca Romeral System, mapped since the start of the 20th century [5], the Palestina Fault System [6], and the Mutatá-Murindó Fault System (Suture of Dabeiba).

For some authors [7], the seismicity of Colombia's Northwest is diffuse and complex, and is caused by faulting generated by east-west compression. These hypotheses suppose that the seisms generated in the region are not caused by subduction, but are the result of the convergence of tectonic plates and microplates. The geostatistical results obtained in this research indicate that 42% of the municipalities of Antioquia are controlled by geological faults with a proved level of activity. Of the municipalities at this level, 24% are related geotectonically to the Mutatá-Murindó Fault System (Suture of Dabeiba), 37% are related to the Cauca Romeral Fault System, and the remaining 39% to the Palestina Fault System.

The highest potential for active faults in the DA is registered in the municipalities of


Sánchez-Arredondo & Giraldo-Bolivar / DYNA 81 (185), pp. 24-27. June, 2014.

Ituango (Cauca Romeral Fault System, Suture of Dabeiba), Dabeiba, Urrao, Uramita, Cañas Gordas and Murindó (Suture of Dabeiba), and Remedios (Palestina Fault System). The cortical faulting presenting the largest threat for the DA is related to the Mutatá-Murindó Fault System (Suture of Dabeiba). Considering the external seismic threat presented by this suture for the Valle de Aburrá (3,317,166 inhabitants) [8], home of the city of Medellín, the segments that pass through Urrao are the neotectonic features presenting the highest danger. They have the potential to generate superficial seisms with a PGA (Peak Ground Acceleration) of 0.96g (units of gravity), which may arrive at Medellín with an acceleration of 0.25g. Historically, one of the most representative seisms in the zone related to the Suture of Dabeiba is the one that occurred on September 7, 1882, with an intensity of X on the Mercalli Scale and an Ms magnitude estimated between 6.5 and 7.2. This seism was felt intensely in the Isthmus of Panama and in a large part of the Departments of Antioquia and Chocó [2]. Similar to the seism of 1882, most of the Urabá Region was affected by the seisms of October 17 and 18 of 1992, which brought about countless damage to the environment and the urban centres of Colombia's Northwest [3]. The event of October 18 registered, on an accelerograph located at ISA in the city of Medellín, a maximum horizontal acceleration of 0.03g; this is a fairly low value considering the damage registered [3]. The faults related to the Suture of Dabeiba, as well as those related to the Cauca Romeral system, affect the territory of the Municipality of Ituango. The seismic activity registered there is moderate, with 110 superficial seisms, but it houses a high geostatistical potential for the generation of microseismic activity, with a high potential danger for the Valle de Aburrá (PGA of 0.18), which is to say 0.176g of horizontal acceleration. Additionally, the close relationship shown by the results obtained in this research with the neotectonic studies made so far on the fault systems of the Cauca Romeral System in the northwest of the DA [9-11] is remarkable. In conclusion, geostatistics can be used as a valuable tool for supporting seismology. It complements neotectonic and geochronology (carbon-14) studies, and can be used to make decisions when instrumenting and monitoring the places with the greatest seismic threats.

Acknowledgements

This article is a contribution to project 9536, "Seismic Threat of the Department of Antioquia Based on Microseismic Activity of the RSNC", Phase I, financed by the Office of Research of Universidad Nacional de Colombia, Sede Medellín (DIME).

References

[1] Cardona, C., Salcedo, E. and Mora, H. Caracterización sismotectónica y geodinámica de la fuente sismogénica de Murindó, Colombia. (www.geoslac.org/memorias2/memorias/.../sismot_geodinamica_col.pdf, May 27, 2010).
[2] París, G., Machette, M., Darat, R. and Haller, K., 2000. Map and Database of Quaternary Faults and Folds in Colombia and its Offshore Regions: USGS (International Lithosphere Program), 61 P.
[3] Martínez, J., Parra, E., París, G., Forero, C., Bustamante, M., Cardona, O. and Jaramillo, J., 1994. Los sismos del Atrato Medio 17 y 18 de octubre de 1992: Revista Ingeominas (no. 2), Bogotá, pp. 35-76.
[4] Toussaint, J.F. and Restrepo, J.J., 1987. Límites de placas y acortamientos recientes entre los paralelos 5ºN y 8ºN, Andes Colombianos: Revista Geológica de Chile (no. 31), Santiago de Chile, pp. 95-100.
[5] Grosse, E., 1926. El Terciario Carbonífero de Antioquia: Dietrich Reimer, Berlín, 361 P.
[6] Feininger, T., Barrero, D. and Castro, N., 1972. Geología de la parte de los departamentos de Antioquia y Caldas (Sub-zona IIA, oriente de Antioquia): Boletín Geológico Ingeominas, V.20 (2), Bogotá, 173 P.
[7] Pennington, W. et al., 1988. Seismicity of the Caribbean-Nazca Boundary: Constraints on Microplate Tectonics of the Panamá Region. Journal of Geophysical Research, Vol. 93, no. B3, pp. 2053-2075.
[8] Departamento Administrativo Nacional de Estadística: Censo 2005. (http://es.wikipedia.org/wiki/DANE, June 17, 2010).
[9] Arias, L.A., 1981. Actividad cuaternaria de la Falla Espíritu Santo: Revista CIAF, Vol. 6, pp. 1-16.
[10] Woodward-Clyde Consultants, 1980. Phase I, Preliminary Seismic Hazard Study, Ituango Project, Colombia: Unpublished report for Integral Ltda. and ISA, Medellín, 152 P.
[11] Cline, K., Page, E., Gillan, M., Cluff, L., Arias, L., Belarcazar, L. and López, J., 1980. Quaternary activity of the Romeral and Cauca Faults, Northwest Colombia: In Seminario sobre el Cuaternario de Colombia, no. 1, Resúmenes, Vol. 1, Bogotá, pp. 37-38.

L.H. Sánchez-Arredondo, received his Bs. Eng in Geological Engineering in 1984 (Universidad Nacional de Colombia, Medellín, Colombia), his Sp degree in Analysis and Management of Geological Risk in 2001 (University of Geneva, Geneva, Switzerland), and his MS degree in Coal Science and Technology in 1991 (Universidad Nacional de Colombia, Medellín, Colombia). He is a Full Professor in the area of mining geology and geostatistics in the Department of Materials and Minerals, Facultad de Minas, Universidad Nacional de Colombia. His research interests include: exploration geochemistry, geostatistical modeling and simulation, and risk in geology and mining.
ORCID:

O. Giraldo-Bolivar, received his Bs. Eng in Civil Engineering in 1981 and his Sp degree in Structures in 2000, both from the Universidad Nacional de Colombia, Medellín, Colombia. He is a Full Professor in the area of materials and structures in the Department of Civil Engineering at the Universidad Nacional de Colombia, Medellín. His research interests include: architectural precast concrete panels, evaluation of chemical additives for concrete, and concrete quality control of civil works.
ORCID:



DYNA http://dyna.medellin.unal.edu.co/

Static and dynamic task mapping onto network on chip multiprocessors

Mapeo estático y dinámico de tareas en sistemas multiprocesador, basados en redes en circuito integrado

Freddy Bolaños-Martínez a, José Edison Aedo b & Fredy Rivera-Vélez b

a Facultad de Minas, Universidad Nacional de Colombia, Colombia. fbolanosm@unal.edu.co
b Facultad de Ingeniería, Universidad de Antioquia, Colombia. {farivera, joseaedo}@udea.edu.co

Received: November 16th, 2012. Received in revised form: April 2nd, 2014. Accepted: April 10th, 2014.

Abstract
Due to its scalability and flexibility, Network-on-Chip (NoC) is a growing and promising communication paradigm for Multiprocessor System-on-Chip (MPSoC) design. As the manufacturing process scales down to the deep submicron domain and the complexity of the system increases, fault-tolerant design strategies are gaining increased relevance. This paper exhibits the use of a Population-Based Incremental Learning (PBIL) algorithm aimed at finding the best mapping solutions at design time, as well as finding the optimal remapping solution in the presence of single-node failures on the NoC. The optimization objectives in both cases are the application completion time and the network's peak bandwidth. A deterministic XY routing algorithm was used in order to simulate the traffic conditions in the network, which has a 2D mesh topology. The results obtained are promising: the proposed algorithm exhibits a better performance than other reported approaches as the problem size increases.
Keywords: Task mapping, Multiprocessor System-on-Chip (MPSoC), Networks on Chip (NoC), Population-based Incremental Learning (PBIL).

Resumen
Las redes en circuito integrado (NoC) representan un importante paradigma de uso creciente para los sistemas multiprocesador en circuito integrado (MPSoC), debido a su flexibilidad y escalabilidad. Las estrategias de tolerancia a fallos han venido adquiriendo importancia, a medida que los procesos de manufactura incursionan en dimensiones por debajo del micrómetro y la complejidad de los diseños aumenta. Este artículo describe un algoritmo de aprendizaje incremental basado en población (PBIL), orientado a optimizar el proceso de mapeo en tiempo de diseño, así como a encontrar soluciones de mapeo óptimas en tiempo de ejecución, para hacer frente a fallos de único nodo en la red. En ambos casos, los objetivos de optimización corresponden al tiempo de ejecución de las aplicaciones y al ancho de banda pico que aparece en la red. Las simulaciones se basaron en un algoritmo de ruteo XY determinístico, operando sobre una topología de malla 2D para la NoC. Los resultados obtenidos son prometedores. El algoritmo propuesto exhibe un desempeño superior a otras técnicas reportadas cuando el tamaño del problema aumenta.
Palabras clave: Mapeo de tareas, Sistemas integrados multiprocesador (MPSoC), Redes en circuito integrado (NoC), Aprendizaje incremental basado en población (PBIL).

1. Introduction

MPSoC systems are a feasible alternative for implementing a growing, variable, and increasingly complex set of applications. NoC-based MPSoCs have appeared as a way to easily scale the size of the system and to deal with application performance requirements, application variability, and constraints such as real time [1]. In such systems, it is necessary to establish an optimal way to map the executable tasks of an application onto the resources available for its implementation. Static mapping is performed at design time, before executing the application.

In [2], an Integer Linear Programming (ILP) approach is proposed for static mapping, aimed at optimizing energy in a NoC-based MPSoC. The algorithm considers both the processing and the communication energy as optimization objectives. A simulated annealing heuristic is added to the optimization process, which suffers from large execution times. Similarly, reference [3] reports a custom algorithm for the static mapping of tasks on a NoC platform. The algorithm optimizes the computation and communication energy, with a slight degradation of the system's performance. The work reported in [4] proposes a technique for mapping tasks onto a set of heterogeneous Processing Elements (PEs) operating at multiple voltage levels in a




NoC platform. Such work is based on a Mixed Integer Linear Programming (MILP) formulation for the static mapping problem, and it aims to optimize the overall energy consumption of the system under performance constraints. The only objective considered for optimization is energy, and there are some complex problems (for instance, those related to low-voltage setups) for which a feasible solution may not be found. On the other hand, dynamic mapping is often referred to as remapping and is used in two defined contexts. First, the workload of the system may change for several reasons [5], so a remapping procedure may adjust the system to the new workload and traffic conditions according to the figures of merit to be optimized. Second, as a consequence of the complexity of current systems, there is a growing set of malfunctions and failures that cannot be detected or avoided by using current design methodologies [6]. Fault tolerance may be achieved by using dynamic mapping, which redistributes the current workload of the system and avoids the use of faulty resources.

Some of the reported dynamic mapping approaches are restricted to homogeneous networks [5, 7, 8], meaning that all the processing elements are identical. Some other reported works are limited to the mapping of single tasks onto each processor of the system [9]. In [10], a multitask dynamic mapping approach is proposed for heterogeneous networks, i.e., networks in which the processing elements are of different kinds. The mapping algorithm uses heuristics aimed at reducing the traffic overhead by assessing the adjacent available resources and measuring the proximity of the communicating tasks. The work reported in [11] presents a set of simple heuristics for dynamic mapping in NoC-based MPSoCs. Due to their simplicity, these algorithms may run very fast and deal with changing conditions in the network's workload. However, link occupation is the only objective considered in the optimization process. Besides, the mapping algorithms described are not designed for achieving optimal solutions, as derived from the reported results. A multitask dynamic mapping approach is proposed in [12]. The work is aimed at providing fault tolerance in a heterogeneous network. The optimization algorithm is based on ILP and performs a multiobjective space search in order to minimize both the execution time and the communication cost. The main issue with ILP is that the optimization becomes highly complex as the problem's size increases. The work reported in [13] proposes an algorithm for mapping and scheduling in MPSoC systems. The algorithm is able to map executable applications both to bus-based and to NoC-based architectures. The exploration of the solution space is performed by means of a simulated annealing algorithm, which starts from a given solution (usually a random one) and improves it gradually until reaching an optimal. Working with a single solution, instead of a population of solutions, may carry problems related to local-optimal solutions, as stated in [14].

Table 1. Survey of the revised mapping solutions.

Reference | Target Architecture | Mapping Nature | Optimization Algorithm | Common domain semantic | Optimization Objective
[15]      | Heterogeneous | Static  | Successive Relaxation or Genetic Algorithms | Metric Space | Network traffic
[16]      | Homogeneous   | Hybrid  | Custom | Dataflow graph | Throughput
[17, 18]  | Homogeneous   | Hybrid  | Simulated Annealing and custom | Task Graph | Energy
[19]      | Heterogeneous | Hybrid  | ILP | Task Graph | Energy and Exec. Time
[20]      | Heterogeneous | Hybrid  | Distributed Stochastic | Task Graph | Communication energy
[21]      | Homogeneous   | Static  | ILP | Task Graph | Temperature
[22]      | Homogeneous   | Dynamic | Custom | Task Graph | Hop Count
[10, 23]  | Heterogeneous | Dynamic | Custom | Task Graph | Multiobjective
[12]      | Heterogeneous | Dynamic | ILP | Task Graph | Multiobjective
[24]      | Heterogeneous | Static  | Artificial Bee Colony | Task Graph | Power
[5, 25]   | Homogeneous   | Dynamic | Custom | Task Graph | Energy
[26]      | Heterogeneous | Static  | Fuzzy and Custom | Task Graph | Energy and message latency
[2]       | Heterogeneous | Static  | ILP and simulated annealing | Task Graph | Energy
[27]      | Heterogeneous | Static  | Ant Colony | Task and core graphs | Energy and temperature
[28]      | Homogeneous   | Static  | Simulated Annealing | Task Graph | Energy
[3]       | Heterogeneous | Static  | Custom | Task Graph | Energy
[29]      | Homogeneous   | Hybrid  | Multiobjective Evolutionary | Task Graph | Latency and Power
[30]      | Homogeneous   | Static  | Quadratic Programming | Task Graph | Energy



Table 1 summarizes most of the relevant related works concerning the mapping of tasks onto NoC-based systems. The mapping may be either static, dynamic, or hybrid, the latter meaning that part of the mapping labor is performed at design time and the remaining work takes place at runtime. The common domain semantics refers to an intermediate representation which combines features of both the high-level specification of the application and figures of merit related to the implementation platform [31]. As depicted in Table 1, task graphs are the most common intermediate representation. In particular, annotated task graphs (ATGs) allow both the task structure (represented as dependences in the graph) and the figures of merit to be optimized (supplied in the form of annotations) to be represented. Some formal optimization methods, such as ILP, appear often in Table 1. Heuristics are also very common approaches for performing the optimization of the mapping problem. Such optimization may be devoted to a single objective, such as throughput, energy, traffic, or temperature. Some of the mapping strategies are devoted to several objectives at once, i.e., they are multiobjective.

This paper describes an approach for static and dynamic mapping based on a PBIL optimization algorithm. The dynamic approach is aimed at providing fault tolerance in a single-node failure scenario. A heterogeneous NoC, based on a 2D mesh interconnection network, is used as a case study. Two objectives were taken into account in the optimization process: the completion time of the application, and the peak bandwidth of the interconnection resources within the network. Bandwidth is related to the implementation costs of the system, since interconnection resources must be appraised at design time and placed into the system chip. For the sake of assessing the second objective, an XY routing algorithm was used in the simulations.

The remainder of this paper is organized as follows. Section 2 describes the static and dynamic mapping problems, as well as the experimental setup used to test the proposed approach. Section 3 describes the PBIL optimization algorithm and the customizations performed on it in order to deal with the dynamic and static mapping problems. Section 4 shows the simulation results. Concluding remarks and future work appear in Section 5.

2. Static And Dynamic Mapping

As mentioned before, static mapping is performed at design time and is aimed at choosing the optimal combination of available resources in a NoC in order to implement an application composed of a set of executable tasks. An annotated task graph (ATG) is often used as a middle-level representation of the application to be implemented. Fig. 1 shows a 12-task ATG for an MPEG-2 decoder [32]. In this graph, vertices are associated with executable tasks (labeled from t1 to t12), and edges represent data dependences among the tasks of the system (labeled from e1 to e14). Annotations provide information about figures of merit such as performance, power, and bandwidth. Such annotations allow several implementation choices to be explored in the optimization process; they were omitted in Fig. 1 for space reasons.

Figure 1. A 12-task ATG for an MPEG-2 decoder.

A 3 × 3 2D mesh was used as the target architecture. Fig. 2 shows such a mesh, composed of RISC and DSP processors. In this figure, there are nine nodes or tile spaces (labeled from n1 to n9) representing the processing elements, and twelve communication links between the different nodes (labeled from L1 to L12). A deterministic XY routing algorithm was used to simulate the traffic conditions in the network. The PBIL optimization was performed for two conflicting objectives: first, the completion time of the application, which is equal to the maximum time stamp associated with the execution of tasks in the whole system; second, the peak bandwidth of the target NoC. This figure of merit may be calculated as the maximum value of the bandwidth requirements of the links in the network, once the mapping has been performed.

Figure 2. Target Architecture.
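Because both objectives are evaluated on the traffic produced by deterministic XY routing, the rule is worth stating concretely. The sketch below is a textbook rendering of XY routing in Python (our own illustration, not the authors' simulator): a packet first travels along the X dimension until it reaches the destination column, and only then along Y, so the set of links traversed by each communication is fully determined by the mapping.

    def xy_route(src, dst):
        # Return the sequence of (x, y) mesh nodes visited from src to dst.
        x, y = src
        path = [(x, y)]
        while x != dst[0]:               # move along X first
            x += 1 if dst[0] > x else -1
            path.append((x, y))
        while y != dst[1]:               # then along Y
            y += 1 if dst[1] > y else -1
            path.append((x, y))
        return path

    # Example on the 3 x 3 mesh: from n1 at (0, 0) to n9 at (2, 2).
    print(xy_route((0, 0), (2, 2)))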




Given the input task graph and the target architecture, the static mapping problem may be defined as finding the best task distribution for the sake of optimizing both the completion time and the peak bandwidth of the system implementation. On the other hand, dynamic mapping must deal with a subset of the system tasks and resources. Since dynamic mapping must deal only with exceptional situations, such as node failures or changing traffic conditions, the initial mapping solution (which was performed at design time) is still valid for most of the executable tasks of the system. Only a subset of the system tasks must be mapped at runtime to other executable resources. Let us suppose that one of the nodes in Fig. 2 suffers a failure whilst the system is executing a given application. In order to provide some degree of fault tolerance, the tasks that were running on the faulty node may be redistributed to the remaining ones. In the proposed approach, dynamic mapping is performed to accomplish this aim. The main difference with respect to the static approach is that dynamic mapping must be performed at runtime. Besides, the number of tasks and resources that must be taken into account in the dynamic approach will be lower than that for static scenarios.

3. PBIL–Based Task Mapping

PBIL algorithms are stochastic search methods which obtain directional information from the best solutions previously found in the solution space. Such algorithms have been used in design automation for embedded systems with promising results [33, 34]. PBIL techniques are a special case of a larger group of optimization approaches based on populations. The main feature of PBIL-based algorithms is an array of probabilities which converges progressively to an optimal solution. The values in such an array must be updated iteratively. In the final stages of the optimization process, some entries of the PBIL array have greater probabilities, pointing to an optimal solution of the problem at hand. Let us suppose a mapping problem (either static or dynamic) with a set of N tasks and M available resources. The PBIL probability matrix for such a problem may take the form of the array shown in Fig. 3. In this figure, P(i,j) represents the probability of task j being implemented on resource i.

Figure 3. PBIL Probability Matrix.

Fig. 4 shows a basic version of the adaptive PBIL algorithm, which is intended to update the probabilities of the PBIL array iteratively until an optimal solution becomes more probable than the remaining ones. This algorithm starts with the PBIL probability array, namely P, with dimensions M × N, as shown in Fig. 3. All the probabilities in the array are initialized to 1/M, which is the value that ensures maximum population diversity, in such a way that all potential solutions to the mapping problem are being considered at the beginning of the optimization process. The routine Create_Population generates a new population (namely Pop) starting from the probabilities in the PBIL array. Rows (resources) in the array with the highest values of probability are meant to appear more frequently in the population's individuals. The Evaluate_Population routine assesses the population's individuals just created. Fitness values allow the best solution for the mapping problem to be chosen; the Choose_Best routine is used to accomplish this goal. The learning rate, or LR, is a way to control the convergence speed of the PBIL algorithm. Higher values of LR will lead to fast convergence, although the quality of the solutions might not be satisfactory. If LR is reduced, quality will improve at the expense of a longer convergence time. In our adaptive approach, the LR parameter must be adjusted in order to allow both exploration and exploitation of the PBIL search space. The entropy (E) of the probability array is calculated and used as an estimation of the population's diversity. In Fig. 4, the routine Learning_Rule represents the way in which the LR parameter is tuned as a function of the P array's entropy. Once the LR parameter is calculated, the P array must be updated in order to adjust the probabilities according to the best solutions found in the population. The Update_Array function performs this step.

Figure 4. Basic Adaptive PBIL approach.
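To make the initialization and Create_Population steps concrete, the following Python sketch may help (the paper's implementation was written in Matlab; the names and structure here are our own assumption, mirroring Fig. 4).

    import numpy as np

    def init_probability_matrix(M, N):
        # Uniform initialization: every resource is equally likely for every
        # task, which ensures maximum initial population diversity.
        return np.full((M, N), 1.0 / M)

    def create_population(P, pop_size, rng=None):
        # Each individual assigns one resource (row index) to each task
        # (column), sampled from the column's probability distribution.
        rng = rng or np.random.default_rng()
        M, N = P.shape
        return np.array([[rng.choice(M, p=P[:, j]) for j in range(N)]
                         for _ in range(pop_size)])

    P = init_probability_matrix(M=9, N=12)    # 9 nodes, 12 tasks (Figs. 1-2)
    pop = create_population(P, pop_size=50)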




The value of the E parameter in Fig. 4 is calculated as the systemic entropy of the PBIL array, just as is done in information theory. Equation (1) depicts the calculation performed inside the Entropy routine.

E = -(1/N) · Σ_{j=1..N} Σ_{i=1..M} P(i,j) · log_M P(i,j)    (1)

According to Equation (1), entropy values range from 0 to 1. E = 1 means that there is maximum population diversity (this only happens when all values of the PBIL matrix are equal to 1/M). E = 0 means that the PBIL matrix points to a unique and completely defined solution. Entropy decreases as the probability array tends to concentrate on single entries of each column of the P array (i.e., when an optimal solution becomes more probable). For the sake of speeding up the convergence of the PBIL algorithm, the termination condition in Fig. 4 is a comparison between the entropy value and a given tolerance. By using this strategy, it is not necessary to wait until the entropy value becomes equal to zero, which may be very restrictive and time consuming. The way in which the LR parameter changes as a function of entropy is often referred to as the learning rule. Equation (2) describes a sigmoid learning rule, which was used inside the Learning_Rule routine. In the equation, LR_MIN and LR_MAX are the minimum and maximum values, respectively, of the learning rate parameter (LR), whilst Δ is an empirical value which usually ranges from 4 to 6. The idea is to keep the LR parameter low at the beginning of the algorithm, when there is high population diversity and the values of E are close to one. When the entropy value decreases, i.e., when the population approaches a given optimal, the LR parameter is increased to speed up the convergence process.

(2)

For each task of the mapping problem or, equivalently, for each column of the PBIL array, the function Update_Array must increase the probability of the choice which resulted in the best solution. Since each single column of the PBIL matrix represents a conjoint probability event, the probabilities along a column must sum to one. Therefore, when a given probability in the array is increased, the remaining ones in that column must be decreased accordingly. Equation (3) shows the probability updating formulae, which are based on the Hebbian learning rule [35]. In Equation (3), it is supposed that, for a given attribute j, the best solution obtained is the choice k. The suffixes Old and New denote the old and new versions of each probability, respectively.

P_New(k,j) = P_Old(k,j) · (1 − LR) + LR
P_New(i,j) = P_Old(i,j) · (1 − LR),  for i ≠ k    (3)
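Continuing the sketch given above (with the same caveat: this is our reading of the reconstructed Equations (1) and (3), not the authors' code), the Entropy and Update_Array routines can be written as follows. Note that the update keeps every column of P summing to one.

    import numpy as np

    def entropy(P):
        # Normalized systemic entropy: 1 for the uniform matrix (all entries
        # equal to 1/M), 0 when every column points to a single resource.
        M, N = P.shape
        logs = np.log(np.clip(P, 1e-12, 1.0)) / np.log(M)
        return -(P * logs).sum() / N

    def update_array(P, best, LR):
        # Hebbian-style PBIL update: reinforce the winning resource best[j]
        # of each task j; the (1 - LR) scaling keeps column sums equal to 1.
        for j, k in enumerate(best):
            P[:, j] *= (1.0 - LR)
            P[k, j] += LR
        return P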

The PBIL approach described so far may be easily adapted to perform dynamic mapping. In the event of a single-node failure, the number of columns of the probability array in Fig. 3 (N) would be equal to the number of tasks that the faulty node was executing. The number of rows may be kept the same. Then, all the probabilities associated with the faulty node (a single row of the array) must be set to zero. In such a case, the initialization stage of the array in Fig. 4 must set all the probabilities to 1/(M − 1). The situation is not so different in the event of failures involving several nodes at once. The rows associated with the faulty resources must be set to zero, and the initialization stage, at the beginning of the optimization process, must take into account only the resources available for task implementation. In an improved version of the PBIL algorithm, the probability array must take the exact dimensions required by the specific dynamic mapping problem: N must be equal to the number of tasks to be remapped, and M must be equal to the number of available resources. This is the approach adopted for the remainder of this paper.

4. Experimental Results

The PBIL optimization algorithms, both for static and dynamic mapping, were written and tested in Matlab (R2011a) for an MPEG-2 decoder like the one represented in Fig. 1, with 12, 24 and 36 tasks. The NoC target architecture was that shown in Fig. 2. The traffic conditions in the network were simulated using a deterministic XY algorithm. The profiling information (annotations of the task graph) regarding execution time and bandwidth was extracted from [36]. The routine Evaluate_Population in Fig. 4, as described in the previous section, assesses each solution in the population and gives it a fitness value. A weight vector was used to deal with the multiobjective nature of the optimization process. Each entry of the vector is associated with a given objective of the problem (such as completion time, energy consumption or bandwidth). The relative value of each entry with respect to the remaining ones represents the probability of its associated objective being optimized at each iteration of the PBIL algorithm. By changing the relative values of the weight vector, it is possible to construct a Pareto curve, as shown in Fig. 5. Pareto curves show several trade-offs among the objectives to be optimized, because they define the set of solutions in which a given objective cannot be improved without degrading some other objective. In order to profile our PBIL approach, static mapping may be considered as the worst-case scenario (i.e., the one which takes more convergence time). In static mapping, all tasks must be mapped, and all the resources are available for potential implementations. Alternatively, dynamic mapping must deal with a subset of the system's tasks and a subset of the available resources. Convergence times for several

instances of the PBIL static mapping optimization are shown in Fig. 6. In this figure, the continuous line represents a quadratic fit performed on the data. Data from an ILP optimization [12], performed on the same static mapping problem, were included in the figure for comparison purposes. As shown in Fig. 6, the ILP approach exhibits a better performance than PBIL for small problems. However, if the number of tasks increases, optimization using the ILP algorithm becomes prohibitive. As reported in [12], the mean ILP convergence time for a 36-task optimization is around 1700 seconds. The PBIL convergence time is around one order of magnitude lower than this value.

Figure 5. Pareto curves for PBIL optimization.

Figure 6. Convergence times for several mapping instances.

The PBIL algorithm for dynamic mapping starts from a previous mapping schema, obtained from the static optimization. The reference to the faulty node is also necessary for identifying the system tasks that must be remapped. By using such information, it is possible to define the dimensions of the PBIL probability array, and the optimization algorithm may take the form depicted in Fig. 4. For dynamic mapping, only single-node failure scenarios were considered in the simulations. However, multiple-node failures may be easily handled with the proposed methodology: if a failure event affects two nodes simultaneously, two rows of the matrix in Fig. 3 must be set to zero. The remaining values of such a matrix should be set to 1/(M − 2). The PBIL algorithm may then perform the optimization process as described before. In a more general fashion, if a failure affects F nodes, the matrix in Fig. 3 must be initialized in such a way that the F rows corresponding to the faulty resources are set to zero. The remaining values of the matrix must be set to 1/(M − F), for the sake of guaranteeing maximum population diversity. Fig. 7 depicts the evolution of the two optimization objectives (completion time and bandwidth) as a function of the number of algorithm iterations. In this case, the size of the problem or, equivalently, the number of tasks to be mapped was equal to 36, and the weight vector was tuned to give the completion time objective a 60% probability of being optimized, whilst bandwidth had a probability of 40%.

Figure 7. Evolution of the optimization objectives.
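The failure-handling initialization described above can likewise be sketched in a few lines of Python (again our own hypothetical rendering): the rows of the F faulty nodes are zeroed, and the remaining rows are re-initialized to 1/(M − F) to guarantee maximum population diversity.

    import numpy as np

    def mask_faulty_nodes(M, N, faulty):
        # Faulty rows get probability zero; the M - F surviving rows are
        # re-initialized uniformly to 1/(M - F), so each column sums to 1.
        P = np.full((M, N), 1.0 / (M - len(faulty)))
        P[list(faulty), :] = 0.0
        return P

    # Single-node failure of n5 (row index 4) on the 3 x 3 mesh:
    P = mask_faulty_nodes(M=9, N=4, faulty={4})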

The PBIL algorithm for dynamic mapping starts from a previous mapping schema, obtained from the static optimization. The reference to the faulty node is also necessary, in order to identify the system tasks that must be remapped. By using such information, it is possible to define the dimensions of the PBIL probability array, and the optimization algorithm may take the form depicted in Fig. 4.

For dynamic mapping, only single-node failure scenarios were considered in the simulations. However, multiple-node failures may easily be handled with the proposed methodology: if a failure event affects two nodes simultaneously, two rows of the matrix in Fig. 3 must be set to zero, and the remaining values of that matrix should be set to 1/(M − 2). The PBIL algorithm may then perform the optimization process as described before. In a more general fashion, if a failure affects F nodes, the matrix in Fig. 3 must be initialized in such a way that the F rows corresponding to the faulty resources are set to zero. The remaining values of the matrix must be set to 1/(M − F), for the sake of guaranteeing maximum population diversity.
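A minimal sketch of this failure-aware initialization of the probability array (Fig. 3); the mesh size and faulty-node indices below are hypothetical:

```python
import numpy as np

def init_probability_array(M, n_tasks, faulty_nodes):
    """Initialize the PBIL array for dynamic remapping after failures.
    The F rows of the faulty nodes are zeroed; every remaining entry
    gets 1/(M - F), so all surviving nodes are equally likely, which
    guarantees maximum population diversity."""
    F = len(faulty_nodes)
    P = np.full((M, n_tasks), 1.0 / (M - F))
    P[list(faulty_nodes), :] = 0.0
    return P

# e.g. a 4x4 mesh (M = 16) with nodes 5 and 6 down:
P = init_probability_array(16, 12, faulty_nodes={5, 6})
assert np.allclose(P.sum(axis=0), 1.0)   # columns remain valid distributions
```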

Figure 7. Evolution of the optimization objectives.

Fig. 7 depicts the evolution of the two optimization objectives (completion time and bandwidth) as a function of the number of algorithm iterations. In this case, the size of the problem or, equivalently, the number of tasks to be mapped was 36, and the weight vector was tuned to give the completion-time objective a 60 % probability of being optimized, whilst the bandwidth objective had a probability of 40 %.

5. Conclusions

A multiobjective PBIL optimization approach has been described and tested for static and dynamic mapping of tasks to an MPSoC based on a NoC. The objectives considered in the optimization process were the completion time of the executable application and the peak bandwidth. For our simulations, a 2D mesh architecture and a deterministic routing scheme were adopted. The PBIL optimization algorithm exhibits better performance than some other reported approaches, such as ILP, as the problem size increases. This is a major advantage, since the size of MPSoC systems has been increasing, as well as the complexity of the applications involved.



Acknowledgements

The authors would like to thank ARTICA, COLCIENCIAS, the Communications Ministry of Colombia, the National University of Colombia and the University of Antioquia for their support in the development of this work.

References

[1] Marculescu, R., Ogras, U. Y., Peh, L. S., Jerger, N. E. and Hoskote, Y. Outstanding research problems in NoC design: system, microarchitecture, and circuit perspectives. Trans. Comp.-Aided Des. Integ. Cir. Sys., vol. 28, no. 1, pp. 3-21, Jan. 2009.
[2] Huang, J., Buckl, C., Raabe, A. and Knoll, A. Energy-aware task allocation for network-on-chip based heterogeneous multiprocessor systems. In Parallel, Distributed and Network-Based Processing (PDP), 2011 19th Euromicro International Conference on, pp. 447-454, Feb. 2011.
[3] Rajaei, R., Hessabi, S. and Vahdat, B. V. An energy-aware methodology for mapping and scheduling of concurrent applications in MPSOC architectures. In Electrical Engineering (ICEE), 2011 19th Iranian Conference on, pp. 1-6, May 2011.
[4] Ghosh, P., Sen, A. and Hall, A. Energy efficient application mapping to NoC processing elements operating at multiple voltage levels. In Networks-on-Chip, 2009. NoCS 2009. 3rd ACM/IEEE International Symposium on, pp. 80-85, May 2009.
[5] Mandelli, M., Ost, L., Carara, E., Guindani, G., Gouvea, T., Medeiros, G. and Moraes, F. Energy-aware dynamic task mapping for NoC-based MPSoCs. In Circuits and Systems (ISCAS), 2011 IEEE International Symposium on, pp. 1676-1679, May 2011.
[6] Marculescu, R. Networks-on-chip: The quest for on-chip fault-tolerant communication. In VLSI, 2003 Proceedings of the IEEE Computer Society Annual Symposium on, pp. 8-12, Feb. 2003.
[7] Schranzhofer, A., Chen, J. J., Santinelli, L. and Thiele, L. Dynamic and adaptive allocation of applications on MPSoC platforms. In Design Automation Conference (ASP-DAC), 2010 15th Asia and South Pacific, pp. 885-890, Jan. 2010.
[8] Wildermann, S., Ziermann, T. and Teich, J. Run time mapping of adaptive applications onto homogeneous NoC-based reconfigurable architectures. In Field-Programmable Technology 2009. FPT 2009. International Conference on, pp. 514-517, Dec. 2009.
[9] Carvalho, E. de S., Calazans, N. and Moraes, F. Dynamic task mapping for MPSoCs. Design Test of Computers, IEEE, vol. 27, no. 5, pp. 26-35, Oct. 2010.
[10] Singh, A. K., Srikanthan, T., Kumar, A. and Jigang, W. Communication aware heuristics for run-time task mapping on NoC-based MPSoC platforms. J. Syst. Archit., vol. 56, no. 7, pp. 242-255, Jul. 2010.
[11] Carvalho, E., Calazans, N. and Moraes, F. Heuristics for dynamic task mapping in NoC-based heterogeneous MPSoCs. In Proceedings of the 18th IEEE/IFIP International Workshop on Rapid System Prototyping, RSP '07, Washington, DC, USA, IEEE Computer Society, pp. 34-40, 2007.
[12] Derin, O., Kabakci, D. and Fiorin, L. Online task remapping strategies for fault-tolerant network-on-chip multiprocessors. In Networks on Chip (NoCS), 2011 Fifth IEEE/ACM International Symposium on, pp. 129-136, May 2011.
[13] Tafesse, B., Raina, A., Suseela, J. and Muthukumar, V. Efficient scheduling algorithms for MPSoC systems. In Information Technology: New Generations (ITNG), 2011 Eighth International Conference on, pp. 683-688, April 2011.
[14] Russell, S. J. and Norvig, P. Artificial Intelligence: A Modern Approach. 2nd ed. Pearson Education, 2003.
[15] Jang, W. and Pan, D. Z. A3Map: Architecture-aware analytic mapping for Networks-on-Chip. ACM Trans. Des. Autom. Electron. Syst., vol. 17, pp. 26:1-26:22, July 2012.
[16] Singh, A. K., Kumar, A. and Srikanthan, T. A hybrid strategy for mapping multiple throughput-constrained applications on MPSoCs. In Proceedings of the 14th International Conference on Compilers, Architectures and Synthesis for Embedded Systems, CASES '11, New York, NY, USA, pp. 175-184, ACM, 2011.
[17] Antunes, E., Soares, M., Aguiar, A., Filho, S. J., Sartori, M., Hessel, F. and Marcon, C. A. M. Partitioning and dynamic mapping evaluation for energy consumption minimization on NoC-based MPSoC. In ISQED (K. A. Bowman, K. V. Gadepally, P. Chatterjee, M. M. Budnik and L. Immaneni, eds.), pp. 451-457, IEEE, 2012.
[18] Antunes, E., Aguiar, A., Johann, F. S., Sartori, M., Hessel, F. and Marcon, C. Partitioning and mapping on NoC-based MPSoC: an energy consumption saving approach. In Proceedings of the 4th International Workshop on Network on Chip Architectures, NoCArc '11, New York, NY, USA, pp. 51-56, ACM, 2011.
[19] He, O., Dong, S., Jang, W., Bian, J. and Pan, D. Z. UNISM: Unified scheduling and mapping for general Networks on Chip. IEEE Trans. VLSI Syst., vol. 20, no. 8, pp. 1496-1509, 2012.
[20] Hosseinabady, M. and Nunez-Yanez, J. L. Run-time stochastic task mapping on a large scale Network-on-Chip with dynamically reconfigurable tiles. IET Computers and Digital Techniques, vol. 6, no. 1, pp. 1-11, 2012.
[21] Hamedani, P. K., Hessabi, S., Sarbazi-Azad, H. and Jerger, N. D. E. Exploration of temperature constraints for thermal aware mapping of 3D Networks on Chip. In PDP (R. Stotzka, M. Schiffers and Y. Cotronis, eds.), pp. 499-506, IEEE, 2012.
[22] Wang, C., Yu, L., Liu, L. and Chen, T. Packet triggered prediction based task migration for Network-on-Chip. In Proceedings of the 2012 20th Euromicro International Conference on Parallel, Distributed and Network-based Processing, PDP '12, Washington, DC, USA, pp. 491-498, IEEE Computer Society, 2012.
[23] Kaushik, S., Singh, A. K., Jigang, W. and Srikanthan, T. Run-time computation and communication aware mapping heuristic for NoC based heterogeneous MPSoC platforms. In Proceedings of the 2011 Fourth International Symposium on Parallel Architectures, Algorithms and Programming, PAAP '11, Washington, DC, USA, pp. 203-207, IEEE Computer Society, 2011.
[24] Zhe, L. and Xiang, L. NoC mapping based on chaos artificial bee colony optimization. In Computational Problem-Solving (ICCP), 2011 International Conference on, pp. 518-521, Oct. 2011.
[25] Mandelli, M., Amory, A., Ost, L. and Moraes, F. G. Multi-task dynamic mapping onto NoC-based MPSoCs. In Proceedings of the 24th Symposium on Integrated Circuits and Systems Design, SBCCI '11, New York, NY, USA, pp. 191-196, ACM, 2011.
[26] Habibi, A., Arjomand, M. and Sarbazi-Azad, H. Multicast-aware mapping algorithm for on-chip networks. In Proceedings of the 2011 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing, PDP '11, Washington, DC, USA, pp. 455-462, IEEE Computer Society, 2011.
[27] Liu, Y., Ruan, Y., Lai, Z. and Jing, W. Energy and thermal aware mapping for mesh-based NoC architectures using multi-objective ant colony algorithm. In Computer Research and Development (ICCRD), 2011 3rd International Conference on, vol. 3, pp. 407-411, March 2011.
[28] Zhong, L., Sheng, J., Jing, M., Yu, Z., Zeng, X. and Zhou, D. An optimized mapping algorithm based on simulated annealing for regular NoC architecture. In ASIC (ASICON), 2011 IEEE 9th International Conference on, pp. 389-392, Oct. 2011.
[29] Sepulveda, J., Strum, M., Chau, W. J. and Gogniat, G. A multi-objective approach for multi-application NoC mapping. In Circuits and Systems (LASCAS), 2011 IEEE Second Latin American Symposium on, pp. 1-4, Feb. 2011.
[30] Sheng, J., Zhong, L., Jing, M., Yu, Z. and Zeng, X. A method of quadratic programming for mapping on NoC architecture. In ASIC (ASICON), 2011 IEEE 9th International Conference on, pp. 200-203, Oct. 2011.
[31] Sangiovanni-Vincentelli, A. Is a unified methodology for system-level design possible? IEEE Des. Test, vol. 25, pp. 346-357, July 2008.
[32] Bonatti, P. A., Lutz, C., Murano, A. and Vardi, M. ISO IEC 13818-2 MPEG-2. Information Technology - Generic Coding of Moving Pictures and Associated Audio Information: Video. In ICALP 2006, LNCS, pp. 540-551, Springer, 2006.
[33] Fan, L. J., Li, B., Zhuang, Z. Q. and Fu, Z. Q. An approach for dynamic Hardware/Software partitioning based on DPBIL. In Proceedings of the Third International Conference on Natural Computation, Volume 05, ICNC '07, Washington, DC, USA, IEEE Computer Society, 2007.
[34] Bolanos, F., Aedo, J. and Rivera, F. System-level partitioning for embedded systems design using Population-Based Incremental Learning. In CDES (H. R. Arabnia and A. M. G. Solo, eds.), CSREA Press, pp. 74-80, 2010.
[35] White, R. H. Competitive Hebbian learning: Algorithm and demonstrations. Neural Networks, vol. 5, no. 2, pp. 261-275, 1992.
[36] Thiele, L., Bacivarov, I., Haid, W. and Huang, K. Mapping applications to tiled multiprocessor embedded systems. In Proceedings of the Seventh International Conference on Application of Concurrency to System Design, ACSD '07, Washington, DC, USA, IEEE Computer Society, pp. 29-40, 2007.


DYNA http://dyna.medellin.unal.edu.co/

Effect of heating systems on litter quality in broiler facilities under winter conditions
Efecto del sistema de calefacción en la calidad de la cama de galpones de pollos de engorde en condiciones de invierno

Ricardo Brauer-Vigoderis a, Ilda de Fátima Ferreira-Tinôco b, Héliton Pandorfi c, Marcelo Bastos-Cordeiro d, Jalmir Pinheiro de Souza-Júnior e & Maria Clara de Carvalho-Guimarães f

a Unidad Académica, Universidad Federal Rural de Pernambuco, Brazil, vigoderis@uag.ufrpe.br
b Departamento de Ingeniería Agrícola, Universidade Federal de Viçosa, Brazil, iftinoco@ufv.br
c Departamento de Ingeniería Agrícola, Universidad Federal Rural de Pernambuco, Brazil, pandorfi@dtr.ufrpe.br
d Departamento de Zootecnia, Universidad Federal de Acre, Brazil, mbcordeiro@gmail.com
e Departamento de Ingeniería de Producción, Universidad Federal de Itajubá, Brazil, jalmirpinheiro@yahoo.com.br
f Departamento de Agronomia, Universidad Federal del Jequitinhonha y Mucuri, Brazil, mariaclara.guimaraes@ufvjm.edu.br

Received: November 18th, 2012. Received in revised form: January 26th, 2014. Accepted: April 25th, 2014.

Abstract
The objective of this research was to evaluate the influence of heating systems in poultry houses on the psychrometric conditions of the air and on litter quality (moisture and pH) under winter conditions in the west of Santa Catarina State, Brazil. The experiment was conducted on three properties of the integrated poultry company Perdigão, each with three similar sheds equipped with different heating systems (infrared gas heater; furnace with indirect air heating; experimental radiant "drum" system with an infrared supplemental heating system). Relative humidity and air temperature were recorded continuously at three median points of each facility, at the height of the birds, for the determination of the THI. For the analysis of litter moisture and litter pH, four samples were collected at four different points in each shed every two days during the period in which the heating systems were used. In the poultry houses, the THI values detected were within the range considered adequate for the development of the animals in their first week of life, respecting the requirements of animal welfare. In the second week of life, excessive THI values were detected, indicating discomfort and energy waste. In the environments heated by the evaluated systems, the litter moisture content was maintained at a suitable value according to the literature; however, the pH values of the litter samples collected in the production environments indicated a basic medium, making the litter suitable for the growth of ammonifying bacteria.
Keywords: broiler, poultry houses, humidity, pH

Resumen
El objetivo de este trabajo, fue evaluar la influencia de los sistemas de calefacción en galpones avícolas, las condiciones psicométricas del aire interno y en la calidad de la cama (humedad y pH), en condiciones de invierno, en el oeste del Estado de Santa Catarina. El experimento fue desarrollado en tres propiedades avícolas integradas en la industria Brasilera Perdigão, en núcleos con tres galpones similares, equipados con diferentes sistemas de calefacción (Campanas infrarrojas a gas, hornos a leña de calentamiento indirecto del aire e intercambiadores de calor por radiación con calentamiento como suplemento de las campanas infrarrojas a gas). Valores de la humedad relativa y la temperatura del aire fueron obtenidos continuamente, en tres puntos de medida de cada instalación al nivel de las aves, para determinar el índice de temperatura y humedad (ITH). Para los análisis de humedad y pH de la cama, fueron colectadas cuatro muestras en cuatro puntos diferentes en cada galpón, cada dos días, durante el periodo en que fueron utilizados los sistemas de calefacción. Los sistemas de calentamiento en campanas infrarrojas a gas y el sistema conjugado de intercambiador más campanas infrarrojas a gas, proporcionaron valores de humedad de la cama, inferiores a los observados en ambientes calentados por el sistema con horno a leña. En los aviarios calentados por los sistemas de campanas a gas y el horno a leña, los valores de ITH, se mantuvieron en una franja de confort para el bienestar de las aves, pero en el sistema conjugado con más campanas a gas, fue detectado estrés y desperdicio de energía. Para todos los sistemas de calentamiento los valores de humedad de la cama se mostraron adecuados, sin embargo, los valores del pH de las muestras de cama, colectadas en los ambientes de cría, evidenciaron un ambiente básico, tornándolo propicio para el crecimiento de bacterias amonificantes.
Palabras clave: pollos de engorde, galpones avícolas, humedad, pH.

1. Introduction

The importance of broiler chicken production in Brazil is growing on both a national and an international scale, requiring more efficient and intensive production systems to produce

greater yields (Saraz [1]). The optimum productivity is achieved with the use of energy for growth, while keeping the birds in a comfortable temperature range, without having to expend energy to compensate for the cold or hot temperatures (Abreu [2]).

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 36-40 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



The birds maintain a constant body temperature when the ambient temperature is thermoneutral (Welker [3]). However, broilers in their initial phase are very sensitive to low temperatures, which may negatively affect their development, leading to large financial losses, especially in conditions of harsh winters. Heat energy is added to the facilities by the metabolic production of the birds, lights and motors, roofs and walls (depending on the insulation), and the fermentation of excreta. However, providing extra heat for the birds is essential in the early stages of life, when there is a risk of stress due to a cold environment. Their first two weeks of life are the most critical, because errors at this stage cannot be corrected later, thus affecting the final performance of the broilers (Nilipour and Butcher, apud Cordeiro [4]).

Various indexes express animal comfort in a particular environment. In general, two or more climatic variables are considered; however, there are also other variables, such as metabolic rate, insulation type, etc. (Baêta and Souza [5]). Thom [6] developed the Discomfort Index, later called the Temperature and Humidity Index (THI). This index was obtained by a simple linear fit applied to a range of dry-bulb and wet-bulb temperatures, and is expressed in eq. (1) (Baêta and Souza [5]):

$$\mathrm{THI}=0.72\,(t_d+t_w)+40.6 \tag{1}$$

Where:
THI = Temperature and Humidity Index
t_d = Dry-bulb temperature (°C)
t_w = Wet-bulb temperature (°C)

Table 1 shows the ideal temperature and humidity index values for broilers based on age.

Table 1. Temperature and humidity index values ideal for broilers based on age.
Age (Weeks)  Temperature (°C)  Humidity (%)  THI ideal
1            32-35             60-70         72.4-80
2            29-32             60-70         68.4-76
3            26-29             60-70         64.5-72
4            23-26             60-70         60.5-68
5            20-23             60-70         56.6-64
6            20                60-70         56.6-60
7            20                60-70         56.6-60
Source: Adapted from Silva [7]

The heating systems most commonly used in semi-acclimatized facilities for broilers in southern Brazil are the infrared gas heater, the furnace with direct or indirect air heating, and the experimental radiant "drum" system (Cordeiro [4]; Vigoderis [8]). However, there is little information about the effects of the different heating systems on litter quality.

The broilers produced in Brazil are managed using the litter system. Litter is an absorbent material used on the aviary floor for broiler production (UBA [9]). According to Avila [10] and Jorge [11], the litter thickness is between 8 cm and 10 cm, depending on the stocking density established, so that at the end of the production cycle the moisture is between 20% and 35%. Litter with moisture above 35% becomes plastered, causing discomfort for the birds, affecting performance and lowering resistance to diseases. It is known that ammonifying bacteria (Bacillus subtilis, B. cereus, B. megaterium, Proteus, Pseudomonas, Escherichia coli, etc.) increase both the manure pH and the concentration of ammonia in and around the poultry houses. Broiler residue contains 5 kilograms of ammonia per ton, while that of other adult birds contains 3.5 kilograms per ton (Bucklin apud Ivanov [12]). In relation to these factors, various adverse effects have been reported, such as increased food intake and decreased growth rate (Grishchenko apud Ivanov [12]; Fouber apud Ivanov [12]; Kaitazov and Stoyanchev apud Ivanov [12]), environmental pollution (Mijs apud Ivanov [12]), increased mortality and respiratory diseases (Mardarowicz apud Ivanov [12]; Bradbury apud Ivanov [12]; Carrier apud Ivanov [12]) and the development of pathogenic and immunosuppressive bacteria (Byrd apud Ivanov [12]). In acidic environments (pH < 6) ammonifying bacterial growth is inhibited, and when the pH is below 5 the conditions are unfavorable for the development of Salmonellae (Byrd, 1999 apud Ivanov [12]). Acidification of the litter reduces the negative effects of high concentrations of ammonia and bacteria by inhibiting ammonifying bacteria and neutralizing the ammonia. To this end, various substances have been used: aluminium sulphate, ferric sulphate, phosphoric acid, acetic acid and antibiotics (Medeiros [13]). For this reason, the pH influences the quality of the litter. Therefore, this research was conducted to evaluate the influence of heating systems in broiler houses on the thermal environment and litter quality.

2. Material and Methods

The research was conducted on three integrated properties of a poultry company, each with three similar sheds equipped with the different heating systems, used for the production of 18,500 Cobb females in each of the nine sheds evaluated, with an average weight at slaughter of 1.450 kg, for two flocks, in the city of Videira, during the winter, between June and October. The city of Videira is located in the state of Santa Catarina, at an altitude of 750 m, latitude 27º00'30" South and longitude 51º09'06" West. The Köppen climate classification is Cfb, with average annual temperatures between 16 °C and 17 °C. Three different heating systems were evaluated:
 infrared heater;
 furnace with indirect air heating;
 experimental radiant "drum" system with an infrared supplemental heating system.


The sheds are 100 m long and 12 m wide, with a polyurethane liner positioned at a height of 3 m from the floor, and their longitudinal axes are oriented East-West. The roof is composed of ceramic tiles with a slope of 30% and an overhang of 0.50 m. The side closures are composed of a low wall 0.30 m high and a metal screen that goes up to the height of the ceiling, combined with curtains of yellow polypropylene. All houses received a new 7 cm layer of litter made of thick wood shavings.

The air temperature and relative humidity were recorded continuously at three median points of each facility, at the height of the birds (10-30 cm, following the growth of the animals). Measurements were performed using dataloggers with a sampling period of 15 minutes throughout the experiment, for two complete production cycles. The dataloggers had a resolution of 1% (humidity) and 0.1 °C (temperature), and an accuracy of 1% (humidity) and 0.1 °C (temperature). With the values of air temperature and relative humidity, the temperature and humidity index, THI (Thom [6]), was calculated.

The diets provided to the animals were formulated based on the nutrient requirements for the different stages of growth and were the same for all systems.

For the analysis of litter moisture and litter pH, four samples of 50 g were collected at four distinct points in the sheds used for the research, every two days during the period in which the heating systems were used. The four samples were homogenized and the moisture analysis was carried out using precision scales. Following the weighing procedure, the samples were dried at 105 °C. After drying, the weight was determined on a precision balance. For the pH analysis, the following methodology was used:
 10 grams of each sample were weighed and placed in beakers with 100 ml of distilled water;
 the samples were stirred with a glass rod and allowed to stand for 30 minutes;
 the value was obtained using a pH meter.

In order to evaluate the thermal environment and the litter quality, a randomized block experimental design with the three heating systems was used, over two complete production cycles. At the beginning of each production cycle new litter was used. The data were subjected to analysis of variance and the means were compared with the Tukey test.

3. Results and Discussion

In Table 2 it can be observed that the poultry houses in the three treatments showed temperature and humidity index (THI) values within the range considered adequate for the development of the animals in their first week of life (72.4 to 80), according to Silva [7]. In the experimental radiant "drum" system with an infrared supplemental heating system, higher THI values were detected. The maximum THI values were found around 1:30 PM.

Table 2. Average values of the temperature and humidity index (THI) in the first week of the production cycle (1st and 2nd cycles) of the birds inside the sheds, observed within the production environments heated by the three systems.
Time      Infrared heater  Furnace  Experimental radiant "drum" system + infrared heater
12:00 AM  74.7a   74.9a   76.8b
12:30 AM  74.5a   74.8a   76.7b
1:00 AM   74.4a   74.9a   76.6b
1:30 AM   74.4a   74.6a   76.4b
2:00 AM   74.2a   74.3a   76.3b
2:30 AM   74.2a   74.0a   76.2b
3:00 AM   74.2a   73.9a   76.3b
3:30 AM   74.2a   73.8a   76.4b
4:00 AM   73.1a   73.3a   76.0b
4:30 AM   73.1a   73.6a   76.2b
5:00 AM   73.4a   74.1b   76.5c
5:30 AM   73.5a   74.1b   76.4c
6:00 AM   73.5a   74.0a   76.5b
6:30 AM   73.5a   73.7a   76.6b
7:00 AM   73.7a   73.6a   76.4b
7:30 AM   74.2a   74.3a   76.8b
8:00 AM   74.7a   74.5a   77.2b
8:30 AM   75.1a   75.3a   77.4b
9:00 AM   75.8a   75.9a   77.9b
9:30 AM   76.5a   76.8a   78.6b
10:00 AM  77.4a   77.3a   79.3b
10:30 AM  77.9a   77.9a   79.4a
11:00 AM  78.3a   78.2a   79.9b
11:30 AM  78.6a   78.3a   80.1b
12:00 PM  78.7a   78.3a   79.9b
12:30 PM  78.8a   78.4a   80.0b
1:00 PM   79.ab   78.7a   80.3b
1:30 PM   79.2a   79.1a   80.4b
2:00 PM   79.2a   79.1a   80.1b
2:30 PM   79.0a   78.9a   79.9b
3:00 PM   78.9a   78.7a   79.8b
3:30 PM   78.9a   78.8a   79.7b
4:00 PM   78.9a   78.5a   79.6b
4:30 PM   78.7a   78.4a   79.5b
5:00 PM   78.3a   78.2a   79.3b
5:30 PM   77.8a   78.0b   78.8c
6:00 PM   77.5a   78.1b   78.6c
6:30 PM   77.3a   77.8a   78.6b
7:00 PM   76.8ab  77.4a   78.6b
7:30 PM   76.5a   76.9a   78.5b
8:00 PM   76.2a   76.6a   78.4b
8:30 PM   76.0a   76.5a   78.1b
9:00 PM   75.9a   76.2a   77.9b
9:30 PM   75.8a   75.7a   77.7b
10:00 PM  75.6a   75.5a   77.7b
10:30 PM  75.3a   75.4a   77.5b
11:00 PM  75.0a   75.3a   77.2b
11:30 PM  74.9a   74.9a   77.0b
Values followed by the same letter in a row did not differ significantly at 5% probability by the Tukey test.
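The THI values in Tables 2 and 3 follow directly from eq. (1). The following is a minimal sketch of that computation and of the check against the Table 1 ranges; the temperature readings and the week number in the example are hypothetical, and this is an illustration, not the authors' processing code, which is not described in the paper:

```python
def thi(t_dry, t_wet):
    """Temperature and Humidity Index of eq. (1): THI = 0.72*(td + tw) + 40.6,
    with dry- and wet-bulb temperatures in degrees Celsius."""
    return 0.72 * (t_dry + t_wet) + 40.6

# Ideal THI ranges per week of age, transcribed from Table 1.
IDEAL_THI = {1: (72.4, 80.0), 2: (68.4, 76.0), 3: (64.5, 72.0),
             4: (60.5, 68.0), 5: (56.6, 64.0), 6: (56.6, 60.0),
             7: (56.6, 60.0)}

def in_comfort_range(t_dry, t_wet, week):
    """Return the computed THI and whether it falls in the ideal range."""
    lo, hi = IDEAL_THI[week]
    value = thi(t_dry, t_wet)
    return value, lo <= value <= hi

# Hypothetical week-2 reading: td = 30 C, tw = 26 C -> THI = 80.92, above range.
print(in_comfort_range(30.0, 26.0, week=2))   # (80.92, False)
```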

Table 3. Average values of the temperature and humidity index (THI) in the second week of the production cycle (1st and 2nd cycles) of the birds inside the sheds, observed within the production environments heated by the infrared heater, the furnace, and the experimental radiant "drum" system + infrared heater.
Time      Infrared heater  Furnace  Experimental radiant "drum" system + infrared heater
00:00:00  75.2a   75.6a   76.3b
00:30:00  75.4a   75.5a   76.5b
01:00:00  75.0a   75.2a   76.1b
01:30:00  75.1a   75.2a   75.9a
02:00:00  75.1a   75.0a   75.8a
02:30:00  75.1a   74.9a   75.9b
03:00:00  75.0a   74.8a   76.2b
03:30:00  75.0a   74.9a   76.4b
04:00:00  74.8a   75.1a   76.1b
04:30:00  74.5a   74.7a   75.6b
05:00:00  74.4a   74.7ab  75.7b
05:30:00  74.6a   74.8a   75.9b
06:00:00  74.7a   74.6a   75.9b
06:30:00  75.8a   75.3b   77.1c
07:00:00  76.5a   75.8b   77.6a
07:30:00  76.8a   76.2a   78.0b
08:00:00  77.1a   76.5b   78.2c
08:30:00  77.3a   76.9a   78.2b
09:00:00  77.5a   77.2a   78.6b
09:30:00  77.9a   77.4a   78.9b
10:00:00  78.3a   77.6b   79.1c
10:30:00  78.5a   78.0a   79.4b
11:00:00  78.6a   78.1a   79.4b
11:30:00  78.3a   78.0a   79.0b
12:00:00  78.5a   78.1a   79.0b
12:30:00  78.5a   78.0a   79.1b
13:00:00  78.7a   78.3b   79.3a
13:30:00  78.8a   78.4a   79.5b
14:00:00  78.6a   78.5a   79.4b
14:30:00  78.6a   78.4a   79.3b
15:00:00  78.2a   78.4a   79.0b
15:30:00  78.4a   78.4a   79.0b
16:00:00  78.5a   78.2a   79.3b
16:30:00  78.6a   78.0b   79.1a
17:00:00  78.6a   77.9b   78.9a
17:30:00  78.2a   78.1a   78.6a
18:00:00  78.2a   78.0a   78.8b
18:30:00  78.1a   78.0a   78.8b
19:00:00  77.6a   77.6a   78.3b
19:30:00  76.9a   76.9a   77.7b
20:00:00  76.5a   76.3a   77.1b
20:30:00  77.1ab  76.6a   77.6b
21:00:00  76.8ab  76.4a   77.3b
21:30:00  76.4a   76.2a   77.1b
22:00:00  76.2a   76.0a   76.9b
22:30:00  76.2ab  75.8a   77.2b
23:00:00  75.7a   75.6a   76.8b
23:30:00  75.4a   75.5a   76.6b
Values followed by the same letter in a row did not differ significantly at 5% probability by the Tukey test.

Table 4. Litter moisture variation (%) and pH variation of samples collected in the sheds subjected to the different heating systems.
              Infrared heater  Furnace with indirect air heating  Experimental radiant "drum" system with an infrared supplemental heating system
Humidity (%)  26.9b            27.5a                              26.9b
pH            9.14b            9.12b                              9.17a
Values followed by the same letter in a row did not differ significantly at 5% probability by the Tukey test.
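As a worked illustration of the gravimetric moisture determination described in Section 2, the following minimal sketch computes the moisture of a sample from its weights before and after drying at 105 °C; the sample weights are hypothetical, and the wet-basis formula is an assumption, since the paper does not state the basis explicitly:

```python
def litter_moisture_pct(wet_g, dry_g):
    """Wet-basis moisture content of a litter sample, from the weights
    before and after drying at 105 C (hedged: wet basis assumed)."""
    return 100.0 * (wet_g - dry_g) / wet_g

# Hypothetical 50 g sample drying to 36.5 g -> 27.0 % moisture,
# within the 20-35 % range recommended by Jorge [11].
print(litter_moisture_pct(50.0, 36.5))   # 27.0
```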

In the second week of housing (Table 3), THI values above the range recommended by Silva [7], which lies between 68.4 and 76, were detected in the three systems evaluated. Excessive heat can cause dehydration and decreased feed intake in the animals, and characterizes fuel waste. In the other periods, the THI values were within the recommended range.

According to the results (Table 4), it was noted that in the environments heated by the infrared heater and by the experimental radiant "drum" system with an infrared supplemental heating system, the litter moisture values were slightly below those of the environment heated by the furnace. This happens because these systems radiate heat more intensely onto the litter than the furnace, whose operating principle is the injection of heated air. Nevertheless, all systems provided a litter moisture value suitable for the birds, according to the values recommended by Jorge [11], which should be between 20 and 35%, showing that environments that provide thermal comfort also provide adequate litter moisture.

Regarding the litter pH (Table 4) of the samples, values higher than 9 were found, characterizing a basic medium and making the environment susceptible to ammonifying bacterial growth, which can increase the gas concentration and negatively affect the internal production environment. To mitigate this problem, the use of various substances is suggested, such as aluminium sulphate, ferric sulphate, phosphate, phosphoric acid, acetic acid and antibiotics (Ivanov [12]; Medeiros [13]).

4. Conclusion

Regarding the conditions of this experiment and the results obtained, it was concluded that:
 In the poultry houses subjected to the three treatments, the THI values obtained were within the range considered adequate for the development of the animals in their first week, respecting the requirements of animal welfare;
 In the animals' second week of life, excessive THI values were detected, characterizing discomfort and energy waste;
 In the environments heated by the tested systems, the litter moisture content was maintained at a suitable value according to the literature;
 However, the pH values of the litter samples collected in the poultry houses indicated a basic environment, making the litter suitable for the growth of ammonifying bacteria.

References

[1] Saraz, J. A. O., Tinôco, I. F. F., Rocha, K. S. O., Martins, M. A. and De Paula, M. O. Modeling and experimental validation to estimate the energy balance for a poultry house with misting cooling. DYNA, 78 (170), pp. 167-174, 2011.
[2] Abreu, P. G., Abreu, V. M. N., Coldebella, A., Jaenisch, F. R. F. and Paiva, D. P. Evolution of litter material and ventilation systems on poultry production: II. Thermal comfort. Revista Brasileira de Zootecnia, 40 (6), pp. 1356-1363, 2011.
[3] Welker, J. C., Rosa, A. P., Moura, D. J., Machado, L. P., Catelan, F. and Uttpatel, R. Temperatura corporal de frangos de corte em diferentes sistemas de climatização. Revista Brasileira de Zootecnia, 37 (8), pp. 1463-1467, 2008.
[4] Cordeiro, M. B., Tinôco, I. F. F., Silva, J. N., Vigoderis, R. B., Pinto, F. A. C. and Cecon, P. R. Conforto térmico e desempenho de pintos de corte submetidos a diferentes sistemas de aquecimento no período de inverno. Revista Brasileira de Zootecnia, 39 (1), pp. 217-224, 2010.
[5] Baêta, F. C. and Souza, C. Ambiência em edificações rurais: conforto ambiental. Viçosa, Brasil: Editora UFV, 2010.
[6] Thom, E. C. The discomfort index. Weatherwise, Boston, 12 (1), pp. 57-60, 1959.
[7] Silva, E. T. Índice de temperatura e umidade (ITU) na produção de aves para mesorregião do noroeste e norte pioneiro paranaense. Revista Acadêmica, 5 (4), pp. 385-390, 2007.
[8] Vigoderis, R. Sistemas de aquecimento de aviários e seus impactos no conforto térmico ambiental, qualidade do ar e performance animal em condições de inverno, no Sul do Brasil. Departamento de Engenharia Agrícola, Universidade Federal de Viçosa, 2006.
[9] União Brasileira de Avicultura (UBA). Protocolo de boas práticas de produção de frangos [Online]. Available at: http://www.abef.com.br/uba/arquivos/protocolo_de_boas_praticas_de_producao_de_frangos_14_07_08.pdf
[10] Avila, V. S., Kunz, A., Bellaver, C., Paiva, D. P., Jaenisch, F. R., Mazzuco, H., Trevisol, I. M., Palhares, J. C. P., Abreu, P. G. and Rosa, P. S. Boas práticas de produção de frangos de corte. Concórdia, Brasil: Embrapa Suínos e Aves, Circular Técnica 51, 2007.
[11] Jorge, M. A., Martins, N. R. S. and Resende, J. S. Cama de frango e sanidade avícola - aspectos microbiológicos e toxicológicos. Proceedings of Conferência Apinco, pp. 24-37, 1997.
[12] Ivanov, I. E. Treatment of broiler litter with organic acid. Research in Veterinary Science, 70 (2), pp. 169-173, 2001.
[13] Medeiros, R., Santos, B. J. M., Freitas, M., Silva, O. A., Alves, F. F. and Ferreira, E. A adição de diferentes produtos químicos e o efeito da umidade na volatilização de amônia na cama de frango. Ciência Rural, 38 (8), pp. 2321-2326, 2008.


DYNA http://dyna.medellin.unal.edu.co/

The dynamic model of a four control moment gyroscope system
Modelo dinámico de un sistema de control de par por cuatro giróscopos

Eugenio Yime-Rodríguez a, César Augusto Peña-Cortés b & William Mauricio Rojas-Contreras c

a Facultad de Ingenierías, Universidad Tecnológica de Bolívar, Colombia. eyime@unitecnologica.edu.co
b Facultad de Ingenierías y Arquitecturas, Universidad de Pamplona, Colombia. cesarapc@unipamplona.edu.co
c Facultad de Ingenierías y Arquitecturas, Universidad de Pamplona, Colombia. mrojas@unipamplona.edu.co

Received: November 28th, 2012. Received in revised form: February 20th, 2014. Accepted: May 9th, 2014.

Abstract
The dynamic model of a Four Control Moment Gyroscope (4-CMG) system is traditionally obtained by computing the derivative of the angular momentum equation. Although this approach leads to a simple dynamic model, new models have been introduced to account for terms not taken into account during the computation of the angular momentum equation. In this paper, a new dynamic model for a 4-CMG, based on the Newton-Euler algorithm, which is well accepted in Robotics, is developed. This new approach produces a complete dynamic model.
Keywords: dynamics; gyroscope; model; control.

Resumen
El modelo dinámico de un sistema de control de par utilizando cuatro giróscopos (4-CMG) tradicionalmente se obtiene al calcular la derivada de la ecuación del momento angular total. Aunque este enfoque conduce a un modelo dinámico relativamente simple, recientemente se han introducido nuevos modelos debido a términos que no se han tenido en cuenta, o se desprecian, durante el cálculo de la ecuación del momento angular. En este artículo, se propone un nuevo modelo dinámico para un 4-CMG basado en el algoritmo de Newton-Euler, el cual es bien aceptado en robótica. Con este nuevo enfoque se logra obtener un modelo dinámico bastante completo.
Palabras clave: dinámica; giróscopo; modelo; control.

1. Introduction

A Four Control Moment Gyroscope (4-CMG) is an angular momentum exchange device used on satellites [1-3] and submarines [4, 5] to control attitude. It is composed of four gyroscopes arranged in a pyramidal form, see Fig. 1, with the torque amplification property being its principal advantage [6]. Moreover, when used in satellites no fuel or gas propellant is required, because the motors use electricity to operate.

The dynamic model of a 4-CMG is usually obtained by differentiation of the angular momentum equation [7], as is done in [3, 4] and [8]. Probably the most exact dynamic model using this approach is the one developed by Ford and Hall [8]. The first comparison between a robot arm and a 4-CMG was performed by Bedrossian et al. [9]; in that work an analogy of velocities was considered to study the singularities of a 4-CMG. No further analogies with robot arms were stated.

In this paper a kinematic comparison between a 4-CMG and a robotic arm is used to develop a new dynamic model. The advantages of this approach are the use of a widely accepted methodology to compute the dynamic model, and more precise equations.

2. Dynamic equations for a CMG

Fig. 1 illustrates a CMG with four gyroscopes, each of them composed of a flywheel and a gimbal. A coordinate frame x_i, y_i, z_i is located at the center of each flywheel, which serves as a reference for the motion of the gimbals and flywheels. The flywheels turn at a constant speed, while the gimbals can rotate around the y_i axis without any restriction. The angle of rotation of the i-th gimbal is represented by θ_i, with the zero position being illustrated in the figure. The position of the four gyroscopes is denoted by the vector θ = [θ_1, θ_2, θ_3, θ_4]^T. The angle β is the pyramid's skew angle.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (184), pp. 41-47. June, 2014. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



Figure 1. Control moment gyroscope with pyramidal array.
Figure 3. Gyroscope kinematics and base body dynamics.

Fig. 2 illustrates the equivalent open kinematic chain for a 4-CMG; the circles represent rotational joints. To compute the dynamic model, two steps are performed, see Fig. 3. The first step is the Newton-Euler algorithm for each gyroscope, which leads to the reaction forces and moments applied to the base body. The second step is the dynamic equation for the base body, where the reaction forces and moments exerted by each gyroscope, M_i and F_i, are involved. The angular and linear velocities of the base body, ω_0 and v_0 respectively, plus the angular velocities of each joint, ω_{i,1} and ω_{i,2}, are required in the Newton-Euler algorithm to perform the direct kinematics of the gyroscopes [10, 11].

The computation of the Newton-Euler equations for a serial robot is also done in two steps, Tsai [12]. The first one is the kinematic calculation toward the extreme of the robot, as shown in Table 1.

Table 1. Recursive Newton-Euler formulation. Forward kinematics.

Rotation between consecutive frames:
$$^{i}\mathbf{R}_{i-1}=\begin{bmatrix}\cos\theta_i & \sin\theta_i & 0\\ -\cos\alpha_i\sin\theta_i & \cos\alpha_i\cos\theta_i & \sin\alpha_i\\ \sin\alpha_i\sin\theta_i & -\sin\alpha_i\cos\theta_i & \cos\alpha_i\end{bmatrix},\qquad {}^{i-1}\mathbf{z}_{i-1}=[0\;0\;1]^{T}$$

Angular velocity propagation:
$$\boldsymbol{\omega}_i=\boldsymbol{\omega}_{i-1}+\mathbf{z}_{i-1}\dot{\theta}_i$$

Angular acceleration propagation:
$$\dot{\boldsymbol{\omega}}_i=\dot{\boldsymbol{\omega}}_{i-1}+\mathbf{z}_{i-1}\ddot{\theta}_i+\boldsymbol{\omega}_{i-1}\times\mathbf{z}_{i-1}\dot{\theta}_i$$

Linear velocity propagation:
$$\mathbf{v}_i=\mathbf{v}_{i-1}+\boldsymbol{\omega}_i\times\mathbf{r}_i,\qquad {}^{i}\mathbf{r}_i=[a_i\;\;d_i\sin\alpha_i\;\;d_i\cos\alpha_i]^{T}$$

Linear acceleration propagation:
$$\dot{\mathbf{v}}_i=\dot{\mathbf{v}}_{i-1}+\dot{\boldsymbol{\omega}}_i\times\mathbf{r}_i+\boldsymbol{\omega}_i\times(\boldsymbol{\omega}_i\times\mathbf{r}_i)$$

Linear acceleration of the center of mass:
$$\dot{\mathbf{v}}_{ci}=\dot{\mathbf{v}}_i+\dot{\boldsymbol{\omega}}_i\times\mathbf{r}_{ci}+\boldsymbol{\omega}_i\times(\boldsymbol{\omega}_i\times\mathbf{r}_{ci})$$

Acceleration of gravity:
$$^{i}\mathbf{g}={}^{i}\mathbf{R}_{i-1}\,{}^{i-1}\mathbf{g}$$

The second step is the dynamic calculation of the backward computation, as can be seen in Table 2. Note that only rotational joints are considered in both tables.

Table 2. Recursive Newton-Euler formulation. Backward dynamics.

Inertial forces and moments:
$$\mathbf{f}_i^{*}=-m_i\dot{\mathbf{v}}_{ci},\qquad \mathbf{n}_i^{*}=-\mathbf{I}_i\dot{\boldsymbol{\omega}}_i-\boldsymbol{\omega}_i\times(\mathbf{I}_i\boldsymbol{\omega}_i)$$

Force and torque balance equations about the center of mass:
$$\mathbf{f}_{i,i-1}=\mathbf{f}_{i+1,i}-m_i\mathbf{g}-\mathbf{f}_i^{*}$$
$$\mathbf{n}_{i,i-1}=\mathbf{n}_{i+1,i}+(\mathbf{r}_i+\mathbf{r}_{ci})\times\mathbf{f}_{i,i-1}-\mathbf{r}_{ci}\times\mathbf{f}_{i+1,i}-\mathbf{n}_i^{*}$$

Torque in a rotational joint:
$$\boldsymbol{\tau}_i={}^{i-1}\mathbf{n}_{i,i-1}^{T}\,{}^{i-1}\mathbf{z}_{i-1}$$

Figure 2. Equivalent kinematic chain of a 4-CMG.
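A minimal sketch of one step of each recursion in Tables 1 and 2 may clarify how they chain together. It works in a single common frame (the frame rotations ^iR_{i-1} are omitted for brevity), and all function and variable names are illustrative, not from the paper:

```python
import numpy as np

def forward_link(w_prev, dw_prev, v_prev, dv_prev, z_prev, r, th_d, th_dd):
    """One forward step of Table 1 for a rotational joint, written in a
    single common frame (frame rotations omitted for brevity)."""
    w  = w_prev + z_prev * th_d                              # angular velocity
    dw = dw_prev + z_prev * th_dd + np.cross(w_prev, z_prev * th_d)
    v  = v_prev + np.cross(w, r)                             # linear velocity
    dv = dv_prev + np.cross(dw, r) + np.cross(w, np.cross(w, r))
    return w, dw, v, dv

def backward_link(m, I, dv_c, w, dw, f_next, n_next, r, r_c, g):
    """One backward step of Table 2: inertial wrench, then the force and
    moment balance about the center of mass, propagated toward the base."""
    f_star = -m * dv_c
    n_star = -I @ dw - np.cross(w, I @ w)
    f = f_next - m * g - f_star
    n = n_next + np.cross(r + r_c, f) - np.cross(r_c, f_next) - n_star
    return f, n
```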




The following assumptions have been made to simplify the equations of the Newton-Euler methodology:
• The mass and inertia of the gimbals are approximately zero, i.e., negligible.
• The center of mass of each flywheel is aligned with the gimbal axis.
• The velocity and acceleration of the base body are not equal to zero.
The mass and inertia of the gimbal frame are neglected because the flywheel makes the major contribution to the mass and inertia of the gyroscopes.

2.1. Data of the links

For the Newton-Euler approach, one gyroscope is composed of three links: base body, gimbal and flywheel. These links are joined by rotational joints, Fig. 3. Before computing the forward kinematics and backward dynamics, each link must have a coordinate frame. Fig. 4 illustrates the frames and vectors defined for each link. A new frame, [x_{0,i} y_{0,i} z_{0,i}], is used to express the forces and moments exerted by the gyroscopes. This frame is fixed in the base body. The other two frames are [x_{1,i} y_{1,i} z_{1,i}] and [x_{2,i} y_{2,i} z_{2,i}], with the former being the frame of the gimbal link and the latter the frame of the flywheel link. These two frames are located at the same point, the center of the flywheel.

Figure 4. Frames used in the Newton-Euler algorithm.

The homogeneous matrix between the frames fixed in the base body, [x_0 y_0 z_0] and [x_{0,i} y_{0,i} z_{0,i}], is

$$^{b}\mathbf{A}_{0,i}=\begin{bmatrix} ca_i & -sa_i\,s\beta & -sa_i\,c\beta & -sa_i\,r\\ sa_i & ca_i\,s\beta & ca_i\,c\beta & ca_i\,r\\ 0 & c\beta & -s\beta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1}$$

where sa_i, ca_i, sβ and cβ stand for sin(a_i), cos(a_i), sin(β) and cos(β), respectively; r is the radius of the circle where the gyroscopes are located; and a_i is the angle of the turn around axis z_0 needed to align y_0 with y_i, which takes one of the values {0, π/2, π, 3π/2} radians.

The Denavit-Hartenberg parameters for one gyroscope, according to Fig. 4, are shown in Table 3. In this table L = ‖r_{1,i}‖, where r_{1,i} is the vector from frame {0, i} to frame {1, i}.

Table 3. Denavit-Hartenberg parameters.
Joint i   α_i   a_i   d_i   θ_i
1         π/2   0     L     θ_i − π/2
2         0     0     0     θ_{i,2}

1) Homogeneous transformation matrices: The DH transformation matrices for each link can also be computed; these are:

$$^{0,i}\mathbf{A}_{1,i}=\begin{bmatrix} \sin\theta_i & 0 & -\cos\theta_i & 0\\ -\cos\theta_i & 0 & -\sin\theta_i & 0\\ 0 & 1 & 0 & L\\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$

$$^{1,i}\mathbf{A}_{2,i}=\begin{bmatrix} \cos\theta_{i,2} & -\sin\theta_{i,2} & 0 & 0\\ \sin\theta_{i,2} & \cos\theta_{i,2} & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{3}$$

2) Position vectors: The position of frame {1, i} with respect to {0, i} is defined by the vector ^{1,i}r_{1,i}. The vector ^{2,i}r_{2,i} defines the position of {2, i} relative to {1, i}. The centers of mass of the links are defined by the vectors ^{0,i}r_{c0,i} and ^{1,i}r_{c1,i}. These vectors have the following values:

$$^{1,i}\mathbf{r}_{1,i}=[0\;0\;L]^{T} \tag{4}$$
$$^{2,i}\mathbf{r}_{2,i}=[0\;0\;0]^{T} \tag{5}$$
$$^{0,i}\mathbf{r}_{c0,i}=[0\;0\;0]^{T} \tag{6}$$
$$^{1,i}\mathbf{r}_{c1,i}=[0\;0\;0]^{T} \tag{7}$$

^{0,i}r_{c0,i} is zero because the mass and inertia of the gimbals are neglected.

3) Inertia and mass of the links: For both links the values of mass and inertia are

$$m_{1,i}=0 \tag{8}$$
$$m_{2,i}=m_2 \tag{9}$$
$$^{1,i}\mathbf{I}_{1,i}=\mathbf{0} \tag{10}$$
$$^{2,i}\mathbf{I}_{2,i}=\mathrm{diag}([I_{xx}\;\;I_{yy}\;\;I_{zz}]) \tag{11}$$

4) Base body conditions: Unlike a typical robot, the base body of a 4-CMG is in motion, so it has angular and linear velocities as well as non-zero accelerations: ^{0,i}ω_{0,i}, ^{0,i}ω̇_{0,i}, ^{0,i}v_{0,i} and ^{0,i}v̇_{0,i} are not equal to zero.
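The transforms of eq. (2)-(3) can be generated mechanically from the Denavit-Hartenberg parameters of Table 3. The following sketch uses the standard DH convention; the function names are illustrative, not from the paper:

```python
import numpy as np

def dh(alpha, a, d, theta):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -ca*st,  sa*st, a*ct],
                     [st,  ca*ct, -sa*ct, a*st],
                     [0.,  sa,     ca,    d   ],
                     [0.,  0.,     0.,    1.  ]])

def gyroscope_transforms(theta_i, theta_i2, L):
    """Gimbal and flywheel transforms for gyroscope i, from Table 3."""
    A01 = dh(np.pi/2, 0.0, L,   theta_i - np.pi/2)   # base body -> gimbal
    A12 = dh(0.0,     0.0, 0.0, theta_i2)            # gimbal -> flywheel
    return A01, A12
```

Substituting α = π/2, a = 0, d = L and θ = θ_i − π/2 into dh() reproduces exactly the matrix of eq. (2), and the second row of Table 3 reproduces eq. (3).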


2.2. Forward Kinematics

The following equations are derived after applying the forward kinematics of Table 1.

1) First link (gimbal axis): The first link has the following velocities and accelerations.

Angular velocity:
$$\boldsymbol{\omega}_{1,i}=\boldsymbol{\omega}_{0,i}+\mathbf{z}_{0,i}\dot{\theta}_i \tag{12}$$

Angular acceleration:
$$\dot{\boldsymbol{\omega}}_{1,i}=\dot{\boldsymbol{\omega}}_{0,i}+\mathbf{z}_{0,i}\ddot{\theta}_i+\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{0,i}\dot{\theta}_i \tag{13}$$

Linear velocity:
$$\mathbf{v}_{1,i}=\mathbf{v}_{0,i}+\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i} \tag{14}$$

because z_{0,i} and r_{1,i} are parallel.

Linear acceleration:
$$\dot{\mathbf{v}}_{1,i}=\dot{\mathbf{v}}_{0,i}+\dot{\boldsymbol{\omega}}_{0,i}\times\mathbf{r}_{1,i}+\boldsymbol{\omega}_{0,i}\times(\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i}) \tag{15}$$

Acceleration of the center of mass:
$$\dot{\mathbf{v}}_{c1,i}=\dot{\mathbf{v}}_{1,i} \tag{16}$$

2) Second link (flywheel axis): Performing the same steps as for the first link, the results are:

Angular velocity:
$$\boldsymbol{\omega}_{2,i}=\boldsymbol{\omega}_{0,i}+\mathbf{z}_{0,i}\dot{\theta}_i+\mathbf{z}_{1,i}\dot{\theta}_{2,i} \tag{17}$$

Linear velocity:
$$\mathbf{v}_{2,i}=\mathbf{v}_{0,i}+\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i} \tag{18}$$

Angular acceleration:
$$\dot{\boldsymbol{\omega}}_{2,i}=\dot{\boldsymbol{\omega}}_{0,i}+\mathbf{z}_{0,i}\ddot{\theta}_i+\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{0,i}\dot{\theta}_i+\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{1,i}\dot{\theta}_{2,i}+\dot{\theta}_i\dot{\theta}_{2,i}\,\mathbf{x}_{1,i} \tag{19}$$

Linear acceleration:
$$\dot{\mathbf{v}}_{2,i}=\dot{\mathbf{v}}_{0,i}+\dot{\boldsymbol{\omega}}_{0,i}\times\mathbf{r}_{1,i}+\boldsymbol{\omega}_{0,i}\times(\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i}) \tag{20}$$

Acceleration of the center of mass:
$$\dot{\mathbf{v}}_{c2,i}=\dot{\mathbf{v}}_{2,i} \tag{21}$$

2.3. Backward Dynamics

The following dynamic equations for one gyroscope are obtained after applying the equations in Table 2.

1) Second body: By using the backward dynamics, the torque required by the flywheel motor can be computed.

Inertial forces and moments:
$$\mathbf{f}_{2,i}^{*}=-m_2\dot{\mathbf{v}}_{0,i}-m_2\dot{\boldsymbol{\omega}}_{0,i}\times\mathbf{r}_{1,i}-m_2\boldsymbol{\omega}_{0,i}\times(\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i}) \tag{22}$$
$$\mathbf{n}_{2,i}^{*}=-\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_{0,i}-\boldsymbol{\omega}_{0,i}\times\mathbf{I}_{2,i}\boldsymbol{\omega}_{0,i}-\dot{\theta}_i I_{yy}\,\boldsymbol{\omega}_{0,i}\times\mathbf{y}_{1,i}-\dot{\theta}_{2,i} I_{zz}\,\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{1,i}-\ddot{\theta}_i I_{yy}\,\mathbf{y}_{1,i}-\dot{\theta}_i\dot{\theta}_{2,i} I_{zz}\,\mathbf{x}_{1,i} \tag{23}$$

where the following relations were used:
$$\mathbf{I}_{2,i}\mathbf{z}_{0,i}=I_{yy}\,\mathbf{y}_{1,i} \tag{24}$$
$$\mathbf{I}_{2,i}\mathbf{z}_{1,i}=I_{zz}\,\mathbf{z}_{1,i} \tag{25}$$
$$\mathbf{I}_{2,i}\mathbf{x}_{1,i}=I_{xx}\,\mathbf{x}_{1,i} \tag{26}$$

Forces and moments at the center of mass:
$$\mathbf{f}_{21,i}=-m_2\mathbf{g}+m_2\dot{\mathbf{v}}_{0,i}+m_2\dot{\boldsymbol{\omega}}_{0,i}\times\mathbf{r}_{1,i}+m_2\boldsymbol{\omega}_{0,i}\times(\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i}) \tag{27}$$
$$\mathbf{n}_{2,i}=\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_{0,i}+\boldsymbol{\omega}_{0,i}\times\mathbf{I}_{2,i}\boldsymbol{\omega}_{0,i}+\dot{\theta}_i I_{yy}\,\boldsymbol{\omega}_{0,i}\times\mathbf{y}_{1,i}+\dot{\theta}_{2,i} I_{zz}\,\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{1,i}+\ddot{\theta}_i I_{yy}\,\mathbf{y}_{1,i}+\dot{\theta}_i\dot{\theta}_{2,i} I_{zz}\,\mathbf{x}_{1,i} \tag{28}$$

Torque in the joint:
$$\tau_{2,i}=\mathbf{z}_{1,i}^{T}\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_{0,i}+\mathbf{z}_{1,i}^{T}\,\boldsymbol{\omega}_{0,i}\times\mathbf{I}_{2,i}\boldsymbol{\omega}_{0,i}+\dot{\theta}_i I_{yy}\,\mathbf{z}_{1,i}^{T}\,\boldsymbol{\omega}_{0,i}\times\mathbf{y}_{1,i} \tag{29}$$

2) First body: In this step the torque required by the gimbal motor and the reaction moments and forces on the base body are computed.

Inertial forces and moments:
$$\mathbf{f}_1^{*}=0 \tag{30}$$
$$\mathbf{n}_1^{*}=0 \tag{31}$$

Forces and moments at the center of mass:
$$\mathbf{f}_{10,i}=-m_2\mathbf{g}+m_2\dot{\mathbf{v}}_{0,i}+m_2\dot{\boldsymbol{\omega}}_{0,i}\times\mathbf{r}_{1,i}+m_2\boldsymbol{\omega}_{0,i}\times(\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i}) \tag{32}$$
$$\mathbf{n}_{10,i}=\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_{0,i}+\boldsymbol{\omega}_{0,i}\times\mathbf{I}_{2,i}\boldsymbol{\omega}_{0,i}+\dot{\theta}_i I_{yy}\,\boldsymbol{\omega}_{0,i}\times\mathbf{y}_{1,i}+\dot{\theta}_{2,i} I_{zz}\,\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{1,i}+\ddot{\theta}_i I_{yy}\,\mathbf{y}_{1,i}+\dot{\theta}_i\dot{\theta}_{2,i} I_{zz}\,\mathbf{x}_{1,i}-m_2\mathbf{r}_{1,i}\times\mathbf{g}+m_2\mathbf{r}_{1,i}\times\dot{\mathbf{v}}_{0,i}+m_2\mathbf{r}_{1,i}\times(\dot{\boldsymbol{\omega}}_{0,i}\times\mathbf{r}_{1,i})+m_2\mathbf{r}_{1,i}\times(\boldsymbol{\omega}_{0,i}\times(\boldsymbol{\omega}_{0,i}\times\mathbf{r}_{1,i})) \tag{33}$$

Torque in the joint:
$$\tau_{1,i}=\mathbf{z}_{0,i}^{T}\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_{0,i}+\mathbf{z}_{0,i}^{T}\,\boldsymbol{\omega}_{0,i}\times\mathbf{I}_{2,i}\boldsymbol{\omega}_{0,i}+\dot{\theta}_{2,i} I_{zz}\,\mathbf{z}_{0,i}^{T}\,\boldsymbol{\omega}_{0,i}\times\mathbf{z}_{1,i}+\ddot{\theta}_i I_{yy} \tag{34}$$


2.4. Dynamic Equation for the Base Body

The total force and moment exerted on the base body are the sums of the forces and torques of each gyroscope, eq. (32) and (33):

$$\mathbf{F}_{ext}+m_0\mathbf{g}=m_0\dot{\mathbf{v}}_b+\sum_{i=1}^{4}\mathbf{f}_{10,i} \tag{35}$$
$$\boldsymbol{\tau}_{ext}=\mathbf{I}_b\dot{\boldsymbol{\omega}}_b+\boldsymbol{\omega}_b\times\mathbf{I}_b\boldsymbol{\omega}_b+\sum_{i=1}^{4}(\mathbf{n}_{10,i}+\mathbf{r}_i\times\mathbf{f}_{10,i}) \tag{36}$$

If r_i, v_{0,i}, v̇_{0,i}, ω_{0,i} and ω̇_{0,i} are defined by the following expressions,
$$\mathbf{r}_1=[0,\;r,\;0]^{T} \tag{37}$$
$$\mathbf{r}_2=[-r,\;0,\;0]^{T} \tag{38}$$
$$\mathbf{r}_3=[0,\;-r,\;0]^{T} \tag{39}$$
$$\mathbf{r}_4=[r,\;0,\;0]^{T} \tag{40}$$
$$\boldsymbol{\omega}_0=\boldsymbol{\omega}_b \tag{41}$$
$$\dot{\boldsymbol{\omega}}_0=\dot{\boldsymbol{\omega}}_b \tag{42}$$
$$\mathbf{v}_{0,i}=\mathbf{v}_b+\boldsymbol{\omega}_b\times\mathbf{r}_i \tag{43}$$
$$\dot{\mathbf{v}}_{0,i}=\dot{\mathbf{v}}_b+\dot{\boldsymbol{\omega}}_b\times\mathbf{r}_i+\boldsymbol{\omega}_b\times(\boldsymbol{\omega}_b\times\mathbf{r}_i) \tag{44}$$

then eq. (32)-(33) are expressed in terms of the base body variables:
$$\mathbf{f}_{10,i}=-m_2\mathbf{g}+m_2\dot{\mathbf{v}}_b+m_2\dot{\boldsymbol{\omega}}_b\times\mathbf{r}_i+m_2\boldsymbol{\omega}_b\times(\boldsymbol{\omega}_b\times\mathbf{r}_i)+m_2\dot{\boldsymbol{\omega}}_b\times\mathbf{r}_{1,i}+m_2\boldsymbol{\omega}_b\times(\boldsymbol{\omega}_b\times\mathbf{r}_{1,i}) \tag{45}$$
$$\mathbf{n}_{10,i}=\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_b+\boldsymbol{\omega}_b\times\mathbf{I}_{2,i}\boldsymbol{\omega}_b+\dot{\theta}_i I_{yy}\,\boldsymbol{\omega}_b\times\mathbf{y}_{1,i}+\dot{\theta}_{2,i} I_{zz}\,\boldsymbol{\omega}_b\times\mathbf{z}_{1,i}+\ddot{\theta}_i I_{yy}\,\mathbf{y}_{1,i}+\dot{\theta}_i\dot{\theta}_{2,i} I_{zz}\,\mathbf{x}_{1,i}-m_2\mathbf{r}_{1,i}\times\mathbf{g}+m_2\mathbf{r}_{1,i}\times\dot{\mathbf{v}}_b+m_2\mathbf{r}_{1,i}\times(\dot{\boldsymbol{\omega}}_b\times\mathbf{r}_i)+m_2\mathbf{r}_{1,i}\times(\boldsymbol{\omega}_b\times(\boldsymbol{\omega}_b\times\mathbf{r}_i))+m_2\mathbf{r}_{1,i}\times(\dot{\boldsymbol{\omega}}_b\times\mathbf{r}_{1,i})+m_2\mathbf{r}_{1,i}\times(\boldsymbol{\omega}_b\times(\boldsymbol{\omega}_b\times\mathbf{r}_{1,i})) \tag{46}$$

The equation of forces is obtained after replacing (45) in (35). This preliminary result is simplified if the relationship for r_i is used in conjunction with the fact that, for a symmetrical 4-CMG, the vectors r_{1,i} satisfy
$$\mathbf{r}_{1,1}=-\mathbf{r}_{1,3} \tag{47}$$
$$\mathbf{r}_{1,2}=-\mathbf{r}_{1,4} \tag{48}$$

The final equation is
$$\mathbf{F}_{ext}=(m_0+4m_2)\dot{\mathbf{v}}_b-(m_0+4m_2)\mathbf{g} \tag{49}$$

Before computing the dynamic equation for the moments, the expression n_{10,i} + r_i × f_{10,i} is simplified by using the following relations:
$$\mathbf{r}_i'=\mathbf{r}_i+\mathbf{r}_{1,i} \tag{50}$$
$$\sum_{i=1}^{4} m_2\,\mathbf{r}_i'\times\mathbf{g}=0 \tag{51}$$
$$\sum_{i=1}^{4} m_2\,\mathbf{r}_i'\times\dot{\mathbf{v}}_b=0 \tag{52}$$
$$\mathbf{a}\times(\mathbf{b}\times(\mathbf{b}\times\mathbf{a}))=-\mathbf{b}\times(\mathbf{a}\times(\mathbf{a}\times\mathbf{b})) \tag{53}$$

The obtained result is
$$\sum_{i=1}^{4}(\mathbf{n}_{10,i}+\mathbf{r}_i\times\mathbf{f}_{10,i})=\sum_{i=1}^{4}\Big(\mathbf{I}_{2,i}\dot{\boldsymbol{\omega}}_b-m_2\,\mathbf{r}_i'\times(\mathbf{r}_i'\times\dot{\boldsymbol{\omega}}_b)+\boldsymbol{\omega}_b\times\mathbf{I}_{2,i}\boldsymbol{\omega}_b-m_2\,\boldsymbol{\omega}_b\times(\mathbf{r}_i'\times(\mathbf{r}_i'\times\boldsymbol{\omega}_b))+\dot{\theta}_i I_{yy}\,\boldsymbol{\omega}_b\times\mathbf{y}_{1,i}+\dot{\theta}_{2,i} I_{zz}\,\boldsymbol{\omega}_b\times\mathbf{z}_{1,i}+\ddot{\theta}_i I_{yy}\,\mathbf{y}_{1,i}+\dot{\theta}_i\dot{\theta}_{2,i} I_{zz}\,\mathbf{x}_{1,i}\Big) \tag{54}$$

It is common practice to represent the torque equation in the base body coordinate frame. In this frame, the following matrices are defined:
$$^{b}\mathbf{X}_1=[\,^{b}\mathbf{x}_{1,1}\;\ldots\;^{b}\mathbf{x}_{1,4}\,] \tag{55}$$
$$^{b}\mathbf{Y}_1=[\,^{b}\mathbf{y}_{1,1}\;\ldots\;^{b}\mathbf{y}_{1,4}\,] \tag{56}$$
$$\dot{\boldsymbol{\theta}}=[\dot{\theta}_1\;\ldots\;\dot{\theta}_4]^{T} \tag{57}$$
$$\ddot{\boldsymbol{\theta}}=[\ddot{\theta}_1\;\ldots\;\ddot{\theta}_4]^{T} \tag{58}$$
$$^{b}\mathbf{I}_t={}^{b}\mathbf{I}_b+\sum_{i=1}^{4}\big(^{b}\mathbf{R}_i\,^{i}\mathbf{I}_i\,^{b}\mathbf{R}_i^{T}-m_2\,^{b}\tilde{\mathbf{r}}_i'\,^{b}\tilde{\mathbf{r}}_i'\big) \tag{59}$$
$$^{b}\mathbf{D}=\sum_{i=1}^{4}\big(^{b}\mathbf{R}_i\,^{i}\mathbf{I}_i\,^{b}\mathbf{R}_i^{T}-m_2\,^{b}\tilde{\mathbf{r}}_i'\,^{b}\tilde{\mathbf{r}}_i'\big) \tag{60}$$
$$^{b}\mathbf{I}_t={}^{b}\mathbf{I}_b+{}^{b}\mathbf{D} \tag{61}$$

where ã is the matrix equivalent of the cross product a ×,
$$\tilde{\mathbf{a}}=\begin{bmatrix}0 & -a_3 & a_2\\ a_3 & 0 & -a_1\\ -a_2 & a_1 & 0\end{bmatrix} \tag{62}$$

Therefore, the torque equation over the base body is expressed as
$$^{b}\boldsymbol{\tau}_{ext}={}^{b}\mathbf{I}_t\,^{b}\dot{\boldsymbol{\omega}}_b+{}^{b}\tilde{\boldsymbol{\omega}}_b\,^{b}\mathbf{I}_t\,^{b}\boldsymbol{\omega}_b+I_{yy}\,^{b}\mathbf{Y}_1\ddot{\boldsymbol{\theta}}+I_{yy}\,^{b}\tilde{\boldsymbol{\omega}}_b\,^{b}\mathbf{Y}_1\dot{\boldsymbol{\theta}}+\dot{\theta}_2 I_{zz}\,^{b}\mathbf{X}_1\dot{\boldsymbol{\theta}}+\dot{\theta}_2 I_{zz}\,^{b}\tilde{\boldsymbol{\omega}}_b\sum_{i=1}^{4}{}^{b}\mathbf{z}_{1,i} \tag{63}$$

where ^bX_1, ^bY_1, ^bz_{1,i} and ^bR_i are:
$$^{b}\mathbf{X}_1=\begin{bmatrix} s\theta_1 & c\beta c\theta_2 & -s\theta_3 & -c\beta c\theta_4\\ -c\beta c\theta_1 & s\theta_2 & c\beta c\theta_3 & -s\theta_4\\ s\beta c\theta_1 & s\beta c\theta_2 & s\beta c\theta_3 & s\beta c\theta_4 \end{bmatrix} \tag{64}$$
$$^{b}\mathbf{Y}_1=\begin{bmatrix} 0 & -s\beta & 0 & s\beta\\ s\beta & 0 & -s\beta & 0\\ c\beta & c\beta & c\beta & c\beta \end{bmatrix} \tag{65}$$
$$^{b}\mathbf{z}_{1,1}=[-c\theta_1\;\;-c\beta s\theta_1\;\;s\beta s\theta_1]^{T} \tag{66}$$
$$^{b}\mathbf{z}_{1,2}=[c\beta s\theta_2\;\;-c\theta_2\;\;s\beta s\theta_2]^{T} \tag{67}$$
$$^{b}\mathbf{z}_{1,3}=[c\theta_3\;\;c\beta s\theta_3\;\;s\beta s\theta_3]^{T} \tag{68}$$
$$^{b}\mathbf{z}_{1,4}=[-c\beta s\theta_4\;\;c\theta_4\;\;s\beta s\theta_4]^{T} \tag{69}$$
$$^{b}\mathbf{R}_1=\begin{bmatrix} s\theta_1 & 0 & -c\theta_1\\ -c\beta c\theta_1 & s\beta & -c\beta s\theta_1\\ s\beta c\theta_1 & c\beta & s\beta s\theta_1 \end{bmatrix} \tag{70}$$
$$^{b}\mathbf{R}_2=\begin{bmatrix} c\beta c\theta_2 & -s\beta & c\beta s\theta_2\\ s\theta_2 & 0 & -c\theta_2\\ s\beta c\theta_2 & c\beta & s\beta s\theta_2 \end{bmatrix} \tag{71}$$
$$^{b}\mathbf{R}_3=\begin{bmatrix} -s\theta_3 & 0 & c\theta_3\\ c\beta c\theta_3 & -s\beta & c\beta s\theta_3\\ s\beta c\theta_3 & c\beta & s\beta s\theta_3 \end{bmatrix} \tag{72}$$
$$^{b}\mathbf{R}_4=\begin{bmatrix} -c\beta c\theta_4 & s\beta & -c\beta s\theta_4\\ -s\theta_4 & 0 & c\theta_4\\ s\beta c\theta_4 & c\beta & s\beta s\theta_4 \end{bmatrix} \tag{73}$$
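As a structural sanity check on eq. (62)-(65), the following sketch builds the skew-symmetric operator and the ^bX_1 and ^bY_1 matrices for a given gimbal configuration; the function names and any chosen values of θ and β are illustrative:

```python
import numpy as np

def skew(a):
    """Matrix form of the cross product, eq. (62): skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def x1_y1(theta, beta):
    """Assemble the 3x4 matrices of eq. (64)-(65) for gimbal angles
    theta (length 4) and pyramid skew angle beta."""
    s, c = np.sin, np.cos
    sb, cb = s(beta), c(beta)
    t1, t2, t3, t4 = theta
    X1 = np.array([[ s(t1),     cb*c(t2),  -s(t3),    -cb*c(t4)],
                   [-cb*c(t1),  s(t2),      cb*c(t3), -s(t4)   ],
                   [ sb*c(t1),  sb*c(t2),   sb*c(t3),  sb*c(t4)]])
    Y1 = np.array([[0.0, -sb, 0.0,  sb],
                   [ sb, 0.0, -sb, 0.0],
                   [ cb,  cb,  cb,  cb]])
    return X1, Y1

# quick consistency check of the skew operator:
a, b = np.array([1., 2., 3.]), np.array([4., 5., 6.])
assert np.allclose(skew(a) @ b, np.cross(a, b))
```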

Finally, the torque equations (34) and (29) can be rearranged and expressed in base body coordinates as

$$\tau_{2,i}={}^{b}\mathbf{z}_{1,i}^{T}\,{}^{b}\mathbf{R}_i\,{}^{i}\mathbf{I}_i\,{}^{b}\mathbf{R}_i^{T}\,{}^{b}\dot{\boldsymbol{\omega}}_b+{}^{b}\mathbf{z}_{1,i}^{T}\,{}^{b}\tilde{\boldsymbol{\omega}}_b\,{}^{b}\mathbf{R}_i\,{}^{i}\mathbf{I}_i\,{}^{b}\mathbf{R}_i^{T}\,{}^{b}\boldsymbol{\omega}_b+\dot{\theta}_i I_{yy}\,{}^{b}\mathbf{z}_{1,i}^{T}\,{}^{b}\tilde{\boldsymbol{\omega}}_b\,{}^{b}\mathbf{y}_{1,i} \tag{74}$$
$$\tau_{1,i}={}^{b}\mathbf{y}_{1,i}^{T}\,{}^{b}\mathbf{R}_i\,{}^{i}\mathbf{I}_i\,{}^{b}\mathbf{R}_i^{T}\,{}^{b}\dot{\boldsymbol{\omega}}_b+{}^{b}\mathbf{y}_{1,i}^{T}\,{}^{b}\tilde{\boldsymbol{\omega}}_b\,{}^{b}\mathbf{R}_i\,{}^{i}\mathbf{I}_i\,{}^{b}\mathbf{R}_i^{T}\,{}^{b}\boldsymbol{\omega}_b+\dot{\theta}_{2,i} I_{zz}\,{}^{b}\mathbf{y}_{1,i}^{T}\,{}^{b}\tilde{\boldsymbol{\omega}}_b\,{}^{b}\mathbf{z}_{1,i}+\ddot{\theta}_i I_{yy} \tag{75}$$

These equations are useful for selecting the motors for each actuated joint [13].

3. Numerical example

Equations (74) and (75) are useful for computing the motor requirements, while equation (63) is used to create a steering control law for the 4-CMG, as is done in [14]. In the case of the flywheel motor, eq. (74), traditionally only the last term is taken into account to compute the required torque, but a numerical simulation can show how the proposed equations improve on the traditional approach.

Let us assume a flywheel with an inertia of 0.16 in the x and y axes and a value of 0.308 in the z axis, rotating at a speed of 10000 rpm. If the body has an angular velocity and acceleration, it is clear from eq. (74) that the first two terms contribute to the total torque. Fig. 5 illustrates the results obtained for a unit angular velocity and acceleration. The continuous line represents the torque computed with eq. (74), while the dotted line is the torque computed using the traditional approach.

Figure 5. Effect of body motion on flywheel torque.
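A minimal numerical sketch of the comparison behind Fig. 5, under assumed values: an illustrative gyroscope pose where ^bR_i is the identity, a unit gimbal rate, and unit base-body angular velocity and acceleration. It evaluates eq. (74) in full and with only its last term, as in the traditional approach; it is an illustration of the magnitudes involved, not the authors' simulation code:

```python
import numpy as np

# Flywheel inertia from the example: Ixx = Iyy = 0.16, Izz = 0.308 (kg*m^2).
I2 = np.diag([0.16, 0.16, 0.308])
w_b  = np.array([1.0, 1.0, 1.0])   # unit base-body angular velocity (assumed)
dw_b = np.array([1.0, 1.0, 1.0])   # unit base-body angular acceleration (assumed)
y1 = np.array([0.0, 1.0, 0.0])     # gimbal axis, illustrative pose (R_i = identity)
z1 = np.array([0.0, 0.0, 1.0])     # flywheel axis in the same pose
th_dot = 1.0                       # gimbal rate, rad/s (assumed)

# eq. (74) with R_i = identity:
body_terms = z1 @ (I2 @ dw_b) + z1 @ np.cross(w_b, I2 @ w_b)
last_term  = th_dot * 0.16 * (z1 @ np.cross(w_b, y1))   # Iyy = 0.16

tau_full        = body_terms + last_term   # proposed model, eq. (74)
tau_traditional = last_term                # the traditional estimate keeps only this
print(tau_full, tau_traditional)
```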

4. Conclusions

A new dynamic model for a 4-CMG was derived using the Newton-Euler algorithm, a methodology commonly used in Robotics. Although some simplifications were made, the dynamic model is useful for studying the behavior of a 4-CMG. The obtained dynamic model can also be used for computing a control law for a 4-CMG. Torque equations for the rotational joints were also found. A simulation was performed to illustrate the benefit of the proposed equations. These equations are also useful for computing and helping to select the proper motors to drive the joints.

References

[1] Kuhns, M. and Rodriguez, A. Singularity avoidance control laws for a multiple CMG spacecraft attitude control system. Proceedings of the American Control Conference (ACC), pp. 2892-2893, 1994.
[2] Kuhns, M. and Rodriguez, A. A preferred trajectory tracking steering law for spacecraft with redundant CMGs. Proceedings of the American Control Conference (ACC), pp. 3111-3115, 1995.
[3] Oh, S. and Vadali, S. R. Feedback control and steering laws for spacecraft using single gimbal control moment gyros. Astronautical Sciences, 39 (2), pp. 183-203, 1991.
[4] Thornton, B., Ura, T., Nose, Y. and Turnock, S. Internal actuation of underwater robots using control moment gyros. Proceedings of Oceans, pp. 591-598, 2005.
[5] Thornton, B., Ura, T. and Nose, Y. Wind-up AUVs: Combined energy storage and attitude control using control moment gyros. Proceedings of Oceans, pp. 1-9, 2007.
[6] Lappas, V. J., Steyn, W. H. and Underwood, C. I. Torque amplification of control moment gyros. IEEE Electronics Letters, 38 (15), pp. 837-839, 2002.
[7] Tekinalp, O., Elmas, T. and Yavrucuk, I. Gimbal angle restricted control moment gyroscope clusters. Proceedings of the 4th International Conference on Recent Advances in Space Technologies (RAST), pp. 585-590, 2009.
[8] Ford, K. A. and Hall, C. D. Singular direction avoidance steering for control-moment gyros. Journal of Guidance, Control and Dynamics, 23 (4), pp. 648-656, 2000.
[9] Bedrossian, N. S., Paradiso, J., Bergmann, E. V. and Rowell, D. Redundant single gimbal control moment gyroscope singularity analysis. Journal of Guidance, Control and Dynamics, 13 (6), pp. 1096-1101, 1990.
[10] Toz, M. and Kucuk, S. A comparative study for computational cost of fundamental robot manipulators. Proceedings of the International Conference on Industrial Technology (ICIT), pp. 289-293, 2011.
[11] Negrean, I., Schonstein, C., Negrean, D. C., Negrean, A. S. and Duca, A. V. Formulations in robotics based on variational principles. Proceedings of the International Conference on Automation, Quality and Testing, Robotics (AQTR), pp. 1-6, 2010.
[12] Tsai, L. W. Robot Analysis: The Mechanics of Serial and Parallel Manipulators. John Wiley & Sons, Inc., 1999.
[13] Jaramillo, A., Franco, E. and Guasch, L. Estimación de parámetros invariantes para un motor de inducción. DYNA, 78 (169), pp. 88-94, 2011.
[14] Yime, E., Quintero, J., Saltaren, R. and Aracil, R. A new approach for avoiding internal singularities in CMG with pyramidal shape using sliding control. Proceedings of the European Control Conference (ECC), pp. 125-132, 2009.

E. Yime-Rodríguez is a Mechanical Engineer from Universidad del Norte, with a PhD in Robotics from Universidad Politécnica de Madrid, who currently works for Universidad Tecnológica de Bolívar, located in Cartagena, Colombia. His major research areas are parallel and serial robotics, with emphasis on kinematics, dynamics and nonlinear control. Recently he has gained significant experience in mechanical and mechatronic design applied to robotics. He can be contacted at eyime@unitecnologica.edu.co

C. A. Peña-Cortés has been a Professor at Universidad de Pamplona in the Department of Mechatronics, Mechanics and Industrial Engineering since 2004. He received an Electromechanical Engineering degree in 2001 from Universidad Pedagógica y Tecnológica de Colombia, an MSc degree in Electronics and Computer Engineering in 2003 from Universidad de los Andes, and his PhD degree in Automation and Robotics from Universidad Politécnica de Madrid in 2009. His research is focused on service robots, telerobotics and parallel robots.

W. M. Rojas-Contreras is a full professor at Universidad de Pamplona. He became a Systems Engineer at Universidad Francisco de Paula Santander. He received a specialist degree from Universidad Industrial de Santander and obtained a master's degree in Computer Science from Autonomous University. He is a candidate for a PhD in Applied Sciences at Andes University. He is currently Dean of the Engineering and Architecture Faculty and head of the Computer Science Research Group. His research is focused on software engineering and project management software.


DYNA http://dyna.medellin.unal.edu.co/

Design and construction of low shear laminar transport system for the production of nixtamal corn dough
Diseño y construcción de un sistema de transporte laminar de bajo cizallamiento para la producción de masa de maíz nixtamalizada

Jorge Alberto Ortega-Moody a, Eduardo Morales-Sánchez b, Evelina Berenice Mercado-Pedraza c, José Gabriel Ríos-Moreno d & Mario Trejo-Perea d

a Instituto Tecnológico Superior de San Andrés Tuxtla, Veracruz, México. jorgemoody@gmail.com
b CICATA-IPN, Querétaro, México. emoraless@ipn.mx
c Universidad Autónoma de Coahuila, Coahuila, México. bere_3181@hotmail.com
d Universidad Autónoma de Querétaro, Querétaro, México. riosg@uaq.mx
d Universidad Autónoma de Querétaro, Querétaro, México. mtp@uaq.mx

Received: October 10th, 2012. Received in revised form: October 8th, 2013. Accepted: October 25th, 2013.

Abstract The tortilla is obtained by the traditional process of nixtamalization. This process has two disadvantages: production of liquid waste and it is in batches. Extrusion has resolved the problem of liquid waste, however, extrusion as of yet, has not been able to replace the traditional process of nixtamalization. This is because corn dough is a pseudoplastic fluid, which changes its viscosity in the presence of high shear velocities that occur inside the extruder. With this in mind, the following investigation proposes the mechanical design and construction of a low shear laminar transport system (LSLTS) for the production of corn dough for tortillas. This system consists of two thermally isolated stages: transport and cooking stages. The results showed that the prototype obtained meets the characteristics of homogenous cooking, low shear, absence of liquid waste and the product meets the specifications for nixtamal dough. Keywords: Mechanical design, nixtamal, low shear, tortilla. Resumen La tortilla se obtiene por medio del proceso tradicional de Nixtamalización. Este proceso tiene dos inconvenientes: producción de efluentes contaminantes y ser discontinuo. La extrusión, ha resuelto la problemática de la generación de efluentes contaminantes, sin embargo, no ha podido sustituir al proceso tradicional de nixtamalización. Esto se debe a que la masa de maíz es un fluido pseudoplástico, el cual cambia su viscosidad en la presencia de velocidades de cizalla que ocurren dentro del extrusor. Con base a lo anterior, el presente trabajo propone el diseño y construcción de un Sistema de Transporte Laminar de Bajo Cizallamiento (STLBC) para la producción de masa para tortilla. Este sistema consiste de dos etapas aisladas térmicamente: Etapa de transporte y de cocimiento. Los resultados mostraron que el prototipo obtenido cumple con las características de cocimiento homogéneo, bajo cizallamiento, no produce efluentes contaminantes, y la masa obtenida, cumple con especificaciones de masa nixtamalizada. Palabras clave: Diseño mecánico, nixtamal, bajo cizallamiento, tortilla.

1. Introduction In Mexico, the tortilla is the most consumed corn product (made in mills and with instant flours), however, in the United States the principal use of corn is the elaboration of snacks. Nixtamalization is the process used to produce the dough necessary for making tortillas. This traditional process is an environmental problem because it generates great quantities of liquid waste due to the fact that the contaminated water, product of the process, is channeled into the cities’ sewage system, thus becoming a serious ecological problem [1-3].

Different investigators have developed alternative methods in an effort to solve the problem of liquid waste; for example, Sánchez Sinencio and González Hernández [4] have reported processes and equipment in order to obtain nixtamal corn-dough based on extrusion and infrared cooking, Figueroa [5-7] has developed ecological processes to obtain nixtamal corn-dough, Martínez [8] built a corn flour plant using a method based on vapor reactors and another method of cooking using radio frequencies (RF), Vaqueiro M. C. [9] patented a process of nixtamalization based on the separation of the corn parts. One of the most viable alternatives to solve the problems

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 48-55 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014.

cooking the material at the same time that it’s being transported and thus avoiding the presence of shear when the material is undergoing physicochemical changes. 2. The transport and cooking stages must be thermally isolated so that the cooking zone is confined to the cooker. 3. The design of the cooker must be optimum in length and cooking time. 4. The design of the fluid laminar transport system must diminish shear velocities.

of the traditional process is the use of extrusion. The investigators that have applied this method for the elaboration of tortillas are Duran de BazĂşa [10], MartĂ­nez Flores [11-12], GĂłmez Aldapa [13], San MartĂ­n MartĂ­nez [14], Reyes Moreno [15], MilĂĄn-carrillo [16] and GutiĂŠrrez Dorado [17] who developed an extrusion method for the production of corn-dough. One disadvantage of the process of extrusion is the shear velocities that are generated inside the extruder, which results in a change of viscosity in the material. These changes make the corn-dough obtained difficult to process in the tortilla machines. For this reason, extrusion has not been able to substitute the traditional process in the production of nixtamal corn-dough. However, the extrusion method has many great advantages: it is ecological because it only uses the necessary amount of water for the process, also it is energy efficient compared to gas and it significantly reduces the process time. For practical purposes, we have decided to use the name raw material to refer to the material produced by the combination of corn flour, water and calcium hydroxide. This raw material shows physicochemical changes when it cooks and reaches gelatinization becoming a NonNewtonian fluid. This type of fluid changes its viscosity in the presence of shear stresses [18]. For this reason the extrusion has not been able to substitute the traditional method in its entirety because in the traditional method the material is at rest and there is no shear. The change of viscosity is reflected in the quality of the final product [19] such as tortillas and snacks. In this context, this investigation proposes the development of a Low Shear Laminar Transport System (LSLTS) for the elaboration of instant corn flour for tortillas and/or corn-dough for snacks. The term low shear refers to the fact that the cooked corn dough does not undergo mechanical stress [18]. The basic proposal is to diminish the shear by separating the cooking stage from the transport stage where shear velocities are generated due to the rotative elements of the pump. In order to diminish the shear it is also necessary that the corn dough flows in a laminar manner, in order to obtain low shear the geometry of the cooker is determinate.

No

Conceptual Design

Result evaluation

if results meet the requirements

Yes

Design Specifications Manufacturing

CAD

Figure 1. Design Methodology.

2.2. Design 2.2.1. Cooker design In order to carry out the modeling of the design it is necessary to determine the specification of the machine. Also, it is important to validate the design and cook the material that will be processed in order to determine the mode and ranges of operation. Table 1 shows the rheological characteristics of the material to be processed [20] (Corn flour) and Table 2 shows the gelatinization temperatures of the corn flour [18]. The heat transference in the cooker is regulated by the thermodynamic constants of the material listed in Table 3.

2. Methodology and Design

Table 1. Rheological constants of food for the model of the law of power. Material Moisture k (Pa*s) n Temp. ( 0đ??ś ) (%) Corn grains 13 177 0.45-0.55 2.8x104 13 193 0.45-0.55 1.7x104 13 207 0.45-0.55 0.76x104 Source: Adapted from [20]

2.1. Methodology The methodology employed in the design can be observed in Fig. 1, which begins with a series of process specifications in order to carry out the conceptual design of the laminar cooker and transporter. Once the geometry is found that meets the initial specifications, a detailed drawing for fabrication proceeds. Finally, the simulated, conceptual and real data is validated. The specifications of the LSLTS for the elaboration of nixtamal corn flours or corn dough must have the following characteristics: 1. Transport and cooking stages must be separate to avoid

Table 2. Ranges of gelatinization. Sample Temp. of gelatinization ( 0C) Corn 70.7 Âą 0.7 to 78.1 Âą 0.7 dough Source: Adapted from [18]

49

Enthalpy of gelatinization (J/g ) 14.7 Âą 1.0


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014. Table 3. Thermodynamic constants of corn flour. đ?‘Š Product k exp. Cp ( 0 ) đ?‘š đ??ś đ?‘Š ( 0 ) đ?‘š đ??ś Corn 0.40 2.64 dough Source: Adapted from [21]

Îą

(đ?‘š2 ) đ?‘

1.4 x10−1

The eq. (1) describes the transference of heat by ∆đ?‘‡ convection in a fluid through a duct ( đ?‘œ )with a certain

k calc đ?‘Š .( 0 ) đ?‘š đ??ś 0.42

∆đ?‘‡đ?‘–

mass flow (m) [22]. Due to the fact that the constant of convection h varies according to the geometry of the cooker, it is necessary to use a correlation with a Nusselt number (đ?‘ đ?‘˘đ??ˇ ). Because the nixtamal corn flour is a pseudoplastic fluid, Nusselt is proposed for non-Newtonian fluids with variable viscosity as demonstrated in eq. (2) [22]:

According to the previous information it can be concluded that the temperature of the material while exiting the cooker should be within a range of 70.70 C to 78.10 C. Furthermore, the temperature of the walls of the cooker should not go beyond 1000 C because the water content of the material can evaporate and form vapor bubbles inside the cooker. The proposed yield for the cooker design is 1.5 Kg/h as demonstrated in Table 4. Table 4. Parameters of the cooker design. Parameter Mass Flow Internal surface temperature of the cooker Material temperature exiting the cooker Material temperature entering the cooker

∆đ?‘‡đ?‘œ ∆đ?‘‡đ?‘–

=

đ?‘‡đ?‘ −đ?‘‡đ?‘š,đ?‘œ đ?‘‡đ?‘ −đ?‘‡đ?‘š,đ?‘–

đ?‘ đ?‘˘đ??ˇ =

â„Žđ??ˇ đ?‘˜

= đ?‘’đ?‘Ľđ?‘?(−

= 1.86 (

P

ℎ′ )

đ?”Ş đ??śđ?‘?

(1)

đ?‘ đ?‘…đ?‘’đ?‘Ś đ?‘ đ?‘ƒđ?‘&#x;đ?‘Ž 1/3

)

đ??ż/đ??ˇ

Âľ 1/4

( ) Âľđ?‘

(2)

Where đ?‘ƒ is the duct perimeter, Cp is the specific head, D is the duct diameter, L is the duct length, đ?‘˜ is the thermal conductivity coefficient, Âľ is the viscosity, đ?‘ đ?‘…đ?‘’đ?‘Ś is the Reynols number and đ?‘ đ?‘ƒđ?‘&#x;đ?‘Ž is the Prandtl number. Isolating the constant of convention h in eq. (1) and eq. (2). The following equations are obtained respectively:

Value 1.5 đ??žđ?‘”/â„Ž 79 0 đ??ś 76.5 0 đ??ś 30 0 đ??ś

đ??żđ?‘›(

The geometry used for the cooker is rectangular as demonstrated in Fig. 2. This configuration is ideal for the process due to the fact that the entrance and exit are identical and there are no changes in pressure or velocity internally. Another benefit of this configuration is that the material is obtained in laminar form, which reduces shear.

â„Ž=

∆đ?‘‡đ?‘œ )đ?‘š đ??śđ?‘? ∆đ?‘‡đ?‘–

(3)

đ?‘ƒđ??ż đ?‘˜

â„Ž = 1.86 (

đ?‘ đ?‘…đ?‘’đ?‘Ś đ?‘ đ?‘ƒđ?‘&#x;đ?‘Ž 1/3

đ??ˇ

đ??ż/đ??ˇ

)

Âľ 1/4

( )

(4)

Âľđ?‘

Combining eq. (3) and eq. (4), the length of the cooker is obtained: 3/2

∆T

đ??żđ?‘›( ∆To )đ?‘š đ??śđ?‘? đ??ˇ2/3 i

đ??ż= (

1/3 Âľ 1/4 ( ) Âľđ?‘

)

(5)

1.86 đ?‘˜ đ?‘ƒ (đ?‘ đ?‘…đ?‘’đ?‘Ś đ?‘ đ?‘ƒđ?‘&#x;đ?‘Ž )

The eq. (5) is for cylindrical ducts. In order to obtain the equivalent in rectangular the diameter D is substituted for the hydraulic diameter in a rectangular duct: Figure 2. Dimensions of the rectangular cooker.

đ??ˇâ„Ž =

In order to determine the dimensions, it is necessary to characterize the cooker using thermodynamic equations for the transference of heat with a forced internal flow as shown in Fig. 3.

2 (đ?‘Ľđ?‘Ś)

(6)

đ?‘Ľ+đ?‘Ś

Substituting eq. (6) in eq. (5), the result is: ∆T

đ??ż= (

3/2

2 (đ?‘Ľđ?‘Ś) 2/3

đ??żđ?‘›( ∆To )đ?‘š đ??śđ?‘? ( đ?‘Ľ+đ?‘Ś ) i

1/3 Âľ 1/4 ( ) Âľđ?‘

)

(7)

1.86 đ?‘˜ 2đ?‘Ś (đ?‘ đ?‘…đ?‘’đ?‘Ś đ?‘ đ?‘ƒđ?‘&#x;đ?‘Ž )

The dimension đ?‘Ľ = 0.01 đ?‘š is therefore proposed and the dimension y and the length are estimated with the eq. (5). đ?‘Ľ = 0.01 đ?‘š. đ?‘Ś = 0.1 đ?‘š. đ??ż = 0.2 đ?‘š. 2.2.2. Assisted Computer Design for the Cooker Figure 3. Transference of heat by convection.

The mechanical design assisted by computer or CAD 50


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014.

đ??ˇđ?‘–đ?‘ đ?‘?đ?‘™đ?‘Žđ?‘?đ?‘’đ?‘šđ?‘’đ?‘›đ?‘Ą =

2.07đ?‘Ľ10−5 đ?‘š3 /đ?‘šđ?‘–đ?‘› 15 đ?‘&#x;đ?‘’đ?‘Ł/đ?‘šđ?‘–đ?‘›

đ??ˇđ?‘–đ?‘ đ?‘?đ?‘™đ?‘Žđ?‘?đ?‘’đ?‘šđ?‘’đ?‘›đ?‘Ą = 1.38đ?‘Ľ10−6 đ?‘š3 /đ?‘&#x;đ?‘’đ?‘Ł The use of a stainless steel tube of 47mm in diameter is proposed to design the transporter. According to the stator and the displacement, the dimensions of the vanes and rotor are determined. 2.2.4. Figure 4. Isometric View of the cooker

Assisted Computer Design for the Laminar transporter

According to the estimated dimensions, the dimensions of the laminar transporter can be determined. The first element that must be designed is the vane with holes to contain 6 mm springs as shown in Fig. 5. The next pieces that must be designed according to the specifications are the stator and the rotor as shown in figure 6 and Fig. 7 respectively. In Fig. 8 the assembly of the stator, rotor and the vanes is shown. Finally, Fig. 9 shows the complete assembly of the laminar transporter.

(computer-aided design) was developed in SolidWorks 2009 with the following dimensions: đ?‘Ľ = 0.01 m. đ?‘Ś = 0.1 m. đ??ż = 0.2 m. Fig. 4 shows the isometric view of the cooker. 2.2.3. Laminar Transport Design The design is based initially on the vane pumps with modifications according to the requirements of the cooker. This design proposed that the transporter has a mass flow of 1.5 đ??žđ?‘”/â„Ž and an exit with the same geometry as the cooker so that there are no changes in pressure or alterations in the flow lines. The first step is to determine the caudal necessary to pump 1.5 đ??žđ?‘”/â„Ž. The mass flow and the density in the eq. (8) are substituted and the result is: đ??śđ?‘Žđ?‘˘đ?‘‘đ?‘Žđ?‘™(

đ?‘š3

đ?‘šđ?‘–đ?‘›

)=

Figure 5. Vane Design of the laminar transporter.

đ??žđ?‘” ) min đ??žđ?‘” đ?œŒ( 3 ) đ?‘š

đ?‘š(

(8)

Where the đ?‘š is the mass flow and đ?œŒ is the density. đ??žđ?‘”

đ??śđ?‘Žđ?‘˘đ?‘‘đ?‘Žđ?‘™ =

â„Ž

(1.5 â„Ž )∗(160đ?‘šđ?‘–đ?‘›) 1204 đ??žđ?‘” ( 3 ) đ?‘š

đ??śđ?‘Žđ?‘˘đ?‘‘đ?‘Žđ?‘™ = 2.07đ?‘Ľ10−5 đ?‘š3 /đ?‘šđ?‘–đ?‘› Once the necessary caudal is obtained, an estimate is made as to how many revolutions the mixture must make. In this case it is proposed that the caudal make 1.5 đ??žđ?‘”/â„Ž in 15 đ?‘&#x;đ?‘’đ?‘Ł/đ?‘šđ?‘–đ?‘› (n) in order to determine the capacity in cubic meters necessary for the material contained in the transporter in one revolution. This capacity is the displacement in the eq. (9). When the values are substituted, the result is: đ??ˇđ?‘–đ?‘ đ?‘?đ?‘™đ?‘Žđ?‘?đ?‘’đ?‘šđ?‘’đ?‘›đ?‘Ą =

đ??śđ?‘Žđ?‘˘đ?‘‘đ?‘Žđ?‘™ đ?‘&#x;đ?‘’đ?‘Ł ) min

đ?‘›(

(9) Figure 6. Stator for the laminar transporter.

51


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014.

2.3. Manufacture of the LSLTS The material that was selected for the thermal isolator was Bakelite due to its thermal isolating capacities with a very low coefficient of conduction of 0.233 (see Table 5). Stainless steel was selected for the cooker due to its capacity to conduct heat and its food grade characteristics. As a result of the selection of materials, the isolator was manufactured as shown in Fig. 10. Table 5. Max. Ď đ??žđ?‘” ( 3)

CP (

đ?‘Š đ?‘š 0đ??ś

)

k(

đ?‘Š đ?‘š 0đ??ś

)

Îą(

đ?‘š2 đ?‘

)

Temp. (°C)

đ?‘š

1270

900

0.233

0.201

140

Figure 7. Rotor for the laminar transporter.

Figure 10. Manufacturing of the isolator.

Fig. 11 shows a representation of the CAD assembly and the mechanical manufacturing of the LSLTS. In Fig. 12 the isometric view of the construction of the LSLTS can be observed.

Figure 8. Assembly of rotor, vanes and stator

Figure 9. Laminar Transporter.

Figure 11. Assembly of CAD and LSLTS.

52


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014.

3.2. Temperature Test The objective of this test is to determine if the material reaches the desired temperature when exiting the cooker. In this test it is assumed that the dimensions of the cooker estimated by the theoretical method and the simulation are ideal and will concur with the results of the experiment. During this test a range of yields of 3 to 1.5 kg/h were tried using the velocity control on the motor connected to the extruder. The results can be observed in Table 7. Table 7. Temperature Results. Performance (kg/h) 2.95 2.91 2.81 1.8 1.7 1.66 1.60 1.52 Figure 12. Isometric view of the LSLTS.

Out Temp. (°C) 73 73 72.3 77.2 76.9 77.8 77.8 77.1

Finally, a comparison was made between the experimental results, theoretical results, and those obtained in the simulation as shown in Table 8. In this comparison a yield of 1.52 kg/h was selected due to the fact that it is the value that most closely adheres to those proposed by the theoretical method and the simulation (1.5 kg/h). Because the entrance temperature varied according to climate temperature, the error was estimated by taking into account the increase in temperature of the material as it passed through the cooker.

3. Tests and Results Tests were run at the CICATA campus QuerĂŠtaro with a temperature of 79 centigrade in the cooker, as proposed, and corn flour with a humidity of 55% and a granulometry of 1.3. Four things were tested: flow, temperature, viscosity, and water absorption index. 3.1. Flow test

Table 8. Comparison of results. Parameter Performance In Temperature Out Temperature Delta Temperature % Error

In order to affirm that the implementation of the transporter meets the requirements of the design, tests were done with corn flour at 55% humidity and the range was varied between 10 and 20 rev/min. The results are shown in Table 6.

Theoretical Results 1.5 30 76.5 46.5 0

Experimental Results 1.52 32.1 77.1 45 -3.2258

3.3. Viscosity Test

Table 6. Results of the yield of the laminar transporter. Velocity (rev/min)

In Temp. (°C) 33.4 33.8 31.5 33.7 33.5 33.7 32.9 32.1

Time (h)

Weight (Kg)

Kg/hr

20

0.1

0.2815

2.815

20

0.1

0.291

2.91

20

0.1

0.295

2.95

15

0.1

0.18

1.8

15

0.1

0.17

1.7

15

0.1

0.166

1.66

10

0.1

0.1605

1.605

10

0.1

0.1496

1.496

10

0.1

0.1523

1.523

As mentioned above, the viscosity plays an important role in the material obtained. If the viscosity is different, the material is difficult to process in tortilla machines and this changes the physical characteristics of the final product, the tortilla itself. In this test the raw material had a humidity of 55%, 0.35% lime and a velocity of 6 rpm. The test temperature was 79 °C ¹ 1°C in the cooker The variation in temperature is due to the compensation by the controller. This compensation produces slight changes in viscosity as shown in Table 9. Table 9. Viscosity Results. Sample Moisture (%) 1 55 2 55 3 55 4 55 5 55 6 55

In the table above it can been observed that the transporter pumps approximately 1.5 đ??žđ?‘”/â„Ž at 10 đ?‘&#x;đ?‘’đ?‘Ł/đ?‘šđ?‘–đ?‘›.

53

Vel. (rpm) 6 6 6 6 6 6

Lime (%) 0.35 0.35 0.35 0.35 0.35 0.35

Temp. (°C) 79 79 79 79 79 79

Viscosity (cP) 1379 1320 1514 1441 1751 1751


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014. [6] Figueroa, J.D., MartĂ­nez, B.F., GonzĂĄlez, H.J. et al. ModernizaciĂłn tecnolĂłgica del proceso de nixtamalizaciĂłn, Avance y Perspectiva, vol. 13, pp. 323-329, 1994.

4. Conclusions A low shear laminar transport system (LSLTS) was built for the elaboration of corn dough used in the making of tortillas. The transportation of the raw material was by means of vane pumps, whose length was modified, and the cooking by means of a laminar cooker. The term low shear refers to the following mechanical characteristics: an open system without a die, vane transport instead of screw which allows for slow revolutionary operation, and a cooker with rectangular geometry that homogenizes the cooking of the raw material and thus avoids over-cooking by reducing the force of friction. The LSLTS was designed based on thermodynamic equations which yielded the proposal of a rectangular cooker with the dimensions đ?‘Ľ = 0.01đ?‘š, đ?‘Ś = 0.1đ?‘š and đ??ż = 0.2đ?‘š in which shear velocities of 0.06 đ?‘ −1 with a laminar flow were generated. This shear velocity is below of the value at which nixtamal corn dough begins to show changes in viscosity. Therefore, it can be concluded that this design does not change the rheological properties of the end product. The viscosity values obtained are within the reported range. Gruntal reported viscosities of 1.983 to 2.202 P¡s for nixtamal corn flours by varying the soaking time, GaytĂĄn [23] of 1 to 4.5 P¡s for nixtamal corn flours by ohmic heating, Palacios [24] of 1.5 to 2.5 P¡s for commercial nixtamal corn flours and the values obtained in this investigation oscillate between 1.32 and 1.75 P¡s. Therefore, it can be concluded that the LSLTS is low shear and does not have any significant affects on the viscosity. As a result the system was patented in Mexico MX/a/2012/003687.

[7] Figueroa, J. D., Extrusor y proceso continuo para formaciĂłn de masa fresca de maĂ­z para la elaboraciĂłn de tortillas de harinas instantĂĄneas y sus derivados, SECOFI. 936544, Mechanical Invention, 04-April-1993. [8] Martinez-Montes, J. L., Selective nixtamalization process for production of fresh whole corn masa, nixtamalized corn flour and derived products, Instituto Politecnivo Nacional, US Patent, 6265013, Mechanical Invention, 24-July-2001. [9] Vaqueiro, M., Process for producing nixtamalized corn flour, IMIT AC.US Patent, 4594260. Process. 10-July-1986. [10] Duran de BazĂşa, C., Guerra, V.R. and Sterner, H., Extruded corn flour as an alternative to lime heated corn flour for tortilla preparation, Food Science, vol. 44, pp. 940-941, 1979. [11] MartĂ­nez-Flores, H.E., MartĂ­nez-Bustos, F., Figueroa, J.D. et al. Tortilla from extruded masa as related to corn genotype and milling process, Food Science, vol. 63, pp. 130-133, 1998. [12] MartĂ­nez-Flores, H.E., MartĂ­nez-Bustos, F., Figueroa, C.J.D. et al. Studies and biological assays in corn tortillas made from fresh masa, Food Science, vol. 67, pp. 1196-1199, 2002. [13] GĂłmez-Aldapa, C., MartĂ­nez-Bustos, F., Figueroa, C.J.D. et al. A comparison of the quality of whole corn tortillas made from instant corn flours by traditional or extrusion processing, Food Science, vol. 34, pp. 391-399, 1999. [14] San MartĂ­n-MartĂ­nez, E., Jaime-Fonseca, M.R., MartĂ­nez-Bustos, F. et al. Selective nixtamalization of fractions of maize grain (zea mays l.) and their use in the preparation of instant tortilla flours analyzed using response surface methodology, Cereal Chem, vol. 80, pp. 13-19, 2003. [15] Reyes-Moreno, C., MillĂĄn-Carrillo, J., GutiĂŠrres-Dorado, R. et al. Instant flour from quality protein maize (Zea mays L) Optimization of extrusion process, Food Science, vol. 36, pp. 685-695, 2003. [16] MillĂĄn-Carrillo, J., GutiĂŠrres-Dorado, R., Perales-SĂĄnchez, J.X.K. et al. The optimization of the extrusion process when using maize flour with a modified amino acid profile for making tortillas, Food Science, vol. 41, pp. 727-736, 2006.

5. Acknowledgements

[17] GutiĂŠrrez-Dorado, R., Zayala-RodrĂ­guez, A.E., MilĂĄn-Carrillo, J. et al. Technological and nutritional properties of flours and tortillas from nixtamalized and extruded quality protein maize (zea mays l.), Cereal Chem, vol. 85, pp. 808-816, 2008.

This investigation was made possible by the financial support received from Instituto TecnolĂłgico Superior de San AndrĂŠs Tuxtla, ICYTDF and IPN-SIP.

[18] Bello-PÊrez, L.A., Osorio-Díaz, P., Núùez-Santiago, C. et al. Propiedades químicas, fisicoquímicas y reológicas de masas y harinas de maíz, Agrociencia, vol. 36, pp. 319-328, 2002.

References

[19] Osorio-TobĂłn, J.F., Ciro-VelĂĄzquez, H.J. and Guillermo-Mejia, L., CaracterizaciĂłn reolĂłgica y textural de queso edam, Dyna, vol. 147, pp. 3345, 2005.

[1] Serna-Saldivar, S.O., Gomez, M.H. and Rooney, L., Technology, chemistry, and nutritional value of alkaline cooked, in Pomeranz Y. Advances in Cereal Science and Technology, St. Paul: Am. Assoc. Cereal Chem, Vol. X, pp. 243-307, 1990.

[20] Harper, J.M., Extrusion of Food. Taylor & Francis, BocaRaton Florida: CRC Press Inc, 1981.

[2] NiĂąo-Medina, G., Carvajal-VillĂĄn, E. and Lizardi, J., Maize processing waste water arabinoxylans: Gelling capability and cross-linking content, Food Chem, vol. 115, pp.1286-1290, 2009.

[21] Machado-Velazco, K.M. and VĂŠlez-Ruiz, J.F., Estudio de propiedades fĂ­sicas de alimentos mexicanos durante la congelaciĂłn y el almacenamiento congelado, Revista Mexicana de IngenierĂ­a QuĂ­mica, vol. 7, pp. 41-54, 2008.

[3] Rosentrater, K.A., A review of corn masa processing residues: Generation, properties, and potential utilization, Waste Manag, vol. 26, pp. 284-292, 2006.

[22] Geankoplis, C., Transport processes and Separation Process Principles, Pearson Education, Inc. 2003.

[4] SĂĄnchez-Sinencio, F., Apparatus for the preparation of instant fresh corn dough or masa, Centro De InvestigaciĂłn Y De Estudios Avanzados Del I.P.N, Us Patent. 5558886, Mechanical Invention, 24-September1996.

[23] GaytĂĄn-MartĂ­nez, M., Figueroa, J.D.C., VĂĄzquez-Landaverde, P.A. et al. Physicochemical, functional, and chemical characterization of nixtamalized corn flour obtained by ohmic heating and traditional process, Cyta-Journal of Food, vol. 10, pp. 182-195, 2012.

[5] Figueroa, J. .D., Proceso de nixtamalizaciĂłn limpia y rĂĄpida para la producciĂłn de masa fresca de maĂ­z para elaborar tortillas, harinas instantĂĄneas y sus derivados, Centro de InvestigaciĂłn y de Estudios Avanzados del I.P.N., IMPI 210991, Process. 07- Septiembre2006.

[24] Palacios, F.A.J., Vazquez, R.C. and RodrĂ­guez, G.M.E., Physicochemical characterizing of industrial and traditional nixtamalized corn flours, Food Science, vol. 93, pp. 45-51, 2009.

54


Ortega-Moody et al / DYNA 81 (185), pp. 48-55. June, 2014. J. A. Ortega-Moody, received a B.S. in Electronic Engineering in 2004, a M.S. degree in Mechatronic Engineering in 2007, and a PhD in Mechatronics in 2012 from the Instituto Politécnico Nacional de México. From 2012 to 2014 he worked on mechatronic projects. Currently, he is a Full Professor in the Mechatronics Systems Department at the Instituto Tecnológico Superior de San Andrés Tuxla. His research interests include: simulation, modeling and building Mechatronics Systems.

G.J Rios-Moreno received his B.S and M.S. degrees in Automatic Control in the School of Engineering, and his PhD in intelligent buildings from the Universidad Autónoma de Querétaro, México in 2003, 2005 and 2008 respectively. He is currently Professor of the School of Engineering Ingeniería at the Universidad Autónoma de Querétaro, México. His research interests include signal processing; modeling, prediction and control of thermal and lighting systems for intelligent buildings. He is currently member of the Sistema Nacional de Investigadores (SNI), CONACYT, México and a member of the Academic Group of Instrumentation and Control in the School of Engineering U.A.Q.

Eduardo Morales Sánchez. Born on the 17th of July, 1960 in Puebla, Mexico. He received a B.S. degree in Electronic Engineering from Benemérita Universidad Autónoma de Puebla, a M.S. degree and PhD in Materials Engineering from Universidad Autónoma de Querétaro. His research is in automation and industrial processes.

M. Trejo-Perea: Recieved the degrees of B.S. and M.S. in Instrumentation and Automatic Control Engineering from Universidad Autónoma de Querétaro (UAQ), Querétaro, México, in 1994 and 2005 respectively. He recieved a PhD in Engineering from the same institution in 2008. Currently he is a profesor and researcher in the área of automation in the UAQ. His research interests are: automation of industrial processes, signal processing, and the development of intelligent electric energy monitoring systems. He is also a member of the national system of researchers of Mexico (Sistema Nacional de Investigadores de México).

E. B. Mercado-Pedraza, received a B.S. in Food Chemistry and a M.S. degree in Food Technologies from Universidad Autónoma de Coahuila. Currently she is pursuing her PhD in Food Chemistry in Central Nacional de Metrología (CENAM). Her research interest is characterization of food.

55


DYNA http://dyna.medellin.unal.edu.co/

Dynamic stability of slender columns with semi-rigid connections under periodic axial load: theory Estabilidad dinámica de columnas esbeltas con conexiones semirrígidas bajo carga axial periódica: teoría Oliver Giraldo-Londoño a & J. Darío Aristizábal-Ochoa b b

a Structural Researcher, M.S. Ohio University, USA ogirald86@gmail.com. 125-Year Generation Professor, Ph.D. School of Mines, National University. Medellin, Colombia jdaristi@unal.edu.co

Received: December 15th, de 2012. Received in revised form: September 26th, 2013. Accepted: October 22th, 2013

Abstract The dynamic stability of an elastic prismatic slender column with semirigid connections at both ends of identical stiffness and with sidesway between the two ends totally inhibited, subject to parametric axial loads including the combined effects of rotary inertia and external damping is investigated in a classical manner. Closed-form expressions that can be used to predict the dynamic instability regions of slender columns are developed by making use of Floquet’s theory. The proposed solution is capable of capturing the phenomena of stability of columns under periodic axial loads using a single column element. The proposed method and corresponding equations can be used to investigate the effects of damping, rotary inertia and semirigid connections on the stability analysis of slender columns under periodically varying axial loads. The effects produced by shear deformations along the span of the column as well as those produced by the axial inertia, the coupling between longitudinal and transverse deflections and the curvature are not taken into account. Sensitivity studies are presented in a companion paper that show the effects of rotary inertia, damping and semirigid connections on the dynamic stability of columns under parametric axial loads. Keywords: Buckling; Columns; Dynamic Analysis; Damping; Semi-Rigid Connections; Parametric Loading; Periodic Loading; Stability. Resumen La estabilidad dinámica de una columna elástica prismática esbelta con conexiones semirrígidas en ambos extremos de rigidez idéntica y con desplazamiento lateral entre los dos extremos totalmente inhibido sujetos a cargas axiales paramétricos incluyendo los efectos combinados de inercia rotacional y amortiguación externas se investiga de una manera clásica. Expresiones cerradas que se pueden utilizar para predecir las regiones de inestabilidad dinámica de columnas esbeltas son desarrolladas haciendo uso de la teoría de Floquet. La solución propuesta es capaz de capturar el fenómeno de estabilidad en columnas sometidas a cargas axiales periódicas utilizando un solo elemento de columna. El método propuesto y las ecuaciones correspondientes se pueden utilizar para investigar los efectos del amortiguamiento, la inercia rotacional de la columna, y las conexiones semirrígidas en el análisis de estabilidad de columnas esbeltas sometidas a cargas axiales periódicas. Los efectos producidos por las deformaciones por cizallamiento a lo largo de la columna, así como los producidos por la inercia axial, el acoplamiento entre las deflexiones longitudinales y transversales y la curvatura no se tienen en cuenta. Estudios de sensibilidad que muestran los efectos de la inercia rotacional, el amortiguamiento y las conexiones semi-rígidas en la estabilidad dinámica de columnas sometidas a cargas axiales paramétricas son presentados en una publicación adjunta. Palabras clave: pandeo, columnas, análisis dinámico amortiguado, conexiones semirígidas, cargas paramétricas, cargas periódicas, estabilidad.

1 Introduction The stability analysis of columns and frames under time varying parametric axial loads are of great importance in civil, mechanical, and aerospace engineering. This subject generally termed “Dynamic Stability” has been investigated by numerous structural researchers since Koning and Taub [1] studied the stability of a simply supported elastic imperfect column subjected to a sudden applied axial load of known duration. Dynamic Stability encompasses many classes of problems and many different phenomena for the same configuration subjected

to the same dynamic loads. Therefore, it is not surprising that several uses and interpretations of the term exist. Simitses and Hodges [2] on page 329 of their textbook state that: “The class of problems falling in the category of parametric excitation, or parametric resonance, includes the best defined, conceived, and understood problems of dynamic stability”. The problem of parametric excitation is best defined in terms of an example. Consider an Euler column, which is loaded at one end by a periodic axial force. The other end is immovable. It can be shown that, for certain relationships between the exciting frequency and the column natural frequency of transverse

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 56-65 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

loads using a single column element. The closed-form equations make use of Floquet´s theory to predict the dynamic instability regions of slender columns. Nonlinear effects like those produced by the axial inertia, the coupling between longitudinal and transverse deflections, and the curvature are not taken into account. Sensitivity studies are included in a companion paper [17] that shows the effects of rotary inertia, damping and semirigid connections on the dynamic stability of prismatic columns under parametric axial loads.

vibration, transverse vibrations occur with rapidly increasing amplitudes. This is called parametric resonance and the system is said to be dynamically unstable. Moreover the loading is called “parametric excitation”. Bolotin [3] is an excellent reference on Dynamic Stability. Because of space limitation a brief bibliography of some studies made in the last forty years is presented next. Wirsching and Yao [4] developed a relationship for the stability condition of a pin-ended column with initial curvature excited by physical white noise. Iwatsubo et al. [5] studied the instabilities of a cantilever column subject to an axial periodic load (Euler-type problem) and to a tangential periodic load (Beck-type problem). Ahmadi and Glockner [6] studied the stability of a viscous-elastic column subject to deterministic axial load with sinusoidal time variation as well as stationary and non-stationary random variations. Simitses [7] presented the concept of dynamic stability for suddenlyloaded elastic structural configurations along the related criteria and estimates of critical conditions. Sridharan and Benito [8] studied experimentally the interaction of local and overall buckling in thin-walled columns and the effect of suddenly applied compression dynamic end loads. Shigematsu et al. [9] studied the dynamic stability of initially imperfect columns and plates under time-dependent axial compression in elastic and elastic-plastic regions using matrix functions. Sophianopoulos and Kounadis [10] studied the dynamic stability of imperfect frames under joint displacements. Wong and Yang [11] studied the inelastic response of structures with strain-hardening and strain-softening properties as well as with elastic-plastic properties subjected to dynamic loadings using the force analogy method. Svensson [12] studied theoretically and experimentally the stability of a dynamic system with periodic coefficients including the effects of both internal damping and damping at the boundaries. Yabuki et al. [13] studied the inelastic dynamic stability of steel columns subject to a steady axial force and an alternating dynamic axial force including the effects of support conditions and the frequency of the axial force. Kumar and Mohammed [14] studied the dynamic stability of columns and frames subjected to axial periodic loads using the FEM and Newmark method for the integration of the equations of motion. Dohnal et al. [15] studied the behavior of a uniform cantilever column under a time-periodic axial force using the FEM. Mailybaev and Seyranian [16] studied the classical problem of stabilization of a statically unstable elastic column (simply supported and clamped–hinged ends) by axial harmonic vibration. The main objective of this paper is to present an analytical method and closed form equations that determine the dynamic stability of an elastic 2D prismatic column with semirigid end connections of identical stiffness and with sidesway between the two ends totally inhibited subject to parametric axial load described by a Fourier series. The proposed model and corresponding equations which are straightforward and relatively simple to apply can be used to investigate the effects of damping, rotary inertia and semirigid connections on the stability of slender columns under periodically varying axial

2 Structural model Consider the 2D prismatic column elastically connected at both ends A and B as shown in Fig. 1a. The element AB consists of the column itself A’B’ connected by elastic springs AA’ and BB’ both with identical flexural stiffness (whose dimension is force-distance/radian) at ends A’ and B’, respectively. It is assumed that the column A’B’: (1) is prismatic with cross sectional area A, principal moment of inertia I (about the axis of bending), and span L; (2) is made of a linear elastic, homogeneous and isotropic material with elastic modulus E; (3) its initial centroidal axis is a perfect straight line; (4) is subjected to a varying end axial load defined by a Fourier series n 

P (t )  P0   a n cos(nt )  bn sin( nt ) n 1

as shown by Fig. 1a; and (5) bending deformations and strains are small so that the principle of superposition can be applied.

Figure 1. Structural Model: (a) member properties and end connections; (b) forces and moments on differential element.

57


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

The ratio R   /( EI / L) will be denoted as the stiffness index of the flexural connections. The stiffness index varies from zero for perfectly pinned connections to infinity for fully restrained connections (i.e., perfectly clamped). For convenience the term   1 /(1  3 / R) denoted as the fixity factor is used in the proposed equations.

tan

 2

 

tan

 2

 

1  0 3

d 2 f (t ) df (t ) cL2   2 2 2 2 2 dt dt m  r L

M  y  V  m r 2 2  P(t ) x x t

(2)

(7b)

Equation (6) coupled with the condition given by Eq. (7b) satisfies the four boundary conditions expressed by Eqs (4ab) and (5a-b) listed above. Substituting Eq. (6) into (3) the following ordinary differential equation in f(t) is obtained:

The lateral deflection of the column is derived by applying dynamic equilibrium on the differential element (Fig. 1b) and compatibility conditions at the ends of the member. The transverse and rotational dynamic equilibrium equations are: (1)

(7a)

In terms of the fixity factor  Eq. (7a) becomes:

2.1. Dynamic Analysis

V 2 y y  m 2  c x t t

EI 0 L

  2 2 2 m   r  L2 2

2

(8a)

    2   EI    P(t ) f (t )  0   L  

2

To facilitate the analysis of columns subjected to varying axial forces, the following six dimensionless parameters are introduced:

Using the governing bending equation for EulerBernoulli columns M  EI

2 y and substituting Eq. (1) x 2

R2 

into Eq. (2), the governing fourth-order differential equation of a prismatic column becomes:  y  y  y y  y  P( t ) 2  m r 2 2 2  c m 2 0 4 t x x t x t 4

EI

2

4

2

y (0, t )  0

(3 )

o 

 y y (0, t )   (0, t )  0 2 x x At x= L: y ( L, t )  0

EI

2 y y ( L, t )   ( L, t )  0 2 x x

   L   cos  x    cos   L  2 2  1  cos

 0

 

; and

c ; 2m  0

  t .

 4 EI 4

Then Eq. (8a) is reduced to

(4a) (4b)

d 2 f ( ) 2 df ( )  2    2  p( ) f ( )  0  d  2 d 2

(5a)

where:

(5b)

p ( )  p0   a n* cos( n )  bn* sin( n ) ;

n 

(8b)

n 1

The solution of Eq. (3) can be written in the following form:

y ( x, t ) 

P(t ) ;  EI / L2 2

m L = the natural frequency of lateral Where: vibration of a simply supported beam without axial load.

2

EI

p(t ) 

   2 2 R 2  1 ;  

Equation (3) must meet the following four boundary conditions: At x= 0:

r2 ; L2

p0 

P0 a b ; an*  2 n 2 ; bn*  2 n 2 2  EI / L  EI / L  EI / L 2

2.2. Proposed Equations for Instability Borders

1

(6)

The proposed equations are based on Floquet’s theory. The derivation is presented in Appendix I for easy reference. As can be seen the character of the solutions for Eq. (8b) depends on the values of its harmonically varying coefficients leading to stable or unstable conditions. From Floquet´s theory it can be concluded that on the border

2

Where the constant  must satisfy the following condition:

58


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

types of loading described in Table 1 by considering   1 (i.e. pinned-pinned columns) and the corresponding values

between stable and unstable conditions the solutions for Eq. (8b) becomes periodic with period T or 2T with T  2 . The proposed equations shown below are capable to capture the phenomena of parametric resonance of slender columns with semi rigid connections, considering the effects of damping and rotary inertia:

for



2

   2

2

 p0  2

2  1 2  a1*  b1*  4 

   4 2   2  p0 

in Eqs. (9) and (10).

1. Enter the numerical values of the properties of the column: E , I , A , r , m , c , and  . 2. With the value of the flexural stiffness of the connection  (or the fixity factor  ), the value of the dimensionless parameter  is calculated from Eq. (7a) or (7b) using numerical methods (i.e., Newton-Raphson or secant method). The values of  must be between 1 and 2 for hinged and clamped in both ends, respectively, to obtain a coherent solution to Eq. (6). 3. Calculate the natural frequency of lateral vibration for a simple supported beam without axial load:

(9)

b) Solution with period T

 B  B 2  4 AC 2A

 

bn*

The following steps are proposed to evaluate the dynamic stability for columns subjected to periodic axial loads:

2 2

2

and

3. Step-by-step procedure

a) Solution with period 2T

 2  2  p0   2 2 

a n*

(10)

where A, B and C are calculated as indicated below. When

o 

a*n  0 : then A  2 2  2  p0 ,

 4 EI m L4

1. Determine the following dimensionless parameters: B   b1  4    p 0  8   p 0 , r2 P( ) 2 2 4 2 ; p( )  2 ;  c ; R  2 *2 *2 2 2 C   p0 4   p0  2b1  b2 L  EI / L 2m  0 2 2 2 2      R  1 ;   ;   t . 2 2 * 0 When bn  0 : then A  2   p 0 , *2

2

2

 

2

2

2

2



B   a a  a  4    p 0

2

 8  2  p 02. 2   2 a*   2  2  p0   2  p0  2   4      C 4  *  * *  a2  *  2 a1 a1  a3    p0    2    2

* 1

* 1

* 3



a*

2

2

2

b

 a

n

cos( nt )  bn sin( nt ) with the

n 1

1 P0  T

T

 P(t )dt ; 0

2 an  T

T

 P(t ) cos( nt )dt ;

bn 

2 P (t ) sin( nt )dt . T

and

0

T

Table 1 lists closed expressions for different classic types of periodic axial loads. These expressions can be used to investigate the effects of damping, rotary inertia and semirigid connections on the stability of slender columns subject to periodic axial loads in a systematic manner. For periodic loadings that are not listed in Table 1, simply use

a

n 

P (t )  P0 

terms P0, a n and bn expressed by

b*

When both n and n are different from zero, Eq. (10) is not applicable and the solution shall be obtained by solving Eq. (23b).

* n

To determine the borders between regions of stability and instability Equations (9) and (10) must be used. Notice that the varying axial load can be defined by the Fourier series

Where:

T

0

2 . 

(with

n  1, 2, 3, ... ). The values of these parameters determine the stability borders. Since the determination of the zeros of Eq. (16b) can become cumbersome when both

* n.

a n* and bn* are

different from zero, a numerical method similar to that used for solving Eq. (7a) or (7b) can be used.

Eqs. (9) and (10) with the appropriate values of and shows graphically the two first regions of instability for all

59


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

3. In order to estimate which regions at both sides of the borders, obtained from step (5), are of stability or instability is necessary to evaluate trial points located at both sides of the borders, and evaluate the behavior of

1 1   k 3  F i  h, Ui  hk 2  2 2   . k 4  F i  h, U i  hk 3 

f ( ) by applying a numerical method for solving the

Once the points corresponding to parametric resonance and those that give stable solutions are known, the regions of stability and instability can be established, respectively.

ordinary differential equations. The Runge-Kutta method is suggested in this case since it is simple, efficient and accurate. To describe how to solve Eq. (8b) using the Runge-Kutta method, the system of ordinary differential equations defined by Eq. (16) is rewritten as follows:

dU  F , U d

3. Summary and conclusions Closed-form expressions that can be used to predict the dynamic instability regions of slender Euler-Bernoulli columns are developed using Floquet´s theory. The proposed method is straightforward and the corresponding equations are relatively easy to use. The proposed closed-form equations enable the analyst to explicitly evaluate the effects of damping, semirigid connections, and rotary inertia on the nonlinear elastic response and lateral stability of slender prismatic columns with sidesway inhibited, subject to static and dynamic axial loads. The proposed equations are not available in the technical literature. The developing of simple closed-form expressions capable to predict the dynamic instability regions of slender columns with semi-rigid connections including the effects of external damping and rotary inertia under all types of periodic axial loads is a significant advance when compared with other complex numerical procedures that require high computing times to reach a good enough precision.

(11)

The method is given by:

U i 1  U i  and

h k 1  2k 2  2k 3  k 4  6

 i 1   i  h

Where:

h = time step size.

0 1   2 F , U      2  p( )  2  U ;  2     k1  F i , Ui  ; k 2  F i  1 h, U i  1 hk 1  2 2  

Table 1. Closed Equations for Stability Regions for Columns subjected to Different Types of Periodic Axial Loads.

60


Giraldo-Londo単o et al / DYNA 81 (185), pp. 56-65. June, 2014.

Continuation Table 2. Closed Equations for Stability Regions for Columns subjected to Different Types of Periodic Axial Loads.

Figure 2. Stability Regions for Pinned-Pinned Columns subjected to Different Types of Periodic Axial loads. Solution with Period T

61

Solution with Period 2T;


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

Figure 3. Stability Regions for Pinned-Pinned Columns subjected to Different Types of Periodic Axial loads.

Solution with Period 2T;

Solution with Period T

Acknowledgments

0  2 A( )     2  p ( )  2  

The authors wish to thank the Department of Civil Engineeering of the School of Mines of the National University of Colombia at Medellín and to COLCIENCIAS for their financial support.

u1 ( )  f ( ) ; and u 2 ( ) 

1  2  ;   

du1 ( ) d

p( ) is periodic, it is clear that matrix A( ) is The equations for the borders between stable and unstable periodic. The periodicity of matrix A( ) can be used to

Appendix: Derivation of instability regions

Since

obtain the solution of Eq. (8b) by making use of the Floquet´s theory. The character of the solutions for Eq. (8b) depends on the values of the harmonically varying coefficients. As shown by Timoshenko and Gere [18] depending on the magnitude and frequency of the periodic axial load, stable or unstable conditions can be reached. From Floquet´s theory it is concluded that the borders between stable and unstable conditions the solutions for Eq. (8b) become periodic with period T or 2T with T  2 . The solution with period 2T is considered first and can be expressed in the form of series expansion as follows:

conditions for slender columns under parametric excitation proposed herein are based on Floquet’s theory as some other researchers such as Svensson [12] and Kumar and Mohammed [14]. A brief description of Floquet’s theory is given next. Rewriting Eq. (8b) as a system of two first-order differential equations the following expression is obtained:

dU( )  A( )U( ) d

(12)

Where:

f ( )  u  dU  du1 / d  U   1   u2  ; d du2 / d  ;

  n   n cn sin 2   d n cos 2    n 1, 3,...  n 

  

(13)

Substituting Eq. (13) into (8b) and grouping the terms 62


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

a homogenous system of linear equations expressed in terms

appropriately, an expression of the following form is generated:

c

To satisfy Eq. (18) all terms  s , 2 k 1  c 3 , c1 , d 1 , d 3 ,  and

 c , 2 k 1  c3 , c1 , d1 , d 3 , 

d

of n and n is produced which by making the determinant of its matrix equal to zero then the non-trivial solutions can be obtained as follow:

must be equal to zero. Then

   s , 2 k 1  c3 , c1 , d 1 , d 3 ,  sin

2k  1 2

3    s ,1  c3 , c1 , d 1 , d 3 ,  sin  2 2 3   c ,1  c3 , c1 , d1 , d 3 ,  cos   c ,3  c 3 , c1 , d1 , d 3 ,  cos  2 2  2k  1    c , 2 k 1  c3 , c1 , d 1 , d 3 ,  cos  0 2    s ,3  c3 , c1 , d 1 , d 3 ,  sin

(14)

     a*  b*  2  2 a1*  a*2 2 9 1   b1*  b2*  2 2 3     2  2  p0  3    2   4 2 2 a1*  2 a1*  a*2  2 2  2 2 b1*  b2* 1 2 *      p0    2  b1   2  2 4  2 2 a1*  2 b1*  b2*  2 2  2 2 a1*  a*2 1 2 * 2  b1      p0     2  2 2 4  2 * a*  2 b1*  b2* 2 a1*  a3* 1 9 2 2  2 2 b3  2         p0  3     2  2  2 4 2     

2

a*   2 2  2    p0  1  4  2

 

1 2  2b1* 2

 0

(15a)

  

1 2   2b1* 2 0 a*   2 2  2     p0  1  4  2 

of the form

A 4  B 2  C  0 from which the

following value of

 can be obtained:

2 2 2 2  1   2  2  p0  2 2   2  2  p0  2 2   4 2   2  p0   a1*  b1* 

n 

 c

n  2 , 4 ,...

n

sin n   d n cosn 

(15b)

Eq. (15b) gives rise to a fourth-order algebraic equation

4



(16)

   s ,k  c 2 , d 0 , d 2 , d 4 ,  sin k 

In a similar fashion the solution with period T can be determined taking the solution in the form of series expansion:

f ( )  d 0 

The solution with period 2T indicated by Eq. (13) is only applicable to the odd number of regions as described by Kumar and Mohammed [14]. Since closed expressions for the regions of instability are of interest, it is necessary to consider a finite part of the determinant shown in Eq. (19a). A sufficiently close approximation for the infinite eigenvalue problem is obtained by taking n= 1 inEq. (17) according to Svensson [12]. Now using this approximation Eq. (15a) is reduced to:



   s , 2  c 2 , d 0 , d 2 , d 4 ,  sin 2 

 s ,1  c 2 , d 0 , d 2 , d 4 ,  sin  

 0  c 2 , d 0 , d 2 , d 4 ,    c ,1  c 2 , d 0 , d 2 , d 4 ,  cos 

(7)

(18)

 c , 2  c 2 , d 0 , d 2 , d 4 ,  cos 2 

Following the same procedure used for the solution with period 2T, substituting Eq. (17) into (8b) and grouping the terms appropriately, an expression of the following form is generated:

   c ,k  c 2 , d 0 , d 2 , d 4 ,  cos k    0 From the solution described in Eq. (17), only the even number of regions can be found according to Kumar and Mohammed [14]. As previously stated, closed formulas for the regions of instability are of interest, so it is necessary to consider a finite part of the determinant shown in Eq. (19a). 63


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014.

In a similar way, a sufficiently close approximation for the infinite eigenvalue problem is obtained by taking n= 1 in Eq. (17). By using this approximation Eq. (19b) can be obtained. Unlike the previous case, the determinant expressed by Eq. (19b) does not produce a fourth-order algebraic equation of

where A, B and C are calculated as listed below.

the form A   B   C  0 . To find an expression in 4

2

this form it is necessary that either one

a n* or bn* be equal to

C

zero. If this condition is satisfied, closed expressions for the regions of instability can be found and written as

4

When

 

 B  B 2  4 AC 2A

a*n  0 : then A  2 2  2  p0 , 2 2 B   2b1*  4 2  2  p 0   8 2  2  p 0 

When

2



2

 

 p0 4  2  p0

bn*  0

: then

2

2

 2b1*  b2*

A  2 2  2  p 0

,

2



(20)

B   2a1* a1*  a 3*   4 2  2  p 0   8 2  2  p 0  C   4 2 2  p 0  2  p 0 2  a 2   a1* a1*  a3*   2  p 0  a 2    2

  2 a*    p 0  2 2   2 b1*   2 b*  1  2   2 2  2   2 b1*  b3* 2  

2     2

   

  

2 * b1   2 2   p0   2 * a1  a3*    2 * a2  

 2 a*    p 0  2 2   2 b1*   2 b*  1  2   2 2   2

 2 

When both

2 

  





 b*  1   2   2 2  2   2 a1*   2 2  a*    2  p 0  2  2  2    2 a1*  a3*  2  

2 * b1 

*2

  

4     2 b1*  b3* 2   2 a*2   2  2 a1*  a3*  2  a* 2  2    p 0  4 2   

b*  1  2   2 2   2  2 a1*   2 a* 2  2 2    p 0  2     2

*

2  

0

(19a)

        

2 2   p0   2 *  a1  a3*   

0

(19b)

  

[6] Ahmadi, G. and Glockner, P. G., Dynamic Stability of KelvinViscoelastic Column, J. of Engineering Mechanics, 109 (4), pp. 990-999, 1983.

a n* and bn* are different from zero, Eq. (20)

is not applicable and the solution shall be obtained by solving Eq. (19b).

[7] Simitses, G. J., Suddenly-Loaded Structural Configurations, J. of Engineering Mechanics, 110 (9), pp. 1320-1334, 1984.

References

[8] Sridharan, S. and Benito, R., (1984). Columns: Static and Dynamic Interactive Buckling, J. of Engineering Mechanics, 110 (1), pp. 49-65, 1984.

[1] C. Koning, C. and Taub, T., Impact buckling of thin bars in the elastic range hinged at both ends, Luftfahrtforschung, 10(2), pp. 55-64, 1933, (translated as NACA TM 748 in 1934).

[9] Shigematsu, T., Hara, T. and Ohga, M., Dynamic Stability Analysis by Matrix Function, J. of Engineering Mechanics, 113 (7), pp. 1085-1100, 1987.

[2] Simitses, G. J. and Hodges, D. H., Fundamentals of Structural Stability, ELSEVIER Inc, Chapter 12, pp. 329-332, 2006.

[10] Sophianopoulos, D. S. and Kounadis, A. N., Dynamic Stability of Imperfect Frames Under Joint Displacements, J. of Engineering Mechanics, 120 (8), pp. 1661-1674, 1994.

[3] Bolotin, V. V., The Dynamic Stability of Elastic Systems, Holden-Day San Franscisco. 1964.

[11] Wong, K. K. F. and Yang, R., Inelastic Dynamic Response of Structures Using Force Analogy Method,” J. of Engineering Mechanics, 125 (10), pp. 1190-1199, 1999.

[4] Wirsching, P. H. and Yao, T. P., Random Behavior of Columns,” J. of the Engineering Mechanics Division, 97 (3), pp. 605-618, 1971.

[12] Svensson, I., Dynamic Instability regions in a Damped System, J. of Sound and Vibration, 244 (5), pp. 779-793, 2001.

[5] Iwatsubo, T., Sugiyama, Y. and Ishihara, K., Stability and NonStationary Vibration of Columns under Periodic Loads, J. of Sound and Vibration, 23 (2), pp. 245-257, 1972.

[13] Yabuki, T., Yasunori, A., Fumishige, A. F. and Lu, W., Nonlinear Effect on Instability of Steel Columns under Dynamic Axial Loads, J. of Structural Engineering, 131 (12), pp. 1832-1840, 2005. 64


Giraldo-Londoño et al / DYNA 81 (185), pp. 56-65. June, 2014. [14] Kumar, T. H. and Mohammed, A., Finite element analysis of dynamic stability of skeletal structures under periodic loading, J. of Zhejiang University SCIENCE A, 8 (2), pp. 245-256. 2007.

[15] Dohnal, F., Ecker, H. and Springer, H., Enhanced Damping of a Cantilever Beam by Axial Parametric Excitation, Arch. Appl. Mech., 78, pp. 935-947, 2008.

[16] Mailybaev, A. A. and Seyranian, A. P., Stabilization of Statically Unstable Columns by Axial Vibration of Arbitrary Frequency, J. of Sound and Vibration, 328 (1-2), pp. 203-212, 2009.

[17] Giraldo-Londoño, O. and Aristizabal-Ochoa, J. D., Dynamic Stability of Slender Columns with Semi-rigid Connections under Periodic Axial Load: Verification and Examples, Revista DYNA, submitted for possible publication.

[18] Timoshenko, S. P. and Gere, J. M., Theory of Elastic Stability, 2nd Ed., McGraw-Hill, New York, N.Y., 1961.

Oliver Giraldo-Londoño received the BS in Civil Engineering in 2010 from Universidad Nacional de Colombia, Sede Medellin, and the MS in Structural Engineering in 2014 from Ohio University at Athens, OH, USA. From 2006 to 2009, he worked as an undergraduate teaching assistant for the Department of Mathematics at Universidad Nacional de Colombia, Sede Medellin. Since 2010, he has been working in the Structural Stability Research Group (GES) at Universidad Nacional de Colombia, Sede Medellin, under the advice of Dr. J. Dario Aristizabal-Ochoa. From 2011 to 2012, he worked as an instructor of statics of structures and numerical methods at Universidad de Antioquia. Since 2012, he has been working as a graduate assistant at Ohio University at Athens, OH, USA. He was awarded the Emilio Robledo award (Colombian Society of Engineers, 2009) and the Young Researcher award (COLCIENCIAS, 2010-2011). He was also a COLFUTURO scholar (2012-2014). His research interests include analysis and design of steel and concrete structures, non-linear mechanics, finite element modeling, bridge engineering, earthquake engineering, and structural dynamics.

J. Dario Aristizabal-Ochoa received the Bachelor degree in Civil Engineering in 1970 from the National University of Colombia, Medellin, Colombia, and the MS and PhD degrees in Structural Engineering in 1973 and 1976, respectively, from the University of Illinois at Champaign-Urbana, USA. From 1977 to 1978 he worked for the Portland Cement Association, Skokie, Illinois, USA, as a structural researcher. From 1978 to 1981 he worked as Marketing Manager of Seismic Applications for MTS Systems at Eden Prairie, Minnesota, USA, and from 1981 to 1995 as a professor at Vanderbilt University, Nashville, Tennessee, and at California State University, Fullerton, California, USA. Currently, he is a full Professor in Civil Engineering, School of Mines, and director of the Structural Stability Research Group at the National University of Colombia at Medellin. He was awarded the "Engineering Foundation" grant in 1982 by the American Society of Civil Engineers (ASCE), the Raymond Reese Structural award by the American Concrete Institute (ACI) in 1984, and an NSF research grant in 1988. He is an active editorial member and reviewer of several international journals (ICE, ASCE, ACI, PCI, ELSEVIER). His research interests include steel and reinforced concrete structures, structural dynamics and stability, composite materials, analysis and design of bridges, nonlinear analysis, seismic design, and soil-structure interaction. His research work is referenced by numerous textbooks, construction codes (ACI, AISC, AASHTO) and technical papers.



DYNA http://dyna.medellin.unal.edu.co/

Dynamic stability of slender columns with semi-rigid connections under periodic axial load: verification and examples

Estabilidad dinámica de columnas esbeltas con conexiones semirrígidas bajo carga axial periódica: verificación y ejemplos

Oliver Giraldo-Londoño a & J. Darío Aristizábal-Ochoa b

a Structural Researcher, M.S., Ohio University, USA, ogirald86@gmail.com
b 125-Year Generation Professor, Ph.D., School of Mines, National University of Colombia, Medellin, Colombia, jdaristi@unal.edu.co

Received: December 15th, 2012. Received in revised form: December 20th, 2013. Accepted: December 26th, 2013.

Abstract
The dynamic stability of an elastic prismatic slender column with semirigid connections of identical stiffness at both ends, and with sidesway between the two ends totally inhibited, subject to parametric axial loads including the combined effects of rotary inertia and external damping, was presented in a companion paper. Closed-form expressions that predict the dynamic instability regions of slender columns were developed by making use of Floquet's theory. The proposed equations are straightforward and simple to apply, and the solution is capable of capturing the phenomena of stability of columns under periodic axial loads using a single column element. The proposed method and corresponding equations can be used to investigate the effects of damping, rotary inertia and semirigid connections on the stability analysis of slender columns under periodically varying axial loads. Sensitivity studies are presented herein that show the effects of rotary inertia, damping and semirigid connections on the dynamic stability of columns under parametric axial loads. Analytical studies indicate that the dynamic behavior of columns under periodic loading is strongly affected by the flexural stiffness of the end connections and by the external damping, but not so much by the rotary inertia. Three examples are presented in detail and the calculated results are compared with those reported by other researchers.
Keywords: Buckling, Columns, Dynamic Analysis, Damping, Semi-Rigid Connections, Parametric Loading, Periodic Loading, Stability.

Resumen
La estabilidad dinámica de una columna elástica prismática esbelta con conexiones semirrígidas de rigidez idéntica en ambos extremos, y con desplazamiento lateral entre los dos extremos totalmente inhibido, sujeta a cargas axiales paramétricas incluyendo los efectos combinados de inercia rotacional y amortiguación externa, fue presentada en una publicación adjunta. Expresiones cerradas que se pueden utilizar para predecir las regiones de inestabilidad dinámica de columnas esbeltas se desarrollaron haciendo uso de la teoría de Floquet. Las ecuaciones propuestas son sencillas y fáciles de aplicar. La solución propuesta es capaz de capturar el fenómeno de estabilidad en columnas sometidas a cargas axiales periódicas utilizando un solo elemento de columna. El método propuesto y las ecuaciones correspondientes se pueden utilizar para investigar los efectos del amortiguamiento, la inercia rotacional de la columna, y las conexiones semirrígidas en el análisis de estabilidad de columnas esbeltas sometidas a cargas axiales periódicas. Estudios de sensibilidad presentados en esta publicación muestran los efectos de la inercia rotacional, el amortiguamiento y las conexiones semirrígidas en la estabilidad dinámica de columnas sometidas a cargas axiales paramétricas. Los estudios analíticos indican que el comportamiento dinámico de columnas bajo carga periódica está fuertemente afectado por la rigidez a la flexión de las conexiones de los dos apoyos y por el amortiguamiento externo, pero no tanto por la inercia rotacional. Tres ejemplos se presentan en detalle y los resultados calculados se comparan con los reportados por otros investigadores.
Palabras clave: pandeo, columnas, análisis dinámico, amortiguamiento, conexiones semirrígidas, cargas paramétricas, cargas periódicas, estabilidad.

1. Introduction

The main objective of this paper is to present examples and sensitivity studies to verify the analytical method and closed-form equations presented in a companion paper that determine the dynamic stability of an elastic 2D prismatic column with semirigid connections, with sidesway between the two ends totally inhibited, subject to a parametric axial load described by a Fourier series. The proposed model and corresponding equations, which are straightforward and relatively simple to apply, can be used to investigate the effects of damping, rotary inertia and semirigid connections on the stability of slender columns under periodically varying axial loads using a single column element. The closed-form equations make use of Floquet's theory to predict the dynamic instability regions of slender columns. Sensitivity studies and three verification examples are included in this paper that show the effects of rotary inertia, damping and semirigid connections on the dynamic stability of prismatic columns under parametric axial loads.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 66-72. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



2. Sensitivity study

2.1. Dynamic instability regions for damped columns subjected to harmonically varying axial loads

In this section closed-form expressions are developed that determine the first two instability border lines for columns subjected to periodic loads given by $P(t) = P + S\cos(\Omega t)$. Knowing the values of $E$, $I$, $A$, $r$, $m$, $c$ and $\Omega$, the non-dimensional parameters discussed in step (4) of the companion paper can be calculated. In this particular case the normalized axial load is written as $p(\tau) = p + s\cos(\bar{\Omega}\tau)$, where $p = P/(\pi^2 EI/L^2)$, $s = S/(\pi^2 EI/L^2)$, and $\tau = \omega_0 t$. This would correspond to $p(\tau) = p_0 + \sum_{n=1}^{\infty}\left[a_n^*\cos(n\bar{\Omega}\tau) + b_n^*\sin(n\bar{\Omega}\tau)\right]$ with $p_0 = p$; $a_1^* = s$; $b_n^* = 0$ (with $n = 1, 2, 3, \ldots$); and $a_n^* = 0$ (with $n = 2, 3, 4, \ldots$).

To determine the instability border lines, solutions with periods $2T$ and $T$ must be considered. The closed-form expression for the region of instability corresponding to the solution with period $2T$ can be found by substituting the corresponding values of $p_0$, $a_n^*$ and $b_n^*$ into Eq. (9) presented in the companion paper; this yields an explicit closed-form expression, Eq. (1), for $\bar{\Omega}_{1,2}^2$ in terms of $p$, $s$, $\xi^2$ and $\gamma^2$.

Now consider the solution with period $T$. The closed-form expression for this instability region is obtained by substituting the corresponding values of $p_0$, $a_n^*$ and $b_n^*$ into Eq. (10) presented in the companion paper as follows:

$\bar{\Omega}^2 = \dfrac{-B \pm \sqrt{B^2 - 4AC}}{2A}$  (2)

where the coefficients $A$, $B$ and $C$ are closed-form functions of $p$, $s$, $\xi$ and $\gamma$.

Sensitivity studies for different values of the damping parameter, rotary inertia parameter, and fixity factor were carried out. Fig. 1 shows the effect of damping on the dynamic instability regions, taking $p = 0$ and $R = 0$ (i.e., $\beta = 1$) in Eqs. (1) and (2). Numerical results indicate that by increasing the damping, the region of instability moves from left to right, acquiring some curvature, as reported by Svensson [2] and Timoshenko and Gere [3]. Fig. 2 shows the variation of the region of instability corresponding to Eq. (1) for different values of the slenderness parameter. The effects of the stiffness of the end connections on the instability regions are shown in Fig. 3. Figs. 2 and 3 indicate that: 1) the effects of rotary inertia on the dynamic response of slender columns subject to periodic axial loads are negligible for reasonable values of slenderness; and 2) the dynamic instability of a slender column subject to a periodic loading is greatly affected by the stiffness of the end connections.

Figure 1. Effects of Damping on the Stability Regions for a Pinned-Pinned Column subjected to Sinusoidal Axial Load ($\xi$ = 0%, 2% and 5%; solutions with periods $T$ and $2T$).


Figure 2. Effect of Rotary Inertia on the Stability Regions for Pinned-Pinned Columns subjected to Sinusoidal Axial Load ($R$ = 0%, 5% and 10%).

Figure 3. Stability Regions for Columns with Semirigid Connections subjected to Sinusoidal Axial Load ($\rho$ = 0, 0.5 and 1.0).

3. Comprehensive examples and verification

Example 1: Column with Semi-rigid Connections subjected to Rectified Sine Axial Load

Determine the stability regions for a damped column elastically connected at both ends, given that: 1) the fixity factors of the connections are $\rho = 0$, $\rho = 0.25$, $\rho = 0.5$, $\rho = 0.75$, and $\rho = 1$; 2) the damping parameter is $\xi = 5\%$; and 3) the applied axial load is given by a rectified sine wave as shown in Fig. 4.




p( )

are:

  1 ,   1.1692 ,   1.3844 ,   1.6649 ,

and

  2 , for the respective values of the fixity factor. Now, to

p

evaluate the instability regions the expansion in Fourier series for the given axial load must be known. The coefficients of the

0









Figure 4. Example 1: Rectified sine Axial Load

,

bn  0

. Closed expressions for the two first regions of

,

 . By solving numerically Eq. 7b the values obtained

a) Solution with period 2T



a n  4 p /   / 4n 2  1

are:

instability are obtained by substituting the values of  , and the corresponding terms of the Fourier series into Eqs. (9) and (10) presented in the companion paper.

Solution: The first necessary step to carry out the dynamic analysis of a slender column with semirigid connections is to determine the parameter  , which depends only of the fixity factor

p0  2 p / 

series

2

2

b) Solution with period T

 

 2  2  2 p /    2 2   2  2  2 p /    2 2    4 2  2  2 p /    4 p 2 / 9 2

 B  B  4 AC 2A

From step (4) the normalized function

2

Where: A  2

2



2

as

 2p / ;

B  608 p / 315  4    2 p /  2

2

8 2  2  2 p / 

2

2



2

2  2  2 p /    2  2 p /     C  4 608 p 2  2  32 p / 15 / 315 2

2

2

2 a2*

2
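As a quick numerical cross-check of these Fourier coefficients, recall the classical expansion of a rectified sine, $|\sin\theta| = 2/\pi - (4/\pi)\sum_{k\ge 1}\cos(2k\theta)/(4k^2 - 1)$, from which $p_0 = 2p/\pi$ and $a_n$ proportional to $(4p/\pi)/(4n^2-1)$ follow (the sign of $a_n$ is not legible in the source and is left aside here). The sketch below verifies the truncated series numerically:

    import numpy as np

    # Check |sin(theta)| = 2/pi - (4/pi) * sum_k cos(2k theta)/(4k^2 - 1)
    theta = np.linspace(0.0, 2.0 * np.pi, 1001)
    k = np.arange(1, 51)[:, None]          # first 50 harmonics
    series = 2/np.pi - (4/np.pi) * np.sum(
        np.cos(2 * k * theta) / (4 * k**2 - 1), axis=0)
    err = np.abs(series - np.abs(np.sin(theta))).max()
    print(f"max truncation error with 50 harmonics: {err:.1e}")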

p( )  s cos( )

with

expression p( )  p0 

s

 a

p ( )

can be written

S . Therefore for the  EI / L2 2

n  n 1 * n

* n

cos(n )  bn* sin(n ) to be

satisfied, then p 0  0 , b  0 , n  1, 2, 3, ... , a1  s , and *

a n*  0 , n  2, 3, 4, ...

/ 4    

Closed expressions for the first two regions of instability are obtained by substituting p= 0,   1 , and   0 into Eqs. (12) and (13). Therefore:

Fig. 5 shows the two first regions of instability for the prismatic column described above. Results indicate that under damping, regions of instability move horizontally showing that a minimal value of the magnitude of the applied axial load is necessary to make the system unstable. According as the fixity factor of the connections increases, the minimal value of the applied axial load also moves horizontally. As the connection becomes stiffer, results also show that the principal region becomes narrower.

a) Solution with period 2T

1 

2 

2

 1/ 2 2

 1/ 2

1 s / 2

(3a)

1 s / 2

(3b)

b) Solution with period T EXAMPLE 2: Dynamic stability regions of a hinged-hinged column under periodic loading Determine the stability limits for a perfectly hinged-hinged steel column. Assume that: it has a 1mm×25mm rectangular cross section, L= 400 mm, and the applied axial load P (t )  S cos(t ) . Compare the results using the proposed method with those reported by Svensson [2]. Neglect the effects of damping. Solution: Note that the values of E, I, A, r, m are known,   0 (hinged at both ends),  = 1 and  = 0 (damping effects are neglected).

1  2 

1

(4a)

 1/ 2 1

1/ 2

1 s2 / 2

(4b)

The first two stability regions can be found simply by increasing the values of s from zero to 1 and plotting the four roots obtained from Eqs. (3) and (4). Fig. 6 shows these regions and the results are in accordance with those calculated and reported by Svensson [2]. His experimental results are also shown.
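A minimal sketch of this step, assuming the reconstructed forms of Eqs. (3a)-(4b) above, is:

    import numpy as np

    # Border frequencies of the first two instability regions for the
    # undamped pinned-pinned column (Example 2), per Eqs. (3a)-(4b).
    s = np.linspace(0.0, 1.0, 11)
    om_2T_lo = 2.0 * np.sqrt(1.0 - s / 2.0)   # Eq. (3a)
    om_2T_hi = 2.0 * np.sqrt(1.0 + s / 2.0)   # Eq. (3b)
    om_T_lo  = np.sqrt(1.0 - s**2 / 2.0)      # Eq. (4b); Eq. (4a) gives 1
    for si, a, b, c in zip(s[::5], om_2T_lo[::5], om_2T_hi[::5], om_T_lo[::5]):
        print(f"s = {si:.1f}: 2T region [{a:.3f}, {b:.3f}], "
              f"T region [{c:.3f}, 1.000]")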




Figure 5. Example 1: Stability Regions for Columns with Semirigid Connections subjected to a Rectified Sine Axial Load ($\rho$ = 0, 0.25, 0.5, 0.75 and 1; $\xi$ = 5%).

Figure 6. Stability Regions for Pinned-Pinned Columns subjected to Sinusoidal Axial Load: Proposed Model (theoretical) and Experimental results after Svensson [2].

Example 3: Pinned-Pinned Column subjected to Saw-Tooth Axial Load

A slender column subject to a saw-tooth periodic axial load described by Kumar and Mohammed [4] is considered. The periodic load is defined by case 4 listed in Table 1 of the companion paper. The corresponding Fourier coefficients are $p_0 = p/2$, $a_n^* = 0$, and $b_n^* = -p/(n\pi)$. Assume that $L$ = 7 m, $E = 2.1\times10^{11}$ Pa and $I = 2.003\times10^{-5}$ m$^4$. Notice that $\rho = 0$, $\gamma = 1$, $\xi = 0$ and $\beta = 1$ (since the effect of rotary inertia is neglected). Thus, the following equations for the two first regions of instability can be obtained:

a) Solution with period $2T$:

$\bar{\Omega}_{1,2} = 2\left[1 - \dfrac{p}{2}\left(1 \mp \dfrac{1}{\pi}\right)\right]^{1/2}$

b) Solution with period $T$:

$\bar{\Omega}_{1,2} = \left[1 - \dfrac{p}{2} \mp \dfrac{p^2}{2\pi^2\left(1 - p/2\right)}\right]^{1/2}$

       


Figure 7. Principal Region of Instability for Pinned-Pinned Columns subjected to Saw-Tooth Axial Load ($\omega$ [rad/s] versus $P$ [kN]): Proposed Model and results after Kumar & Mohammed [4].

Fig. 7 shows the principal region of instability, which corresponds to the solution with period 2T. It can be seen that the results obtained using the proposed method are practically identical to those reported by Kumar and Mohammed [4], which were obtained using the FEM.

4. Summary and conclusions

Closed-form expressions that can be used to predict the dynamic instability regions of slender Euler-Bernoulli columns were developed in a companion paper using Floquet's theory. The proposed method is straightforward and the corresponding equations are relatively easy to use. The proposed closed-form equations enable the analyst to explicitly evaluate the effects of damping, semirigid connections, and rotary inertia on the nonlinear elastic response and lateral stability of slender prismatic columns with sidesway inhibited subject to static and dynamic axial loads. The proposed equations are not available in the technical literature. A sensitivity study and three examples are presented in detail that illustrate how to analyze the dynamic stability of slender prismatic columns with sidesway totally inhibited as the frequency and magnitude of the axial load vary.

Analytical results and sensitivity studies indicate that the second-order dynamic response of a slender Euler-Bernoulli column subject to periodic axial loads is affected by the rotary inertia, the external damping, and the stiffness of the end connections. It was found that for slender columns the effects of rotary inertia are not as strong as those produced by damping and the stiffness of the end connections. Analytical results indicate that: 1) instability border lines move horizontally and acquire some curvature as the damping increases; 2) as the stiffness of the end connections increases, the frequencies of the applied axial load defining the instability border lines also increase; and 3) the column axial deflection in the instability regions decreases significantly as the fixity factor $\rho$ varies from zero (i.e., for perfectly pinned-pinned columns) to one (i.e., for perfectly clamped-clamped columns). These results are in accordance with those reported by other researchers.

Acknowledgments

The authors wish to thank the Department of Civil Engineering of the School of Mines of the National University of Colombia at Medellín and COLCIENCIAS for their financial support.

Notation

The following symbols are used in both this paper and the companion paper:
A = area of the column cross section;
$a_n$, $b_n$, $P_0$ = coefficients of the Fourier series utilized to describe the applied axial load;
$c_n$, $d_n$ = constants;
c = damping coefficient;
$a_n^*$, $b_n^*$, $p_0$ = dimensionless coefficients of the Fourier series utilized to describe the applied axial load;
E = Young's modulus of the material;
f(t) = amplification function for the lateral deflection of the column;
I = principal moment of inertia of the column about its plane of bending;
L = column span;
M = bending moment along the column;
m = uniform mass per unit length of the column (including any additional uniformly distributed mass);
P(t) = periodic axial load applied at the ends of the column;
p(t) = dimensionless axial load;
r = radius of gyration of the column cross section;
R = slenderness parameter;
V = shear force;
y(x, t) = column lateral deflection;
$\gamma$ = parameter used to describe the shape function of the column;
= stiffness of the rotational restraint at both ends of the column;
$\Omega$ = angular frequency of the applied axial load;
$\bar{\Omega}$ = angular frequency of the applied axial load normalized with respect to $\omega_0$;
$\rho$ = fixity factor at the ends A' and B' of the column;
= rotation of the cross section due to bending;
$\tau$ = dimensionless time parameter;
$\omega_0$ = natural frequency of lateral vibration of a simply supported beam without axial load;
$\beta$ = dimensionless parameter that accounts for rotary inertia ($\beta = 1$ when $R = 0$);
$\xi$ = dimensionless damping parameter.

References

[1] Giraldo-Londoño, O. and Aristizabal-Ochoa, J. D., Dynamic stability of slender columns with semi-rigid connections under periodic axial load: theory, Revista DYNA, accepted for publication, 2013.

[2] Svensson, I., Dynamic Instability Regions in a Damped System, J. of Sound and Vibration, 244 (5), pp. 779-793, 2001.

[3] Timoshenko, S. P. and Gere, J. M., Theory of Elastic Stability, 2nd Ed., McGraw-Hill, New York, N.Y., 1961.

[4] Kumar, T. H. and Mohammed, A., Finite Element Analysis of Dynamic Stability of Skeletal Structures under Periodic Loading, J. of Zhejiang University SCIENCE A, 8 (2), pp. 245-256, 2007. http://www.springerlink.com/content/e03352h0h1712153/

J. Dario Aristizabal-Ochoa received the Bachelor degree in Civil Engineering in 1970 with honors from the National University of Colombia, Medellin, Colombia, and the MS and PhD degrees in Structural Engineering in 1973 and 1976, respectively, from the University of Illinois at Champaign-Urbana, USA. From 1977 to 1978 he worked for the Portland Cement Association, Skokie, Illinois, USA, as a structural researcher. From 1978 to 1981 he worked as Marketing Manager of Seismic Applications for MTS Systems at Eden Prairie, Minnesota, USA, and from 1981 to 1995 as a professor at Vanderbilt University, Nashville, Tennessee, and at California State University, Fullerton, California, USA. Currently, he is a full Professor in Civil Engineering, School of Mines, and director of the Structural Stability Research Group at the National University of Colombia at Medellin. He was awarded the "Engineering Foundation" grant in 1982 by the American Society of Civil Engineers (ASCE), the Raymond Reese Structural award by the American Concrete Institute (ACI) in 1984, and two NSF research grants in 1988 and 1989. He is an active editorial member and reviewer of several international journals (ICE, ASCE, ACI, PCI, ELSEVIER, etc.). His research interests include steel and reinforced concrete structures, structural dynamics and stability, composite materials, analysis and design of bridges, nonlinear analysis, seismic design, and soil-structure interaction. His research work is referenced by numerous textbooks, construction codes (ACI, AISC, AASHTO) and technical papers.

Oliver Giraldo-Londoño received the BS in Civil Engineering in 2010 from Universidad Nacional de Colombia, Sede Medellin, and the MS in Structural Engineering in 2014 from Ohio University at Athens, OH, USA. From 2006 to 2009, he worked as an undergraduate teaching assistant for the Department of Mathematics at Universidad Nacional de Colombia, Sede Medellin. Since 2010, he has been working in the Structural Stability Research Group (GES) at Universidad Nacional de Colombia, Sede Medellin, under the supervision of Dr. J. Dario Aristizabal-Ochoa. From 2011 to 2012, he worked as an instructor of statics of structures and numerical methods at Universidad de Antioquia. Since 2012, he has been working as a graduate assistant at Ohio University at Athens, OH, USA. He was awarded the Emilio Robledo award (Colombian Society of Engineers, 2009) and the Young Researcher award (COLCIENCIAS, 2010-2011). He was also a COLFUTURO scholar (2012-2014). His research interests include analysis and design of steel and concrete structures, non-linear mechanics, finite element modeling, bridge engineering, earthquake engineering, and structural dynamics.



DYNA http://dyna.medellin.unal.edu.co/

Polyhydroxyalkanoate production from unexplored sugar substrates

Producción de polihidroxialcanoatos a partir de sustratos azucarados inexplorados

Alejandro Salazar a, María Yepes b, Guillermo Correa c & Amanda Mora d*

a Faculty of Science, Universidad Nacional de Colombia, Medellín, salazar7@purdue.edu
b Faculty of Science, Universidad Nacional de Colombia, Medellín, msyepes@unal.edu.co
c Faculty of Agricultural Sciences, Universidad Nacional de Colombia, Medellín, gcorrea@unal.edu.co
d* Faculty of Science, Universidad Nacional de Colombia, Medellín, almora@unal.edu.co

Received: January 21st, 2013. Received in revised form: November 1st, 2013. Accepted: December 23rd, 2013.

Abstract
Industrial-scale production of biopolymers is restricted by its elevated production costs in comparison with those associated with synthetic (non-biodegradable and non-biocompatible) polymers. In this study we tested for the first time two low-cost carbon substrates (i.e. carob pulp and fique juice) for lab-scale production of polyhydroxyalkanoate (PHA) with Bacillus megaterium. PHA detection and quantification were conducted by gas chromatography/mass spectrometry-selected ion monitoring (GC/MS-SIM). The results suggest that PHA production using carob pulp (from Hymenaea courbaril) may be as high as with sugar cane molasses. Moreover, it could serve for the synthesis of the most commercialized type of PHA (i.e. polyhydroxybutyrate; PHB) and/or other varieties (e.g. polyhydroxybutyrate-co-valerate; PHBV) with different properties and potential applications.
Keywords: polyhydroxyalkanoate (PHA); polyhydroxybutyrate (PHB); carob pulp; fique juice.

Resumen
La producción de biopolímeros a escala industrial es restringida por los elevados costos de producción, en comparación con aquellos asociados a polímeros sintéticos (no biodegradables y no biocompatibles). En este estudio evaluamos por primera vez dos sustratos de carbono de bajo costo (i.e. pulpa de algarrobo y jugo de fique) para la producción a escala de laboratorio de polihidroxialcanoato (PHA) con Bacillus megaterium. La detección e identificación de PHA se hizo mediante cromatografía de gases con detector selectivo de masas operado en el modo de Monitoreo de Ion Selectivo (GC-MS/SIM). Los resultados sugieren que la producción de PHA a partir de pulpa de algarrobo (de Hymenaea courbaril) puede ser tan alta como con melaza de caña. Más aún, puede servir para la síntesis del tipo de PHA más comercializado (i.e. polihidroxibutirato; PHB) y/o de otras variedades (e.g. polihidroxi-butirato-co-valerato; PHBV) con diferentes propiedades y posibles aplicaciones.
Palabras clave: polihidroxialcanoato (PHA); polihidroxibutirato (PHB); pulpa de algarrobo; jugo de fique.

1. Introduction

Polyhydroxyalkanoates (PHA) are organic polyesters produced by a variety of bacterial species to store carbon and energy, especially under environmental/nutritional stress [1]. These biopolymers are a good alternative to replace petroleum-based polymers because they have mechanical properties similar to those of conventional polymers such as polypropylene [2], but additionally are biodegradable and can be produced from a wide range of renewable sources [3,4]. However, industrial-scale PHA production is restricted by its elevated costs in comparison with those associated with traditional non-biodegradable polymers [2]. One of the most extended approaches to reduce these costs is the use of inexpensive carbon substrates [1,5,6].

Colombia has a great variety of plants with carbon-rich fruit that can potentially serve as substrates for bioplastic production. One of these is Hymenaea courbaril (carob tree), a timber tree that extends from the west coast of central Mexico southward into Bolivia and south central Brazil. It is also found in Spain, Portugal, Arabia, Somalia and the West Indies [7]. The fruit of this tree consists of a woody capsule with hard seeds and a dry pulp rich in carbohydrates (Table 1) that is currently used for medical purposes [8] and for human and animal consumption [7].

Table 1. Composition of carob pulp (content per 100 g of pulp).
Component            Content
Total carbohydrates  75.3 g
Water                14.6 g
Fiber                13.4 g
Proteins             5.9 g
Fat                  2.2 g
Phosphorus           143 mg
Calcium              24 mg
Ascorbic acid        11 mg
Niacin               4.1 mg
Iron                 3.2 mg
Thiamine             0.24 mg
Riboflavin           0.14 mg
Source: Adapted from [7]

Another plant that produces a potentially valuable carbon substrate is Furcraea bedinghausii (fique).

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 73-77 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



This plant is highly used in Colombia and other South American countries to produce a natural fiber called cabuya. In its production, large amounts of fique juice (which represents about 90% of the fique leaf) are discarded in soils and water streams [9]. This juice is composed of sugars (3% total sugars), lignin (1%), proteins (0.96%), calcium (0.24%), potassium (0.03%), magnesium (0.03%), phosphorus (0.02%), and trace amounts of sodium, iron, copper, and zinc (from a bromatological analysis conducted in our lab). The purpose of this study was to investigate the production of PHA by Bacillus megaterium using carob pulp and fique juice as the sole carbon sources. Glucose and sugar cane molasses were used as a control and a reference, respectively, of inexpensive carbon substrates. Although the latter has been reported as an effective and inexpensive substrate for PHA production [5,10], its extensive use in the biopolymer industry is restricted by the food and biodiesel industries.

2. Materials and methods

2.1. Bacterial strain

A strain of Bacillus megaterium was isolated from soil in Colombia and characterized by molecular (16S rDNA sequence similarity), morphological and biochemical techniques [11]. The cultures were maintained on nutrient agar at -4 °C and a stock was stored at -20 ºC in 15% (v/v) glycerol.

2.2. Culture medium and inoculum preparation

The culture medium was composed of 0.6 g/l Na2HPO4, 2.0 g/l KH2PO4, 2.0 g/l (NH4)2SO4, 0.2 g/l MgSO4·7H2O, 0.02 g/l CaCl2, 0.1 g/l yeast extract, 10 ml/l trace solution (FeSO4 2 g/l, MnCl2·4H2O 0.2 g/l, NiCl2·6H2O 0.02 g/l, (NH3)6MoO7·4H2O 0.03 g/l and Na2B4O7·10H2O 0.1 g/l) and 20 g/l sugar substrate (glucose, sugar cane molasses, carob pulp or raw fique juice). After homogenization, the medium was centrifuged (2000 g, 10 min) and filtered (0.45 µm). The pH was adjusted to 7.0 with NaOH. Culture media with carob pulp and fique juice were also sterilized by autoclaving (121 ºC, 15 min). Inocula were prepared in test tubes containing 5 ml (10% of the total volume) of sterile culture medium. Each test tube was inoculated with a single B. megaterium colony and incubated at 30 ºC and 150 rpm for 24 h.

2.3. Fermentation studies

Inocula (5 ml) were transferred into 250 ml Erlenmeyer flasks containing 45 ml of sterile culture medium. Fermentations were conducted at 30 ºC and 150 rpm. Shake-flask cultures were harvested and assayed for biomass production and reducing sugar concentration at 0, 36, 72 and 144 h.

2.4. Biomass production

After harvesting, culture media were centrifuged at 5000 g for 15 min. The pellets were resuspended in Tris-HCl 0.01 M (pH 7.0) and frozen at -75 ºC. Finally, the pellets were lyophilized at -50 ºC and 0.05 mbar for 24 h and weighed [12].

2.5. PHA extraction and chromatography analysis

PHA was extracted by digestion with sodium hypochlorite and chloroform [13]. Lyophilized samples were combined with a hypochlorite:chloroform (1:1) solution and shaken at 200 rpm for 1 h. This solution was centrifuged at 8000 g for 10 min to isolate the PHA in the organic phase. The biopolymer was precipitated from the chloroform solution with methanol (1:3) added dropwise. The methanol solution remained at 4 ºC for 24 h. The precipitated PHA was purified by washing several times with methanol. Finally, excess methanol was eliminated by evaporation and the PHA polymer was identified by gas chromatography/mass spectrometry - selected ion monitoring (GC/MS-SIM) [3]. The analyses were conducted using a DB-WAX (60 m x 0.25 mm x 0.25 μm) column and a standard PHB (19.6 g; from Aldrich) as a reference. The injection was conducted in splitless mode (volume of injection = 1 μl).

2.6. Reducing sugar concentration

The dinitrosalicylic acid (DNS) technique (Miller, 1959) was used to measure the reducing sugar concentration throughout each fermentation period. Briefly, after preparing a calibration curve with glucose (0 to 2 g/l), 500 µl of supernatant, obtained by centrifugation, was added to 500 µl of the color reagent. These solutions were heated in boiling water for 5 min and immediately transferred to cold water for 5 min. Finally, absorbance was measured at 540 nm.

3. Results

3.1. Biomass production and sugar consumption

The highest biomass production was obtained with glucose at 36 h (Fig. 1, A), followed by those obtained with carob pulp and sugar cane molasses (Fig. 1, B and C, respectively). For fique juice this maximum was at 72 h (Fig. 1, D). Similarly, the largest decrease in reducing sugar concentration for media supplemented with carob pulp and fique juice was between 0 and 36 h (Fig. 1, B and D, respectively). The reducing sugar concentration in sugar cane molasses-supplemented media increased between 0 h and 36 h and then decreased slowly to 1.06 g/l at 144 h (Fig. 1, C). The sugar concentration in media with glucose was higher than 8 g/l throughout the fermentation period.



Figure 1. Dry biomass and reducing sugar concentrations in media supplemented with: A) glucose; B) carob pulp; C) sugar cane molasses; and D) fique juice. Notice that the scales for dry biomass and reducing sugars are not the same.

Without considering the glucose (due to its expense as a substrate), the highest dry biomass was obtained with carob pulp (1.75 ± 0.16 g/l) and sugar cane molasses (1.69 ± 0.02 g/l) at 36 h. Fique juice-supplemented media showed the maximum production at 72 h (0.23 ± 0.01 g/l), but it was significantly lower than those obtained with the other substrates. After these maximum points, the biomass production decreased 55% for carob pulp, 30% for sugar cane molasses, and 33% for fique juice at 144 h. The highest (1.18 ± 0.10 g/l) and lowest (0.33 ± 0.06 g/l) values of initial reducing sugar were obtained with carob pulp and fique juice, respectively. The highest reducing sugar concentration (1.29 ± 0.11 g/l) with sugar cane molasses was obtained at 36 h. From these maximum points to 144 h, the reducing sugar concentration decreased 77%, 73% and 18% for carob pulp, fique juice, and sugar cane molasses, respectively.

3.2. PHA extraction and characterization

The mass spectra of the monomers obtained by derivatization of the reference PHB (Fig. 2A) and of the produced PHA (Fig. 2, B to E) confirm the presence of hydroxybutyric (HB) monomers in all samples except those from fique juice (Fig. 2, E). Besides HB monomers, another compound (possibly hydroxyvalerate) was detected when glucose and carob pulp were used as the sole carbon source (peaks at 27 min in Fig. 2, B and C, respectively). The PHA production with glucose, carob pulp, sugar cane molasses and fique juice was 2.5, 0.8, 0.8 and < 0.002 g/l, respectively.

4. Discussion

All substrates tested in this research can be used as the sole carbon source for the growth of B. megaterium, which is a common bacterium used for PHA production [1,5]. Nevertheless, there are significant differences between the amount and composition of the biopolymers produced from each substrate. Similar results have been observed when comparing PHA production from different carbon sources. Valappil et al. (2007) were able to produce PHA (using a strain of Bacillus cereus) with 3-HB, 3-HV, and 4-hydroxybutyrate (4-HB)-like monomer units from structurally unrelated carbon sources, such as fructose, glucose, and gluconate [3]. Similarly, Pijuan et al. (2009) found that different phosphorus-removal microbial communities produced PHA with different compositions [amounts of PHB, PHV, and polyhydroxy-2-methylvalerate (PH2MV)], depending on the type of carbon source (i.e. acetate, propionate, butyrate, and glucose) [14]. Therefore, studies focused on novel carbon sources for PHA production (such as this one) have to consider not just the amount but also the type of PHA produced with each carbon source.

Biomass and PHA production were related to the availability of reducing sugars. The highest and lowest biomass and biopolymer productions were obtained with glucose and fique juice, which respectively showed the highest and lowest reducing sugar concentrations. Although sugar concentrations are higher in sugar cane molasses than in carob pulp, the PHB productions from both substrates were similar. There are two aspects that must be considered in this case: (1) sugar cane molasses is rich in polysaccharides (mainly sucrose) that cannot be detected by the DNS technique (Miller, 1959), but as the culture grows these polysaccharides are metabolized and reducing sugars are released to the culture media (Fig. 1, C and 3); (2) carob pulp contains volatile compounds such as methylpropanoic, methylbutanoic, hexanoic and heptanoic acids [15] that microorganisms can use for PHA synthesis [16]. It is possible that these volatile compounds compensated for the deficiency of reducing sugars in carob pulp with respect to cane molasses, so that both PHB productions were similar.




Figure 2. GC/MS-SIM chromatograms of monomers prepared from: A) reference PHB; and from those obtained from media supplemented with B) glucose (PHB at 22.1 min and possible PHV at 27.0 min); C) carob pulp (PHB at 22.1 min and possible PHV at 27.0 min); D) sugar cane molasses (PHB at 22.1 min); and E) fique juice (undetected).

Besides HB monomers, another compound was detected in the PHA produced from glucose and carob pulp. Based on the results reported by Keum et al. (2008), the peak at 27 min in Fig. 2, B and C could represent the production of hydroxyvalerate (HV) monomers. This suggests that the PHA obtained from glucose and carob pulp is the copolymer poly(hydroxybutyrate-co-valerate) (PHBV) [17]. This biopolymer has different properties than the common PHB and is used for different biomedical and industrial applications [18,19]. In summary, this is, to our understanding, the first evidence that carob pulp can be used as a carbon source for PHA production. The use of this and other inexpensive carbon substrates, such as beet molasses [20], extruded rice bran [21], and dairy wastes [1], could lead to significant reductions in the production costs of PHA.

Contrary to carob pulp, raw fique juice does not seem to be an adequate carbon source for PHA production. However, due to the large amounts of fique juice that are annually wasted in Colombia and other South American countries, it could be economically viable to consider pretreatments (e.g. to increase sugar concentration) to enhance the efficiency of fique juice as a substrate for PHA production. Although carob pulp represents an opportunity to reduce the production costs of PHA, more research is needed in order to reduce the gap in production costs between petroleum-based and biodegradable polymers.

5. Conclusions

Carob pulp is a promising carbon source for PHA production. Moreover, it may be used for the production of biopolymers with composition and properties different from those of the traditional PHB. This may be due to the presence of volatile fatty acids in carob pulp. An additional advantage of this novel carbon source is that carob trees are widely spread and their fruits are mostly unexploited.

Acknowledgments

We would like to thank the DIME (Dirección de Investigación de la Universidad Nacional de Colombia, Sede Medellín), the Vicerrectoría de Investigaciones de la Universidad Nacional de Colombia, and the Colciencias (Departamento Administrativo de Ciencia, Tecnología e Innovación de la República de Colombia) program "Jóvenes Investigadores e Innovadores - Virginia Gutiérrez de Pineda" for their financial support. We would also like to thank Dr. Mauricio Marín and M.Sc. Silvia Sánchez for their contribution to the isolation and characterization of the strain.



References

[1] Pandian, S., Deepak, V., Kalishwaralal, K., Rameshkumar, N., Jeyaraj, M. and Gurunathan, S., Optimization and fed-batch production of PHB utilizing dairy waste and sea water as nutrient sources by Bacillus megaterium SRKP-3. Bioresource Technology, 101 (2), pp. 705-711, 2010.

[2] Hong, C., Hao, H. and Haiyun, W., Process optimization for PHA production by activated sludge using response surface methodology. Biomass and Bioenergy, 33 (4), pp. 721-727, 2009.

[3] Valappil, S., Peiris, D., Langley, G., Herniman, J., Boccaccini, A., Bucke, C. and Roy, I., Polyhydroxyalkanoate (PHA) biosynthesis from structurally unrelated carbon sources by a newly characterized Bacillus spp. Journal of Biotechnology, 127 (3), pp. 475-487, 2007.

[4] Villano, M., Beccari, M., Dionisi, D., Lampis, S., Micchelli, A., Vallini, G. and Majone, M., Effect of pH on the production of bacterial polyhydroxyalkanoates by mixed cultures enriched under periodic feeding. Process Biochemistry, 45 (5), pp. 714-723, 2010.

[5] Kulpreecha, S., Boonruangthavorn, A., Meksiriporn, B. and Thongchul, N., Inexpensive fed-batch cultivation for high poly(3-hydroxybutyrate) production by a new isolate of Bacillus megaterium. Journal of Bioscience and Bioengineering, 107 (3), pp. 240-245, 2009.

[6] Kim, B., Production of poly(3-hydroxybutyrate) from inexpensive substrates. Enzyme and Microbial Technology, 27 (10), pp. 774-777, 2000.

[7] Aalzate, L., Artega, D. and Jaramillo, Y., Propiedades farmacológicas del algarrobo (Hymenaea courbaril Linneaus) de interés para la industria de alimentos. Revista Lasallista de Investigación, 5 (2), pp. 100-111, 2008.

[8] Cartaxo, S., Souza, M. and de Albuquerque, Medicinal plants with bioprospecting potential used in semi-arid northeastern Brazil. Journal of Ethnopharmacology, 131 (2), pp. 326-342, 2010.

[9] Martínez, L.F., Guía ambiental para el subsector fique. Ministerio del Medio Ambiente, Fedefique y Sociedad de Agricultores de Colombia, Colombia, 2000.

[10] Bengtsson, S., Pisco, A., Reis, M. and Lemos, P., Production of polyhydroxyalkanoates from fermented sugar cane molasses by a mixed culture enriched in glycogen accumulating organisms. Journal of Biotechnology, 145 (3), pp. 253-263, 2010.

[11] Sánchez, A., Marin, M., Mora, A. and Yepes, M., Identificación de bacterias productoras de polihidroxialcanoatos (PHAs) en suelos contaminados con desechos de fique. Revista Colombiana de Biotecnología, 14 (2), pp. 89-100, 2012.

[12] Barbosa, M., Espinoza-Hernández, A., Malagón-Romero, D. and Moreno-Sarmiento, N., Producción de poli-β-hidroxibutirato (PHB) por Ralstonia eutropha ATCC 17697. Universitas Scientiarum, 10 (1), pp. 45-54, 2005.

[13] Jacquel, N., Lo, C.W., Wei, Y.H., Wu, H.S. and Wang, S., Isolation and purification of bacterial poly(3-hydroxyalkanoates). Biochemical Engineering Journal, 39 (1), pp. 15-27, 2008.

[14] Pijuan, M., Casas, C. and Baeza, J., Polyhydroxyalkanoate synthesis using different carbon sources by two enhanced biological phosphorus removal microbial communities. Process Biochemistry, 44 (1), pp. 97-105, 2009.

[15] Mastelić, J., Jerković, I., Blazević, I., Randonić, A. and Krstulvović, L., Hydrodistillation-adsorption method for the isolation of water-soluble, non-soluble and high volatile compounds from plant materials. Talanta, 76 (4), pp. 885-891, 2008.

[16] Suriyamongkol, P., Weselake, R., Narine, S., Moloney, M. and Shah, S., Biotechnological approaches for the production of polyhydroxyalkanoates in microorganisms and plants - A review. Biotechnology Advances, 25 (2), pp. 148-175, 2007.

[17] Keum, Y.S., Seo, J.S., Li, Q.X. and Kim, J.H., Comparative metabolomic analysis of Sinorhizobium sp. C4 during the degradation of phenanthrene. Applied Microbiology and Biotechnology, 80 (5), pp. 863-872, 2008.

[18] Jacobs, T., Declercq, H., De Geyter, N., Cornelissen, R., Dubruel, P., Leys, C., Beaurain, A., Payen, E. and Morent, R., Enhanced cell-material interactions on medium-pressure plasma treated polyhydroxybutyrate/polyhydroxyvalerate. Journal of Biomedical Materials Research Part A, 101 (6), pp. 1778-1786, 2013.

[19] Pardo-Ibáñez, P., López-Rubio, A., Martínez-Sanz, M., Cabedo, L. and Lagaron, J., Keratin-polyhydroxyalkanoate melt-compounded composites with improved barrier properties of interest in food packaging applications. Journal of Applied Polymer Science, 131 (4), 2014.

[20] Page, W., Production of polyhydroxyalkanoates by Azotobacter vinelandii UWD in beet molasses culture. FEMS Microbiology Letters, 103 (2), pp. 149-157, 1992.

[21] Huang, T., Duan, K., Huang, S. and Chen, C., Production of polyhydroxyalkanoates from inexpensive extruded rice bran and starch by Haloferax mediterranei. Journal of Biotechnology, 33 (8), pp. 701-706, 2006.

Alejandro Salazar received the Bs. in Biological Eng. in 2009 and the MS degree in Biotechnology in 2012 at the Universidad Nacional de Colombia, Medellín. Currently, he is a PhD candidate in the Department of Biological Sciences at Purdue University, USA.

María Yepes received the Bs. in Chemistry in 1988 and the MS degree in Chemistry in 1996. Since 1996, she has been a full-time Professor in the School of Chemistry, Facultad de Ciencias, Universidad Nacional de Colombia. She is a member of the research group Production, Application, and Characterization of Biomolecules (PROBIOM) of the same university, and leads research projects in chemistry, food biotechnology, and environmental biotechnology, aimed at environmental, food, and social sustainability.

Guillermo Correa received the Bs. Eng. in Forest Engineering in 1995, the MS degree in Statistics in 1999, and the PhD degree in Multivariate Statistics in 2008. Since 1996, he has been a full-time Professor in the Agronomical Sciences Department, Facultad de Ciencias Agrarias, Universidad Nacional de Colombia. He collaborates in different topics of biological and agricultural research through the design and analysis of experiments.

Amanda Mora received the Bs. in Chemistry in 1992, the MS degree in Chemistry in 1997, and the PhD degree in Chemistry in 2006. Since 2000, she has been a full-time Professor in the School of Chemistry, Facultad de Ciencias, Universidad Nacional de Colombia. She is a member of the research group Production, Application, and Characterization of Biomolecules (PROBIOM) of the same university, and leads research projects in environmental chemistry and biotechnology, aimed at environmental remediation and the generation of value-added products from inexpensive substrates (e.g. agroindustrial wastes).



DYNA http://dyna.medellin.unal.edu.co/

Straight-Line Conventional Transient Pressure Analysis for Horizontal Wells with Isolated Zones

Análisis convencional de pruebas de presión en pozos horizontales con zonas aisladas

Freddy Humberto Escobar a, Alba Rolanda Meneses b & Liliana Marcela Losada c

a Facultad de Ingeniería, Universidad Surcolombiana/CENIGAA, Colombia, fescobar@usco.edu.co
b Facultad de Ingeniería, Universidad Surcolombiana/CENIGAA, Colombia, albita_meneses@hotmail.com
c Facultad de Ingeniería, Universidad Surcolombiana/CENIGAA, Colombia, lilianalosada@outlook.com

Received: January 22nd, 2013. Received in revised form: January 23rd, 2014. Accepted: January 30th, 2014.

Abstract
It is common in the oil industry to complete horizontal wells selectively. At times this segmentation even occurs naturally, since reservoir heterogeneity may cause segmented well performance. Segments may be only partially open to flow due to a high skin factor or low-permeability bands, and they can be treated as a non-uniform skin distribution. A few models have been introduced to capture these special details. Existing interpretation methodologies use non-linear regression analysis and the TDS technique, but equations for the conventional technique are absent. In this study, the conventional methodology is developed for the analysis of pressure transient tests in horizontal wells with isolated zones, so that directional permeabilities and skin factors can be obtained. The developed expressions were tested successfully with several examples reported in the literature and compared to results from other sources.
Keywords: Horizontal well, isolated zones, partial completion, flow regimes.

Resumen
Es común en la industria petrolera completar los pozos horizontales en forma selectiva. Incluso, dicha selección se efectúa de forma natural ya que la heterogeneidad del yacimiento podría causar comportamiento segmentado del pozo. La segmentación podría estar parcialmente abierta al flujo debido al alto factor de daño o a vetas de baja permeabilidad. Ellas pueden tratarse como una distribución no uniforme del factor de daño. Algunos pocos modelos se han introducido para capturar estos detalles especiales. Las metodologías de interpretación existentes usan análisis de regresión no lineal y la técnica TDS; pero se adolece de ecuaciones para el método convencional. En este estudio se desarrolla la metodología convencional para la interpretación de pruebas de presión en pozos horizontales con zonas aisladas, de modo que se puedan estimar las permeabilidades direccionales y los factores de daño. Las expresiones desarrolladas se probaron satisfactoriamente con varios problemas encontrados en la literatura y se compararon con los resultados procedentes de otras fuentes.
Palabras clave: pozo horizontal, zonas aisladas, completamiento parcial, regímenes de flujo.

1. Introduction

Selective completion is a current operation performed on horizontal wells. Reservoir heterogeneity may also cause segmentation due to low-permeability bands [1]. As indicated by [1], [2] and [3], the existence of an intermediate late radial or pseudorradial flow regime is the most important feature of the transient-pressure response of segmented horizontal wells. They found that the semilog slope of the late radial flow regime is affected by the number of equal-length segments. Horizontal wells with isolated zones can be a practical solution for problems presented in different formations, such as gas/water coning, sand production and asphalt production; however, the isolation has a great impact on the skin factor. Currently, the characterization of such completions is important since it is a valuable tool which allows for the determination of the best exploitation scenario. Recently, [4] introduced a new model for the type of geometry discussed here, with a detailed discussion of the importance and impact of selective horizontal well completion. They also formulated the TDS methodology, initially introduced by [5] for horizontal wells, for the interpretation of well pressure tests in segmented horizontal wells. In this study, the solution proposed by [4] is used to develop equations for the conventional straight-line methodology, which are successfully applied to field examples provided by [4] and [6]. The results of the estimated permeabilities in the x and z directions and the skin factors were compared with those from [4] using the TDS technique. Escobar et al. [7] have indicated the importance of the traditional conventional analysis even in transient-rate test interpretation.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 78-85 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



So far, the only available methodology for well test interpretation in the systems under consideration was the one presented by [4], following the philosophy of the TDS technique [8]. However, the importance of its application is supported by some examples recently reported in the literature. For instance, [9] presented some examples of the use of zonal isolations in horizontal well completions in a hydraulic fracturing treatment which was conducted with five packer isolation systems, so that five isolated zones were created. On the other hand, [10] introduced some field examples of well testing procedures in multi-zone openhole completion wells in which casing annulus packers, instead of cement, were used to provide the isolation.

2. Formulation

2.1. Mathematical model

The model developed by [1] corresponds to the dimensionless pressure governing equation based upon the following assumptions: (1) homogeneous reservoir, with constant and uniform thickness and with closed top and bottom boundaries; (2) anisotropic system, but with constant porosity and permeability in each direction; (3) negligible frictional and gravitational effects; and (4) the well extends along the midpoint of the formation height. The dimensionless pressure is given by:

$P_D(x_D, y_D, z_D, t_D) = \dfrac{\sqrt{\pi}}{4\,n\,L_{pD}} \displaystyle\int_0^{t_D} \dfrac{e^{-y_D^2/(4\tau_D)}}{\sqrt{\tau_D}} \sum_{j=1}^{n}\left[\operatorname{erf}\!\left(\dfrac{x_D + j\,L_{zD} + (j-1)L_{pD}}{2\sqrt{\tau_D}}\right) - \operatorname{erf}\!\left(\dfrac{x_D + j\,(L_{zD} + L_{pD})}{2\sqrt{\tau_D}}\right)\right]\left[1 + 2\sum_{N=1}^{\infty} e^{-N^2\pi^2 L_D^2 \tau_D}\cos(N\pi z_D)\cos(N\pi z_{wD})\right] d\tau_D$  (1)

In which the dimensionless parameters are as follows:

$x_D = \dfrac{x - x_w}{L_w}$  (2)

$y_D = \dfrac{y - y_w}{L_w}\sqrt{\dfrac{k_x}{k_y}}$, evaluated at the wellbore with $y - y_w = r_w$  (3)

$z_D = \dfrac{z - z_w}{L_w}$  (4)

$z_{wD} = \dfrac{z_w}{h}$  (5)

$z_D = \dfrac{z - z_w}{L_D\,h}$, with $z_{wD} = z_D\sqrt{\dfrac{k_x}{k_z}}$  (6)

$L_{zD} = \dfrac{L_z}{L_w}$  (7)

$L_D = \dfrac{L_w}{h}\sqrt{\dfrac{k_z}{k_x}}$  (8)

$L_{pD} = \dfrac{L_p}{L_w}$  (9)

$t_D = \dfrac{\eta_x\,t}{L_w^2}$, where $\eta_x = \dfrac{k_x}{\phi\mu c_t}$ (in oilfield units, $t_D = \dfrac{0.0002637\,k_x\,t}{\phi\mu c_t L_w^2}$)  (10)

$P_D = \dfrac{2\pi\sqrt{k_x k_y}\,h\,\Delta P(x, y, z, t)}{q\mu B}$  (11)

2.2. General equation of horizontal wells with isolated zones

Flow regimes in horizontal wells depend upon several issues mainly related to geometry. When the ratio of the wellbore length to the reservoir thickness, $L_D$, is less than 5, the well acts as a single source/sink and early spherical flow develops; an example of such a case is sketched in Fig. 1. If $5 < L_D < 20$, vertical early-radial flow takes place; however, if $L_D$ becomes larger than 20, the early-radial flow cannot be seen, since the formation thickness is too thin compared to the wellbore length and the top and bottom of the formation are quickly reached. As stated before, the radial flow regime develops in short horizontal wells ($L_D \le 20$) with or without isolated zones. The governing expression presented by [4] is given by:

$P_D = \dfrac{1}{4\,n\,L_{pD}\,L_D}\left[\ln\!\left(\dfrac{t_D}{y_D^2 + z_D^2}\right) + 0.80907\right] + \dfrac{s_m}{n\,L_{pD}\,L_D}$  (12)

which corresponds to the logarithmic approximation of the exponential integral $-Ei\!\left(-\dfrac{y_D^2 + z_D^2}{4 t_D}\right)$, valid for $\dfrac{y_D^2 + z_D^2}{4 t_D} \le 0.01$. When $Z = 0$, Equation (12) becomes:

$P_D = \dfrac{1}{4\,n\,L_{pD}\,L_D}\left[\ln\!\left(\dfrac{t_D}{y_D^2 + z_D^2}\right) + 0.80907 + 4 s_m\right]$  (13)

Figure 1. Spherical flow regime in a horizontal well with selective completion.
Escobar et al / DYNA 81 (185), pp. 78-85. June, 2014.

Replacing the dimensionless quantities in Equation (13), dividing by the natural log of 10, and solving for the well-flowing pressure yields:

$P_{wf} = P_i - \dfrac{81.28\,q\mu B}{n\,L_p\sqrt{k_y k_z}}\left[\log t - \log\!\left(\phi\mu c_t r_w^2\right) - 3.2275 + 4 s_m\right]$  (14)

The slope $m_{ER}$ from a semilog plot of pressure versus time allows for the estimation of $\sqrt{k_y k_z}$:

$\sqrt{k_y k_z} = \dfrac{81.28\,q\mu B}{n\,L_p\,m_{ER}}$  (15)

The sketch of Fig. 2 shows the development of an intermediate radial flow when a horizontal well possesses isolated zones, indicating that $5 < L_D < 20$. The governing dimensionless pressure derivative equation for such flow regime is:

$(t_D \times P_D')_{IR} = \dfrac{0.5}{n}$  (16)

Figure 2. Intermediate radial flow regime, after [6].

Since the semilog slope is the natural log of 10 times the pressure derivative value, Equation (16) allows one to find an expression for the horizontal permeability $\sqrt{k_x k_y}$ from the semilog slope, $m_{ER}$, once the dimensionless pressure derivative is placed in oilfield units:

$\sqrt{k_x k_y} = \dfrac{162.6\,q\mu B}{n\,h\,m_{ER}}$  (17)

As described by Fig. 3, the pseudo-spherical (or two-hemispherical) flow develops when the length of the perforated area is so short compared to the formation thickness that the well is forced to act as a single source/sink. The dimensionless pressure derivative governing equation presented by [4] is:

$(t_D \times P_D')_{sp} = \dfrac{1}{4\sqrt{\pi t_D}}$  (18)

Figure 3. Hemispherical flow regime, after [6].

Integration of Equation (18) leads to the dimensionless pressure for such flow regime:

$P_D = 1 - \dfrac{1}{2}\sqrt{\dfrac{1}{\pi t_D}} + s_{ps}$  (19)

Replacing the dimensionless terms, we obtain:

$\dfrac{\sqrt{k_x k_y}\,h\,\Delta P}{141.2\,q\mu B} = 1 - \dfrac{1}{2}\sqrt{\dfrac{\phi\mu c_t L_w^2}{\pi\,(0.0002637)\,k_x\,t}} + s_{ps}$  (20)

The slope of a Cartesian plot of pressure versus the inverse square root of time, $m_{ps}$, allows one to calculate $\sqrt{k_x}$:

$\sqrt{k_x} = \dfrac{2453\,q\mu B}{L_w\,h\,m_{ps}}\sqrt{\dfrac{\phi\mu c_t}{k_y}}$  (21)

Needless to say, the pseudo-spherical skin factor can be obtained from the intercept of such a plot.

In long horizontal wells ($L_D > 20$), the early radial flow regime is hardly seen, while early linear flow is dominant in the proximities of the well. The governing dimensionless pressure derivative equation for this linear flow is given by [4]:

$(t_D \times P_D')_{EL} = \dfrac{\sqrt{\pi t_D}}{2\,n\,L_{pD}}$  (22)

Integration of Equation (22) leads to the dimensionless pressure governing equation:

$P_D = \dfrac{\sqrt{\pi t_D}}{n\,L_{pD}} + s_t$  (23)

After replacing the dimensionless quantities, we obtain:

$\dfrac{\sqrt{k_y}\,h\,\Delta P}{q\mu B} = \dfrac{4.064}{n\,L_p}\sqrt{\dfrac{t}{\phi\mu c_t}} + s_t$  (23)

Equation (23) suggests that a plot of pressure versus the square root of time provides a straight line whose slope $m_{EL}$ can be used to estimate the square root of $k_y$.


In oilfield units:

$$\sqrt{k_y}=\frac{4.064\,q\,B}{n\,L_p\,h\,m_{EL}}\sqrt{\frac{\mu}{\phi\,c_t}} \qquad (24)$$

Figure 4. System of early linear flow for long horizontal wells with a high number of isolated zones, after [6]

Fig. 4 shows that once early-radial flow vanishes, the well acts as a hydraulic fracture and linear flow develops. A long horizontal well (LD > 20) with a high number of isolated zones also exhibits an early linear flow regime that is effective over the whole wellbore. The dimensionless pressure derivative governing equation was presented by [4] as follows:

$$t_D\times P_D'\big|_{EL}=\frac{\sqrt{\pi t_D}}{2\,n\,L_{zD}} \qquad (25)$$

After integration of Equation (25), we obtain:

$$P_D=\frac{\sqrt{\pi t_D}}{n\,L_{zD}}+s_t \qquad (26)$$

Once the dimensionless terms are replaced, the expression for the dimensionless pressure becomes:

$$\frac{\sqrt{k_y}\,h\,\Delta P_t}{q\,\mu B}=\frac{4.064}{n\,L_z}\sqrt{\frac{t}{\phi\,\mu\,c_t}}+s_t \qquad (27)$$

A plot of pressure versus the square root of time should yield a straight line whose slope, mEL, allows the estimation of ky0.5:

$$\sqrt{k_y}=\frac{4.064\,q\,B}{n\,L_z\,h\,m_{EL}}\sqrt{\frac{\mu}{\phi\,c_t}} \qquad (28)$$

At later times, the pseudorradial (or late radial) flow regime develops in the horizontal plane without any influence of the vertical permeability (see Fig. 5). The dimensionless pressure governing equation was also introduced by [4] as follows:

$$P_D(x_D,y_D,z_D,t_D,\tau)=\frac{1}{2}\ln t_D+\frac{2\,s_t}{L_{pD}\,L_D} \qquad (29)$$

In oilfield units:

$$P_{wf}=P_i-\frac{162.6\,q_t\,\mu B}{\sqrt{k_xk_y}\,h}\left[\log\!\left(\frac{k_x\,t}{\phi\,\mu\,c_t\,L_w^2}\right)-3.5789+\frac{0.8686\,s_t\,h}{L_p\sqrt{k_z/k_x}}\right] \qquad (30)$$

This flow regime corresponds to the radial flow regime observed in vertical wells. The dimensionless pressure derivative equation is:

$$t_D\times P_D'\big|_{PR}=0.5 \qquad (31)$$

Equation (31) is useful to calculate the horizontal permeability (kxky)0.5 if the semilog slope of a plot of pressure versus time, mPR, is estimated during this flow regime:

$$\sqrt{k_xk_y}=\frac{162.6\,q\,\mu B}{n\,h\,m_{PR}} \qquad (32)$$

Later flow regimes, such as late linear and pseudosteady state, are similar to those of conventional horizontal wells. Different expressions to estimate the skin factors are provided in Appendix A.

Figure 5. Pseudorradial flow for horizontal wells with isolated areas

3. Examples

Example 1 was taken from [4]; Examples 2 and 3 were taken from [6]. In all cases the examples were originally worked with the TDS technique.

3.1. Example 1

Fig. 6 presents pressure and pressure derivative data for a pressure drawdown test run in a horizontal well having two equal-length isolated zones, each of 400 ft. Other known reservoir and well data are:

q = 500 STB/D, φ = 0.05, μ = 0.5 cp, ct = 1x10-6 psi-1, h = 62.5 ft, Lw = 2000 ft, rw = 0.5 ft, Pi = 4000 psi, Lz = 2x400 ft, Lp = 2x600 ft, B = 1.2 bbl/STB, n = 2.

Estimate the formation permeability in all directions using the conventional technique.
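For routine use, the slope relations above reduce to one-line computations. The following Python sketch is our own illustration (the function names and argument conventions are not from the paper), written for oilfield units with Lp taken as the length of a single perforated section:

```python
import math

# Oilfield units assumed: q [STB/D or BPD], mu [cp], B [rb/STB],
# h and Lp [ft], semilog slopes m [psi/cycle], linear slope mEL [psi/hr^0.5].

def sqrt_kykz_from_radial_slope(q, mu, B, n, Lp, m):
    """(ky*kz)^0.5 [md] from the semilog slope of a radial flow regime
    acting over the n perforated sections of length Lp each, Eq. (15)."""
    return 81.28 * q * mu * B / (n * Lp * m)

def ky_from_linear_slope(q, mu, B, n, Lp, h, mEL, phi, ct):
    """ky [md] from the Cartesian slope of the early linear flow regime,
    Eq. (24); phi is fractional porosity, ct in 1/psi."""
    sqrt_ky = 4.064 * q * B / (n * Lp * h * mEL) * math.sqrt(mu / (phi * ct))
    return sqrt_ky ** 2

def sqrt_kxky_from_pseudoradial_slope(q, mu, B, n, h, mPR):
    """(kx*ky)^0.5 [md] from the semilog slope of the pseudorradial
    flow regime, Eq. (32)."""
    return 162.6 * q * mu * B / (n * h * mPR)
```

These three helpers reproduce the numerical values quoted in the examples that follow.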


Figure 6. Log-log plot of pressure and pressure derivative vs. time for example 1, showing the early radial, early linear and pseudorradial flow regimes; (t*ΔP')ER = 10 psi, (t*ΔP')L1 = 42 psi, (t*ΔP')PR = 195 psi. After [4]

Figure 9. Semilog plot of pressure vs. time during the early radial flow regime for example 1 (mER = 23 psi/cycle)

Solution. Three flow regimes are clearly seen in Fig. 6: early radial, early linear and pseudorradial. A semilog slope, mPR, of 448 psi/cycle is found from Fig. 7 during the late radial flow regime; Equation (32) allows estimating a horizontal permeability, (kxky)0.5, of 0.87 md. The Cartesian plot of pressure versus the square root of time given in Fig. 8 provides a slope, mEL, of 82 psi/hr0.5; using Equation (24), a value of ky of 1.57 md is found. Knowing (kxky)0.5 and ky, a value of kx of 0.48 md is readily obtained. From Fig. 9, a semilog slope, mER, of 23 psi/cycle during the early radial flow regime is used to calculate a (kykz)0.5 value of 0.88 md using Equation (17); from this, a value of 0.49 md for kz is then found.

Figure 7. Semilog plot of pressure vs. time during the pseudorradial flow regime for example 1 (mPR = 448 psi/cycle)

Figure 8. Cartesian plot of pressure vs. the square root of time for example 1, early linear flow (mEL = 82 psi/hr0.5)

3.2. Example 2

The pressure and pressure derivative data for a drawdown test of a horizontal well are given in Fig. 10. Other relevant data are as follows:

q = 4000 BPD, Lw = 4000 ft, φ = 0.1, rw = 0.566 ft, μ = 1 cp, Lp = 800 ft, Pi = 5000 psia, ct = 0.000002 psi-1, B = 1.125 rb/STB, h = 125 ft, n = 2, kx = 8 md.

Solution. Early radial, early linear, intermediate radial and pseudorradial flow regimes are clearly seen in the pressure derivative curve of Fig. 10. A slope, mIR, of 80.92 psi/cycle is obtained from the semilog plot given in Fig. 11, which allows one to estimate a (kykz)0.5 value of 2.82 md using Equation (15). Then, using a slope of 100.68 psi/hr0.5 read from Fig. 12, a y-direction permeability of 4.12 md is obtained by means of Equation (24). kz is readily found from (kykz)0.5 and ky to be 1.93 md.

Figure 10. Log-log plot of pressure and pressure derivative vs. time for example 2, showing early radial, intermediate radial, early linear and pseudorradial flow regimes. After [6]
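As a quick arithmetic check, the slope relations sketched earlier reproduce the values just quoted (a minimal standalone snippet; rounding differences of about 0.01 md with respect to the text are expected):

```python
# Example 1: q=500 STB/D, mu=0.5 cp, B=1.2 rb/STB, n=2, Lp=600 ft (per section),
# h=62.5 ft, phi=0.05, ct=1e-6 1/psi.
q, mu, B, n, Lp, h, phi, ct = 500, 0.5, 1.2, 2, 600, 62.5, 0.05, 1e-6

sqrt_kxky = 162.6 * q * mu * B / (n * h * 448)                 # mPR = 448 psi/cycle
ky = (4.064 * q * B / (n * Lp * h * 82) * (mu / (phi * ct)) ** 0.5) ** 2  # mEL = 82
sqrt_kykz = 81.28 * q * mu * B / (n * Lp * 23)                 # mER = 23 psi/cycle

print(round(sqrt_kxky, 2), round(ky, 2))     # 0.87 1.57
print(round(sqrt_kxky ** 2 / ky, 2))         # kx ~ 0.48
print(round(sqrt_kykz ** 2 / ky, 2))         # kz ~ 0.5 (0.49 in the text)

# Example 2, intermediate radial: the same radial-slope relation gives
print(81.28 * 4000 * 1.0 * 1.125 / (2 * 800 * 80.92))   # ~2.825 md; 2.82 in the text
```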


Figure 11. Semilog plot of pressure vs. time for example 2, intermediate radial flow (mIR = 80.92 psi/cycle)

Figure 14. Semilog plot of pressure vs. time during the pseudorradial flow regime for example 3 (mPR = 54.08 psi/cycle)

3.3. Example 3

Figure 13 contains the pressure and pressure derivative data from a drawdown test of a horizontal well presented by [6]. Reservoir, fluid and well parameters are given below:

q = 1000 BPD, Lw = 6000 ft, φ = 0.1, rw = 0.7 ft, μ = 1 cp, Lp = 450 ft, Pi = 5000 psi, ct = 0.000002 psi-1, B = 1.25 bbl/STB, h = 53 ft, n = 4, kz = 10 md.

Solution. From the pressure and pressure derivative log-log plot of Fig. 13, the early linear and pseudorradial flow regimes are clearly observed. The semilog slope during the pseudorradial flow regime (Fig. 14), mPR = 54.08 psi/cycle, leads to the estimation of a horizontal permeability value (kxky)0.5 of 17.73 md. A Cartesian slope during linear flow, mEL = 30.71 psi/hr0.5, obtained from Fig. 15, is used to estimate ky = 15.04 md from Equation (24). kx is then estimated to be 20.9 md.

Figure 12. Cartesian plot of pressure vs. the square root of time for example 2, early linear flow (mEL = 100.68 psi/hr0.5)

Figure 13. Log-log plot of pressure and pressure derivative vs. time for example 3, showing early linear and pseudorradial flow regimes. After [6]

Figure 15. Cartesian plot of pressure vs. the square root of time for example 3, early linear flow (mEL = 30.71 psi/hr0.5)

4. Comments on the Results

Table 1 presents the results obtained in this work compared to the values reported by [1] and [2]. Notice that the deviation errors of the results obtained in this work are acceptable, indicating that the equations developed work well.

Table 1. Comparison of results.

Example | Parameter | This Study | TDS   | Reference | % error
1       | kx, md    | 0.48       | 0.5   | [1]       | 4.9
1       | ky, md    | 1.57       | 1.49  | [1]       | 4.2
1       | kz, md    | 0.49       | 0.52  | [1]       | 4.5
2       | ky, md    | 4.12       | 4.02  | [2]       | 2.49
2       | kz, md    | 1.93       | 1.99  | [2]       | 2.76
3       | kx, md    | 20.9       | 19.87 | [2]       | 5.18
3       | ky, md    | 15.04      | 15.14 | [2]       | 0.66


5. Conclusion

The straight-line conventional method for pressure-transient analysis was complemented with new equations for horizontal wells with isolated zones. The equations were successfully applied to examples reported in the literature and provided results similar to those of the TDS technique.

Nomenclature

B  Oil formation factor, rb/STB
bx  Well position inside the reservoir
ct  Total system compressibility, 1/psi
h  Formation thickness, ft
k  Permeability, md
L  Total horizontal well length, ft
LD  Ratio of the horizontal wellbore length to the reservoir thickness, Lw/h
Lp  Perforated zones length, ft
Lz  Isolated zones length, ft
m  Slope
N  An integer from 1 to infinity; for practical purposes, from 1 to 100
n  Number of horizontal-well sections
Pi  Initial reservoir pressure, psi
Pwf  Well-flowing pressure, psi
P  Pressure, psi
s  Skin factor
q  Flow rate, BPD
t  Time, hr
z  Well position along the z-axis
zD  Dimensionless well position along the z-axis
zw  Distance from wellbore to formation bottom

Greek

∆  Change, drop
φ  Porosity
μ  Oil viscosity, cp
τ  Integration variable, time

Suffices

D  Dimensionless
EL  Early linear
ER  Early radial
ps  Pseudo-spherical
int  Intercept
IR  Intermediate radial
m  Mechanical
PR  Pseudorradial or late radial
t  Total
x, y, z  Coordinates

References

[1] Kamal, M.M., Buhidma, I.M., Smith, S.A. and Jones, W.R., Pressure transient analysis for a well with multiple horizontal sections. Paper SPE 26444, SPE Annual Technical Conference and Exhibition, Houston, October 3-6, 1993.
[2] Ozkan, E., Analysis of horizontal well-responses: contemporary vs. conventional. SPEREE, 4 (4), pp. 260-269, 2001.
[3] Yildiz, T. and Ozkan, E., Transient pressure behavior of selectively completed horizontal wells. Paper SPE 28388, SPE Annual Technical Conference and Exhibition, New Orleans, USA, September 25-28, 1994.
[4] Al Rbeawi, S. and Tiab, D., Effect of the number and length of zonal isolations on pressure behavior of horizontal wells. Paper SPE 142177, SPE Production and Operations Symposium, Oklahoma City, Oklahoma, USA, March 27-29, 2011.
[5] Engler, T.W. and Tiab, D., Analysis of pressure and pressure derivatives without type-curve matching. 6 - Horizontal well tests in anisotropic reservoirs. Journal of Petroleum Science and Engineering, 15, pp. 153-168, 1996.
[6] Al Rbeawi, S., Interpretation of pressure transient tests of horizontal wells with multiple hydraulic fractures and zonal isolations. PhD Thesis, The University of Oklahoma, Norman, OK, 2012.
[7] Escobar, F.H., Rojas, M.M. and Cantillo, J.H., Straight-line conventional transient rate analysis for long homogeneous and heterogeneous reservoirs. DYNA, 79 (172), pp. 153-163, 2012.
[8] Tiab, D., Analysis of pressure and pressure derivative without type-curve matching: 1 - Skin and wellbore storage. Journal of Petroleum Science and Engineering, 12, pp. 171-181, 1995.
[9] Maddox, B., Wahrton, M., Hinkie, R., Farabee, M. and Ely, J., Cementless multi-zone horizontal completion yields three-fold increase. Paper IADC/SPE 112774, IADC/SPE Drilling Conference, Orlando, FL, March 4-6, 2008.
[10] Brooks, R.T. and Scott, S., Improvement and testing multi-zone open-hole carbonate formations. Paper SPE 119426, Middle East Oil and Gas Show and Conference, Bahrain, March 15-18, 2009.

Appendix A. Skin factor equations

The mechanical skin factor from Equation (14) is:

$$s_m=\frac{\Delta P_{int}\,n\,L_p\sqrt{k_yk_z}}{325.12\,q_t\,\mu B}-0.8068 \qquad (A.1)$$

The pseudo-spherical skin factor from Equation (20) is:

$$s_{ps}=\frac{\sqrt{k_xk_y}\,h\,\Delta P_{int}}{141.2\,q\,\mu B} \qquad (A.2)$$

The total skin factor from Equation (23) is:

$$s_t=\frac{\sqrt{k_y}\,h\,\Delta P_{int}}{q\,\mu B} \qquad (A.3)$$

The total skin factor from Equation (30) is:

$$s_t=\frac{L_p}{0.8686\,h}\sqrt{\frac{k_z}{k_x}}\left(\frac{\Delta P_{int}\sqrt{k_xk_y}\,h}{162.6\,q\,\mu B}+3.5789\right) \qquad (A.4)$$


Freddy Humberto Escobar received a BSc degree from Universidad de America in 1989; both the MSc, received in 1995, and the PhD, received in 2002, were obtained from the University of Oklahoma. All his degrees are in Petroleum Engineering. Dr. Escobar joined Universidad Surcolombiana in 1996 and is the director of the Research Group on Transient Well Testing and the president of Cenigaa (Research Center for Sciences and Geo-agroenvironmental Resources).

Alba Rolanda Meneses, received her BSc in Petroleum Engineering from Universidad Surcolombiana in 2013. She has recently joined Schlumberger Limited as a field engineer. Liliana Marcela Losada, received her BSc in Petroleum Engineering from Universidad Surcolombiana in 2013.



DYNA http://dyna.medellin.unal.edu.co/

Creative experience in engineering design: the island exercise
Experiencia creativa en ingeniería en diseño: el ejercicio de la isla

Vicente Chulvi a, Javier Rivera b & Rosario Vidal c

a Ph.D. Universitat Jaume I, Dep. d'Enginyeria Mecànica i Construcció, Castelló (Spain). chulvi@uji.es
b Ph.D. Centro de Investigación y Asistencia en Tecnología y Diseño del Estado de Jalisco, A.C. (CIATEJ), Guadalajara (Jalisco, México). javier.rivera.r@gmail.com
c Ph.D. Universitat Jaume I, Dep. d'Enginyeria Mecànica i Construcció, Castelló (Spain). vidal@uji.es

Received: January 23rd, 2013. Received in revised form: January 20th, 2014. Accepted: February 20th, 2014.

Abstract This work addresses the challenge of stimulating creative thought in higher education. With this aim in mind, the article describes the development of a collaborative creativity exercise designed to improve students' creative skills through self-perception of their strong and weak points. In this work the exercise is set out as a five-step methodology, which includes the determination of personality profiles using the Herrmann Brain Dominance Instrument and the design of an island, to be carried out by groups of students in the classroom. In this study, the exercise, which has been applied to first-year Technical Engineering in Industrial Design students for the last five years, is undertaken by different groups of students in five different sessions. Observations performed in the classroom and the results of the exercises, that is, both the islands that were designed and the choices made by the students, are used to draw the conclusions about the validity of the study. Moreover, the paper also compares the perceptions of the students who took part in the experiment this year and those who had done the exercise in previous years. The conclusions concern the style of working of each group of dominances, and highlight the effectiveness of the tool for enhancing students' creativity through self-reflection. The students' positive perceptions, even several years after doing the exercise, are good proof of this. Keywords: Creativity; design; brain dominances; teaching tool. Resumen El presente trabajo aborda el reto de la estimulación del pensamiento creativo en la educación superior. Para ello se muestra el desarrollo de un ejercicio de creatividad colaborativo diseñado para mejorar las aptitudes creativas de los alumnos a través de la auto-percepción de sus puntos fuertes y débiles. En el presente trabajo el ejercicio se plantea como una metodología de cinco pasos, que incluye la determinación de los perfiles de personalidad mediante el Test de Dominancias Cerebrales de Herrmann y el diseño grupal de una isla, para ser realizado en una clase docente. El ejercicio, que lleva aplicándose durante cinco años sobre alumnos de primer curso de Ingeniería Técnica en Diseño Industrial, se plantea para el presente trabajo a grupos diferentes de alumnos en cinco sesiones diferentes. Las observaciones en el aula y los resultados de los ejercicios, tanto las islas diseñadas como las elecciones de los alumnos, sirven para extraer las conclusiones necesarias sobre la validez del estudio. Además, se muestra la comparativa de las percepciones de los alumnos que han realizado la experiencia en el presente curso con aquellos que realizaron el ejercicio en años anteriores. Las conclusiones comprenden el estilo de trabajo de cada grupo de dominancias y resaltan la efectividad de la herramienta para potenciar la creatividad de los alumnos a través de la autorreflexión. Las percepciones positivas de los alumnos incluso después de varios años de haber realizado el ejercicio son una buena prueba de ello. Palabras clave: creatividad; diseño; dominancias cerebrales; herramienta educativa.

1. Introduction

The importance of stimulating creative thought in order to achieve original, competent ideas is a challenge in education that is currently being tackled with the creative skills training process [1,2]. On the one hand, we have a wide range of tools with which to evaluate students' level of creativity or creative potential, such as those developed by Guilford [3], Torrance [4], Otis [5], Corbalán-Berná [6] or Runco [7]. On the other hand, there is also a set of techniques aimed at improving or enhancing the degree of creativity of students, which has initially been measured using the aforementioned tools [8-12]. The problem within the area of education lies in the fact that if students obtain poor results when their

creative potential is evaluated with the first group of tools, this will lead to frustration and a negative attitude when it comes to using the creative techniques. To solve this problem, the main purpose of this work is to present a tool for improving students' creativity through the perception of their capacities in a qualitative, rather than quantitative, manner. This technique has been applied to first-year Technical Engineering in Industrial Design undergraduates for five consecutive years. One of the most important elements of this method is the Herrmann Brain Dominance Instrument [13-14], which describes people’s thinking preferences or modes and thus does not use quantitative scales. This instrument has already proved its validity in a number of studies in which it was applied to students [15-17] and teachers [18-19].




This paper describes the development of a collaborative creativity exercise designed to improve students' creative skills through the self-perception of their strong and weak points. In the exercise, the Herrmann Brain Dominance Instrument is used to establish the teams. The work describes the application of the exercise, which was designed to be carried out in five different sessions, with the aim of eliminating possible dispersions and verifying the conclusions in a more consistent way. Furthermore, the paper also offers the results of a satisfaction survey that was administered to the students who carried out the exercise described here and to others who had done it in previous years. By so doing, the researchers aimed to evaluate students' perception of the exercise both in their recent memory and some years after the experience.

2. The Herrmann Brain Dominance Instrument

As stated earlier, the Herrmann Brain Dominance Instrument (HBDI) [14, 20] is a tool for measuring and describing people's preferences or modes of thinking that was developed by Herrmann in 1979 and later validated by Bunderson [21]. It must be stressed that the purpose of this tool is not to determine the level of intelligence; rather, it is limited to defining styles of thinking in a qualitative way. Hence, there are no good or bad profiles. In his model of brain dominance, Herrmann identifies four different modes of thinking (Fig. 1): A. analytical thinking, B. sequential thinking, C. interpersonal thinking and D. imaginative thinking. A person's brain dominance is determined by applying a 120-item questionnaire [13]. The result appears as a score for each quadrant which, taken together, allows us to determine the person's cognitive preference. That is to say, it becomes possible to see which profile is more prominent than the others in the person's normal performance, and therefore which traits they present in their interaction with the environment and with other individuals. It should be noted that many individuals do not present one single dominance: a person may be dominant in two styles, with preferences defined by a left-right or cortical-limbic hemisphere split, and there are even cases of triple or quadruple dominance (the latter being known as "total brain dominance").

Figure 1. The Herrmann Brain Dominances model

3. Method design

The questionnaire used in the exercise was the reduced version for students produced by Jiménez-Vélez [22], based on Herrmann's original instrument. This test is made up of 40 items, which allow the preferential style of thinking to be identified like the full version, but it is faster and simpler both to answer and to evaluate, a fundamental requirement for it to be used in a practical teaching situation. The methodology proposed for carrying out the exercise is as follows:

1. The Jiménez-Vélez reduced questionnaire is administered to the students individually in the actual classroom, and it is made clear to them that they are not doing a test or an exam, so there are no right or wrong answers, only personal preferences. At the end of the questionnaire there are instructions on how to score it, so that the students themselves can determine their own dominance.

2. Once the dominances have been determined, the main traits in each quadrant are explained to the students and they are asked to form groups of between four and six members, bearing in mind their main dominance. Only the quadrant in which they obtained the highest score is considered; cases of double, triple or quadruple dominance are ignored (a minimal sketch of this grouping step is given below).

3. The task that the groups must solve is explained to them as follows: "You have an unlimited budget with which to design an island concept" (Fig. 2). They are not given any further information or restrictions of any kind. They are given handicraft materials to use in the design, consisting of one DIN-A1 sheet of lightweight cardboard to be used as the base, sheets of coloured card, wax crayons and coloured pencils, glue, scissors and plasticine in different colours. They are allowed about an hour to produce their design.

4. Once the design time is up, each group chooses one of its members to give a one-minute presentation of the island they have designed to the other students. The students then vote for the design that they consider to be the best out of all those proposed by their classmates.

5. The rest of the session is devoted to getting the students to think about the results and to developing self-awareness of their own dominance and that of the people around them.

Figure 2. Instructions for the exercise
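Purely as an illustration of steps 1 and 2, the Python sketch below reduces the questionnaire to hypothetical per-quadrant totals (the 40 items themselves are not reproduced here, so the scores are invented) and assigns each student to his or her main dominance:

```python
from collections import defaultdict

# Hypothetical per-quadrant totals (A, B, C, D) from the reduced questionnaire;
# in the real exercise these come from scoring the 40 items.
students = {
    "s01": {"A": 12, "B": 7, "C": 5, "D": 9},
    "s02": {"A": 4, "B": 11, "C": 8, "D": 6},
    "s03": {"A": 6, "B": 5, "C": 13, "D": 10},
}

def main_dominance(scores):
    """Only the quadrant with the highest score is considered (double or
    triple dominances are ignored, as in step 2)."""
    return max(scores, key=scores.get)

groups = defaultdict(list)
for student, scores in students.items():
    groups[main_dominance(scores)].append(student)

print(dict(groups))  # e.g. {'A': ['s01'], 'B': ['s02'], 'C': ['s03']}
```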



Table 1. The dominances found in each session (numbers between brackets indicate the number of formed groups)

Number of students with | Session 1 | Session 2 | Session 3 | Session 4 | Session 5
Dominance A             | 5 (1)     | 2 (0)     | 4 (1)     | 5 (1)     | 5 (1)
Dominance B             | 11 (2)    | 10 (2)    | 9 (2)     | 10 (2)    | 7 (1)
Dominance C             | 10 (2)    | 6 (1)     | 6 (1)     | 6 (1)     | 6 (1)
Dominance D             | 6 (1)     | 10 (2)    | 13 (3)    | 6 (1)     | 6 (1)

4. Carrying out the experiment with students

The experiment was conducted with first-year Technical Engineering in Industrial Design undergraduates. The same experiment was carried out in five sessions with different students, so that the different results could be compared and the conclusions would be more robust. Between 24 and 32 students took part in each session.

Step 1. In the first step of the experiment, students were given the Jiménez-Vélez reduced questionnaire to complete, in order to determine their dominances. The results of the dominances for each session can be seen in Table 1.

Step 2. The second step consisted of making up groups of between four and six students (see the numbers between brackets in Table 1). In the second session, since there were only two students whose main dominance was A, there were not enough to form a group; they were therefore allocated according to their secondary dominance, which in these two cases was B and D.

Figure 3. Different students working in their groups

In the fifth session, the number of students with dominance B exceeded the upper limit for the number of members of the group by one, but on forming a group with dominance A, it was found to be one short. Thus, the student with dominance B who had the highest score in A was placed in group A.

Step 3. In step 3, the groups were shown the transparency with the statement of the problem (Fig. 2) and then given the materials and asked to start the exercise, without being offered any further information. Throughout the exercise, notes were taken about the attitudes and behaviours of each of the groups so that conclusions could later be reached. Photos were also taken and parts of the experiment were recorded so that they could be consulted after it had finished. The photographs in Fig. 3 show several different instances of the students working in their groups.

Figure 4. A student presenting the group’s island concept




Figure 5. Several island designs produced by groups with dominance A

Figure 6. Several island designs produced by groups with dominance B




Figure 8. Several island designs produced by groups with dominance D

Figure 7. Several island designs produced by groups with dominance C

Step 4. In the fourth step the spokesperson from each group presented their final island concepts (Fig. 4). Figs. 5, 6, 7 and 8 show several final designs for islands produced by the dominance A, B, C and D groups, respectively. The students then voted for what they considered to be the best island concept. The results were as follows: dominance B was the winner in sessions 1 and 5, C won in session 2, and dominance D was the most voted in sessions 3 and 4; dominance A was not chosen as the winner in any of the sessions.





Step 5. In the rest of the session the students are encouraged to discuss the experiment and the results obtained, and to further develop their self-awareness of their creative typology.

5. Results of the satisfaction survey

The survey was administered to a sample of students who participated in the experiment in the current year and also students who had taken part in the experiment in previous years. Altogether, answers were collected from 49 students, of whom 23 were from the current year and 26 from previous years. Of the sample of students who answered, 20% were from group A, 24% from group B, another 24% from group C and the remaining 32% from group D. The parameters that were taken into account in the survey referred to personal satisfaction, academic skills and professional competencies. Their responses can be seen in the graphs in Figs. 9, 10 and 11, which show the separate perceptions of students who have just done the experiment and of those who remember it from previous years.

Figure 9. Students' evaluation of their personal satisfaction with the exercise

Figure 10. Students' evaluation of the degree of academic skills acquired by doing the exercise

Figure 11. Students' evaluation of the degree of professional competencies acquired by doing the exercise

6. Discussion

Observation of the five sessions allowed the following issues to be deduced:

Dominance A group: the main motivation driving this kind of group is winning. It is usually a controversial group: all the members of the team want to impose their own decisions. In all five sessions there were always at least a couple of members who argued, and in two cases the teacher had to remind them to keep their voices down.

Dominance B group: at first the group is lost. Its members need clear instructions, and the first few minutes of the session are wasted calling the teacher and trying to get answers to questions like "But… what exactly do we have to do?", "An island? How?", "What is the island going to be used for?" Once they give up trying to get instructions out of the teacher, the group agrees on what they are going to do and they set about working in an organised and fairly quiet way.

Dominance C group: the groups of this type spent most of the session talking and discussing their ideas for island concepts. In this case the dialogue is sociable and friendly. Although they spend a lot of time on talking and reaching agreements, this does not stop them from going about the physical construction of the model of the island at the same time. Nevertheless, in comparison to the spokespersons from the other groups, the spokesperson of this group is the one who displays most enthusiasm when it comes to "selling" their island to their companions.

Dominance D group: this group is the first to begin the manual work of building the island, often even before they start discussing the design that they are going to develop. They frequently make changes to the initial concept, and do so in a rather chaotic way. The members of these groups display lively behaviour and laugh a lot.

From the resulting islands and the students' votes, the following observations can be made:

The island produced by groups A, despite taking into account all the functional necessities of the island, is not altogether convincing, since the model is designed in a short time and after several arguments, and therefore members' motivation is not very high. Moreover, the spokesperson is often interrupted by a companion from his or her own team, which breaks the flow of information to the audience.

The island designed by groups B, despite sometimes not being very original, is nevertheless the most elaborate and detailed. The team works efficiently and thinks about all the details, which means that their island is always ranked among the best. The solutions they use are usually very rational and methodical; they take into account all the necessary functions, and these are clearly differentiated in their design.

The model built by groups C is usually the most original, but sometimes it is so original that it borders on irrationality, and this lowers the number of votes they receive because they incorporate concepts that are not very highly valued by the members of the other groups. Their idea is well developed, however, and they stand out in the presentation, which is the group's strong point.

The islands designed by groups D range from the most original to the most chaotic. The disorganised way in which the group works results in a model that is difficult



to understand or which is finally left unfinished. Yet, quite often the actual design is attractive from the aesthetic point of view, or it is amusing for the audience, and this can capture quite a lot of votes for them. Hence, these groups usually come either first or last in the voting, but rarely finish halfway up the ranking.

Lastly, from the answers to the satisfaction survey, it can be deduced that students' evaluations of the levels of personal satisfaction and the degree of academic skills and professional competencies acquired are, overall, positive. On the other hand, it is also interesting to note that students who participated in the experiment in previous years rate it higher than those who have just done it.

7. Conclusions

The main aim of this practical exercise is to get students to perceive the way they work within a team and to think about how they can take advantage of their strong points and improve their shortcomings. This is what makes it essential to carry out the fifth step of the session, the discussion of the results, in order to explain to them the reasons behind their results and their attitudes.

As the work was being carried out, it became clear that a group made up only of leaders (dominance A) cannot advance, because a work team really must have only one clearly-defined person in charge. It has also been seen how a group made up exclusively of persons with dominance B, despite being more organised and harder working, needs a dominant voice to guide the group and give instructions; in the same way, such excessive organisation sometimes has a detrimental effect on the originality of the work. It has also been observed how a group made up of just dominance C spends too much time on discussion. Although this results in more creative concepts, they often get stuck on a holistic level and develop a concept that is frequently unfeasible. Lastly, the workgroups made up only of dominance D display a remarkable lack of control and organisation, which often turns what could be a good idea into a dismal failure. Yet, it seems to be the group that enjoys the experiment most.

Students are then made to think about these observations so that, by themselves, they come to the conclusion that a good work team must be made up of people with several dominances. There are no good or bad dominances; instead, they must work together in order to obtain the best results. In other words, a team must have: leadership, to control and make decisions quickly when needed; organisation and effectiveness, so that the concepts are materialised in good designs; interpersonal dealings, so that communication flows and ideas can circulate freely from some members of the team to others; and a creative part, to give the projects an original touch that makes them stand out from the rest.

The most positive point of the study is the positive perception that students have of the experiment. The fact that their perception of the exercise gets better as time goes by indicates that they consider that all the thinking they did during and after the experiment has yielded some benefit for them, both in their academic progress and later in their career.

References

[1] López-Calchis, E., Para lograr mayor eficiencia en el proceso de formación. Revista Institucional Universidad Tecnológica del Chocó: Investigación, Biodiversidad y Desarrollo, 26 (2), pp. 111-111, 2012.
[2] Duque, M., Gauthier, A., Gómez, R., Loboguerrero, J. y Pinilla, A., Formación de ingenieros para la innovación y el desarrollo tecnológico en Colombia. Revista DYNA, 128, pp. 63-82, 1999.
[3] Guilford, J.P., Intelligence, Creativity, and their Educational Implications. San Diego: EdITS Pub., 1968.
[4] Torrance, E.P., Torrance Tests of Creative Thinking: Norms-Technical Manual. Lexington, MA: Ginn, 1966.
[5] Otis, A.S. and Lennon, R.T., Otis-Lennon School Ability Test. San Antonio, TX: Harcourt Assessment, Inc., 1996.
[6] Corbalán-Berná, F.J., Martínez-Zaragoza, F., Donolo, D.S., Alonso-Monreal, C., Tejerina-Arreal, M. y Limiñana-Gras, R.M., Inteligencia creativa: Una medida cognitiva de la creatividad (CREA). Madrid: TEA Ediciones, 2003.
[7] Runco, M.A. and Basadur, M., Assessing ideational and evaluative skills and creative styles and attitudes. Creativity and Innovation Management, 2, pp. 166-173, 1993.
[8] Osborn, A., Applied Imagination: Principles and Procedures of Creative Thinking. New York: Charles Scribner's Sons, 1953.
[9] Dalkey, N.C., Delphi. Santa Monica, California: The Rand Corporation, 1967.
[10] Altshuller, G., Creativity as an Exact Science: The Theory of the Solution of Inventive Problems. Luxembourg: Gordon and Breach Science Publishers, 1984.
[11] Dubois, S., Rasovska, I. and De Guio, R., Comparison of non solvable problem solving principles issued from CSP and TRIZ. Paper presented at the CIRP Design Conference, 2008.
[12] Rivera, J., Vidal, R., Chulvi, V. y Lloveras, J., La transmisión visual de la información como estímulo cognitivo de los procesos creativos. Anales de Psicología, 26 (2), pp. 226-237, 2010.
[13] Herrmann, N., Participant survey form of the Herrmann Brain Dominance Instrument, 1989. Retrieved from: http://www.thinkingmatters.com/survey.pdf
[14] Herrmann, N., The Creative Brain. Lake Lure, North Carolina: Brain Books, 1988.
[15] Rojas, G., Salas, R. and Jiménez, C., Learning styles and thinking styles among university students. Estudios Pedagógicos, 32 (1), pp. 49-75, 2006.
[16] Velásquez-Burgos, B.M., de Cleves, N.R. and Calle, M.G., Determinación del perfil de dominancia cerebral o formas de pensamiento de los estudiantes de primer semestre del programa de bacteriología y laboratorio clínico de la Universidad Colegio Mayor de Cundinamarca. Nova, (7), pp. 5-19, 2012.
[17] Vera, S. y Valenzuela, P., Rutas de aprendizaje para la formación de ingenieros emprendedores. World Congress & Exhibition Engineering, Buenos Aires, Argentina, 2010.
[18] Gardié, O., Determination of the profile of thinking styles and analysis of their implications in the performance of Venezuelan university professionals. Estudios Pedagógicos, 26, pp. 25-38, 2000.
[19] Torres, M. and Lajo, R., Cerebral dominance associated with the labour performance of teachers in a UGEL of Lima. Revista de Investigación en Psicología, 12 (1), pp. 83-96, 2009.
[20] Herrmann, N., The Whole Brain Business Book. New York, NY: McGraw-Hill, 1996.
[21] Bunderson, V., The Validity of the Herrmann Brain Dominance Instrument, 1988. Retrieved from: http://www.hbdi.com/uploads/100021resources/100331.pdf
[22] Jiménez-Vélez, C., Cerebro creativo y lúdico. Bogotá: Cooperativa Editorial Magisterio, 2000.

V. Chulvi is Assistant Professor at the Department of Mechanical Engineering and Construction at the Universitat Jaume I of Castellón. Chulvi earned his BSc in Mechanical Engineering in 2001, his MSc in Mechanical Engineering in 2007, and his PhD in Technological Innovation Projects in Product and Process Engineering in 2010.

J. Rivera is Titular Engineer at the Centro de Investigación y Asistencia en Tecnología y Diseño del Estado de Jalisco, A.C. (México) and Coordinator of the Master in Generation and Management of Innovation, SUV, Universidad de Guadalajara (México). Rivera earned a BSc in Industrial Design (1983), an MSc in Engineering Projects (2006), an MSc in Science and Technology Commercialization (2009) and a PhD in Technological Innovation Projects in Product and Process Engineering (2009).

R. Vidal is Chair of Engineering Projects. For the past 15 years she has held different academic positions at the Universitat Jaume I in the Department of Mechanical Engineering and Construction. She is director of the GID (Engineering Design Group). Vidal earned a BSc in Industrial Chemical Engineering (1990), an MSc in Mechanical Engineering (1993) and a PhD in Engineering (1996).



DYNA http://dyna.medellin.unal.edu.co/

Calibrating a photogrammetric digital frame sensor using a test field
Calibración de una cámara digital matricial empleando un campo de pruebas

Benjamín Arias-Pérez a, Óscar Cuadrado-Méndez b, Pilar Quintanilla c, Javier Gómez-Lahoz d & Diego González-Aguilera e

a Cartographic and Land Engineering Department, University of Salamanca, Hornos Caleros, benja@usal.es
b Cartographic Center, Government of the Principality of Asturias, oscar.cuadradomendez@asturias.org
c Departamento Técnico, Hispana Suiza de Perfilados, mdpqb71@hotmail.com
d Cartographic and Land Engineering Department, University of Salamanca, Spain, fotod@usal.es
e Cartographic and Land Engineering Department, University of Salamanca, daguilera@usal.es

Received: February 5th, 2013. Received in revised form: January 27th, 2014. Accepted: May 8th, 2014.

Abstract In this paper a twofold calibration approach for a digital frame sensor is developed which treats panchromatic and multispectral calibration separately. Although there have been several improvements and developments in the calibration of digital frame sensors, only limited progress has been made in the context of multispectral image calibration. To this end, a specific photogrammetric flight was executed in order to calibrate the geometric parameters of a large format aerial digital camera. This photogrammetric flight was performed in the "Principado de Asturias" and was designed with a Ground Sample Distance of 6 cm, formed by two strips perpendicular to each other, with five images each and a longitudinal overlap of 60%. Numerous presignalled points were measured over the ground, both check points and control points. Keywords: CCD sensor; large format digital camera; calibration; multispectral image; panchromatic image; aerial photogrammetry. Resumen En este artículo se presenta un doble enfoque para la calibración de una cámara digital matricial que trata la calibración pancromática y multiespectral por separado. Aunque ha habido varias mejoras y novedades en la calibración de las cámaras digitales matriciales, sólo se han hecho limitados progresos en el contexto de la calibración de imágenes multiespectrales. Con este fin, fue realizado un vuelo fotogramétrico específico para tratar de hacer la calibración de los parámetros geométricos de una cámara aérea digital de gran formato. Este vuelo fotogramétrico se realizó en el "Principado de Asturias", y ha sido diseñado con un tamaño de píxel en el terreno de 6 cm, formado por dos pasadas perpendiculares entre sí, con cinco imágenes cada una y un recubrimiento longitudinal de 60%. Se han tomado numerosos puntos preseñalizados sobre el terreno, tanto para los puntos de control como para los puntos de chequeo. Palabras clave: sensor CCD; cámara digital de gran formato; calibración; imagen multiespectral; imagen pancromática; fotogrametría aérea.

1. Introduction

In the field of photogrammetry there is great interest in optimizing the acquisition of data. This interest has been strengthened in recent years by the exchange of information among the manufacturers of sensors, users and experts in geospatial information. The objective is being achieved through an improvement of the methods and systems used, and the implementation of new techniques for the production, management and processing of spatial data. The Project of European Spatial Data Research "Digital Camera Calibration & Validation" was divided into two phases: theoretical and empirical. The first was mainly dedicated to the launching of the Project, including the call for experts to form the network. In addition, an extensive report was made, where the different approaches for the calibration of sensors and the calibration methods applied by the manufacturers are

documented [1]. In the second phase, empirical tests based on the experiences and recommendations of experts on the procedures commonly accepted for calibration were performed. Flights were made with the following cameras: Leica ADS40, DMC from Z/I Imaging and UltraCamD by Vexcel. The data from these flights were distributed among the members of the network who took part in the second phase. The most important results obtained are shown in a report made by Cramer [2,3]. From these results it should be remarked that the environmental conditions during the taking of frames are different from the laboratory conditions under which the manufacturer has done the calibration, so users have to perform calibrations "in situ" (on site) to validate and refine the calibration parameters provided by the manufacturer. In-flight calibration of the system is not common, so far, in traditional aerial photogrammetry, so there is a general ignorance of the




characteristics and advantages of the method. The camera behaviour is not the same when tested under laboratory conditions as when performing under flying conditions and thus, some additional parameters are typically introduced when the self-calibration approach is applied [4]. The results provided by the standard photogrammetric model are usually affected by the departure of the theoretical model from the camera's actual geometry, as well as by the existence of a certain correlation between the parameters used in it, basically between some of the interior parameters (camera geometry) and some of the exterior parameters (camera position and attitude).

The additional parameters are usually split into three major groups: the first group consists of those parameters that belong to a mathematical or physical model; the second group does not account for a functional cause but rather uses an empirical expression that has been proven useful in tests; a third group comes from the blending of these two groups. In any case, the mentioned discrepancies can be determined and absorbed with the introduction of additional parameters in the adjustment of the block of images. Specifically, the introduction of additional parameters mainly increases the vertical accuracy, due to the limitation in the height/base ratio of digital cameras. As an example, diverse works have been published showing that the principal point of auto-collimation of these cameras is variable [5], and this produces effects not only on the images obtained with these cameras, but on the whole set of sensors (GPS, Inertial Measurement Unit) involved in the capture of data. In [6], the results of determining the misalignment of the inertial measurement system are presented by two companies that operate with UltraCamD. For one of them everything worked correctly, but for the other one some unexpected results permitted the detection of a systematic trend that is finally due to the principal point of autocollimation of the camera. This reveals the necessity to contrast and validate the internal parameters of these new photogrammetric aerial cameras. Therefore, the issue of the calibration of large format digital cameras is in fact a matter of great relevance and high interest.

Test flights were performed specifically to contrast the internal parameters of a camera (focal length and position of the principal point) together with additional parameters, especially those related to radial lens distortion and some systematic trends. Likewise, a twofold calibration approach has been developed, trying to cope with panchromatic and multispectral calibration separately. Although there have been several improvements and developments in the calibration of digital frame sensors, only limited progress has been made in the context of multispectral image calibration. More recently, the results published in [7] show that the geometric calibration of panchromatic aerial images is well known; however, no attention is paid to the geometric calibration of the multispectral images of these cameras.

The paper has been structured as follows: after this introduction, Section 2 provides a detailed description of the sensor, the calibration field, the flight requirements and the computation methods. In Section 3 the

experimental results are outlined and discussed. A final section is devoted to pointing out the main conclusions.

2. Materials and methods

2.1. The UltraCamD camera

The UltraCamD is a digital large-format aerial frame camera based on a multi-cone (multi-head) design that combines a group of 9 medium-format CCD sensors in a large-format panchromatic image. The multispectral channels are supported by 4 additional CCD sensors (red: 570-690 nm; green: 470-660 nm; blue: 390-530 nm; near-infrared: 670-940 nm). The focal length of the panchromatic lenses is 100 mm and for the colour lenses it is 28 mm. The pixel size is 9 μm, and the image obtained at full resolution is 7,500 pixels in the direction of flight and 11,500 pixels in the direction perpendicular to the direction of flight. In the multispectral bands the resolution is 2,672 × 4,008 pixels. The field of view is 37° × 55°. Each panchromatic optic cone has the same field, but the CCD sensors are arranged in various positions within each focal plane. The idea is that not all the cones are triggered at the same time, but they are triggered from the same point (syntopic exposure). One cone acts as the master cone, defining the image coordinate system.

2.2. Calibration field and flight requirements

GPS is the name of the Global Positioning System (NAVSTAR), which permits the location of a fixed or moving target on the earth surface within an accuracy of a few centimeters (if differential GPS is used in any of its varieties), although the expected standard accuracy is a few meters. The system has been developed and is operated by the Department of Defense of the USA. The initial constellation has been complemented by several initiatives: GLONASS (Russia), GALILEO (Europe), BEIDOU (China). All these systems share the same purpose: global positioning. From now on we use the term GNSS, for Global Navigation Satellite System. For absolute positioning with a single GNSS receiver, the expected accuracy ranges from a few decimeters to a few meters. To improve this accuracy a second receiver is involved, so that the receivers are referenced to each other and not to an absolute framework. This also permits one of the receivers to work in a dynamic fashion while the other (the base) is kept fixed at one position. When both receivers communicate with each other in real time by radio, modem or wifi, exchanging data received from the system and thus allowing their relative positions to be corrected, the technique is known as kinematic relative positioning or Real Time Kinematic (RTK) positioning, and it leads to an accuracy of some centimeters. This is the way the control points of this work have been measured. Having in mind that the smallest Ground Sample Distance (GSD) is 7 cm, and assuming an image accuracy of 1/3 of the GSD, we get a photogrammetric accuracy of 2.33 cm. Provided that the GNSS technique employed guarantees a precision better than 2 cm, we can certify that these data are accurate enough to be used as control points.
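This accuracy budget can be checked in a couple of lines (a trivial sketch; the 1/3-of-GSD rule is the assumption stated in the text above):

```python
gsd_cm = 7.0                              # smallest Ground Sample Distance, cm
photogrammetric_accuracy = gsd_cm / 3.0   # 1/3-of-GSD rule -> ~2.33 cm
gnss_accuracy_cm = 2.0                    # RTK precision guaranteed by the survey

# Control points are accurate enough if the GNSS error is below the
# photogrammetric accuracy they must support.
assert gnss_accuracy_cm < photogrammetric_accuracy
print(f"required: {photogrammetric_accuracy:.2f} cm, GNSS: {gnss_accuracy_cm:.2f} cm")
```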



heights referred to the European Terrestrial Reference System (ETRS89). The measurements of the image coordinates, both manual and automatic, were performed with MatchAT v.5 from Inpho. To give more consistency to the calculation of the internal parameters, 124 tie points located on the roofs of the buildings were manually measured.

The flight requirements consist of two strips in the shape of a cross, each with 5 images and a longitudinal overlap of 60%, covering a surface of about 4.6 ha. The first strip was performed in the NW-SE direction and included images 309, 310, 311, 312 and 313. The second strip was carried out in the SW-NE direction with images 314, 315, 316, 317 and 318. The GSD used is 6 cm, corresponding to a flight height of approximately 675 meters (Fig. 1).
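Under the usual pinhole relation GSD = pixel size × flying height / focal length, the design values quoted here are mutually consistent, as this short check (values taken from Sections 2.1 and 2.2) shows:

```python
focal = 0.100   # panchromatic focal length, m (100 mm, Section 2.1)
pixel = 9e-6    # pixel size, m (9 microns)
gsd = 0.06      # designed Ground Sample Distance, m (6 cm)

# GSD = pixel * H / focal  ->  H = GSD * focal / pixel
flying_height = gsd * focal / pixel
print(f"{flying_height:.0f} m")  # ~667 m, consistent with the ~675 m quoted above
```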

The calculations were performed with BINGO v.5.4. This program can compute the focal length of the camera, the position of the principal point, the radial distortion parameters and it uses additional parameters for doing so. According to the manufacturer [9], the parameters 7, 8, 9, 10, 25, 26, 35 and 36 have radial symmetric effects since they render a distribution of distortion (on the Y-axis) over the radius (on the X-axis) in a high-order polynomial fashion. It is recommended to study the graphical effects of these parameters since some of them have quite similar consequences and thus, should not be applied simultaneously. For example, a simultaneous use of parameter 7 and 8 on one hand, as well as 25 and 26 or 35 and 36 on the other hand is not recommended. The parameters 25 and 26 as well as the parameters 35 and 36 offer an alternative to the parameters 7 and 8. The main differences from the parameters 7 and 8 are the intersection points of the distortion curve with the r-axis. Therefore the parameters 25 and 26 as well as 35 and 36 are more useful for rectangular photo formats and the parameter 7 and 8 more for squared photo formats. Anyway, we must only calculate them when the gross errors of the block have been eliminated and when we have good approximations for the unknown factors. The calculations for the calibration of the camera are of two types: bundle adjustment and spatial resection [10]. If we use several images with overlap between them, it is preferable to use bundle adjustment, taking advantage of the geometric robustness that provides both automatic and manual measurements of image coordinates of the points in different images. On the other hand, when using a single image, an option for calibration is spatial resection or inverse intersection. In particular, an iterative process is launched in which the redundant parameters are flagged for deletion and eliminated in the next iteration. This automatic selection is made according to various criteria [9].

Figure 1. (a) Area of the field of calibration in the technology park. (b) Diagram of the flight used for the calibration of the camera. The footprints of the images (309-318) are outlined in green colour, while the footprint of image 311 is shown in orange colour. Control points are identified by blue triangles. The black cross represents the two flight lines (from 309 to 313 and from 314 to 318).

The calibration field is located in the Technology Park of Asturias (Spain), in the council of Llanera, next to the airfield of La Morgal. This area was chosen because, on the one hand, it allows the establishment of a set of presignalized control points (evenly distributed over the working area) with good temporal stability and, on the other hand, it enables the use of road marks as presignalized points available both for measurement with GPS techniques and in the images themselves. Besides this, the buildings located in the surroundings have been used to incorporate points at different heights which can be perfectly identified in the images. A total of 52 presignalized control points were measured with GPS techniques (RTK with a baseline of 500 m, with centimetric accuracy), as well as 581 points at road marks, obtaining coordinates in the Universal Transverse Mercator (UTM) cartographic projection and ellipsoidal heights.

3. Experimental results and discussion 3.1. Calibration with the panchromatic image The results are shown in tables with the following data: Control points: number of control points used; c: 96


Table 1. Values obtained in the bundle adjustment from manual measurements on the full panchromatic image making use of initial approximations.

Parameter | 52 Control Points | 675 Control Points
c (mm)    | 101.4000          | 101.3996
Sc (mm)   | 0.0018            | 0.0039
XH (mm)   | 0.0004            | -0.0005
SxH (mm)  | 0.0018            | 0.0039
YH (mm)   | 0.0002            | 0.0008
SyH (mm)  | 0.0018            | 0.0039
σ0 (µm)   | 2.00              | 2.00
S0 (µm)   | 0.70              | 1.58
Ratio     | 0.35              | 0.79

Table 2. Results obtained in the spatial resection for the image 310.

Parameter | 52 Control Points | 675 Control Points
c (mm)    | 101.4000          | 101.3999
Sc (mm)   | 0.0022            | 0.0055
XH (mm)   | 0.0000            | -0.0003
SxH (mm)  | 0.0022            | 0.0055
YH (mm)   | 0.0000            | 0.0003
SyH (mm)  | 0.0022            | 0.0055
σ0 (µm)   | 2.00              | 2.00
S0 (µm)   | 0.90              | 2.21
Ratio     | 0.45              | 1.10

Therefore, the following conclusions related to the panchromatic image calibration can be pointed out. Firstly, the use of more control points does not modify the result and worsens the standard deviations. This may be due to the weighting criteria of the control points: since these points are measured manually, their precisions can reasonably be supposed to be worse than those of the automatically measured points. In any case, the ratio between the a priori and a posteriori standard deviations stays under an acceptable threshold. Secondly, it is not required to use initial approximations, so we can afford to work with unknown nominal values and perform the calibration. Lastly, as the standard deviations are only slightly better in the case of bundle adjustment, the results obtained through spatial resection are totally valid.

The bundle adjustment was computed both using the initial approximations obtained (Table 1) and without them (the latter results are practically identical to those outlined in Table 1). The results (c, xH, yH) are very similar whether or not the initial approximations are used, so in this case they could be omitted. First, the computed focal length, c, barely varies from the nominal value (101.4000 mm). The principal point of autocollimation (xH, yH) scarcely departs from the origin (0,0), and the displacement can be estimated as about 1/4 of the pixel size (1.8 microns). Furthermore, the use of numerous control points does not improve the standard deviations (Sc, SxH, SyH), including the a posteriori standard deviation (S0); nevertheless, in both cases S0 is lower than the a priori standard deviation (σ0). Second, the spatial resection was calculated for all the images except those placed at the ends of the flight strips, because they had few control points and these were not properly distributed over the whole image. Table 2 shows the results of the calibration by spatial resection for image 310; similar results were obtained for images 312 and 317. Table 2 shows two calculations for image 310, depending on whether only the presignalized control points or all the measured points (presignalized and road marks) are used. The results obtained scarcely deviate from the initial nominal values: again, the focal length, c, presents slight variations with respect to its nominal value, whereas the principal point of autocollimation (xH, yH) also shows small variations from the origin (0,0). As can be observed through an analysis of the standard deviations, the results are slightly worse than those obtained by means of bundle adjustment. This is coherent, since the geometry provided by bundle adjustment is more robust. Besides, the use of numerous control points worsens the standard deviations, and a similar behavior is observed in the case of bundle adjustment.

3.2. Calibration with the multispectral image

The UltraCamD camera has four cones to generate multispectral images, corresponding to red, green, blue and near infrared (NIR). Each cone is associated with a single CCD that captures the whole area covered by the panchromatic image (through its 9 CCDs); the multispectral images therefore have a lower resolution on the terrain. For this reason a procedure known as pan-sharpening, widely used in remote sensing, is applied: based on the fact that colour is a property of the area, it gives the multispectral images the high resolution of the final panchromatic image. With this flight, one of the multispectral cones, the red one (cone nº 4), has been calibrated by means of bundle adjustment using the presignalized control points, since the low resolution of this image does not allow the road markings measured on the ground to be correctly distinguished. In this case, the image corresponds to level 0 (without any type of processing), with a focal length of 28 mm and a large radial distortion, so the calibration basically consists in determining the radial distortion. For the calculation we have used the additional radial distortion parameters 25 and 26. The results obtained using manual measurements are outlined in Table 3; the results with automatic measurements are identical except for the value of S0: 2.19 µm. For comparison, Table 4 outlines the data from the calibration certificate [11], which uses the parameters 931, 932, 934, 919, 920, 930, 7, 8 and 26. It should be noted that these computations were performed with the same software (BINGO) that the manufacturer uses.
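As a rough illustration of the pan-sharpening idea mentioned above, the sketch below applies a simple Brovey-style transform, one of several common schemes; the actual UltraCam processing chain is not documented here, so this is only a generic, assumed example with random stand-in imagery.

```python
# Hedged sketch of a Brovey-style pan-sharpening transform: the low-
# resolution multispectral bands (already upsampled to the panchromatic
# grid) are rescaled by the high-resolution panchromatic intensity.
import numpy as np

def brovey_pansharpen(pan, ms_lowres):
    """Scale each upsampled multispectral band by pan / mean intensity."""
    intensity = ms_lowres.mean(axis=2) + 1e-9       # avoid division by zero
    return ms_lowres * (pan / intensity)[..., None]

pan = np.random.rand(8, 8)        # high-resolution panchromatic image
ms = np.random.rand(8, 8, 3)      # multispectral bands, upsampled to match
sharp = brovey_pansharpen(pan, ms)
print(sharp.shape)                # (8, 8, 3): sharpened multispectral image
```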

Table 3. Results of the calibration of the level 0 image of cone n° 4 of the UltraCamD with bundle adjustment from manual measurements, using parameters 25 and 26 of BINGO, where r is the distance to the principal point and dr is the distortion at that distance. The adjustment was completed with S0 = 2.78 µm.
r (mm)    dr (µm)
 2.5        35.9
 5.0        58.8
 7.5        61.3
10.0        37.8
12.5       -15.2
15.0       -99.2
17.5      -213.9

Table 4. Calibration data provided by the manufacturer.
r (mm)    dr (µm)
 5.0       178.7
10.0       279.3
15.0       264.8
20.0       175.6

However, the use of parameters 7 and 8 together with parameter 26 is unusual, and this could explain the difference between our results and those provided by the manufacturer. In particular, the change of sign in the distortion is due to the different use of the parameter pairs 7-8 or 25-26. Another important aspect that could explain these differences is the environmental conditions of the image acquisition, since the manufacturer calibration is carried out in the laboratory whereas our calibration is performed in a field test.

4. Concluding remarks

In this paper, the results of the in-flight calibration of a large format digital camera for aerial photogrammetry, the UltraCamD model, have been presented. This represents a change from the usual calibration in the laboratory. Through two methods of calculation, bundle adjustment and spatial resection, the accuracy of the calibration parameters for the final image has been verified. As expected, the results show higher accuracy and reliability for the calculations by bundle adjustment than for spatial resection. However, the distribution of the image coordinate residuals shows the contribution of the 9 CCDs to the matricial image. One possibility to attenuate the influence of these 9 areas is the application of special additional parameters. Another possibility is the in-flight calibration of the 4 cones for the 9 CCDs of the panchromatic image at level 0, introducing the results of this calibration in the processing of the image until reaching level 3; the cones, rather than the whole image, would thus be treated as the processing unit. To do this, the flight should be planned so that a large overlap between the CCDs themselves (and not merely between the images) can be guaranteed. This would demand, firstly, that the calibration field, depending on the image scale, include a very large number of road marks as candidate control points, in addition to the presignalized points, so that they are imaged on the same CCD in different images. Secondly, the longitudinal overlap between two adjacent image positions should be about 80% (flying base of 20%). In this way there would be an adequate overlap between the CCDs (with the standard 60% overlap this is not achieved), and a calibration by bundle adjustment (much more robust than spatial resection) could be performed with the CCDs as the processing unit. Note that it is impossible to fly a single strip with this 80% overlap for this GSD, since the camera cannot operate at such a high frequency, nor can the plane fly so slowly. This problem can be solved, however, by flying additional strips with exactly the same trajectory as the original ones but with the projection centers shifted along the trajectory by half the standard flying base.

References

[1] Cramer, M. Digital camera calibration. Amsterdam: European Spatial Data Research, 2009.
[2] Cramer, M. The EuroSDR performance test for digital aerial camera systems. Proceedings of the Photogrammetric Week, pp. 89-106, 2007.
[3] Cramer, M. The ADS40 Vaihingen/Enz geometric performance test. ISPRS-J. Photogramm. Remote Sens. 60 (6), pp. 363-374, 2006.
[4] Ladstädter, R., Tschemmernegg, H., Gruber, M. Calibrating the UltraCam aerial camera systems, an update. Proceedings of International Calibration and Orientation Workshop EuroCOW, 2010.
[5] Honkavaara, E., Ahokas, E., Hyyppä, J., Jaakkola, J., Kaartinen, H., Kuittinen, R., Markelin, L., Nurminen, K. Geometric test field calibration of digital photogrammetric sensors. ISPRS-J. Photogramm. Remote Sens. 60 (6), pp. 387-399, 2006.
[6] Arias, B., Nafría, D.A., Blanco, V., Rodríguez, O.O., Blanco, M., Antolín, F.J., Rodríguez, J., Gómez, J. Testing a digital large format camera. Proceedings of International Calibration and Orientation Workshop EuroCOW, 2008.
[7] Jacobsen, K., Cramer, M., Ladstätter, R., Ressl, C., Spreckels, V. DGPF project: evaluation of digital photogrammetric camera systems geometric performance. Photogramm. Fernerkund. Geoinf., 2, pp. 85-98, 2010.
[8] Gruber, M., Ladstätter, R. Geometric issues of the digital large format aerial camera UltraCamD. Proceedings of International Calibration and Orientation Workshop EuroCOW, 2006.
[9] Kruck, E.J. BINGO user's manual. Geoinformatics & Photogrammetric Engineering, 2006.
[10] Kraus, K., Jansa, J., Kager, H. Photogrammetry. Bonn: Dümmler, 1997.
[11] Gruber, M., Kröpfl, M. Calibration report. Microsoft Photogrammetry, Graz, Austria, 2007, 66 P.

B. Arias-Pérez has been a professor at the University of Salamanca since 2010. Previously, he was a professor at the University of León between 2004 and 2010, and at the University of Salamanca between 2003 and 2004. He obtained a BS degree in Surveying Engineering at the University of Salamanca in 1999 and an MS degree in Geodesy and Cartography in 2002, also at the University of Salamanca, where he received his PhD in 2008. He teaches subjects related to geomatics, and his research lines are focused on aerial photogrammetry and radar images.

O. Cuadrado-Méndez is chief of photogrammetric mapping projects at the Cartographic Center of the Principality of Asturias (Spain). He obtained a BS degree in Surveying Engineering in 2001, an MS degree in Geodesy and Cartography in 2006 and a Master in Cartographic Geotechnologies in Engineering and Architecture in 2011 at the University of Salamanca. He is currently developing his doctoral thesis within the doctoral program "Geotechnology Research and Development" at the University of Salamanca. He has 17 years of experience in the fields of surveying, mapping, geographic information systems and remote sensing, and previously held various positions in the private sector. Currently he is



responsible for the implementation of free software in the field of GNSS positioning within the administration of the Principality of Asturias.


P. Quintanilla obtained her BS degree in Surveying Engineering in 2009 and her BS in Mining Engineering in 1999, both from the University of León. She worked in construction projects such as irrigation pipelines and road tunnels until 2012, and completed studies in Building Projects in 2014 at the IES Virgen de La Encina in Ponferrada. Currently she works in building, focusing on measurement techniques for the geometric definition of buildings and their 3D modeling.

D. González-Aguilera has been a professor at the University of Salamanca since 2002. He obtained a BS degree in Surveying Engineering at the University of Salamanca in 1999 and an MS degree in Geodesy and Cartography in 2001, also at the University of Salamanca, where he received his PhD in 2005. Based on his thesis results, he obtained two international awards from the International Society for Photogrammetry and Remote Sensing (ISPRS). He has authored more than fifty research articles in international journals and conference proceedings. Currently, his teaching and research lines are focused on close-range photogrammetry and laser scanning applied to engineering and architecture.

J. Gómez-Lahoz has been a professor at the University of Salamanca since 1993. He completed studies at the Complutense University of Madrid in Education Sciences in 1984 and at the Polytechnic University of Madrid in Surveying Engineering in 1993, and obtained his PhD at the University of Salamanca in 1999. His thesis was the result of his two main academic and professional concerns: the field of computer-aided learning and teaching of engineering, and the field of cartographic and photogrammetric engineering. The former constitutes one of his research lines, the other being the application of low-cost aerial and terrestrial photogrammetry to world heritage. He has fostered, participated in and disseminated research projects on both lines.



DYNA http://dyna.medellin.unal.edu.co/

Testing the efficient market hypothesis for the Colombian stock market
Comprobación de la hipótesis de eficiencia del mercado bursátil en Colombia

Juan Benjamín Duarte-Duarte a, Juan Manuel Mascareñas Pérez-Iñigo b & Katherine Julieth Sierra-Suárez c

Received: February 9th, 2013. Received in revised form: April 14th, 2014. Accepted: May 19th, 2014.

a Escuela de Estudios Industriales y Empresariales, Universidad Industrial de Santander, Colombia, jduarte@uis.edu.co
b Facultad de Ciencias Económicas y Empresariales, Universidad Complutense de Madrid, España, jmascare@ccee.ucm.es
c Escuela de Estudios Industriales y Empresariales, Universidad Industrial de Santander, Colombia, katjulss@gmail.com

Abstract
One of the basic assumptions of asset pricing models (CAPM and APT) is the efficiency of markets. This paper seeks to test this requirement in its weak form, both for the General Index of the Stock Exchange of Colombia (IGBC) and for the Colombian market's most representative assets. To this end, different statistical methods are applied to show that the stock series do not follow a normal distribution. Additionally, when testing the efficiency of the Colombian market through the runs, BDS, LB and Bartlett tests, there is no evidence of randomness in the main financial assets, with the exception of Ecopetrol. Moreover, in the specific case of the IGBC there is an improvement in market efficiency from 2008 to 2010, a period that coincides with the onset of the global economic crisis.
Keywords: efficient-market hypothesis, random walk, auto-regression, runs test, BDS test, LB test, Bartlett test.

Resumen
Uno de los supuestos básicos de los modelos de valoración de activos (CAPM y APT) es la eficiencia de los mercados. El presente trabajo busca comprobar este requisito en su forma débil, tanto para el Índice General de la Bolsa de Valores de Colombia como para las acciones más representativas del mercado colombiano. Para tal fin se comprueba por diferentes métodos estadísticos que las series bursátiles no siguen el patrón de una distribución normal. Además, al indagar sobre la eficiencia del mercado colombiano mediante los test de Rachas, BDS, LB y Bartlett, se evidencia no aleatoriedad en los principales activos financieros con excepción de Ecopetrol, mientras que para el IGBC se observa una mejora en la eficiencia del mercado del 2008 a 2010, periodo que coincide con el inicio de la crisis económica mundial.
Palabras clave: hipótesis de eficiencia de mercado, caminata aleatoria, autorregresión, pruebas de rachas, prueba BDS, prueba LB e intervalos de Bartlett.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 100-106. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

1. Introduction

The empirical proof of the Efficient Markets Hypothesis (EMH) is based on determining whether the price of financial instruments actually follows a Random Walk (RW), or in other words, whether the price formation of these instruments is unpredictable and future prices are impossible to forecast systematically in order to obtain extraordinary benefits in the marketplace. The EMH supposes that both the flow of future information and investors' reactions are generated simultaneously and cause an "instantaneous" and random movement in prices. According to Campbell et al. [1], the random walk is structured in three different versions, Random Walk 1, 2 and 3 (RW1, RW2 and RW3, respectively). RW1 is defined as a random walk in which price increments and returns are independent and identically distributed (i.i.d.); RW2, on the other hand, requires that increments be independent but not identically distributed; and finally RW3 allows dependent but uncorrelated increments.

Among the pioneers of the efficient markets theory we have Bachelier [2], who in his doctoral thesis "Théorie de la Spéculation" developed a mathematical and statistical theory based on Brownian motion, explaining the efficiency of markets through the behavior of a martingale. Years later, Cowles [3] was the first to study empirically the recommendations of stock analysts, arriving at the conclusion that their opinions did not systematically beat the market, lending credibility to the theory of efficient markets. In modern financial economics, Fama [4], another pioneer in the field of efficient markets, carried out extensive empirical investigations verifying the random walk model in stock markets, highlighting the challenge that chartists faced in predicting stock prices in the face of randomness. The definition of the EMH has been revised several times by Fama, who incorporated into the efficient market theory the concepts of transaction and information costs to show that prices reflect information only up to the point where the marginal benefits do not exceed the costs




(transactional and informational) [5]. Years later he would again modify the definition of the EMH to incorporate market anomaly concepts into the model: "the expected value of abnormal returns is zero, but chance generates deviations from zero (anomalies) in both directions" [6]. Paralleling Fama's work, Samuelson [7] offered the first formal theoretical argument for efficient markets, in which price changes must fluctuate unpredictably as they instantaneously incorporate the expectations and information of market participants; it is here that the author employs the martingale analogy instead of the random walk that Fama put forward.

During the last decade and a half there has been much work done on the efficiency of Latin American markets. One of the first to test the randomness of Latin American markets was Urrutia [8], who examined the Argentinean, Chilean and Mexican markets from 1975 to 1991 through runs and variance-ratio tests, concluding that these markets do not follow a random walk. Years later, in 1997, Bekaert et al. [9] analyzed the Colombian general stock exchange index through runs and serial correlation tests, and rejected the random walk hypothesis for 1987 to 1994. In the new millennium, Delfiner [10] tested the relative efficiency of the Argentinean market as compared with the United States market from 1993 to 1998, using a variance-ratio test, a modified R/S test, and autocorrelation and runs tests; certain levels of dependency were detected in Argentinean returns and, using Alexander filters, extra gains were found that were then lost in commissions. In 2004, Maya and Torres [11] found the presence of randomness in the Colombian market. In 2006, Zuluaga and Velásquez [12] found that it is possible to obtain returns when investing in dollars, following the signals of some technical indicators but conditioned on low transaction costs, thereby rejecting the efficient market hypothesis for the Colombian foreign exchange market. Later, Eom et al. [13] computed the Hurst exponent to test the weak-form efficient market hypothesis in 60 market indexes of various countries; they found empirically that Colombia has a high average Hurst exponent, which evidences a low degree of efficiency. However, most of these studies test only the general stock market index over a single period. This paper seeks to test the weak-form efficient market hypothesis, both for the General Index of the Stock Exchange of Colombia and for the Colombian market's most representative assets, over different periods of time.

2. Methodology

The empirical testing process begins with the definition of the sample space and the study variable. We then perform a preliminary analysis of the series with the goal of determining whether their behavior follows a normal distribution. Next, we test whether the returns are independent and identically distributed, through the runs and BDS tests. Finally, we verify the existence of serial autocorrelation through the Bartlett and Ljung-Box (LB) tests.
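As a minimal illustration of the preliminary stage just described, the sketch below computes continuously compounded returns from a short hypothetical price series and applies the Jarque-Bera normality test with scipy; the price values are invented for the example, whereas the real study uses IGBC and individual-stock series from the BVC.

```python
# Hedged sketch: continuously compounded returns plus the Jarque-Bera
# test, as used in the preliminary normality screening.
import numpy as np
from scipy import stats

prices = np.array([1000.0, 1012.0, 1005.0, 1021.0, 1018.0,
                   1030.0, 1025.0, 1041.0])       # hypothetical closes
returns = np.diff(np.log(prices))                 # log returns

jb_stat, p_value = stats.jarque_bera(returns)
print(f"skewness = {stats.skew(returns):.3f}, "
      f"kurtosis = {stats.kurtosis(returns, fisher=False):.3f}")
print(f"JB = {jb_stat:.3f}, p = {p_value:.3f} (reject normality if p < 0.05)")
```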

Table 1. Financial stocks selected.
Share        N     Initial Date  End Date     % Participation
IGBC        2604   01/02/2002    08/31/2012        -
ECOPETROL   1664   11/26/2007    08/31/2012      19.9%
PREC         659   12/23/2009    08/31/2012      17.6%
PFBCOLOM    2603   01/03/2002    08/31/2012      10.3%
GRUPOSURA   2604   01/02/2002    08/31/2012       7.4%
CEMARGOS    2604   01/02/2002    08/31/2012       5.0%
Source: Interpretation of data stemming from the Colombian General Stock Exchange.

2.1. Data

On July 3, 2001, the Colombian Stock Exchange (BVC) consolidated the stock markets of Bogotá, Medellín and Cali, which had operated independently until then, into a single index. For this reason it is reasonable to begin this study six months after the creation of the index, in other words starting in January 2002. In Table 1, some selected companies have a lower number of observations owing to the fact that their trading started after the opening day of the IGBC. Using the Pareto principle, we identify the most representative Colombian shares, using as the criterion their participation in the Colombian General Stock Exchange Index (IGBC). Table 1 shows the stocks that make up 60% of the index, with the corresponding dates and numbers of observations.

2.2. Preliminary analysis of the financial series

In order to understand the behavior of the data, a statistical analysis is required that allows us to define the best fit for the empirical distribution. To this end we implement two stages: in the first, we calculate the basic statistics together with the Jarque-Bera (JB) test in order to determine whether the series have a normal distribution; in the second, we rank the fit of several theoretical distributions to the data. We use continuously compounded returns as the study variable, considering the advantages mentioned in [1].

2.3. Testing the efficient market hypothesis

As illustrated in the literature review, different tests are available to test the EMH. This study first applies the nonparametric tests (runs and BDS) with the purpose of identifying whether returns are independent and identically distributed, that is, whether they fit version RW1 of the random walk. Similarly, to test version RW3 we estimate the LB test which, together with the corresponding correlation analysis suggested by Bartlett, seeks to identify whether the returns are uncorrelated.

2.3.1. Runs test

The nonparametric runs test, or Wald-Wolfowitz test [14], seeks to test the hypothesis of market efficiency by contradicting RW1, based on the number of runs (R) found in the sequence, such that abnormally small or large quantities of R imply non-randomness in price generation. This variable behaves asymptotically as a normal distribution which, when standardized, yields the statistic of eq. (1):

Z = \frac{R - \mu}{\sigma}    (1)

where

\mu = \frac{2 n_1 n_2}{n_1 + n_2} + 1  and  \sigma = \sqrt{\frac{2 n_1 n_2 (2 n_1 n_2 - n_1 - n_2)}{(n_1 + n_2)^2 (n_1 + n_2 - 1)}}

and where n_1 is the number of returns above the mean and n_2 is the number of returns below the mean. We reject the hypothesis of i.i.d. returns if the p-value is less than 5%.

2.3.2. BDS test

The BDS test, developed by Brock et al. and implemented in 1996 together with LeBaron [15], is a nonparametric statistical test with strong power against both linear and non-linear structures [16], which tests the null hypothesis that a time series is i.i.d. The test considers a series of n returns R of a financial stock following some distribution f (R ~ f) and an epsilon (\varepsilon) greater than zero and smaller than the range of R, that is, 0 < \varepsilon < [\max(R) - \min(R)]. The BDS null hypothesis is H_0: P_m = P_1^m. To obtain the probability P_m, the correlation integral C_{m,n}(\varepsilon) is used [17,18], so that for an embedding dimension m and a distance \varepsilon with n observations, the statistic W is defined by eq. (2):

W_{m,n}(\varepsilon) = \sqrt{n - m + 1}\;\frac{C_{m,n}(\varepsilon) - C_{1,n-m+1}(\varepsilon)^m}{\sigma_{m,n-m+1}(\varepsilon)}    (2)

Following the recommendations of previous studies [19], which suggest estimating the test for several epsilons in order to substantiate its acceptance or rejection, four different epsilons are used here: 0.5, 1, 1.5 and 2 standard deviations of the data. We use dimensions m = {2-6} with the intent of observing the behavior of the statistic as m grows. To detect autocorrelation in stock returns, the Bartlett test and the Ljung-Box Q test, described below, are used.

2.3.3. Ljung-Box test

This test is a variation of the Box and Pierce Q test, and it tests the hypothesis that the m autocorrelation coefficients are simultaneously zero [20]. The LB statistic is defined by eq. (3):

Q_{LB} = n(n + 2) \sum_{k=1}^{m} \frac{\hat{\rho}_k^2}{n - k} \sim \chi^2_{m\,d.f.}    (3)

where n is the size of the sample and m is the lag. The null hypothesis is rejected if the p-value is less than 5%.

2.3.4. Bartlett test

This test analyzes the individual hypothesis that "some" of the autocorrelations are different from zero. To this end we turn to what Bartlett demonstrated [21]: if a time series is purely random (white noise), the correlation coefficients behave asymptotically like a normal distribution with mean zero and variance 1/n, in which case the 95% confidence interval for any \hat{\rho}_k is defined as \pm 1.96\sqrt{\sigma^2}, i.e. \pm 1.96\sqrt{1/n}.
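For readers who want to reproduce these checks, the following minimal sketch implements the runs Z statistic of eq. (1), the Ljung-Box Q of eq. (3) together with Bartlett's band, and the correlation integral C_{m,n}(ε) on which the BDS statistic of eq. (2) is built (the BDS variance term σ_{m,n}(ε) is omitted for brevity). The white-noise demo series and all names are illustrative assumptions; the paper's own calculations used SPSS and related tools.

```python
# Hedged sketch of the dependence checks in eqs. (1)-(3).
import numpy as np
from scipy import stats
from scipy.spatial.distance import pdist

def runs_z(returns):
    """Wald-Wolfowitz runs test against the series mean, eq. (1)."""
    above = returns > returns.mean()
    n1, n2 = int(above.sum()), int((~above).sum())
    runs = 1 + int(np.count_nonzero(above[1:] != above[:-1]))
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    sigma = np.sqrt(2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
                    / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (runs - mu) / sigma

def ljung_box(returns, m=5):
    """Ljung-Box Q over m lags, eq. (3), with its chi-squared p-value."""
    n = len(returns)
    r = returns - returns.mean()
    acf = np.array([np.sum(r[k:] * r[:-k]) for k in range(1, m + 1)])
    acf /= np.sum(r * r)
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, m + 1)))
    return q, stats.chi2.sf(q, df=m), acf

def correlation_integral(x, m, eps):
    """C_{m,n}(eps): fraction of m-history pairs closer than eps."""
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])
    return float(np.mean(pdist(emb, metric="chebyshev") < eps))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.014, 2603)          # white-noise stand-in for returns
q, p, acf = ljung_box(x)
band = 1.96 / np.sqrt(len(x))             # Bartlett band, about 0.038 here
print(f"runs Z = {runs_z(x):.2f}, Q(5) = {q:.1f} (p = {p:.2f})")
print(f"lags outside the Bartlett band: {int((np.abs(acf) > band).sum())}")
eps = 0.5 * x.std()
c2, c1 = correlation_integral(x, 2, eps), correlation_integral(x, 1, eps)
print(f"C2 - C1^2 = {c2 - c1**2:.5f}")    # near zero for an i.i.d. series
```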

3. Results

The results of the statistical tests described in the methodology are presented below, in the same order.

3.1. Preliminary series analysis

In Table 2 we present the first four moments of the data, their maximum and minimum values, and the probability of a type I error for the Jarque-Bera statistic. From these statistics we can see that the skewness and kurtosis of the different financial instruments do not correspond to the characteristics of a normal distribution, especially the fourth moment, which in all cases is higher than the typical value of 3 for a normal distribution. Another important observation is the fact that the minimums and maximums of the series lie beyond ±6 standard deviations, showing long tails in the distributions, which is also not typical of a normal distribution. The mean and median of the different stocks approach zero but differ from one another, contradicting the equality that these two parameters should exhibit in a normal distribution. In summary, from the basic statistics the following may be observed: strong leptokurtosis in all stocks; asymmetry, most especially in Ecopetrol and Cemargos; and distributions with extreme maximum and minimum values. This leads to the conclusion that none of the analyzed series behaves as a normal distribution, although the Pacific Rubiales (PREC) share is the stock that most closely resembles one.

Table 2. Statistics of the returns series.
Share       Mean    Median  Std. Dev.     S        K      Max.      Min.     Prob. JB    N
IGBC        0.0010  0.0014   0.0140    -0.342    15.0    0.1469   -0.1105     0.0%     2603
ECOPETROL   0.0011  0.0000   0.0201     5.688   112.2    0.3789   -0.0938     0.0%     1163
PREC        0.0005  0.0009   0.0232    -0.012     4.2    0.0837   -0.0826     0.0%      658
PFBCOLOM    0.0013  0.0000   0.0193     0.103     7.3    0.1088   -0.1050     0.0%     2602
GRUPOSURA   0.0012  0.0005   0.0206    -0.392    15.2    0.1979   -0.2050     0.0%     2603
CEMARGOS    0.0007  0.0000   0.0227    -7.683   217.2    0.1692   -0.6159     0.0%     2603
Source: the authors' own calculations. S is skewness, K is kurtosis.

The presumptive abnormality of the series, detected through the analysis of the first four moments, is confirmed by the Jarque-Bera test: as shown in Table 2, the different financial stocks exhibit JB p-values equal to zero in all cases, thereby rejecting the normality hypothesis for the instruments. Moreover, after performing goodness-of-fit tests through the χ2 statistic, we see that the distribution that best describes the majority of the series is the logistic, which ranks first for 83% of the assets, while the normal distribution comes second in 4 of the 6 series analyzed, thus supporting the results of the basic statistics and of the Jarque-Bera test.
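A small sketch of this ranking step is given below, assuming a fat-tailed synthetic series and using the Kolmogorov-Smirnov statistic as a stand-in for the χ2 ranking employed in the paper; the distributions compared and the data are illustrative only.

```python
# Hedged sketch: fit candidate distributions and compare goodness of fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2603) * 0.01   # fat-tailed stand-in

for dist in (stats.logistic, stats.norm):
    params = dist.fit(returns)                     # maximum-likelihood fit
    ks = stats.kstest(returns, dist.cdf, args=params).statistic
    print(f"{dist.name:>10}: KS = {ks:.4f}")       # smaller = better fit
```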


Below we proceed to perform statistical inference to determine whether the returns behave like Random Walks 1 and 3.

3.2. Runs test

From Table 3 we deduce that for most of the stocks the returns are not i.i.d., with the exception of Ecopetrol and PREC. This may be due to the fact that, from their inception, these have been the two national stocks with the highest trading volumes in the country (Table 1).

Table 3. Runs test.
Share       Mean    n1    n2      µ      σ     R      Z       P
IGBC        0.0010  1249  1354  1300.4  25.5  1099  -7.91    0.0%
ECOPETROL   0.0011   590   573   582.4  17.0   567  -0.90   36.7%*
PREC        0.0005   324   334   329.9  12.8   308  -1.71    8.7%*
PFBCOLOM    0.0013  1479  1123  1277.6  25.0  1205  -2.90    0.4%
GRUPOSURA   0.0012  1366  1237  1299.3  25.4  1192  -4.22    0.0%
CEMARGOS    0.0007  1457  1146  1283.9  25.1  1178  -4.21    0.0%
Source: the authors' own calculations, made using the SPSS 15.0 software.

3.3. BDS test

Table 4 shows the results of the BDS test, from which we may conclude that the different financial series are not i.i.d., as evidenced by the fact that in all calculations the standardized BDS statistic is much greater than Z_{2.5%} (1.96), generating type I errors of 0%. In much the same way, we can see that even as we increase epsilon the hypothesis must still be rejected, because the value of the statistic is generally greater than 10, producing p-values of zero for all the stocks, epsilons and dimensions considered. Note that as the dimension increases, the statistic generally grows as well, thus ratifying the rejection of H_0: P_m = P_1^m. The results of the BDS test are coherent and consistent with previous investigations [22], [1] and [15], in the sense that this test has strong power to detect linear and non-linear structures; it is for this reason that we reject the i.i.d. hypothesis for all stocks, indicating that the Colombian financial series contain some type of structure.

Table 4. BDS test for 4 values of epsilon and 6 dimensions (columns correspond to m = 2, 3, 4, 5, 6; P = 0% in every case).

IGBC, ε = 0.5σ = 0.007
  BDS statistic:  0.01   0.02   0.01   0.01   0.00
  Std. error:     0.001  0.001  0.001  0.000  0.000
  Z std.:         14.2   18.0   21.2   24.6   29.0
IGBC, ε = 1σ = 0.014
  BDS statistic:  0.03   0.05   0.06   0.06   0.05
  Std. error:     0.002  0.003  0.003  0.003  0.002
  Z std.:         15.5   18.6   20.5   21.8   23.4
IGBC, ε = 1.5σ = 0.021
  BDS statistic:  0.03   0.06   0.08   0.10   0.11
  Std. error:     0.002  0.003  0.004  0.004  0.005
  Z std.:         18.1   20.6   21.6   21.9   22.5
IGBC, ε = 2σ = 0.028
  BDS statistic:  0.02   0.05   0.07   0.10   0.12
  Std. error:     0.001  0.002  0.003  0.004  0.005
  Z std.:         21.2   23.2   23.4   23.3   23.4
ECOPETROL, ε = 0.5σ = 0.0100
  BDS statistic:  0.01   0.02   0.01   0.01   0.00
  Std. error:     0.002  0.002  0.001  0.001  0.000
  Z std.:         8.0    9.6    10.7   11.4   12.8
ECOPETROL, ε = 1σ = 0.0201
  BDS statistic:  0.02   0.04   0.05   0.06   0.05
  Std. error:     0.003  0.004  0.005  0.005  0.004
  Z std.:         8.9    10.6   11.6   12.1   12.6
ECOPETROL, ε = 1.5σ = 0.0301
  BDS statistic:  0.02   0.04   0.06   0.08   0.09
  Std. error:     0.002  0.004  0.005  0.006  0.007
  Z std.:         8.9    11.4   12.1   12.3   12.4
ECOPETROL, ε = 2σ = 0.0401
  BDS statistic:  0.01   0.03   0.05   0.06   0.08
  Std. error:     0.001  0.002  0.004  0.005  0.006
  Z std.:         9.4    12.6   13.0   13.1   13.1
PREC, ε = 0.5σ = 0.012
  BDS statistic:  0.00   0.01   0.00   0.00   0.00
  Std. error:     0.001  0.001  0.001  0.000  0.000
  Z std.:         3.3    5.1    7.3    10.2   13.4
PREC, ε = 1σ = 0.023
  BDS statistic:  0.01   0.02   0.02   0.02   0.02
  Std. error:     0.003  0.004  0.004  0.003  0.002
  Z std.:         3.7    5.0    6.0    6.9    8.0
PREC, ε = 1.5σ = 0.035
  BDS statistic:  0.01   0.03   0.04   0.05   0.05
  Std. error:     0.003  0.005  0.007  0.007  0.007
  Z std.:         4.1    5.3    6.1    6.6    7.3
PREC, ε = 2σ = 0.046
  BDS statistic:  0.01   0.02   0.04   0.05   0.06
  Std. error:     0.002  0.004  0.006  0.008  0.009
  Z std.:         4.3    5.3    6.1    6.4    6.9


Table 4. BDS test for 4 values of epsilon and 6 dimensions (continuation; columns correspond to m = 2, 3, 4, 5, 6; P = 0% in every case).

PFBCOLOM, ε = 0.5σ = 0.010
  BDS statistic:  0.01   0.01   0.01   0.01   0.01
  Std. error:     0.001  0.001  0.001  0.000  0.000
  Z std.:         10.4   14.0   18.2   24.4   34.4
PFBCOLOM, ε = 1σ = 0.019
  BDS statistic:  0.02   0.03   0.04   0.03   0.03
  Std. error:     0.002  0.003  0.003  0.003  0.002
  Z std.:         9.8    11.5   12.6   13.8   14.8
PFBCOLOM, ε = 1.5σ = 0.029
  BDS statistic:  0.02   0.04   0.05   0.06   0.06
  Std. error:     0.002  0.003  0.004  0.004  0.005
  Z std.:         11.0   11.8   12.6   13.2   13.6
PFBCOLOM, ε = 2σ = 0.039
  BDS statistic:  0.01   0.03   0.04   0.06   0.07
  Std. error:     0.001  0.002  0.003  0.004  0.005
  Z std.:         12.1   12.6   13.0   13.2   13.3
GRUPOSURA, ε = 0.5σ = 0.010
  BDS statistic:  0.01   0.01   0.01   0.01   0.00
  Std. error:     0.001  0.001  0.001  0.000  0.000
  Z std.:         9.5    11.7   14.0   15.6   17.1
GRUPOSURA, ε = 1σ = 0.021
  BDS statistic:  0.02   0.04   0.05   0.05   0.05
  Std. error:     0.002  0.003  0.003  0.003  0.003
  Z std.:         11.5   14.4   16.3   17.6   18.7
GRUPOSURA, ε = 1.5σ = 0.031
  BDS statistic:  0.02   0.05   0.07   0.08   0.09
  Std. error:     0.002  0.003  0.004  0.005  0.005
  Z std.:         13.5   16.0   17.3   17.9   18.5
GRUPOSURA, ε = 2σ = 0.041
  BDS statistic:  0.02   0.04   0.06   0.08   0.10
  Std. error:     0.001  0.002  0.003  0.004  0.005
  Z std.:         15.6   17.5   18.2   18.4   18.6
CEMARGOS, ε = 0.5σ = 0.011
  BDS statistic:  0.03   0.03   0.03   0.02   0.01
  Std. error:     0.002  0.002  0.001  0.001  0.001
  Z std.:         14.5   17.7   20.5   22.8   26.0
CEMARGOS, ε = 1σ = 0.023
  BDS statistic:  0.03   0.06   0.07   0.08   0.08
  Std. error:     0.002  0.003  0.004  0.004  0.004
  Z std.:         15.1   17.0   18.1   18.7   19.6
CEMARGOS, ε = 1.5σ = 0.034
  BDS statistic:  0.03   0.05   0.07   0.09   0.10
  Std. error:     0.001  0.003  0.004  0.005  0.006
  Z std.:         17.5   18.0   18.4   18.4   18.6
CEMARGOS, ε = 2σ = 0.045
  BDS statistic:  0.02   0.04   0.06   0.07   0.09
  Std. error:     0.001  0.002  0.003  0.004  0.005
  Z std.:         20.5   20.6   20.1   19.5   19.2
Source: the authors' own calculations.

3.4. Serial correlation analysis

To contrast the LB and Bartlett tests we proceed in the following way: first, with the intent of evaluating the IGBC over time, we estimate the tests dividing the study period into sub-periods of 520 observations each, spanning the period from January 2, 2002 to August 31, 2012 (Table 5); secondly, we evaluate the data over the entire analysis period (2002-2012). To decide the number of lags we refer to Tsay [23], who, based on simulation studies, suggests taking m ≈ ln(n); therefore, for this study we consider it appropriate to use up to 5 lags.

Table 5. LB and Bartlett tests of the IGBC (lags m = 1, 2, 3, 4, 5 in each panel; the Bartlett 95% band is shown for each period).

January 2, 2002 - August 31, 2012 (Bartlett: ±0.038)
ρ:    0.146   0.021  -0.017  -0.013  -0.016
LB:   55.6    56.8    57.5    57.9    58.6
PLB:  0%      0%      0%      0%      0%

January 2, 2002 - February 18, 2004 (Bartlett: ±0.086)
ρ:    0.337   0.070  -0.061  -0.132  -0.066
LB:   59.2    61.9    63.8    72.9    75.2
PLB:  0%      0%      0%      0%      0%

February 19, 2004 - March 30, 2006 (Bartlett: ±0.086)
ρ:    0.220  -0.004  -0.022  -0.015   0.050
LB:   25.3    25.3    25.5    25.7    27.0
PLB:  0%      0%      0%      0%      0%

March 31, 2006 - May 23, 2008 (Bartlett: ±0.086)
ρ:    0.129   0.012  -0.019   0.026  -0.026
LB:   8.7     8.8     8.9     9.3     9.7
PLB:  0.3%    1.3%    3.0%    5.4%    8.5%

May 27, 2008 - July 15, 2010 (Bartlett: ±0.086)
ρ:    0.024   0.010  -0.025  -0.017  -0.048
LB:   0.3     0.4     0.7     0.8     2.0
PLB:  58%     84%     88%     93%     84%

July 16, 2010 - August 31, 2012 (Bartlett: ±0.086)
ρ:    0.092   0.020  -0.017  -0.088  -0.047
LB:   4.4     4.6     4.8     8.9     10.1
PLB:  4%      10%     19%     6%      7%

Source: the authors' own calculations.

Upon observing the IGBC data in Table 5, we find that the p-value of the LB test is zero both for the entire period and for the first two sub-periods, thus rejecting the hypothesis of randomness up to 2006, whereas for the three remaining sub-periods, from 2006 to 2012, we do not reject randomness in the Index prices; in other words, we find an improvement in market efficiency after 2006. Note also that the highest values of the type I error for the joint LB test appear in the 2008-2010 sub-period. Moreover, taking the Bartlett test into account, Table 5 highlights, in several periods, significant autocorrelations at the first lag (and at the fourth lag in sub-periods 1 and 5), which can be interpreted to mean that the Index could at least present a first-order autoregressive structure. Again, in the 2008-2010 period we reject the individual hypothesis that the lags are significant, reaffirming Random Walk 3 for this period.
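As a rough illustration of this rolling evaluation, the sketch below applies the LB test to consecutive 520-observation windows of a synthetic return series; the window size follows the paper, while the series itself and the statsmodels helper are stand-ins for the authors' actual data and software.

```python
# Hedged sketch: Ljung-Box Q over consecutive 520-observation windows.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
series = rng.normal(0.0, 0.014, 2603)     # synthetic stand-in for IGBC returns

for start in range(0, len(series) - 519, 520):
    res = acorr_ljungbox(series[start:start + 520], lags=[5])
    print(f"obs {start:>4}-{start + 519:>4}: "
          f"Q(5) = {res['lb_stat'].iloc[0]:.1f}, "
          f"p = {res['lb_pvalue'].iloc[0]:.2%}")
```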

In Table 6 we evaluate the individual shares over the period from January 2, 2002 to August 31, 2012. Upon analyzing the data in Table 6, we find that for the Ecopetrol and Cemargos stocks we do not reject the joint hypothesis that the autocorrelations are zero, which evidences randomness in these stocks. The opposite occurs for the PREC, PFBCOLOM and Gruposura series, which reveal significant autocorrelations at some of their lags, a reflection of non-randomness. As regards the individual Bartlett test, we again see what has already been evidenced, since it identifies serial correlation at the first lag for PREC, PFBCOLOM and Gruposura. Note that in the runs test analysis the two stocks with the most random profiles are Ecopetrol and PREC, while in the autocorrelation analysis the stocks illustrating the same phenomenon are Ecopetrol and Cemargos, meaning that Ecopetrol meets the conditions of both Random Walk 1 and Random Walk 3.

Table 6. LB and Bartlett tests of the companies (lags m = 1, 2, 3, 4, 5; the Bartlett 95% band is shown for each share).

ECOPETROL (Bartlett: ±0.057)
ρ:    -0.006  -0.027  -0.037  -0.056   0.030
LB:   0.0     0.9     2.5     6.2     7.2
PLB:  84%     63%     47%     19%     21%

PREC (Bartlett: ±0.076)
ρ:    0.124   0.061  -0.018  -0.025  -0.013
LB:   10.1    12.5    12.8    13.2    13.3
PLB:  0.1%    0.2%    0.5%    1.0%    2.1%

PFBCOLOM (Bartlett: ±0.038)
ρ:    0.049   0.024   0.020  -0.017  -0.000
LB:   6.2     7.7     8.8     9.5     9.5
PLB:  1.3%    2.1%    3.2%    5.0%    9.1%

GRUPOSURA (Bartlett: ±0.038)
ρ:    0.134   0.006  -0.015  -0.010  -0.023
LB:   46.6    46.7    47.3    47.6    48.9
PLB:  0.0%    0.0%    0.0%    0.0%    0.0%

CEMARGOS (Bartlett: ±0.038)
ρ:    0.032  -0.009  -0.002  -0.026  -0.038
LB:   2.7     2.9     2.9     4.7     8.4
PLB:  9.8%    23.0%   40.0%   31.8%   13.5%

Source: the authors' own calculations.

4. Conclusions

Through the analysis of the first four moments and the Jarque-Bera test, we find that the main financial stocks in Colombia do not follow a normal distribution. This is ratified by the χ2 goodness-of-fit analysis, which further shows that the distribution that best fits the financial series is the logistic; notwithstanding, in the ranking of fits the normal distribution occupies second or third place for four of the stocks (IGBC, PREC, Gruposura and Cemargos), which represent 30% of the Colombian market. Upon testing the EMH through the runs, BDS, LB and Bartlett tests, we deduce that for the entire evaluated period (2002-2012) the Colombian market lacks weak-form efficiency. For the IGBC we also find that, breaking the period down into sub-periods, there is an improvement towards the efficient market hypothesis in the 2008-2010 period. We see evidence of serial correlation, between 5% and 20%, in the IGBC, PREC, PFBCOLOM and Gruposura stocks, concentrated principally at the first lag, which leads to the conclusion that the price of these stocks may be partly explained by Random Walk 1.

References

[1] Campbell, J., Lo, A., MacKinlay, A. The Econometrics of Financial Markets. Princeton, New Jersey: Princeton University Press; 1997.
[2] Bachelier, L. Théorie de la spéculation. Annales scientifiques de l'École Normale Supérieure. 1900; 3: pp. 21-86.
[3] Cowles, A. Can stock market forecasters forecast? Econometrica. 1933; 1: pp. 309-324.
[4] Fama, E. Random walks in stock market prices. Financial Analysts Journal. 1965; 21: pp. 55-59.
[5] Fama, E. Efficient capital markets: II. The Journal of Finance. 1991; 46(5): pp. 1575-1617.
[6] Fama, E. Market efficiency, long-term returns, and behavioral finance. Journal of Financial Economics. 1998; 49: pp. 283-306.
[7] Samuelson, P. Proof that properly anticipated prices fluctuate randomly. Industrial Management Review. 1965.
[8] Urrutia, J. Tests of random walk and market efficiency for Latin American emerging equity markets. Journal of Financial Research. 1995; 18: pp. 299-309.
[9] Bekaert, G., Erb, C., Harvey, C., Viskanta, T. What matters for emerging markets investments? Emerging Markets Quarterly. 1997: pp. 17-26.
[10] Delfiner, M. Comportamiento de los precios de las acciones en el mercado bursátil argentino (un estudio comparativo). Working Papers, Universidad del CEMA. 2002; 215.
[11] Maya, C., Torres, G. The unification of the Colombian stock market: a step towards efficiency. Empirical evidence. Latin American Business Review. 2004; 5(4): pp. 69-98.
[12] Zuluaga, M., Velásquez, J. Selección de indicadores técnicos para la negociación en el mercado cambiario colombiano I: comportamientos individuales. Revista DYNA. 2006; 74(152): pp. 9-20.
[13] Eom, C., Choi, S., Oh, G., Jung, W.S. Hurst exponent and prediction based on weak-form efficient market hypothesis of stock markets. Physica A. 2008; 387: pp. 4630-4636.
[14] Wald, A., Wolfowitz, J. On a test whether two samples are from the same population. Annals of Mathematical Statistics. 1940; 11(2): pp. 147-162.
[15] Brock, W., Dechert, W., Scheinkman, J., LeBaron, B. A test for independence based on the correlation dimension. Econometric Reviews. 1996; 15(3): pp. 197-235.
[16] Dechert, W. A characterization of independence for a Gaussian process in terms of the correlation integral. Working Paper 8812, Wisconsin Madison - Social Systems. 1988.
[17] Grassberger, P., Procaccia, I. Characterization of strange attractors. Physical Review Letters. 1983; 50(5): pp. 346-349.
[18] Grassberger, P., Procaccia, I. Measuring the strangeness of strange attractors. Physica D. 1983; 9: pp. 189-208.
[19] Álvarez, N., Matilla, M. et al. Análisis de las deficiencias del test BDS en series temporales univariantes. ASEPUMA. 2002; 10(1): p. 4.
[20] Ljung, G., Box, G. On a measure of lack of fit in time series models. Biometrika. 1978; 65(2): pp. 297-303.
[21] Bartlett, M. On the theoretical specification of sampling properties of autocorrelated time series. Journal of the Royal Statistical Society. 1946; 27: pp. 27-41.
[22] Barnett, W., Gallant, R., Hinich, M., Jungeilges, J., Kaplan, D., Jensen, M. A single-blind controlled competition among tests for nonlinearity and chaos. Journal of Econometrics. 1997; 82(1): pp. 157-192.
[23] Tsay, R. Analysis of Financial Time Series. John Wiley & Sons, Inc.; 2002.

J. B. Duarte-Duarte received his BS in Industrial Engineering in 1991 from the Universidad Industrial de Santander, Colombia, and his MS (2008) and PhD (2013) degrees in Corporate Finance from the Universidad Complutense de Madrid, España. He has worked as a professor since 1994. Currently, he is a full professor of finance in the Escuela de Estudios Industriales y Empresariales, Universidad Industrial de Santander. His research interests include stock markets, financial theory and economics.

J. M. Mascareñas Pérez-Iñigo has held a PhD in Economics and Business from the Universidad Complutense de Madrid (UCM) since 1984 and a degree in Economics and Business from UCM (1979). He has been a full professor in Financial Economics since 1992 (UCM) and an associate professor at IE Business School since 1998. He was Vice-dean of the Faculty of Economics and Business (UCM) from 1990 to 1994. In South America he has been an advisor in the PhD Program on Business at the Universidad Nacional Autónoma de México since 2005 and in the PhD Program on Business at the Universidad La Salle de México since 2011. Specialized in corporate finance (mergers and acquisitions, firm and asset valuation, and real options) and financial markets (fixed income, variable income, derivatives), he has published more than 29 books, 65 research papers and 43 monographs on corporate finance (http://www.juanmascarenas.eu/jm2pub.htm). He currently heads the Journal of IEAF Financial Analysis and serves on the editorial boards of The Accounting and Administration Journal (Mexico), the Spanish Journal of Venture Capital and the European University Magazine.

K. J. Sierra-Suárez received her BS in Industrial Engineering in 2012 from the Universidad Industrial de Santander, Colombia, where she is a master's candidate in Industrial Engineering. Currently, she is a professor of engineering economics in the Escuela de Estudios Industriales y Empresariales, Universidad Industrial de Santander. Her research interests include stock markets, financial theory and economics.


DYNA http://dyna.medellin.unal.edu.co/

Modeling of a simultaneous saccharification and fermentation process for ethanol production from lignocellulosic wastes by Kluyveromyces marxianus
Modelado de un proceso de sacarificación y fermentación simultánea para la producción de etanol a partir de residuos lignocelulósicos utilizando Kluyveromyces marxianus

Juan Esteban Vásquez a, Juan Carlos Quintero b & Silvia Ochoa-Cáceres c

a Grupo de Biotecnología (SIU), Universidad de Antioquia, Colombia. vasquez.j.aa@m.titech.jp
b Grupo SIDCOP, Facultad de Ingeniería, Universidad de Antioquia, Colombia. jcquinte@udea.edu.co
c Grupo de Bioprocesos, Facultad de Ingeniería, Universidad de Antioquia, Colombia. sochoa@udea.edu.co

Received: February 14th, 2013. Received in revised form: February 20th, 2014. Accepted: May 19th, 2014

Abstract
This paper presents the modeling of the main dynamics of a Simultaneous Saccharification and Fermentation (SSF) process using lignocellulosic wastes as the substrate. SSF experiments were carried out using the yeast Kluyveromyces marxianus as the inoculum and oil palm wastes as the substrate, in order to obtain glucose and ethanol concentration data. The experimental data were used for parameter identification and model validation. The resulting model predicts the dynamic behavior of the glucose and ethanol concentrations very closely. By performing a sensitivity analysis, the parameters that have the strongest effect on the model predictions are identified, so the model can be re-optimized in particular cases with low computational requirements. The re-optimization strategy improves the model's capacity to predict the dynamics of the SSF process.
Keywords: bio-ethanol; simultaneous saccharification and fermentation; modeling; Kluyveromyces marxianus; sensitivity analysis.

Resumen
En este trabajo se presenta el modelado de las principales dinámicas de un proceso de Sacarificación y Fermentación Simultáneas (SFS) utilizando residuos lignocelulósicos como sustrato. Se realizaron experimentos de SFS con la levadura Kluyveromyces marxianus como inóculo y desechos de palma de aceite como sustrato, para obtener datos de concentración de glucosa y etanol que permitieran identificar parámetros y validar el modelo. El modelo resultante predice el comportamiento general de las concentraciones de glucosa y etanol. Gracias a un análisis de sensibilidad se definen los parámetros que más afectan el modelo, con el fin de flexibilizarlo para que pueda ser optimizado en casos particulares con pocos requerimientos computacionales. Esta estrategia de reoptimización mejora de manera importante la capacidad del modelo para predecir las dinámicas del proceso SFS.
Palabras clave: bioetanol; sacarificación y fermentación simultánea; modelado; Kluyveromyces marxianus; análisis de sensibilidad.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 107-115. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

1. Introduction

The growing concern generated by the imminent depletion of fossil fuels has led to the search for alternative energy sources to achieve a sustainable society. Ethanol has emerged as one of the first sources that can help significantly to reduce the consumption of fossil fuels and also the emission of gases that promote global warming. Currently, the use of corn and sugar cane for ethanol production creates a major ethical concern regarding global food security and the rise of food prices [1,2]. That is why, in recent years, research into using lignocellulosic wastes for ethanol production has increased, in a way that is both

technically and economically viable. Among the different technologies that can be used for that purpose, the Simultaneous Saccharification and Fermentation (SSF) production process has gained special attention. It is known that the success of the introduction of biofuels in each country depends largely on the raw materials used for their production. Colombia is one of the largest global producers of palm oil [3]. This industry generates a very large amount of palm residues in the extraction process, and those residues have a very high potential for being used as a substrate in an SSF process for bio-ethanol production as a second generation biofuel [3]. The regulation for the use of ethanol as a fuel in




Colombia started in 2002, with the primary goal of achieving a production capacity of 2.5 million liters per day in order to add 10% ethanol to the gasoline used for transportation. However, the five main ethanol plants operating in the country produce only 1.05 million liters per day, and the contribution of some small plants does not significantly increase this amount, which is only enough to supply the major cities near the Valle del Cauca and the capital, Bogotá. Therefore, it is necessary to evaluate technically and economically feasible strategies that allow the country's ethanol production volume to be increased, and to stimulate the development of tools suitable for scaling up processes for ethanol production from lignocellulosic wastes. Those strategies must be devised specifically for the kinds of residues widely available in Colombia. There have been few studies of this kind, and they have shown that there is a gap in technology and knowledge to overcome when scaling up; therefore, a deep understanding of the phenomena taking place in the process is still required, for which the use of modeling tools is a promising approach. In recent years, the development of models for predicting the dynamic behavior of the most important variables in the ethanol production process has intensified, including for SSF processes [4-7]. However, studies in this field are still scarce and their application to scaling up is restricted. Besides, the models reported so far have not been developed for alternative processes that use microorganisms different from Saccharomyces cerevisiae and/or processes involving lignocellulosic residues of regional interest. Therefore, it is still necessary to develop a phenomenological model that properly predicts the dynamic behavior of the different variables involved in the SSF process. In this work, an unstructured mathematical model was developed, and parameter identification and model validation were carried out using experimental data from different SSF processes conducted with oil palm waste as the substrate and Kluyveromyces marxianus as the fermentative microorganism. Finally, a sensitivity analysis is proposed in order to improve the parameter identification procedure. In section 2, the methodology for the SSF experiments and the development of the mathematical model is described. In section 3, the results of the model optimization and sensitivity analysis are shown, the role of the different parameters is discussed, and the results of the re-identification of the sensitive parameters and their implication for the model performance are presented. Finally, in section 4, some conclusions are summarized.

2. Methodology

2.1. Pretreatment of the lignocellulosic waste

The oil palm wastes were donated by the CENIPALMA research center and obtained from an oil extraction factory located in Santander, Colombia. The dry wastes were milled in the Industrial Biotechnology Laboratory of the Universidad Nacional de Colombia to obtain particles with a diameter of 1.5 mm or less, and a pretreatment with sulfuric acid was then carried out (2% V/V, 20% W/V solid load, at 121°C for 80 minutes). The material was then dried for 12 hours in an oven at 50°C in the Biotechnology Laboratory of the Universidad de Antioquia. After that, an alkali pretreatment was performed (121°C, in a solution of NaOH 1% V/V, 10% W/V solid load, for 30 minutes). Finally, the material was washed several times with distilled water, dried in an oven at 50°C for 12 hours and stored in a cool place.

2.2. Yeast strain

The thermotolerant yeast Kluyveromyces marxianus ATCC 36907 was used in this work. The strain was kept at 4°C in a solid medium containing glucose 20 g/L, peptone 5 g/L, yeast extract 3 g/L, malt extract 3 g/L and agar 20 g/L; the pH of the solid medium was adjusted to 5.0, and a new culture was made every three months. Before using the microorganism in the SSF process, and in order to reactivate it, a colony was taken from the culture in the solid medium and inoculated in a 250 ml flask containing 100 ml of MGYP growth medium (20 g/L glucose, 5 g/L peptone, 3 g/L yeast extract and 3 g/L malt extract) with an initial pH of 4.8 ± 0.05. The flask was kept in a shaker at 38°C and 150 rpm overnight. Finally, a new culture in solid medium was made in a Petri dish and incubated for 48 h at 38°C.

2.3. SSF inoculum preparation

A 1 L flask containing 460 ml of MGYP growth medium (pH 4.8 ± 0.05), enriched with ammonium sulfate 3 g/L, magnesium sulfate 1 g/L and monobasic potassium phosphate 2 g/L, was autoclaved at 121°C and 15 psi for 20 min. Then, a loop of the reactivated yeast in the solid medium was added under sterile conditions. The flask was incubated in a rotary shaker at 38°C and 150 rpm overnight. When the concentration of the yeast was close to 1 g/L, achieved after 10-12 hours of incubation at the end of the exponential phase, the inoculum was added to the SSF reactor.

2.4. Saccharification enzyme

In the SSF process, the enzymatic complex Accellerase 1500®, purchased from Genencor®, was used. The measured activity of this enzyme was 80 FPU/ml, following a modified version of the protocol reported by Adney and Baker [8]. This activity was stable for more than 8 months while keeping the enzyme at 4°C.

2.5. SSF experiments

A description of the experiments used to obtain the data for identifying the parameters and validating the model is shown in Table 1.



Table 1. Experimental design for the simultaneous saccharification and fermentation experiments, using oil-palm wastes as the substrate. Each run was used either for parameter identification or for model validation (the per-run assignment is not legible in the source).
Experiment   Agitation (rpm)   Solid load (%w/v)
SSFa              300                 6
SSFb              150                 8
SSFc1             300                 8
SSFc2             300                 8
SSFc3             300                 8
SSFd1             500                 8
SSFd2             500                 8
SSFe              300                10

Experiments were carried out in a 7-liter New Brunswick BioFlo 110 bioreactor with 5 L of working volume. The saccharification enzyme and 500 ml of the inoculum were added to the reactor containing 4.5 L of 0.5 M citrate buffer, pH 4.8 (previously autoclaved at 121°C and 15 psi for 20 min), in order to achieve final concentrations of 15 FPU/(g of substrate) and 0.1 g/L respectively. The medium also contained peptone 5 g/L, yeast extract 3 g/L, malt extract 3 g/L, ammonium sulfate 3 g/L, magnesium sulfate 2 g/L and monobasic potassium phosphate 1 g/L. The substrate (pretreated oil palm waste) was added at different solid loads (see Table 1). All the steps above were carried out under sterile conditions. The temperature of the process was controlled at 38°C, and the pH and dissolved oxygen concentration (DO) in the reactor were monitored. Different values of the agitation velocity were used (150, 300 or 500 rpm) in order to evaluate whether agitation has an important effect on the SSF process and, if so, to describe it in the mathematical model. Table 1 shows the experimental arrangements. In order to take experimental errors into account, a triplicate of one of the SSF experiments (randomly selected) was carried out; the standard deviation found in this experiment was assumed to hold for the others as well. The SSF process was monitored for 72 h, taking samples periodically and keeping them in a freezer at -20°C for less than a week, until they were analyzed.

2.6. Analytical techniques

Samples of 5 ml were taken periodically during the 72 h of the SSF experiments. After centrifugation (6000 rpm, 10 min, 4°C) and filtering of the supernatant with a 0.2 µm cellulose filter, each sample was analyzed in duplicate by HPLC. The analyses for glucose and ethanol were carried out in a Supelcogel® column at a flow rate of 1.2 ml/min and 80°C, with 5 mM sulfuric acid as the mobile phase. The yeast concentration was not measured.

2.7. Mathematical model

Mass balances were performed for the SSF system, applying conservation principles and considering the desired model resolution in order to make adequate assumptions in

Figure 1.Proposed mechanism of ethanol production from lignocellulosic wastes in the SSF process.

order to describe the main process dynamics. The dynamic equations that provide valuable information are chosen and combine with the constitutive equations that complement the first principles model. Fig. 1 shows the proposed mechanism of ethanol production from lignocellulosic wastes in the SSF process. The equations for the proposed model in this work and the respective assumptions are presented. During the SSF process it is necessary for the enzyme to diffuse into the solid phase to react with the substrate, hence a distinction can be made between 2 types of enzymes. The first is the free enzyme in the bulk of the liquid (Elb). The ability of this enzyme to react changes for two reasons, because its diffusion to the solid phase and because its inactivation due to unknown phenomena. Eq.(1) describes this dynamic behavior. The second is the enzyme that has accessed the vicinity of the solid particles (Eli) whose concentration depends on the mass transfer of the enzyme from the bulk liquid and the formation of complexes with the fractions of the lignocellulosic material. This is expressed by Eq.(2) đ?’…đ?‘Źđ?’?đ?’ƒ đ?’…đ?’• đ?’…đ?‘Źđ?’?đ?’Š đ?’…đ?’•

= −đ?‘˛đ?’‚đ?’‘ (đ?‘Źđ?’?đ?’ƒ − đ?‘Źđ?’?đ?’Š ) − đ?‘˛đ?’†đ?’… đ?‘Źđ?’?đ?’ƒ

= đ?‘˛đ?’‚đ?’‘ (đ?‘Źđ?’?đ?’ƒ − đ?‘Źđ?’?đ?’Š ) − (

đ?’…đ?‘Źđ?’?đ?’Š đ?‘Şđ?’‚ đ?’…đ?’•

+

đ?’…đ?‘Źđ?’?đ?’Š đ?‘Şđ?’„ đ?’…đ?’•

(1)

+

đ?’…đ?‘Źđ?’?đ?’Š đ?‘ł đ?’…đ?’•

)(2)

Cellulose is considered to be composed of two fractions: an easily-hydrolyzable amorphous cellulose, and a fraction of crystalline cellulose that is highly organized and whose hydrolysis takes place more slowly. The change in the concentration of these fractions over time, and of the complexes that they form with the enzyme, is presented in Eqs. (3)-(6). It is considered that there is a decrease of amorphous or crystalline cellulose (Eqs. (3) and (4), respectively) when the enzyme that has diffused to the solid phase is adsorbed on a part of the cellulose fraction of the material. This fraction is represented by α for the amorphous cellulose (Ca) and β for the crystalline cellulose (Cc), and it is assumed that these fractions keep the same proportion throughout the process. Furthermore, the cellulose of each fraction reappears when the respective enzyme-cellulose complex dissociates.

\frac{dCa}{dt} = -a_p \alpha K_{ec1} [E_{li}][Ca] + K_{ec-1} [E_{li}Ca]   (3)

\frac{dCc}{dt} = -a_p \beta K_{ec2} [E_{li}][Cc] + K_{ec-2} [E_{li}Cc]   (4)

The complexes between the cellulose fractions and the enzyme that has accessed the substrate are formed and dissociate as explained in the preceding paragraph, but these complexes also disappear when the saccharification reaction occurs. This reaction is inhibited by the presence of cellobiose and ethanol [9]. Accordingly, the expressions for the change over time of the amorphous cellulose-enzyme complex (EliCa) and the crystalline cellulose-enzyme complex (EliCc) are given in Eq. (5) and Eq. (6), respectively.

\frac{d[E_{li}Ca]}{dt} = -K_{ec-1}[E_{li}Ca] + a_p \alpha K_{ec1}[E_{li}][Ca] - \frac{K_{ca}[E_{li}Ca]}{1 + \frac{B}{K_{1b}} + \frac{EtOH}{K_e}}   (5)

\frac{d[E_{li}Cc]}{dt} = -K_{ec-2}[E_{li}Cc] + a_p \beta K_{ec2}[E_{li}][Cc] - \frac{K_{cc}[E_{li}Cc]}{1 + \frac{B}{K_{1b}} + \frac{EtOH}{K_e}}   (6)

The interaction of the enzyme with lignin is expressed in Eq. (7) and Eq. (8). The formation of the enzyme-lignin complex (EliL) occurs by reversible adsorption of the enzyme on a portion of the lignin fraction (γ) of the material.

\frac{d[E_{li}L]}{dt} = a_p \gamma K_{el1}[E_{li}][L] - K_{el-1}[E_{li}L]   (7)

\frac{dL}{dt} = -a_p \gamma K_{el1}[E_{li}][L] + K_{el-1}[E_{li}L]   (8)

It is considered that the area of the substrate particles decreases with time due to the hydrolysis of cellulose. Assuming spherical particles of area ap (Eq. (9)), the decrease of the particle radius can be expressed according to Eq. (10), which takes into account the hydrolysis of cellulose, the density of the particle material (ρp) and the number of particles in the reactor (Np).

a_p = N_p (4 \pi r_p^2)   (9)

\frac{dr_p}{dt} = -\frac{K_{cc}[E_{li}Cc] + K_{ca}[E_{li}Ca]}{1 + \frac{B}{K_{1b}} + \frac{EtOH}{K_e}} \cdot \frac{V}{N_p \rho_p (4 \pi r_p^2)}   (10)

The saccharification process, specifically the hydrolysis of the cellulose fractions, leads to the production of cellobiose (B), as expressed by Eq. (11). This equation takes into account the inhibition of the hydrolysis of cellulose by the presence of cellobiose and ethanol.

\frac{dB}{dt} = \frac{K_{ca}[E_{li}Ca] + K_{cc}[E_{li}Cc]}{1 + \frac{B}{K_{1b}} + \frac{EtOH}{K_e}} - r_{gp}   (11)

On the other hand, there is a phenomenon of hydrolysis of cellobiose that leads to glucose production (Eq. (12)). This hydrolysis is inhibited by the product, i.e., by the presence of glucose in the medium [6,9]. Glucose is consumed by the yeast for growth and maintenance (Eq. (13)), so the dynamics of glucose is given by Eq. (14).

r_{gp} = \frac{K_{bg} [B] E_{lb}}{K_{sgp}\left(1 + \frac{[G]}{K_{1g}}\right) + [B]}   (12)

r_{gc} = \frac{\mu X}{Y_{xs}} + m_s X   (13)

\frac{dG}{dt} = r_{gp} - r_{gc}   (14)

Finally, the yeast growth and the ethanol production are described by Eq. (15) and Eq. (16), respectively, whereas the expressions for the specific growth rate (assumed to follow Monod kinetics with a correction for inhibition by ethanol) [4,10] and the specific rate of ethanol production are defined in Eq. (17) and Eq. (18), respectively.

\frac{dX}{dt} = \mu X - K_d X   (15)

\frac{dEtOH}{dt} = q_p X   (16)

\mu = \left( \frac{\mu_{max} G}{K_s + G} \right) \left( 1 - \frac{EtOH}{K_{ietOH}} \right)   (17)

q_p = \frac{\mu}{Y_{xp}}   (18)

The proposed model consists of 13 ordinary differential equations, five algebraic equations and a total of 22 parameters. Finally, the effect of agitation was not included in the model, as the experimental results at different stirring velocities showed no significant difference.
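As an illustration of how the resulting system can be solved numerically, a minimal Matlab sketch is given below. This is not the authors' code: the kinetic constants are the identified values of Table 2, but the fractions α, β and γ, the particle data, the reactor volume and the initial conditions are placeholder values chosen only to make the example self-contained.

```matlab
% Minimal sketch (not the authors' code) of assembling Eqs. (1)-(18) for
% numerical integration. Kinetic constants from Table 2; alpha/beta/gamma,
% particle data, volume and initial conditions are illustrative placeholders.
function ssf_demo
    p = struct('K',0.0055,'Ked',0.0013,'Kec1',0.0275,'Kec_1',0.0016, ...
        'Kec2',0.0056,'Kec_2',0.0015,'Kca',0.0029,'Kcc',0.0001, ...
        'Kel1',0.0069,'Kel_1',6.8818,'K1b',0.0973,'Ke',60.2035, ...
        'Kbg',0.1390,'Ksgp',0.1255,'K1g',0.9630,'mumax',0.2807, ...
        'Ks',1.1842,'KietOH',12.75,'Yxs',0.1750,'Yxp',0.2913, ...
        'Ms',0.0072,'Kd',0.0006, ...
        'alpha',0.6,'beta',0.4,'gamma',0.3, ...   % placeholder fractions
        'Np',1e6,'rho_p',5e5,'V',5);              % placeholder particle data
    % State: [Elb Eli Ca Cc EliCa EliCc L EliL rp B G X EtOH] (13 ODEs)
    y0 = [75; 0; 30; 25; 0; 0; 15; 0; 7.5e-4; 0; 0; 0.1; 0];  % placeholders
    [t, y] = ode15s(@(t,y) ssf_rhs(t,y,p), [0 72], y0);
    plot(t, y(:,11), t, y(:,13));
    xlabel('Time (h)'); ylabel('g/l'); legend('Glucose','Ethanol');
end

function dy = ssf_rhs(~, y, p)
    Elb=y(1); Eli=y(2); Ca=y(3); Cc=y(4); EliCa=y(5); EliCc=y(6);
    L=y(7); EliL=y(8); rp=y(9); B=y(10); G=y(11); X=y(12); EtOH=y(13);
    ap  = p.Np*4*pi*rp^2;                               % Eq. (9)
    inh = 1 + B/p.K1b + EtOH/p.Ke;                      % inhibition term
    mu  = p.mumax*G/(p.Ks + G)*(1 - EtOH/p.KietOH);     % Eq. (17)
    rgp = p.Kbg*B*Elb/(p.Ksgp*(1 + G/p.K1g) + B);       % Eq. (12)
    rgc = mu*X/p.Yxs + p.Ms*X;                          % Eq. (13)
    dy = zeros(13,1);
    dy(5)  = -p.Kec_1*EliCa + ap*p.alpha*p.Kec1*Eli*Ca - p.Kca*EliCa/inh; % (5)
    dy(6)  = -p.Kec_2*EliCc + ap*p.beta*p.Kec2*Eli*Cc - p.Kcc*EliCc/inh;  % (6)
    dy(8)  =  ap*p.gamma*p.Kel1*Eli*L - p.Kel_1*EliL;                     % (7)
    dy(1)  = -p.K*ap*(Elb - Eli) - p.Ked*Elb;                             % (1)
    dy(2)  =  p.K*ap*(Elb - Eli) - (dy(5) + dy(6) + dy(8));               % (2)
    dy(3)  = -ap*p.alpha*p.Kec1*Eli*Ca + p.Kec_1*EliCa;                   % (3)
    dy(4)  = -ap*p.beta*p.Kec2*Eli*Cc + p.Kec_2*EliCc;                    % (4)
    dy(7)  = -dy(8);                                                      % (8)
    dy(9)  = -(p.Kcc*EliCc + p.Kca*EliCa)/inh * p.V/(p.rho_p*p.Np*4*pi*rp^2); % (10)
    dy(10) =  (p.Kca*EliCa + p.Kcc*EliCc)/inh - rgp;                      % (11)
    dy(11) =  rgp - rgc;                                                  % (14)
    dy(12) =  mu*X - p.Kd*X;                                              % (15)
    dy(13) = (mu/p.Yxp)*X;                                                % (16),(18)
end
```

A stiff solver such as ode15s is a natural choice here, since the adsorption, reaction and growth steps evolve on very different time scales.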


2.8. Parameter identification

Using data from five different experimental setups (Table 1), the parameter identification was performed in Matlab, using the MIPT algorithm described by Ochoa et al. [11]. For the identification procedure, an objective function was defined (Eq. (19)), consisting of the summation of the absolute average errors of the experimental values of ethanol and glucose from the chosen SSF experiments. The absolute average error for each set of data was calculated according to Eq. (20), where AAE is the absolute value of the average error, n is the number of experimental data points, Exp indicates the experimental value and Pre the value predicted by the model; Expmax and Expmin are the maximum and minimum values of the experimental data used for the calculation of the AAE. The optimization problem to be solved during the parameter identification is given by Eq. (21), where x is the vector of parameters to be identified, lb (lower bounds) is the vector of minimum acceptable parameter values, ub (upper bounds) is the vector of maximum acceptable parameter values, and Fobj is the objective function to be minimized (Eq. (19)). For the sensitivity analysis, the sensitivity-index approach described by Ochoa et al. [12] was followed in order to evaluate how the model results are affected by the variation of each parameter. The procedure for parameter identification and sensitivity analysis is presented in Fig. 2.

F_{obj} = AAE_{Glucose} + AAE_{Ethanol}   (19)

AAE = \frac{1}{n} \sum_{i=1}^{n} \frac{\left| Exp_i - Pre_i \right|}{Exp_{max} - Exp_{min}}   (20)

\min_{x} F_{obj} \quad \text{s.t.} \quad lb \leq x \leq ub   (21)

The initial values for the set of parameters were taken from values reported in the literature by several authors (see Table 2). The identification of the parameters of the proposed model (Eqs. (1)-(18)) was carried out by solving the optimization problem of Eq. (21) with the experimental data indicated in Table 1. The sensitivity index with respect to the identified set of parameters was calculated as described in Eq. (22), where S_i^k is the sensitivity index for the k-th parameter and P_o^k is the optimized value of the k-th parameter. Sensitive parameters were defined as those whose sensitivity index was higher than an established tolerance; this tolerance was chosen so that there was at least one order of magnitude of difference between the sensitivity indices of the parameters considered sensitive and those considered non-sensitive.

S_i^k = \int_{P_o^k - 0.2 P_o^k}^{P_o^k + 0.2 P_o^k} \left| F_{obj}(P^k) - F_{obj}(P_o^k) \right| \, dP^k   (22)

When first-principles-based models are developed for describing the dynamic behavior of complex processes (like the case study addressed in this paper), the number of parameters is usually high and there are not enough experimental data available for reliable parameter identification. Usually, the number of experimental runs is limited to a few experiments in which different experimental conditions are analyzed (according to the design of experiments carried out); not all possible conditions can be tested, due to economic concerns. On the other hand, it is important to notice that even if the developed model is a first-principles-based model, and not an empirical one, it uses some constitutive equations that have empirical bases. That is precisely why some parameters of the model must be re-identified when the model is tested under new experimental conditions. However, not all the parameters must be re-identified, and that is why the main objective of this paper is to propose a methodology for finding the best set of parameters under different experimental conditions using less computational time, that is, reducing the number of parameters that must be re-identified. Specifically, in this work the use of a re-optimization routine applied separately to each dataset is presented and analyzed, recalculating only the parameters classified as sensitive and keeping the set of non-sensitive parameters constant.

2.9. Model validation

The validation of the model was performed by comparing the dynamic behavior of the main variables predicted by the model against the experimental data obtained for these variables. The objective function (a measurement of the error) was also calculated to check the model performance. Table 1 shows the experimental set-ups used for validation.

Figure 2. Parameter identification procedure: starting from an initial set of parameters, an optimization with MIPT is performed using all the identification data sets simultaneously; a sensitivity index (SI) is then calculated for each parameter; parameters with SI below the tolerance are kept constant (non-sensitive), while the sensitive ones are re-optimized for each data set separately, yielding a set of optimal parameters for each data set.
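The following Matlab sketch illustrates how the objective of Eqs. (19)-(20) and the sensitivity index of Eq. (22) might be coded; it is our illustration, not the authors' implementation. The function handle `sim` stands for any user-supplied wrapper that integrates the model and returns the glucose and ethanol predictions at the sampling times of one experiment; in the paper the objective was minimized with the MIPT algorithm [11], subject to the bounds of Eq. (21).

```matlab
% Sketch (illustrative, not the authors' implementation) of the
% identification objective, Eqs. (19)-(20), and the sensitivity index,
% Eq. (22). 'sim' is any handle: [Gpred, Epred] = sim(params, dataset).
function S = sens_index(k, popt, data, sim)
    Pk = linspace(0.8*popt(k), 1.2*popt(k), 21);   % +/-20% sweep, Eq. (22)
    F0 = fobj(popt, data, sim);
    dF = zeros(size(Pk));
    for j = 1:numel(Pk)
        p = popt; p(k) = Pk(j);                    % vary only parameter k
        dF(j) = abs(fobj(p, data, sim) - F0);
    end
    S = trapz(Pk, dF);                             % numerical quadrature
end

function F = fobj(params, data, sim)
    % Eq. (19): sum of glucose and ethanol AAEs over all data sets,
    % where each data set is a struct with fields t, G and EtOH.
    F = 0;
    for i = 1:numel(data)
        [Gp, Ep] = sim(params, data(i));
        F = F + aae(data(i).G, Gp) + aae(data(i).EtOH, Ep);
    end
end

function e = aae(ex, pr)
    e = mean(abs(ex - pr)) / (max(ex) - min(ex));  % Eq. (20)
end
```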

3. Results and discussion

In Fig. 3 the dynamic behavior of glucose and ethanol can be observed. Experimentally, at the beginning of the process (the first 5 hours) a very fast increase of the glucose concentration takes place, due to the high hydrolysis rate. However, once glucose becomes available, a high glucose consumption rate is reached, which causes a decrease in the total glucose concentration; this decrease is driven by the cellular growth. Although the glucose concentration rapidly drops to low values, a continuous production of ethanol is observed until approximately 6 g/L is reached. This evidences the fact that the hydrolysis reaction occurs during the whole process, and not just at the beginning. By identifying a first set of parameters using all the data sets simultaneously (SSFb, SSFc1, SSFc2, SSFd1 and SSFd2 in Table 1), the objective function value decreased from 10.72 to 2.23, which indicates an improvement in the model performance due to the optimization process. A sensitivity analysis was performed to determine which parameters most affect the model results when their values vary. Table 2 shows the sensitivity index of each parameter, calculated as explained in the methodology section. It was observed that 11 of the 22 parameters significantly affect the model results. Firstly, it is important to realize that the parameters related to the metabolic capabilities of the yeast, specifically μmax, Yxs, Yxp and Ks, are the parameters to which the model is most sensitive. This result indicates that the use of a different microorganism in the SSF process can strongly affect the results, and in turn justifies the current interest of many researchers in testing various microorganisms with different capabilities to obtain better results in SSF processes [13,14]. Something similar may be said of the parameter Ms, which represents the glucose consumption for maintenance and may vary among different microorganisms and conditions. In contrast, we found that the cell death parameter Kd does not significantly affect the model results. Furthermore, the optimized value found for this parameter is very low (close to zero), which might suggest that the effect of cell death proposed in the model could be neglected, at least for cultivation times of up to 72 hours. However, it is possible for Kd to become an important parameter in processes that take longer to be completed. Furthermore, it is observed that the parameters Kcc and Kca, which are related to the hydrolysis of the cellulose fractions for producing cellobiose, significantly affect the model, as does the parameter Ke, which is related to the inhibition of cellobiose production by the presence of ethanol. According to this result, the hydrolysis of cellulose and the consequent production of cellobiose have a significant influence on the results of SSF processes performed with lignocellulosic materials. This suggests the importance of using cellulases that are able to maintain a good catalytic activity while being less sensitive to inhibition, when aiming to optimize the results of an SSF process.

Table 2. Results of the first optimization and the sensitivity analysis.

Parameter | Initial value* | Identified value | Sensitivity index | Classification
Kd (h-1) | 0.0020 | 0.0006 | 0.00007 | Non-sensitive
KietOH (g/l) | 50.0000 [4] | 12.7500 | 0.00769 | Sensitive
K1g (g/l) | 3.1500 [14] | 0.9630 | 0.00009 | Non-sensitive
K (m-2 h-1) | 0.0050 | 0.0055 | 0.00560 | Sensitive
Kel1 (m2·l/fpu·h) | 0.0092 [4] | 0.0069 | 0.00006 | Non-sensitive
Kel_1 (h-1) | 7.2000 [5] | 6.8818 | 0.00006 | Non-sensitive
Yxp (g/g) | 0.2500 [15] | 0.2913 | 0.16672 | Sensitive
μmax (h-1) | 0.4010 [5] | 0.2807 | 0.06998 | Sensitive
Yxs (g/g) | 0.4850 [5] | 0.1750 | 0.11411 | Sensitive
Ks (g/l) | 2.1840 [5] | 1.1842 | 0.02912 | Sensitive
Kec1 (m2·l/fpu·h) | 0.0368 [5] | 0.0275 | 0.00422 | Sensitive
Kec_1 (h-1) | 0.0092 [14] | 0.0016 | 0.00014 | Non-sensitive
Kec2 (m2·l/fpu·h) | 0.0106 [5] | 0.0056 | 0.00010 | Non-sensitive
Kec_2 (h-1) | 0.0027 [14] | 0.0015 | 0.00001 | Non-sensitive
Ke (g/l) | 50.3500 [4] | 60.2035 | 0.00276 | Sensitive
Ked (h-1) | 0.0020 | 0.0013 | 0.00003 | Non-sensitive
Kca (h-1) | 0.0057 [14] | 0.0029 | 0.05061 | Sensitive
Kcc (h-1) | 0.0017 [14] | 0.0001 | 0.00255 | Sensitive
Ms (h-1) | 0.0064 | 0.0072 | 0.00682 | Sensitive
Kbg (h-1) | 0.2000 | 0.1390 | 0.00012 | Non-sensitive
K1b (g/l) | 0.0860 | 0.0973 | 0.00005 | Non-sensitive
Ksgp (g/l) | 0.1229 | 0.1255 | 0.00003 | Non-sensitive
Fobj** | 10.72 | 2.2263 | |

*Initial values of the parameters. Those with a reference were taken from the literature; the others were based on previous knowledge.
**Objective function value before and after the optimization.

In general, it was found that the parameters related to the formation of the cellulose-enzyme complexes do not strongly affect the model; the only one of these parameters that affects the model results is Kec1. This indicates that, in the saccharification of lignocellulosic materials, the hydrolytic capacity of the cellulases can be more important than their ability to bind to the substrate; however, no information was found in the literature to support this observation. On the other hand, the parameter KietOH significantly affects the model results, confirming what was stated before concerning the importance of the microorganism's capabilities, specifically, in this case, the ability to resist high ethanol concentrations. Finally, it was found that the mass transfer coefficient (K) significantly affects the model, indicating that when carrying out an SSF process, the access of the enzymes to the lignocellulosic material is an important factor that must be taken into account. Since the use of different stirring velocities does not significantly affect the SSF process, such accessibility must be improved by other methods, such as decreasing the size of the substrate particles or changing the properties of the medium, for example by adding surfactants to the bioreactor. Some studies have already shown that by doing so it is possible to improve the results of the SSF process [17,18]. After the sensitivity analysis, the re-identification of the sensitive parameters was carried out for each data set


individually (SSFa, SSFc3 and SSFe), and the objective function value decreased considerably (see Table 3), which indicates the improvement of the model performance due to the coupling of the sensitivity analysis and the re-optimization process. Fig. 3 shows the results for the model fit when performing the re-optimization with each experimental set separately, identifying just the 11 parameters considered sensitive. In general, an improvement is observed in the fit of the glucose and ethanol data. This improvement, when comparing the fit of the model before and after re-optimization, leads to a better prediction of the trends in each particular case and to a reduction in the value of the objective function of 8%, 19% and 10% for the validation data of SSFa, SSFc3 and SSFe, respectively (Table 3). Nevertheless, for the data of the SSFe experiment (Fig. 3c), where the predicted values and trends of the variables are still close to the experimental data, the variation in the production of glucose in the first hours of the process is underestimated. It is important to notice that, for almost all the parameters, the change in value after re-optimization is less than one order of magnitude. Most of the parameters that showed the biggest change are kinetic parameters related to the reactions producing cellobiose and to the formation of the enzyme-cellulose complexes, which shows that those parameters are affected by the initial solid load. According to these results, it might be possible to state that the kinetics of the mentioned reactions are of a higher order, and not of order one as assumed in the development of the model. Other variables for which experimental data were not taken showed realistic behavior in the simulations, giving more confidence in the model performance (data not shown).

Table 3. Re-identified parameters for each experiment.

Parameter | Value in first identification | Value for SSFa | Value for SSFc3 | Value for SSFe
KietOH (g/l) | 12.7500 | 10.0400 | 3.5850 | 6.7200
K (m-2 h-1) | 0.0055 | 0.0017 | 0.0059 | 0.0053
Yxp (g/g) | 0.2913 | 0.2687 | 0.2965 | 0.2867
μmax (h-1) | 0.2807 | 0.3439 | 0.4763 | 0.2096
Yxs (g/g) | 0.1750 | 0.1826 | 0.1390 | 0.2773
Ks (g/l) | 1.1842 | 1.5019 | 1.7730 | 1.9186
Kec1 (m2·l/fpu·h) | 0.0275 | 0.0048 | 0.0258 | 0.0412
Ke (g/l) | 60.2035 | 13.5039 | 23.5940 | 53.2854
Kca (h-1) | 0.0029 | 0.0006 | 0.0039 | 0.0019
Kcc (h-1) | 0.0001 | 0.0016 | 0.0006 | 0.0003
Ms (h-1) | 0.0072 | 0.0011 | 0.0004 | 0.0075
Fobj in optimization | - | 0.258 | 0.473 | 1.959
Fobj in re-optimization | - | 0.237 | 0.384 | 1.767


Figure 3. Model fit for the validation data: A) ethanol and glucose for the SSFc3 data, B) ethanol and glucose for the SSFa data, C) ethanol and glucose for the SSFe data. For the three panels the nomenclature is: experimental glucose (*), experimental ethanol (o), ethanol and glucose (- - -) predicted by the model with the parameters found using all the identification data simultaneously (see Table 2, column 3), and ethanol and glucose (-.-.-) predicted by the model with the parameters found by re-optimization (see Table 3, columns 3-5). Data points represent the mean value from at least three separate experiments (the standard deviation for ethanol ranged from a minimum of 0.09 to a maximum of 0.99; for glucose, from 0.001 to 0.8). Error bars are omitted for clarity.

4. Conclusions

Regarding the results of the fermentation process, it can be concluded that the rapid production of glucose during the first hours of the process decreased drastically, possibly due to the formation of ethanol. Likewise, it is noted that even though the hydrolysis could be affected by the presence of ethanol, it is maintained throughout the process time, which is an important result for the development of this type of process.



A new unstructured, first-principles-based model for predicting the main dynamics of the ethanol production process from lignocellulosic wastes using the Simultaneous Saccharification and Fermentation technology was developed. The proposed model contains some new features, such as: a) an approach for describing the enzymatic action on a lignocellulosic substrate, considering it to consist of spherical particles whose radius decreases as the saccharification takes place; b) the formation of different enzyme-substrate complexes; and c) mass transfer effects. From a sensitivity analysis, it was found that of the 22 parameters present in the model only 11 appear to have a significant effect on the model behavior, most of them associated with characteristics of the yeast used, while others were found to be associated with the enzyme's properties and with the mass transfer in the system. The re-identification of these 11 parameters allows the value of the objective function to be reduced. This suggests that such a sensitivity analysis procedure can improve the parameter identification process, resulting in greater flexibility when implementing the model.

Acknowledgements

The authors acknowledge the support given by the Industrial Biotechnology research group at the Universidad Nacional de Colombia Sede Medellín, where the pretreatment of the samples was carried out. The academic and financial support of the Academic Environmental Corporation of the Universidad de Antioquia and of the CODI committee at the Universidad de Antioquia is also gratefully acknowledged.

Notation

Notation | Units | Definition
Ca | g/l | Amorphous cellulose concentration
Eaca | g/l | Complex adsorbed enzyme-amorphous cellulose concentration
Cc | g/l | Crystalline cellulose concentration
Eacc | g/l | Complex adsorbed enzyme-crystalline cellulose concentration
L | g/l | Lignin concentration
EaL | g/l | Complex adsorbed enzyme-lignin concentration
El | FPU/l | Free enzyme concentration
Ea | FPU/l | Adsorbed enzyme concentration
B | g/l | Cellobiose concentration
G | g/l | Glucose concentration
X | g/l | Yeast concentration
EtOH | g/l | Ethanol concentration
rp | m | Particle radius
ap | m2 | Particle area
Np | - | Number of particles
V | l | Reactor volume
ρp | g/m3 | Particle density
Kd | h-1 | Cell death constant
KietOH | g/l | Growth inhibition constant by ethanol
Kel1 | m2·l/fpu·h | Rate constant of adsorbed enzyme-lignin complex formation
Kel_1 | h-1 | Rate constant of adsorbed enzyme-lignin complex separation
Yxp | g/g | Cell biomass yield by ethanol
µmax | h-1 | Maximum specific rate of cell growth
Yxs | g/g | Cell biomass yield by glucose
Ks | g/l | Saturation constant for growth using glucose as substrate
Kec1 | m2·l/fpu·h | Rate constant of adsorbed enzyme-amorphous cellulose complex formation
Kec_1 | h-1 | Rate constant of adsorbed enzyme-amorphous cellulose complex separation
Kec2 | m2·l/fpu·h | Rate constant of adsorbed enzyme-crystalline cellulose complex formation
Kec_2 | h-1 | Rate constant of adsorbed enzyme-crystalline cellulose complex separation
Ke | g/l | Inhibition constant of cellobiose production by ethanol
Ked | h-1 | Inactivation rate of the free enzyme
Kca | h-1 | Reaction rate constant for cellobiose formation using amorphous cellulose
Kcc | h-1 | Reaction rate constant for cellobiose formation using crystalline cellulose
Ms | h-1 | Glucose consumption for maintenance constant
Ksgp | g/l | Saturation constant for glucose production using cellobiose as substrate
K1g | g/l | Inhibition constant of the free enzyme by glucose
Kbg | g/fpu·h | Rate constant for glucose production using cellobiose as substrate
K1b | g/l | Inhibition constant of cellobiose production by cellobiose
K | 1/m2·h | Mass transfer coefficient

Notation for figures

EtOHModel: ethanol predicted by the model
GModel: glucose predicted by the model
'o' EtOHexp: experimental ethanol
'*' Gexp: experimental glucose

References

[1] Briceño, C. O. Aspectos estructurales y de entorno que enmarcan los proyectos e inversiones para la producción de bioetanol en Colombia. [Online], 2003. [Consulted in December 2012]. Available at: http://cengidoc.cengican.org/Portal/SubOtrasAreas/Etanol/Presentaciones/ArticuloProduccionBioetanolColombia.pdf
[2] Hahn-Hägerdal, B., Galbe, M., Gorwa-Grauslund, M. F., Lidén, G. and Zacchi, G. Bio-ethanol - the fuel of tomorrow from the residues of today. Trends Biotechnol., 24 (12), pp. 549-556, 2006.
[3] Gutiérrez, C. X. and Arias, J. F. Obtención de celulosa a partir de material lignocelulósico proveniente de la extracción de aceite de palma. BS thesis, Facultad de Ingeniería, Universidad de Antioquia, Colombia, 2009.
[4] South, C. R., Hogsett, A. L. and Lynd, L. R. Modeling simultaneous saccharification and fermentation of lignocellulose to ethanol in batch and continuous reactors. Enzyme Microb. Technol., 17, pp. 797-803, 1995.
[5] Kroumov, A. D., Módenes, A. N. and Tait, M. C. Development of new unstructured model for simultaneous saccharification and fermentation of starch to ethanol by recombinant strain. Biochem. Eng. J., 28 (3), pp. 243-255, 2006.
[6] Sasikumar, E. and Viruthagiri, T. Simultaneous saccharification and fermentation (SSF) of sugarcane bagasse - kinetics and modeling. Int. J. Chem. Biomol. Eng., 3 (2), pp. 57-64, 2010.
[7] Morales-Rodriguez, R., Gernaey, K. V., Meyer, A. S. and Sin, G. A mathematical model for simultaneous saccharification and co-fermentation (SSCF) of C6 and C5 sugars. Chinese J. Chem. Eng., 19 (2), pp. 185-191, 2011.

[8] Adney, B. and Baker, J. Measurement of cellulase activities. Laboratory Analytical Procedure (LAP). USA, National Renewable Energy Laboratory, 2008, 8 P.
[9] Madrid, L. M. and Quintero, J. C. Ethanol production from paper sludge using Kluyveromyces marxianus. Dyna, 78 (170), pp. 185-191, 2011.
[10] Ballesteros, M., Oliva, J. M., Negro, M. J., Manzanares, P. and Ballesteros, I. Ethanol from lignocellulosic materials by a simultaneous saccharification and fermentation process (SFS) with Kluyveromyces marxianus CECT 10875. Process Biochem., 39 (12), pp. 1843-1848, 2004.
[11] Ochoa, S., Wozny, G. and Repke, J. A new algorithm for global optimization: Molecular-Inspired Parallel Tempering. Comput. Chem. Eng., 34 (12), pp. 2072-2084, 2010.
[12] Ochoa, S., Yoo, A., Repke, J. and Yang, D. R. Modeling and parameter identification of the simultaneous saccharification-fermentation process for ethanol production. Biotechnol. Prog., 23 (6), pp. 1454-1462, 2007.
[13] Gan, Q., Allen, S. and Taylor, G. Kinetic dynamics in heterogeneous enzymatic hydrolysis of cellulose: an overview, an experimental study and mathematical modelling. Process Biochem., 38 (7), pp. 1003-1018, 2003.
[14] Doran, P. Bioprocess engineering principles, 2nd ed., Elsevier, 2006, pp. 87-160.
[15] Kádár, Z., Szengyel, Z. and Réczey, K. Simultaneous saccharification and fermentation (SSF) of industrial wastes for the production of ethanol. Ind. Crops Prod., 20 (1), pp. 103-110, 2004.
[16] Margeot, A., Hahn-Hägerdal, B., Edlund, M., Slade, R. and Monot, F. New improvements for lignocellulosic ethanol. Curr. Opin. Biotechnol., 20 (3), pp. 372-380, 2009.

[17] Alkasrawi, M., Eriksson, T., Börjesson, J., Wingren, A., Galbe, M., Tjerneld, F. and Zacchi, G. The effect of Tween-20 on simultaneous saccharification and fermentation of softwood to ethanol. Enzyme Microb. Technol., 33, pp. 71-78, 2003.
[18] Sun, Y. and Cheng, J. Hydrolysis of lignocellulosic materials for ethanol production: a review. Bioresour. Technol., 83 (1), pp. 1-11, 2002.

J. E. Vásquez received a BS in Biological Engineering in 2008 and an MS degree in Biotechnology in 2013. Since 2007 he has worked on programs and projects related to biotechnology, with emphasis on alternative energy production, bioenergy production and bioprocess modeling, for the Universidad Nacional de Colombia and the Universidad de Antioquia, leading to the publication of scientific articles. He is currently a PhD student at the International Development Engineering Department of the Tokyo Institute of Technology, Japan.

S. Ochoa Cáceres received her bachelor degree in Chemical Engineering in 2001 from the Universidad Industrial de Santander, a Master of Science degree in Chemical Engineering from the Universidad Nacional de Colombia Sede Medellín in 2005, and her Doctor of Engineering degree from the Technische Universität Berlin in 2010. She is currently a full-time professor at the Universidad de Antioquia (Colombia). Her research interests are mainly in the areas of modeling, optimization and control of chemical and biochemical processes.

J. C. Quintero received a BS in Chemical Engineering in 1993, an MS degree in Chemical Engineering in 1998 and his PhD degree in Chemical and Environmental Engineering in 2004. He has been working as a professor in the Chemical Engineering Department since 1998, has directed research projects in the bioprocesses area and is currently head of the Chemical Engineering Department, Facultad de Ingeniería, Universidad de Antioquia.



DYNA http://dyna.medellin.unal.edu.co/

Simulation of a stand-alone renewable hydrogen system for residential supply

Simulación de un sistema autónomo de hidrógeno renovable para uso residencial

Martín Hervello a, Víctor Alfonsín b, Ángel Sánchez c, Ángeles Cancela d & Guillermo Rey e

a PhD, Universidad de Vigo, España, hervello@hotmail.com
b PhD Student, Centro Universitario de la Defensa de Marín, España, valfonsin@cud.uvigo.es
c PhD, Universidad de Vigo, España, asanchez@uvigo.es
d PhD, Universidad de Vigo, España, chiqui@uvigo.es
e PhD Student, Centro Universitario de la Defensa de Marín, España, guillermo.rey@cud.uvigo.es

Received: February 15th, 2013. Received in revised form: May 15th, 2014. Accepted: May 20th, 2014

Abstract
Computer simulation is a logical first step before taking a project to physical construction, besides being a powerful tool in energy network design. Combined systems are used to improve the availability of the energy supplied by renewable systems. The main drawback of some renewable energy sources is their highly seasonal nature, with great variations over time that can impede their use as the basis for consumption and limit them to peak demand times. The aim of this work is to verify, by simulation, the energy self-sufficiency of a family house supplied with renewable energies (wind and solar-photovoltaic) using a hybrid storage system of batteries and hydrogen. For this, the Simulink®-Matlab® program was used, considering meteorological data provided by CINAM (Galician Center for Environmental Research and Information). The model can be applied to determine the feasibility of implementing an energy network in specific places, and to predict energy flows and system behavior throughout the year.
Keywords: hybrid, hydrogen, renewable energy, energy storage, simulation, modeling.

Resumen
La simulación por ordenador es un primer paso lógico previo a la realización de un proyecto de una construcción física además de ser una herramienta para el diseño de redes de energía. Los sistemas combinados son una solución para mejorar la disponibilidad de la energía suministrada con medios renovables. El principal inconveniente de las fuentes de energías renovables es su naturaleza altamente estacional, con grandes variaciones en el tiempo que pueden impedir el uso como base de consumo y limitar las horas de máxima demanda. El objetivo de este trabajo es realizar simulaciones para comprobar la autosuficiencia energética de una vivienda unifamiliar en base a energías renovables (eólica, solar-fotovoltaica) utilizando como medio de almacenamiento un sistema híbrido de baterías e hidrógeno. Para ello se ha utilizado el programa Simulink®-Matlab® teniendo en cuenta los datos meteorológicos proporcionados por METEO-Galicia. El modelo puede ser aplicado para determinar la viabilidad de implementar una red energética en regiones específicas, y predecir el flujo de energía y el comportamiento del sistema durante todo el año.
Palabras clave: híbrido, hidrógeno, energía renovable, almacenamiento de energía, simulación, modelado.

1. Introduction

Against a background of increasingly expensive fossil fuels [1], due to constantly growing worldwide demand, and of the environmental risks those fuels inherently bear, such as rising greenhouse gas emissions [2], a steady and generalized adoption of renewable energies can be noted, often encouraged by institutional policy [3,4]. Their use is now not only contemplated for the sporadic supply of small electrical systems, but has opened up to include a wide range of possibilities, such as: electricity supply to places the main grid does not reach [5,6]; vehicles [7,8]; combined use with the electricity grid [9,10]; or with predominant energy supply systems [11]. All these alternatives are centered on a common axis of on-site hydrogen production. For this reason, when implementing an energy structure that includes renewable energies, knowledge of the meteorological data of the specific area is very important, because the geographic factor is of great importance [12,13] when evaluating the energy potential and making appropriate configuration choices. The main disadvantage of some renewable energy sources is their seasonal nature, which means great variability over time; not forgetting that, besides this


seasonal effect, solar energy must also take into consideration the number of daylight hours [14,15]. These factors make it difficult to use such sources as a basis for consumption and limit them to peak use times, leading to the waste of great quantities of clean and cheap energy in places where it cannot be fed into the grid. Because photovoltaic and wind energies can be complementary to some extent, these have been chosen for this proposal. In this paper, a model of a small wind generator combined with a number of photovoltaic panels to meet the needs of a standard system is presented. The energy generated is managed by a control system that distributes it according to the situation and needs of the system.

2. Model description

Dynamic simulation of complex systems is a powerful design and analysis tool, with the subsequent saving in time and money, as the step from theory to practice is considerably eased and the construction of physical prototypes is made less expensive once a close approximation to real behavior is known. In this case, the Simulink® package from Matlab® has been used for modeling purposes. In this model, the energy obtained from the wind generator and several photovoltaic panels is simulated. There are three possible routes for this energy: the first is directly to satisfy demand; the second, to store it in a battery before use; and the third, and most complex, to feed an electrolyzer that converts the electric excess into hydrogen [16], which is sent to a buffer that, once full, activates a compressor to send it to a storage tank, where it can be converted back to energy in a fuel cell when required. Fig. 1 shows a general diagram of the model, in which energy flows are in red and hydrogen flows in blue (fuel cell, photovoltaic panels, wind generator, battery, electrolyzer, compressor, hydrogen buffer, final hydrogen storage and load, all connected to a DC bus of approximately 48 V).

Because meteorological data are provided every 10 minutes (which is more than acceptable as a frequency for annual studies), the individual units are simulated in steady state (it would not make sense to waste calculation time when the time interval is greater in range than the dynamics of any of the process units), and the whole system is then simulated dynamically. All the power generation and consumption units are connected to a DC bus by means of power converters, except the battery, which is connected directly in order to provide an approximate voltage of 48 V to the bus. These converters are needed to condition the voltage from the various elements to the bus; we assume an efficiency of 95% for all the DC-DC converters, and of 92% for the DC-AC ones. The control of the energy flows from all of these units operates according to the energy arriving in the system and the system state.

Figure 1: Energy flows of the general process scheme.

2.1. Meteorological Data and Household Demand

The location chosen for this simulation is Marco da Curra, located in the Monferro council district in the northwest of the Iberian Peninsula (latitude 43.34º, longitude -7.89º, altitude 651 m), and for which 2006 was chosen as a typical meteorological year. Data were provided by CINAM (Galician Centre for Environmental Research and Information) [17]. Fig. 2a and 2b show the solar radiation and wind speed throughout the year, respectively. It can be seen that the energy production from the two systems is somewhat complementary: solar production from the photovoltaic panels, high during the middle months of the year, subsides at the start and end of the year, but at year's end the increase in stronger air currents means the wind energy source can step in to mitigate this reduction.

The demand curve was supplied by "Red Eléctrica de España"; in this case we have profiles for the average household on a typical day in summer and in winter, with a daily energy use, taking the inverter into account, of 8.55 kWh and 5.66 kWh, respectively. For the fall and spring months, an intermediate behavior between the summer and winter profiles is assumed (Fig. 3).

Figure 2: Sunshine (a) and wind speed (b) data.

Figure 3: Load curve for a typical day in each season.

2.2. Photovoltaic Panels

The radiation data are put into the block that calculates the energy supplied by each photovoltaic panel to the system. After passing through a previously created function, the power that a panel would supply according to the ambient temperature and the radiation being received is obtained, in the case of following the maximum power curve; this curve is calculated from the five-parameter model presented by De Soto [18]:

I_{pv} = I_L - I_0 \left[ e^{(V_{pv} + I_{pv} R_S)/a} - 1 \right] - \frac{V_{pv} + I_{pv} R_S}{R_{Sh}}   (1)

Where Vpv (V) and Ipv (A) are the output voltage and current, respectively, a (V) is the ideality factor parameter, IL (A) is the light current, I0 (A) is the diode reverse saturation current, RS (Ω) is the series resistance, and RSh (Ω) is the shunt resistance. In this case (the photovoltaic panel shows a very high value of the parallel resistance) we can discard the third term on the right-hand side of the previous equation, reducing the model to four parameters:

I_{pv} = I_L - I_0 \left[ e^{(V_{pv} + I_{pv} R_S)/a} - 1 \right]   (2)

Data regarding the parameters used, and how they are obtained from the technical specifications of the photovoltaic module, can be found in [19]. By multiplying the unitary power of each panel by the number of panels, 17 in this case, the power being generated by the panels at each moment is obtained. The main advantage of this model, apart from its reliability, accuracy and easy implementation, is the ease with which valid parameters over wide operation ranges are obtained from the data provided by the manufacturer, which in many cases are given under standard nominal conditions, usually 1000 W/m2 and 25 ºC.
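Since Ipv appears on both sides of Eq. (2), its evaluation requires an iterative solution. The Matlab sketch below shows one possible approach using a Newton iteration; the parameter values are placeholders, not those of the module used in the paper.

```matlab
% Sketch of evaluating the four-parameter model, Eq. (2). All numeric
% values are placeholders chosen only to make the example runnable.
function pv_demo
    IL = 5.0; I0 = 1e-9; a = 1.3; Rs = 0.4;    % placeholder parameters
    V  = linspace(0, 28, 281);                 % sweep up to near Voc
    I  = arrayfun(@(v) pv_current(v, IL, I0, a, Rs), V);
    Pmax = max(V.*I);                          % maximum power point
    fprintf('MPP: %.1f W per panel, %.1f W for 17 panels\n', Pmax, 17*Pmax);
end

function I = pv_current(V, IL, I0, a, Rs)
    I = IL;                                    % initial guess: light current
    for k = 1:30                               % Newton iteration on f(I) = 0
        f  = IL - I0*(exp((V + I*Rs)/a) - 1) - I;
        df = -I0*(Rs/a)*exp((V + I*Rs)/a) - 1;
        I  = I - f/df;
    end
end
```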

ďƒŠ V pv VR ďƒš I pv  I L  I 0 ďƒŞe a  1ďƒş ďƒŞďƒŤ ďƒşďƒť

đ?‘ƒđ?‘¤đ?‘Ą = 0 đ?‘Ł<3 đ?‘ƒđ?‘¤đ?‘Ą = 0.007 đ?‘Ł − 6 2 + 0.06 đ?‘Ł − 6 + 0.13 3 ≤ đ?‘Ł ≤ 8.5 đ?‘ƒđ?‘¤đ?‘Ą = −0.005 đ?‘Ł − 14 2 + 0.04 đ?‘Ł − 14 + 0.068 8.5 ≤ đ?‘Ł ≤ 20 đ?‘ƒđ?‘¤đ?‘Ą = 0.0008 đ?‘Ł − 22.4 2 + 0.02 đ?‘Ł − 22.4 + 0.68 20 ≤ đ?‘Ł ≤ 25 đ?‘ƒđ?‘¤đ?‘Ą = 0 đ?‘Ł > 25

(2)
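The curve of Eq. (3) translates directly into code; the following Matlab function is a direct transcription of the printed coefficients.

```matlab
% Piecewise power curve of Eq. (3): power in kW, wind speed in m/s
% (coefficients as printed in the paper).
function P = wind_power(v)
    if v < 3 || v > 25
        P = 0;                                   % below cut-in / above cut-out
    elseif v <= 8.5
        P = 0.007*(v-6)^2 + 0.06*(v-6) + 0.13;
    elseif v <= 20
        P = -0.005*(v-14)^2 + 0.04*(v-14) + 0.068;
    else
        P = 0.0008*(v-22.4)^2 + 0.02*(v-22.4) + 0.68;
    end
end
% Example: P = arrayfun(@wind_power, 0:0.5:26)
```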

2.4. Battery

Here, the battery fulfils three functions: firstly, it provides short-term storage for energy; secondly, it provides the voltage for the DC bus; and thirdly, it provides one of the main energy management variables, the state of charge (SOC) of the battery, which will be explained later. In this case, the battery model and the necessary parameters are provided by Agbossou et al. [21,22], and an efficiency for battery operation is added to the model. In order to calculate the voltage, we use:

U_{bat} = U_{bat,0} + R_{bat} I_{bat}   (4)

Where Ubat (V) is the voltage of the battery, Ubat,0 (V) is its open-circuit voltage, Rbat (Ω) is its internal resistance


and Ibat (A) is its current. If the sign of this last item is negative, then the battery is discharging, and if it is positive, it is charging. The total energy stored in the battery is:

EB  EB,0 

1 I Bď ¨ bat dt 3600 ďƒ˛

(5)

Where EB,0 (Ah) is the initial energy of the battery and Ρbat is its efficiency, which is assumed at 0.85 for charging processes and 1 for discharging processes. The SOC is given by:

SOC (%)  100

EB E B ,max

(6)

Where EB,max (Ah) is the total capacity of the battery, enough to provide energy for between two and three days in winter.
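A single simulation step of the battery model might look as follows in Matlab; the capacity, resistance and example current are placeholder values, and the time step matches the 10-minute resolution of the meteorological data.

```matlab
% Sketch of one battery update step, Eqs. (4)-(6); numeric values below
% are placeholders, not the sizing used in the paper.
EB_max = 600;  EB = 0.8*EB_max;       % Ah; placeholder capacity and charge
Ubat0  = 48;   Rbat = 0.05;           % V, ohm; placeholders
Ibat   = -12;                         % A; negative = discharging
dt     = 600;                         % s, one weather sample
eta    = 1; if Ibat > 0, eta = 0.85; end   % efficiency applies when charging
Ubat = Ubat0 + Rbat*Ibat;             % Eq. (4)
EB   = EB + Ibat*eta*dt/3600;         % Eq. (5), integral over one step
SOC  = 100*EB/EB_max;                 % Eq. (6)
fprintf('U = %.1f V, SOC = %.1f %%\n', Ubat, SOC);
```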

đ?‘‘đ?‘ đ??ť2 ,đ??ľđ?‘˘đ?‘“ đ?‘‘đ?‘Ą

= đ??šđ??ť2 ,đ??ľđ?‘˘đ?‘“,đ?‘–đ?‘› − đ??šđ??ť2 ,đ??ľđ?‘˘đ?‘“,đ?‘œđ?‘˘đ?‘Ą

(8)

Where NH2,Buf is the number of accumulated moles in the buffer, FH2,Buf,in (mols-1) the hydrogen flow entering from the electrolyzer and FH2,Buf,out (mols-1) the output towards the compressor. Applying a logical comparison system with memory, a trigger signal can be sent once a particular level is reached and a deactivation signal can be sent if a lower level is reached. Here, the trigger is at 95% of tank capacity and deactivation is at 40%. The buffer holds 1000 liters. 2.7. Compressor

2.5. Electrolyzer In the electrolyzer, the current passes through a series of electrolytic cells in which there is a water input current that, due to the electrolysis provoked by the electric current, is split into two separate currents of hydrogen and oxygen. The current-voltage curve chosen for the electrolyzer is that proposed by Satarelli et al.[5] for a bipolar 1 kW nominal power electrolyzer with 20 cells, operating at 80Âş and with Faraday efficiency assumed at 99%. In these simulations two electrolyzers are used. The number of active electrolyzers depends on the input power, when this exceeds 1 kW nominal for one electrolyzer, then both of them are switched on. The characteristic curve to describe electrolyzer cell behavior is given by the following equation:

ďƒŚ I ďƒś Ve  Ve , 0  b lnďƒ§ďƒ§ e ďƒˇďƒˇ  Re I e ďƒ¨ I e,0 ďƒ¸

reaching a set level activates an emptying signal to the compressor to evacuate the hydrogen towards the higher capacity tank. The two input variables are the electrolyzer output flow and the flow to the compressor creating a hydrogen balance that, by being integrated, represents the amount of hydrogen.

(7)

Where Ve (V) is the voltage of a cell, Ve,0 (V) is the reversible voltage for a cell, b (V-1) is a characteristic coefficient of the electrolyzer, Ie,0 (A) is the exchange current, Re (Ί) is the Ohmic resistance in the cell and I e (A) is the current. The electrolyzer only functions at powers of over 15% its nominal power. A parasitic loss of 50 W is assumed for each electrolyzer switched on 2.6. Compressor Buffer The compressor buffer is an intermediate tank between the gas outlet from the electrolyzer, which works continually, and the compressor input. The second function it has is as a control system for the tank level, which, on
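The level control described above is a simple hysteresis. The Matlab sketch below illustrates it together with the balance of Eq. (8); the capacity and flow values are placeholders.

```matlab
% Sketch of the buffer balance, Eq. (8), with the 95%/40% hysteresis that
% switches the compressor; all numeric values are placeholders.
N_full  = 40;                   % mol; illustrative capacity of the buffer
N       = 0.5*N_full;           % initial inventory
comp_on = false;  dt = 600;     % s per step
F_in    = 2e-4;                 % mol/s from the electrolyzer (placeholder)
for k = 1:2000
    F_out = comp_on*8e-4;       % mol/s to the compressor when running
    N = N + (F_in - F_out)*dt;  % Eq. (8), explicit Euler step
    if N >= 0.95*N_full, comp_on = true;  end   % trigger level
    if N <= 0.40*N_full, comp_on = false; end   % deactivation level
end
```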

2.7. Compressor

When the buffer reaches the desired level, the compressor is triggered and compresses the hydrogen, consuming 300 W, to achieve a lower gas volume and thus a smaller final storage tank. A discontinuous flow model has been chosen in order to avoid large hydrogen flows to the compressor, which would require a much more powerful machine. In this case the final storage pressure is 20 bar, the initial one being given by the buffer pressure, and the compression takes place in three stages, in such a way that the amount of hydrogen compressed is at a maximum, which is achieved when the ratios between the output and input pressures of each compressor stage are equal. The molar flow of compressed hydrogen is:

F_{H_2,comp} = \frac{P_{et,comp} \, \eta_{comp}}{\frac{n_{poly}}{n_{poly}-1} \, R \, T \left[ \left( \frac{P_2}{P_1} \right)^{\frac{n_{poly}-1}{n_{poly}}} - 1 \right]}   (9)

Where npoly is the polytropic coefficient, in this case 1.36, ηcomp is the efficiency of the compression, with a value of 0.77, P1 (bar) and P2 (bar) are the input and output pressures, respectively, and Pet,comp is the energy used in one stage, in this case 100 W; since the pressure ratios of the stages are equal, the energy consumed at each stage is also equal.
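The printed form of Eq. (9) is partly garbled in the source, so the sketch below follows the standard polytropic-compression relation with the constants stated in the text; the inlet pressure and gas temperature are assumptions.

```matlab
% Sketch of the compressed-hydrogen molar flow per stage. The form follows
% the standard polytropic relation; inlet pressure P1 and temperature T
% are placeholder assumptions.
R     = 8.314;                 % J/(mol K)
T     = 298;                   % K, assumed gas temperature
npoly = 1.36;  eta_c = 0.77;   % polytropic coefficient and efficiency
Pet   = 100;                   % W per stage (300 W over three stages)
P1    = 2;  P2f = 20;          % bar: assumed buffer inlet, final storage
r     = (P2f/P1)^(1/3);        % equal pressure ratio in each stage
ex    = (npoly - 1)/npoly;
F_H2  = Pet*eta_c / ((1/ex)*R*T*(r^ex - 1));   % mol/s, per Eq. (9)
fprintf('Compressed flow: %.4f mol/s\n', F_H2);
```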

2.8. Final Storage Tank

The tank is the final element of the model, and its model is simple: the hydrogen inflow from the compressor and the outflow to the fuel cell are taken into account at all times and integrated in order to have a real value of the amount of hydrogen stored throughout the year. Using the evolution of the amount of hydrogen, the tank can then be scaled accordingly, as covered in the results section. The balance for the tank is the following:

\frac{dN_{H_2,Tank}}{dt} = F_{H_2,in,Tank} - F_{H_2,out,Tank}   (10)

Where NH2,Tank is the number of accumulated moles, FH2,in,Tank (mol s-1) the hydrogen flow entering from the compressor and FH2,out,Tank (mol s-1) the output towards the fuel cell.

2.9. Fuel Cell No

The fuel cell works in the opposite way to the electrolyzer: unregulated direct electrical current and water are obtained from the combination of hydrogen and oxygen. The input current at the anode is the hydrogen obtained by electrolysis, supplied by the pressurized storage tank, whereas the input current at the cathode is air, containing approximately 21% oxygen, the reacting gas, and nitrogen, which is inert. The cell used in the model is the NexaTM by Ballard. In this particular case, to simulate the behavior of this element, we use the data supplied by the manufacturer [23,24], which provide more than acceptable precision for the simulation we aim to carry out. Obtaining operational values for the cell through polynomial adjustments is relatively straightforward:

P_{fc,total} = 0.1755 \, I_{fc,net}^2 + 40.218 \, I_{fc,net} + 33.214   (11)

P_{fc,net} = -0.2247 \, I_{fc,net}^2 + 37.46 \, I_{fc,net} - 8.8164   (12)

P_{fc,para} = 0.0033 \, I_{fc,net}^3 - 0.1807 \, I_{fc,net}^2 + 6.765 \, I_{fc,net} - 30.158   (13)

V_{fc,H_2} = 0.3832 \, I_{fc,net} + 0.0419   (14)

Where Pfc,total (W) is the total power of the cell, Pfc,net (W) is the output power of the cell, Pfc,para (W) is the parasitic power loss due to the operation of auxiliary equipment, Vfc,H2 (SLPM) is the flow of hydrogen, in standard liters per minute, used by the cell, and Ifc,net (A) is the output current produced by the cell.
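The fits of Eqs. (11)-(14) evaluate directly; in the sketch below, the assignment of the three power polynomials to total, net and parasitic power follows the symbol definitions given after the equations (the extracted source had mangled the left-hand-side labels).

```matlab
% Manufacturer-data fits of Eqs. (11)-(14) evaluated over a current sweep.
Ifc     = (0:0.5:45)';                                           % A
P_total = 0.1755*Ifc.^2 + 40.218*Ifc + 33.214;                    % Eq. (11), W
P_net   = -0.2247*Ifc.^2 + 37.46*Ifc - 8.8164;                    % Eq. (12), W
P_para  = 0.0033*Ifc.^3 - 0.1807*Ifc.^2 + 6.765*Ifc - 30.158;     % Eq. (13), W
V_H2    = 0.3832*Ifc + 0.0419;                                    % Eq. (14), SLPM
plot(Ifc, P_total, Ifc, P_net, Ifc, P_para);
xlabel('I_{fc,net} (A)'); ylabel('Power (W)');
legend('Total','Net','Parasitic');
```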

2.10. Energy Control and Management

What should first be borne in mind when choosing a suitable control system is that the best use of the energy occurs when it is sent directly to meet the load demand; the next best is when it passes through the battery, and the last is when the hydrogen transformation takes place. The variables used in controlling and managing the energy are the demand, the energy entering and leaving the system, the SOC, and the operational states of the fuel cell and the electrolyzer (0 and 1, off and on, respectively). Together with these variables, three set points are defined (EL = 85, FCup = 50 and FCdown = 40) [18] that limit the four zones (Fig. 4) [19] in which the SOC can lie, and thus mark the areas of actuation for the operation of the battery, the fuel cell and the electrolyzer. The energy to be managed between the above units is defined as:

f = P_{sun} + P_{wind} + P_{fc} - P_{load} - P_{comp}   (15)

Where f (W) is the energy left over once the demand is met, Psun (W) is the power supplied by the photovoltaic panels, Pwind (W) is the power supplied by the wind generator, Pfc (W) is the power provided by the fuel cell, Pload (W) is the load demand and Pcomp (W) is the demand of the compressor. When f = 0, the energy accumulated by the system is zero; leftover energy is managed by the battery and the electrolyzer, whilst deficits are managed by the fuel cell and the battery. In this case the electrolyzer and the battery work at variable power, depending on the requirements of the system, while the fuel cell always operates at a fixed power. Each zone behaves as follows:

Zone 1 (SOC > EL): only the battery and the electrolyzer operate, and they do so in an exclusive way. Thus, if f < 0, when there is an energy shortfall, the battery operates, and when f > 0 the electrolyzer operates, due to the energy excess.

Zone 2 (EL ≥ SOC ≥ FCup): only the battery operates.

Zone 3 (FCup > SOC > FCdown): only the battery and the fuel cell operate, and they do so in a non-exclusive way. The battery always works except when f = 0, and the fuel cell, unlike the electrolyzer, delivers a set power. In this zone the only requirement for fuel cell operation is that it continues to function if it was doing so in the previous time step.

Zone 4 (SOC < FCdown): here the battery and the fuel cell operate in a non-exclusive way; for SOC values lower than FCdown the fuel cell always operates.

Figure 4: Control process scheme.
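The zone logic can be condensed into a small dispatch function; the Matlab sketch below is our reading of Fig. 4 and the zone descriptions above, not the authors' implementation.

```matlab
% Sketch of the dispatch logic of Section 2.10 / Fig. 4, as read from the
% four SOC zones; EL, FCup and FCdown as given in the text, f from Eq. (15).
function [bat_on, ely_on, fc_on] = dispatch(f, SOC, fc_was_on)
    EL = 85; FCup = 50; FCdown = 40;                 % SOC set points, %
    bat_on = false; ely_on = false; fc_on = false;
    if SOC > EL                      % Zone 1: battery XOR electrolyzer
        if f > 0, ely_on = true; elseif f < 0, bat_on = true; end
    elseif SOC >= FCup               % Zone 2: battery alone
        bat_on = (f ~= 0);
    elseif SOC > FCdown              % Zone 3: fuel cell only keeps running
        bat_on = (f ~= 0);
        fc_on  = fc_was_on;
    else                             % Zone 4: fuel cell always on
        bat_on = (f ~= 0);
        fc_on  = true;
    end
end
```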


3. Results and discussion

The renewable energies provide the system with 3819.7 kWh; of this energy supply, 1167.6 kWh are due to wind energy and 2652.2 kWh to solar energy. Fig. 5 shows the energy provided by each of these sources: for wind, most energy is supplied at the start and end of the year, whereas the energy produced by the photovoltaic panels is concentrated around the central months of the year. The load consumption is 2587.9 kWh, and that of the hydrogen compressor is 33.6 kWh. In Fig. 6 the use of the energy can be seen; it is distributed fairly equally along the three routes open to it: 1438.6 kWh (38%) go straight to the load, 1196.1 kWh (31%) go to the battery, and 1185.1 kWh (31%) go to the electrolyzer. Fig. 6 is useful for getting a general idea of the energy distribution, but a clearer picture of the operation of the various units in meeting the load throughout the year is given by charts showing the evolution of the energy accumulated in the battery over the year, and of the input and output flows in the case of the fuel cell and the electrolyzer. The battery remains charged at its maximum level (Fig. 7) during the central months of the year, which means that the excess energy is sent to the electrolyzer for hydrogen production (Fig. 8a).

Figure 7: Evolution of the energy accumulated in the battery over the year.

This is because the combination of the renewable energy sources at this period results in a high level of energy production. During this season peaks of over 1500 W from photovoltaic source appear, and to this must be added a respectable amount from the wind source, even though this is its low season. At both ends of the year, we find zones where the fuel cell becomes operational (particularly at the start of the year) (Fig. 8b), because, despite wind energy showing production peaks due to the strong currents typical of the period, this is not enough to meet demand as solar production does not provide the same amount as in the central months. Thus we find that demand at the beginning and end of the year can be met by hydrogen produced in the middle months. Finally we deal with the size of the hydrogen storage tank, for which it is helpful to understand the evolution of the amount of hydrogen stored over the year. This can be seen in Fig. 9; here the quantity of moles accumulated throughout the year is shown, not only for 17 solar panel system, with which the simulation was carried out, but also for 14 and 18 panels, in order to study other possibilities. The aim when producing these charts was to obtain a positive yearend balance for the number of accumulated

Figure 5: Power supplied by solar panels (a) and wind generator (b) to the system

Energy Consumption (kWh)

1600 1400

1438,6 1196,1

1185,1

1200 1000 800 600 400 200 0 Battery

Electrolyser

Load

Figure 6: Use of the energy that reaches the process from renewable sources

Figure 8: Energy supplied to the electrolyzer (a) and supplied by the fuel cell (b) over time

121


Hervello et al / DYNA 81 (185), pp. 116-123. June, 2014.

accumulated in the middle months of the year, when the short-term, battery storage is enough to meet demand. Finally, using the evolution of the amount of accumulated hydrogen we estimated a storage tank size of 11 m3. In this way the simulation of hybrid systems becomes a powerful tool for calculating and dimensioning the equipment involved, and thus a clear idea of the requirements for later construction is gained. References [1] Goldemberg, J., Ethanol for a Sustainable Energy Future Science , 315, pp. 808-810, 2007. [2] Moriarty, P., Honnery D., What Energy Levels can the Earth Sustain?, Energy Policy, 37, pp. 2469-2474, 2009. Figure 9: Evolution of the quantity of H2 accumulated throughout the year for 14 – 18 photovoltaic panels

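The quoted tank size of 11 m3 at 20 bar is consistent with the ideal-gas law for a hydrogen inventory of roughly 8.9 kmol. The following is a minimal worked check; the mole figure is an assumed round number for illustration, not a value read from Fig. 9.

    /** Ideal-gas check of the hydrogen tank volume quoted in the text (~11 m3 at 20 bar).
     *  The mole inventory is a hypothetical round figure, not a value from the paper. */
    public class TankSizing {
        public static void main(String[] args) {
            double n = 8900.0;        // peak accumulated H2 [mol] (assumed for illustration)
            double R = 8.314;         // gas constant [J/(mol*K)]
            double T = 298.15;        // storage temperature [K], assumed 25 C
            double p = 20.0e5;        // storage pressure [Pa] (20 bar)
            double V = n * R * T / p; // ideal gas law: V = nRT/p
            System.out.printf("Required tank volume: %.1f m3%n", V); // ~11.0 m3
        }
    }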


4. Conclusions

In this work we have presented a model, and its simulation, for supplying a load demand by means of renewable energy sources (wind and solar), with batteries and hydrogen as short- and long-term energy reserves, respectively. The hydrogen is produced by an electrolyzer, compressed, stored, and then converted back to energy by a fuel cell when necessary. The model consists of a series of units that operate in steady state and were implemented in a dynamic global simulation. Using a control system, energy was managed and distributed among several destinations: 38% directly to the load, 31% to the battery, and 31% to the electrolyzer. Due to meteorological conditions, hydrogen is accumulated in the middle months of the year, when the short-term battery storage is enough to meet demand. Finally, using the evolution of the amount of accumulated hydrogen, we estimated a storage tank size of 11 m3. In this way the simulation of hybrid systems becomes a powerful tool for calculating and dimensioning the equipment involved, giving a clear idea of the requirements for later construction.








DYNA http://dyna.medellin.unal.edu.co/

Conceptual framework language – CFL – Lenguaje de marcos conceptuales – LMC – Sandro J. Bolaños-Castro a, Rubén González-Crespo b, Victor H. Medina-García c & Julio Barón-Velandia d a

Facultad de Informática, Universidad Distrital Francisco José de Caldas, Colombia. sbolanos@udistrital.edu.co b Escuela de Ingeniería, Universidad Internacional de La Rioja, España. ruben.gonzalez@unir.net c Facultad de Informática, Universidad Distrital Francisco José de Caldas, Colombia. vmedina@udistrital.edu.co d Facultad de Informática, Universidad Distrital Francisco José de Caldas, Colombia. jbaron@udistrital.edu.co Received: February 8th, 2013. Received in revised form: February 27th, 2014. Accepted: April 22nd, 2014

Abstract This paper presents the Conceptual Frameworks Language –CFL–, which aims to bridge the gap between programming languages and design languages by using the mechanism of schematization. The approach replaces the complexity of the syntax of programming languages, and the complexity of diagramming, with the ease of assembling and nesting frames, or conceptual blocks, in the manner of Lego. We present the possibilities offered by CFL as a language closer to problem solving, using a computational and scientific vocabulary that is transparent to the user; we outline comparisons and integrations with languages such as Java and UML, propose metrics, and implement the platform in the Java language. Keywords: Language, scheme, metrics, contextualization, abstraction, vocabulary, syntax, semantics

Resumen Este artículo presenta el Lenguaje de Marcos Conceptuales –LMC–, su objetivo es cerrar la brecha entre los lenguajes de programación y los lenguajes de diseño, empleando el mecanismo de la esquematización; este propone cambiar la complejidad de la sintaxis de los lenguajes de programación y la complejidad de la diagramación por la facilidad de ensamble y anidamiento de marcos o bloques conceptuales a manera de Lego. Se presentan las posibilidades que brinda LMC como un lenguaje más próximo a la resolución de problemas empleando un vocabulario computacional y científico, que se hace transparente al usuario; se plantean comparaciones e integraciones con lenguajes como Java y UML, se proponen métricas y se hace una implementación de la plataforma en lenguaje Java. Palabras clave: Lenguaje, esquema, métricas, contextualización, abstracción, vocabulario, sintaxis y semántica.

1. Introduction

It is necessary to propose computer languages that approach human languages, making some unintuitive computational concepts transparent. Despite the diversity of programming and modeling languages, they pay little attention to issues such as:

- Providing a simple and intuitive representation.
- Speaking a less computational language.
- Facilitating direct model execution tracing.

The language should be the vehicle of abstraction; it should be simple, robust and complete. This is achieved by hiding its complexity levels using layers. The first layer covers a formal level, which supports and extends mechanisms already developed and recognized such as encapsulation, security, generality and reuse, among others [1]. The second layer covers a particular level focused on ease of use, bringing the model closer to reality and taking into account the human thinking model.

To address these concerns, the paper presents the CFL language, its grammar, a comparison between schema and diagram, and principles and metrics, closing with the implementation of the platform, case studies, and conclusions.

2. Conceptual Framework Language –CFL–

CFL is a modeling language focused on the abstraction and contextualization of knowledge; see Fig. 1. When communicating an idea, language can be used in both ways, spoken or written. Treasures such as the Rosetta Stone [2] unveiled a past steeped in pictograms; rock-art paintings tell about the lives of our ancestors in images that constitute a simple but expressive language.

Figure 1: Contextualization and Abstraction

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 124-131. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



The language can range from an informal expression, captured in an image, to the rigorous expression represented in a word or syntactic structure. The CFL proposal is to combine the power of formal language with the power of symbolic schematization; for that, it proposes two concepts, abstraction and contextualization. The abstraction mechanism extracts the essential characteristics using cognitive models such as paradigms, values, principles and behaviors; abstraction uses introspection as its strategy, which searches for knowledge within the individual. Contextualization, on the other hand, uses an observation scheme to find answers in external phenomena. Its strategy is immersion which, in contrast to introspection, seeks to reason about phenomena through the use of external structures. Contextualization and abstraction are the basis for CFL construction, in which individual and collective knowledge are contrasted. Paradigms such as the "structured" and "object-oriented" ones, including "declarative models", are based on abstraction [3], losing the potential of immersion, which is useful in solution modeling.

3. Grammar in –CFL–

Next, the concepts of grammar, derivation (production) and language are defined, all primordial for the formalization of CFL. A phrase-structure grammar G = (V, T, S, P) consists of a vocabulary V, a subset T of V formed by the terminal elements, an initial symbol S of V − T, and a set P of productions. The set V − T is denoted by N; the elements of N are called nonterminal elements [4]. A vocabulary V (or alphabet) is a finite, non-empty set whose elements are called symbols; a word over V is a finite string of elements of V. The empty word or empty string, denoted by λ, is the string without symbols. The set of all words over V is denoted by V*. A language over V is a subset of V* [4]. Another important definition is the derivation: let G = (V, T, S, P) be a phrase-structure grammar, and let w0 = lz0r (the concatenation of l, z0 and r) and w1 = lz1r be strings over V. If z0 → z1 is a production of G, we say that w1 is directly derivable from w0, and we write w0 ⇒ w1. If w0, w1, …, wn are strings over V such that w0 ⇒ w1, w1 ⇒ w2, …, wn−1 ⇒ wn, we say that wn is derivable from w0, denoted w0 ⇒* wn; the sequence of steps used to obtain wn from w0 is called a derivation [4]. Finally, the language generated by G (or the language of G), denoted by L(G), is defined as the set of all terminal strings derivable from the initial symbol S, eq. (1) [4]:

L(G) = { w ∈ T* | S ⇒* w }   (1)

3.1. Productions of CFL in BNF

The Backus-Naur form was initially created to define the syntactic structure of the ALGOL 60 programming language [5]. BNF defines the syntactic structure of the language. CFL has the following syntactic structures:

<conceptual framework> ::= <frame><concept>
<frame>       ::= <closed border> | <open border> | <semi closed border>
<concept>     ::= <criteria><separator><body> | '<archetype>'<criteria><separator><body>
<criteria>    ::= <definition> | <inquiry> | <proof> | <elaboration>
<definition>  ::= <variable> | <statement> | <free text>
<inquiry>     ::= <logic utilization><?>
<proof>       ::= <verification action><!>
<elaboration> ::= <user extension>
<archetype>   ::= λ | <property><name> | <property><name><:><category> | <property><name><:><category>(<interaction>)
<separator>   ::= <sequential> | <parallel>
<body>        ::= λ | <conceptual framework> | <body><conceptual framework>
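As an illustration only, the productions above can be mirrored almost one-to-one in a host language. The following is a minimal sketch in Java (the platform's implementation language); the type names are illustrative and are not taken from the Coloso source.

    import java.util.ArrayList;
    import java.util.List;

    /** Minimal object model mirroring the BNF above; names are illustrative only. */
    enum Frame { CLOSED, OPEN, SEMI_CLOSED }                  // <frame>
    enum Separator { SEQUENTIAL, PARALLEL }                   // <separator>

    interface Criteria {}                                     // <criteria>
    record Definition(String text) implements Criteria {}     // <definition>
    record Inquiry(String condition) implements Criteria {}   // <inquiry> -> "<logic utilization><?>"
    record Proof(String check) implements Criteria {}         // <proof>   -> "<verification action><!>"
    record Elaboration(String ext) implements Criteria {}     // <elaboration>

    /** <conceptual framework> ::= <frame><concept>; the body nests further frameworks. */
    class ConceptualFramework {
        Frame frame;
        String archetype;            // optional: null plays the role of the empty word
        Criteria criteria;
        Separator separator;
        List<ConceptualFramework> body = new ArrayList<>();   // <body> ::= empty | nested frameworks
    }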

4. CFL as a Language

Figure 2: CFL as a Language

CFL (Fig. 2) is formed by a vocabulary (the conceptual element), a syntax (the conceptual block) and a semantics (the conceptual method).

4.1. Vocabulary of CFL

The vocabulary of CFL is constituted by both frame and concept.


Frame: the frontier that separates a specific concept from the universe of discourse, properly contextualized according to the proposed domain. See Fig. 3.





Figure 3: Frame

Concept: sets the domain of knowledge to be extracted from the universe of discourse. A concept consists of an archetype, a criterion, a separator and a body. See Fig. 4.

Figure 4: Concept

With the definitions of frame and concept it is entirely possible to design the scheme; see Fig. 5.

Figure 7: CFL Syntax

Figure 5: Concept Frame

Archetype: the archetype sets the properties, identification, category and interaction of the concept with the universe of discourse. See Fig. 6.

Figure 6: Archetype

The properties allow security features, storage and ways of changing the concept to be set. A concept can be anonymous or named; a name allows a peculiarity or "instance" to be established. The category defines whether the concept is in the domain of the object language or the domain of the metalanguage; with the interaction, the archetype allows relationships, connections and links to be established explicitly. Criteria: set the concept, establishing its definition, interaction, and possible ways of inquiry, testing and processing. Separator: defines the boundary between criterion and body; the body itself also separates one conceptual framework from another. The separator marks the criterion and also establishes how the body should be interpreted. There are two interpretations: mode and type. Mode determines whether the interpretation is parallel or sequential; type determines whether the interpretation is direct or recursive. Body: the set of conceptual frameworks which, at the same time, is contained in a conceptual framework.

4.2. Syntax of CFL

The syntax of CFL (Fig. 7) is based on the conceptual block, which consists of definitions, inquiries, interactions, proofs and elaborations. Definition: as in any programming language, there is a concept block in CFL in which three kinds of definitions can be made: variables, statements and annotations. A variable definition is used to form containers of information; a statement definition is used to propose invocations, returns and overall sentences in which the variables are used; finally, an annotation definition is used for documentation. Interaction: with interactions CFL allows the user to manage input and output, through which it is possible to communicate the desired conditions for a program's execution. Inquiry: CFL presents a model based on the formulation of questions about the state that variables can take; this kind of inquiry can be direct or recursive. An inquiry is direct when it leads to taking one path or another once, while a recursive inquiry takes one path or another a number of times. Proof: proof allows scenarios to be defined where results may differ under the same conditions, due to uncontrolled changes that variables can undergo in a given time; this concept can be understood as experimentation, contrast, demonstration, argumentation, etc. Elaboration: elaboration allows extensions to be defined that extend the language with premises, operations and conclusions.

4.3. Semantics of CFL

The consistent configuration of conceptual frameworks constitutes the semantics of CFL; see Fig. 8. With the structure of conceptual frameworks it is possible to create semantics that represent the solution of a problem. A conceptual method forms a module [6] which, depending on the information exchange, can be a "process" if it receives and produces information to the context, a "procedure" if it receives and produces information for the




Figure 12: Inquiry Color (Blue)

Figure 8: CFL Semantics

For Elaboration, red was proposed, which symbolizes prohibition, danger and dynamism [8]. See Fig. 13.

context, or a "routine" if it does not receive or produce information for the context.

5. Schematic vs Diagram

CFL produces schemes; unlike diagrams, schemes leave relationships implicit through the order and nesting of frames. Composition relations are simulated by horizontal sequencing; inheritance and realization relations are assembled by vertical sequencing between conceptual methods. If a method has a higher category it is located above; if it is a subclass it is located below. In the horizontal direction, the client is located on the left and the provider on the right. In the vertical direction a white box is used, while in the horizontal direction a black box is used [7]. See Fig. 9.

Figure 13: Elaboration color (Red)

For proof yellow was proposed, which symbolizes enlightenment, warning and creativity [8]. See Fig. 14.

Figure 14: Proof color (Yellow)

Figure 9: Implicit relations for a block

6. CFL vs Other Languages

CFL allows expression by classes similar to those of object-oriented models; such a class in UML [9] and Java can be represented as in Fig. 15.
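The class compared in Fig. 15 is not recoverable from this extraction; purely as a stand-in, the following is a class of the kind such figures typically show, with hypothetical names.

    /** Stand-in for the kind of class compared in Fig. 15; names are hypothetical. */
    public class Account {
        private double balance;                  // UML attribute: -balance : double

        public void deposit(double amount) {     // UML operation: +deposit(amount : double)
            balance += amount;
        }
        public double getBalance() { return balance; }
    }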

5.1. Color codes in CFL

In schemes, a color code is used, which enhances their meaning. Green was assigned to Definition; it symbolizes confidence, tranquility and development [8]. See Fig. 10.

Figure 15: Class in UML and Java

Figure 10: Definition color (Green)

For Interaction orange was assigned, which symbolizes the striking, socialization and transformation [8]. See Fig. 11. For Inquiry blue was assigned, it symbolizes science, idealism and functionalism [8]. See Fig. 12.

CFL has the schematic representation of Fig. 16, according to the productions of CFL:

Figure 11: Interaction Color (Orange)

Figure 16: Category in CFL



The schemes' aim is to hide the formal layer, which is useful for language development but transparent to the user.

Figure 18: Balance of uncertainty


7. CFL Principles

Figure 17: CFL Principles

CFL is simple, robust and complete (Fig. 17); these three principles underlie the language. Simplicity [10] reduces the work, thanks to a set of abstractions at a higher level than those used in a programming language: only blocks are used, configured according to the desired model. Also, the organization of the blocks is automatic, reducing the learning curve; there is a marked difference from programming or modeling languages. The CFL scheme synthesizes text and diagram; in both cases, programming language and design language, the complexity is reduced [11]. CFL is robust in dealing with concepts, allowing algorithms to be verified directly and transparently by tracing their errors, including the validation of inputs and outputs. It is also very complete, allowing documentation to be included using a native mechanism and providing direct metrics. CFL is formal due to the sound formation of its structure, which contains a vocabulary and well-defined production rules.

8. CFL Metrics

Metrics in software allow development to be controlled. In CFL two metrics are proposed: uncertainty balance and algorithmic density [12].

8.1. Uncertainty balance

Software development is a creative exercise which begins in a problem domain with great uncertainty and must reach a solution domain with high certainty. The exercise of developing software is to eliminate uncertainty gradually, approaching the certainty in which the solution is given. This metric counts, and shows visually, inquiry frames versus definition frames, in order to assess the degree of certainty that will be achieved in developing a solution, and it constitutes an important criterion for the design of a conceptual method. The metric produces three scenarios; see Fig. 18. In the first type of balance there are more questions than answers, producing high uncertainty, which can cause difficulty in development. The type of balance where questions match responses is the "balanced" type; it is adequate for addressing uncertainty, as each question has its solution. In the third type of balance responses predominate, usually because some responses are part of protocols. The trend in software is toward the "balanced" type, because it has a one-to-one correspondence between the problem and the solution.
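As a sketch of how such a metric could be computed over a conceptual method, the following counts inquiry and definition frames and classifies the balance. This is an assumed illustration, not the Coloso implementation, and the names are hypothetical.

    import java.util.List;

    /** Illustrative computation of the uncertainty balance; not the Coloso implementation. */
    public class UncertaintyBalance {
        enum Kind { DEFINITION, INQUIRY, PROOF, ELABORATION }

        static String classify(List<Kind> frames) {
            long inquiries   = frames.stream().filter(k -> k == Kind.INQUIRY).count();
            long definitions = frames.stream().filter(k -> k == Kind.DEFINITION).count();
            if (inquiries > definitions) return "high uncertainty";   // more questions than answers
            if (inquiries < definitions) return "response-dominated"; // responses part of protocols
            return "balanced";                                        // one-to-one problem/solution
        }

        public static void main(String[] args) {
            System.out.println(classify(List.of(Kind.INQUIRY, Kind.DEFINITION, Kind.DEFINITION)));
            // prints "response-dominated"
        }
    }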

8.2. Algorithmic density

This metric carries information similar to that of the uncertainty balance; the difference is the graphic representation, which depicts inquiries and definitions as areas that should tend to follow a Gaussian bell. See Fig. 19.

Figure 19: Algorithmic density

9. CFL Platform

The Coloso platform for CFL is an application developed in the Java language using the SWT framework [13]; see Fig. 20. The framework is managed with dialogues that summarize the semantic possibilities of CFL; this first layer of interaction with the user sets the first filter of CFL expressions. The second filter is set once the conceptual method is run in the background, which launches an application on the fly that is loaded and compiled in Java; the result is uploaded and presented in the first layer without the user having to know the details of the base language.




Figure 20: Coloso Software.

Figure 22: UML/CFL/Java integration

A conceptual method (Fig. 21) can manage inputs and outputs and pre- and postconditions. Every conceptual framework incorporates a toolbar in which the variables to be used within the framework are found; suggested values, operators, groupers and native language operations are also presented. Besides being executed, a framework can be debugged, validated, verified and measured, and the outputs can be inspected in dedicated views for each approach. Debugging allows a conceptual method to be tracked step by step, presenting the state of the variables defined in the framework. Verification and validation of conceptual methods have their own views: verification is displayed as a tree that maps errors, faults and defects, while in the validation perspective a tabulated Hoare triplet [14,15] highlights the corresponding evaluation of the method. CFL presents a metrics view in which all the framesets used are listed and which, together with the views of algorithmic density and uncertainty balance, lets one see the trend of the conceptual method, supported by recommendations based on cyclomatic complexity [16] and the magic number 7 ± 2 [17]. CFL also provides ease of integration with languages like UML; this bridge closes the gap between the class diagram and the programming language. Specifically, this facility allows the generation of the code of an operation belonging to a class, together with the algorithm solved in a conceptual method and associated with that operation. See Fig. 22.
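The validation view described above rests on Hoare triplets {P} S {Q} [14]. Purely as an illustration of the idea (not of Coloso's mechanism), pre- and postconditions can be checked with Java assertions:

    /** Plain illustration of a Hoare triplet {P} S {Q} checked with assertions;
     *  not Coloso's validation mechanism. Run with: java -ea Triplet */
    public class Triplet {
        /** {P: b != 0}  q = a / b  {Q: q * b + a % b == a} */
        static int divide(int a, int b) {
            assert b != 0 : "precondition violated";              // P
            int q = a / b;                                        // S
            assert q * b + a % b == a : "postcondition violated"; // Q
            return q;
        }
        public static void main(String[] args) {
            System.out.println(divide(7, 2)); // 3
        }
    }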

Figure 21: Conceptual method.

10. Case Studies

The following describes the tests performed to validate the principles of simplicity, robustness and completeness of CFL.

To perform the test of simplicity, we measured the time spent in developing a solution to a given problem. For that, the effort was compared, measured as the time needed to solve the set of algorithms that make up a course on programming and algorithms. The test included 20 algorithms and was applied to teachers who teach in the area. The course is normally conducted with tools like DFD [18], PSeInt [19] or programming languages like Java; the sample used included problems such as traversals, exchanges, queries and sorts. Five problems were formulated for each subject, and the teachers were divided into two groups of two. One group chose the tool they had been using before, DFD; this group was called the "alternate group". The second group chose CFL; this group was called the "CFL group". Table 1 tabulates the time used in minutes, together with the obtained average μ and standard deviation σ, according to eqs. (2) and (3):

μ = (a1 + a2 + … + an)/n = (1/n) Σ ai,  i = 1…n   (2)

σ = √[(1/n) Σ (xi − μ)²],  i = 1…n   (3)
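As a quick check of Table 1 under eqs. (2) and (3) (population form), the following sketch recomputes μ and σ for both groups:

    import java.util.Arrays;

    /** Recomputes the Table 1 statistics with eqs. (2) and (3) in population form. */
    public class SimplicityStats {
        static double mean(double[] a) {                       // eq. (2)
            return Arrays.stream(a).average().orElse(0.0);
        }
        static double stdDev(double[] a) {                     // eq. (3)
            double mu = mean(a);
            double ss = Arrays.stream(a).map(x -> (x - mu) * (x - mu)).sum();
            return Math.sqrt(ss / a.length);
        }
        public static void main(String[] args) {
            double[] alternate = {22, 30, 40, 80};             // minutes, Table 1
            double[] cfl = {20, 23, 27, 30};
            System.out.printf("Alternate: mu=%.2f sigma=%.2f%n", mean(alternate), stdDev(alternate)); // 43.00, 22.29
            System.out.printf("CFL:       mu=%.2f sigma=%.2f%n", mean(cfl), stdDev(cfl));             // 25.00, 3.81 (reported as 3.80)
        }
    }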

The average development time of the algorithms in the CFL group was much lower than in the alternate group. The dispersion of the CFL sample was low, while that of the alternate group was high; the fundamental reason showed up, in particular, in one sorting problem, the Towers of Hanoi [20].



Table 1. Simplicity test (time in minutes).
Topic       Alternate   CFL
Exchanges   22          20
Traversal   30          23
Queries     40          27
Sorts       80          30
μ           43          25
σ           22.29       3.80

Table 2. Robustness test (algorithms successfully protected against invalid entries).
Topic       Alternate   CFL
Exchanges   5           5
Traversal   3           5
Queries     2           5
Sorts       1           5
μ           2.75        5
σ           1.47        0

Table 3. Completeness test (documented algorithms).
Topic       Alternate   CFL
Exchanges   5           5
Traversal   5           5
Queries     5           5
Sorts       5           5
μ           5           5
σ           0           0

This problem was easily solved by the CFL group, but the alternate group was limited in DFD to solving it iteratively, which greatly increased the solution time. The next test sought to establish robustness; at this point the possibility of using a programming language, if desired, was allowed. The test consisted of trying to break the algorithms with invalid entries in a total time of one hour. Table 2 shows the number of algorithms tested successfully by category. CFL provided great ease of use thanks to its direct and transparent management of the assertion concept. For the alternate group, which took the option of changing to Java, it became a time-consuming task because the algorithms had to be migrated; as they could not use the assert concept, the task was reduced to doing comparisons. In this test, the difference between the CFL group and the alternate group was emphasized. The third test was conducted to test completeness conditions; for this, each group was requested to explain the model or code that solved each problem, using some metric about the algorithms. The time for this activity was one hour. Table 3 presents the number of documented algorithms. All algorithms in both groups were documented, but the documentation of the alternate group was made separately; this decision was taken because two different tools were used, so two separate documents had to be created, as otherwise it would have been necessary to make the documentation with a third tool. Also, this documentation was not enriched with metric criteria. On the other hand, the CFL group's documentation was integrated into the program, based on the metric proposed for the language. The above tests give a favorable starting point for CFL; however, for future work we propose to perform more extensive testing.

11. Conclusions

Alpha testing performed on CFL shows how the language facilitates the resolution of computational problems, due to the degree of abstraction that it provides, hiding the complexity associated with knowledge of the semantics and syntax of a computer language. The schemes used by CFL as a graphic representation eliminate the complexity introduced by the relationships of a conventional graphical model; instead, CFL uses assembly, following the mental model of sequence and nesting. CFL emphasizes problem solving by using a scientific vocabulary in which the predominant characteristics are inquiry (observation), the definition of variables, the definition of tests and experiments, and elaborations; the computational features characteristic of programming languages remain in the background.

References

[1] Budd, T., Programación Orientada a Objetos. Addison-Wesley, 1994.
[2] Budge, E. A. Wallis, Sir., Rosetta Stone. London: British Museum, 1857-1934.
[3] Tucker, A. and Noonan, R., Programming Languages: Principles and Paradigms. McGraw-Hill, 2002.
[4] Rosen, K. H., Matemática Discreta. McGraw-Hill, 2004.
[5] Teufel, B. and Schmidt, S. T., Compiladores: conceptos fundamentales. Addison-Wesley, 1995.
[6] Meyer, B., Construcción de Software Orientado a Objetos. Prentice Hall, 1999.
[7] Gamma, E., Helm, R., Johnson, R. and Vlissides, J., Design Patterns. Addison-Wesley, 1995.
[8] Heller, E., Psicología del Color. Editorial Gustavo Gili, 2007.
[9] Booch, G., Rumbaugh, J. and Jacobson, I., The Unified Modeling Language User Guide. Addison-Wesley, 2005.
[10] Maeda, J., Las Leyes de la Simplicidad. Gedisa, 2005.
[11] Morin, E., Introducción al Pensamiento Complejo. Gedisa, 2005.
[12] Bolaños, S., Medina, V. and Aguilar, J., Principios para la Formalización de la Ingeniería de Software. Ingeniería, pp. 31-37, 2009.
[13] Hatton, R., SWT: A Developer's Notebook. O'Reilly & Associates, 2005.
[14] Hoare, C.A.R., An Axiomatic Basis for Computer Programming. Communications of the ACM, Vol. 12, No. 10, 1969.



[15] Hoare, C.A.R., Viewpoint. Retrospective: an axiomatic basis for computer programming. Communications of the ACM, Vol. 52, No. 10, 2009.
[16] McCabe, T., A Complexity Measure. IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, pp. 333-349, 1976.
[17] Miller, G. A., The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, Vol. 63, No. 2, pp. 81-97, 1956.
[18] Cárdenas, F., Daza, E. and Castillo, N., DFD, 1996. Online: http://freedfd.googlecode.com/files/FreeDFD-1.1.zip, 2012.
[19] PSeInt. Online: http://pseint.sourceforge.net/, 2012.
[20] Pickover, C., The Math Book. Sterling Publishing Co. Inc., 2011.

S. J. Bolaños Castro, PhD from the Pontifical University of Salamanca in Madrid, Spain, with a PhD prize awarded by the same university; systems engineer and Magister in Teleinformatics from the Universidad Distrital Francisco José de Caldas in Bogotá, Colombia. He is a member of the international research group GICOGE; his research interest is software engineering, which he teaches in several programs at the university. He is also director of undergraduate and graduate curriculum projects and currently heads the graduate program in Software Engineering and IT Projects at the Universidad Distrital Francisco José de Caldas.

R. González Crespo is the deputy director of the School of Engineering at the Universidad Internacional de La Rioja and professor of Project Management and Engineering of Web Sites. He is an honorary and guest professor at various institutions, such as the University of Oviedo and the Universidad Distrital Francisco José de Caldas. Previously, he worked as manager and director of the graduate chair in the School of Engineering and Architecture at the Pontifical University of Salamanca for over 10 years. He has participated in numerous R+D+I projects such as SEACW, GMOSS and eInkPlusPlus, among others, and advises a number of public and private, national and international institutions. His research and scientific production focus on accessibility, web engineering, mobile technologies and project management; he has published more than 80 works in indexed research journals, books, book chapters and conferences.

V. H. Medina García, PhD in Computer Engineering from the Pontifical University of Salamanca, Master in Computer Science from the Polytechnic University of Madrid, Specialist in Marketing from the University of Rosario, and Systems Engineer from the Universidad Distrital Francisco José de Caldas; director of the PhD program in Engineering at the Universidad Distrital Francisco José de Caldas in Bogotá, Colombia. A writer, speaker and teacher recognized internationally, he has published several books and led various academic and administrative units. He is the principal investigator of the research group GICOGE, category A1 in COLCIENCIAS.

J. Barón Velandia, MSc in Teleinformatics, systems engineer, and software engineering teacher at the Universidad Distrital Francisco José de Caldas in Bogotá, Colombia; PhD student at the Pontifical University of Salamanca in Madrid, Spain. Founder and director of the INTECSE research group; former REDIS director and coordinator of the Systems Engineering undergraduate program at the same university.



DYNA http://dyna.medellin.unal.edu.co/

Flower wastes as a low-cost adsorbent for the removal of acid blue 9 Residuos de flores como adsorbentes de bajo costo para la remoción de azul ácido 9 Ana María Echavarria-Alvarez a & Angelina Hormaza-Anaguano b a

Facultad de Ciencias, Universidad Nacional de Colombia, Colombia. amechavarria@gmail.edu.co b Facultad de Ciencias, Universidad Nacional de Colombia, Colombia. ahormaza@unal.edu.co

Received: February 22nd, 2013. Received in revised form: December 2nd, 2013. Accepted: April 10th, 2014.

Abstract This paper describes the use of flower wastes (carnation, rose and daisy) as a potential, alternative and low-cost adsorbent for the removal of Acid Blue 9 (AB9). The best conditions to achieve an efficient adsorption were evaluated in a batch process. With an acidic pH of 2.0, a removal exceeding 90% was obtained using concentrations of AB9 of 15.0 mgL-1 and a dosage of adsorbent of 4.0 gL-1. The equilibrium of the process was modeled using the Langmuir and Freundlich isotherms, obtaining a better fit with the latter one. Kinetic studies indicated a better fit of the process to a pseudo-second order model and negligible effect of temperature. In addition, the bromatological characterization of the adsorbent is shown. Keywords: adsorption; isotherms; kinetics, acid blue 9; flower wastes. Resumen El presente artículo describe el uso de residuos de flores (claveles, rosas y margaritas) como un adsorbente potencial, alternativo y de bajo costo para la remoción del colorante azul ácido 9 (AB9). Las mejores condiciones para lograr una adsorción eficiente fueron evaluadas en un proceso discontinuo. Un pH ácido de 2.0 permitió obtener una remoción superior al 90%, usando concentraciones de AB9 de 15.0 mgL-1 y una dosificación de adsorbente de 4.0 gL-1. El equilibrio del proceso fue modelado usando las isotermas de Langmuir y Freundlich, obteniendo un mejor ajuste con la última. Estudios cinéticos señalaron un proceso de pseudo-segundo orden y un efecto de la temperatura poco significativo. Adicionalmente, se presenta la caracterización bromatológica del adsorbente. Palabras clave: adsorción; isotermas; cinética; azul ácido 9; residuos de flores.

1. Introduction

Dyes are widely used in the textile, paper, plastics, rubber, leather, cosmetic, pharmaceutical and food industries. The presence of these dyes in water, even at very low concentrations, is highly visible and undesirable [1]. Water pollution due to the discharge of colored wastewater negatively affects aquatic life and, consequently, the overall ecosystem. Dyes reduce the light penetration required for photosynthetic activity and therefore the self-purification capacity of the water. More than 700,000 tonnes and around 1,000 different dyes and pigments are manufactured annually worldwide, and 10% of them are discharged in wastewater [2,3]. Carcinogenic agents such as benzidine and other aromatic amine compounds, which can be transformed into more toxic derivatives as a result of microbial activity, are also used in dye manufacture [4,5]. Brilliant blue (acid blue 9) is a synthetic acid dye of anionic nature belonging to the triphenylmethanes, commonly used for flower dyeing, in textiles and in the food industry. It is a

highly stable compound and thus causes the coloration of effluents for extended periods of time. For the treatment of colored effluents, conventional chemical, physical and biological technologies have been tested; however, the physicochemical ones are often expensive and generate toxic sludge, while biological treatments become difficult on a large scale [2]. The adsorption process has proven to be one of the best water-treatment technologies around the world, and activated carbon is undoubtedly considered a universal adsorbent for the removal of diverse types of pollutants from water [6]. To diversify the use of the abundantly available agricultural waste, it has been proposed to convert it into activated carbons [7,8]; nevertheless, this entails considerable production costs [2]. Therefore, it is necessary to find alternative, efficient and inexpensive methods for the treatment of colored wastewater. Biosorption has several advantages among the methods analyzed for the removal of dyes in solution, and a large number of low-cost adsorbents derived from agricultural and poultry waste, and even ashes, have been successfully tested [9–20]. Structurally,

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 132-138 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



agricultural materials consist of lignin, cellulose, hemicellulose and some proteins, which makes them effective biosorbents [21]. In this study, the use of three locally available, renewable and previously untested adsorbents (carnation, rose and daisy stalks) for the removal of acid blue 9 was evaluated. The aim of this investigation was to use low-cost adsorbents to develop a low-priced dye-removal technology. The effects of various parameters on the process, such as pH, adsorbent dosage and initial dye concentration, were analyzed. Also, in order to obtain information on the physicochemical characteristics of the adsorption, the equilibrium of the process was modeled using the Freundlich and Langmuir isotherms, and the kinetics through first- and second-order equations.

2. Materials and Methods

Figure 1. Chemical structure of AB9. Source:Merck

(Perkin-Elmer, Lambda 35). The absorbance was recorded at a wavelength of λ = 629 nm, which corresponds to the maximum absorption peak of AB9, and the concentration was determined from the calibration curve.

2.6. Biosorption studies

2.1. Collection and preparation of the flower wastes The stems of flowers (carnation, rose and daisy) were obtained from a local market. They were washed and dried at 80 °C for 48 hours. Subsequently, they were milled to produce particles of the desired mesh size (500-700 µm) (Physis).

In order to determine the best conditions for dye removal, adsorption tests were carried out in batch mode in a Heidolph Unimax 2010 shaker at 25±2 °C and 120 rpm. After stirring, the samples were centrifuged in a Fisher Scientific centrifuge and the supernatant was measured to determine the remaining dye concentration.

2.2. Flowers waste characterization


2.2.1. Bromatological analysis

2.6.1. Effect of pH on dye adsorption

The pH of the dye solution was adjusted over the range 2.0-10.0. This modification was carried out using 1.0 M HCl / NaOH (Merck). For each adsorbent (carnation, rose and daisy), 30 mg was added to 10 mL of solution containing 15 mg of dye per liter.

Determination of the main components of the studied adsorbent was carried out according to the Van Soest method [22], including the evaluation of acid detergent fiber (ADF), neutral detergent fiber (NDF) and lignin. With these data the content of cellulose and hemicellulose was estimated. Starch, ash and nitrogen contents were determined using the same method. The tests were conducted at the Laboratory of Chemical and Bromatological Analysis of the Universidad Nacional de Colombia - Sede Medellín.

2.3. Chemicals

The textile dye Acid Blue 9 (AB9, CI 42090; industrial grade; molecular weight 792.84; molecular formula C37H34N2O9S3Na2) was obtained from Merck, Colombia, and used without further purification; its chemical structure is shown in Fig. 1.

2.4. Stock solution

A stock solution (500 mgL-1) was prepared by dissolving a determined amount of AB9 in distilled water; this solution was diluted to obtain the desired concentrations.

2.5. Dye concentration analysis

The dye concentration was determined spectrophotometrically using a UV-Vis spectrophotometer

2.6.2. Effect of flower waste dose on removal efficiency

Adsorbent dosage was varied in the range of 30-800 mg of sorbent per mg of dye. A 10 mL solution with 15 mg of dye per liter at pH 2.0 was used. The adsorbent used in this experiment and in the following ones was an equimolar mixture (carnation, rose and daisy), to obtain results that could be extrapolated to real situations.

2.6.3. Effect of initial dye concentration on dye sorption

The effect of the initial dye concentration was analyzed in the range of 0-15 mg/L. The adsorbent dose (40 mg) was added to 10 mL of solution with a variable concentration of dye at pH 2.0.

2.7. Equilibrium studies

The data obtained from the equilibrium tests were analyzed using the Langmuir and Freundlich isotherms, in order to obtain information on the surface of the adsorbent. There are several theoretical models; however, the Langmuir and Freundlich isotherms are the most common. Quantification of the amount of dye attached to the biomass was performed using eq. (1):




qeq = (C0 − Ceq)·V/W   (1)

Where:
qeq: amount of dye adhered to the biomass [mg/g]
C0, Ceq: initial and equilibrium concentrations of the contaminant [mgL-1]
V: volume of dye solution used [L]
W: mass of added sorbent [g]

2.7.1. Langmuir isotherm

The empirical Langmuir model sets up the existence of a uniform layer in which there is a finite number of equivalent active sites distributed homogeneously. This model states, through eq. (2), that:

qeq = qmax·b·Ceq / (1 + b·Ceq)   (2)

With:
qmax: Langmuir constant denoting the maximum adsorption capacity of the biomass [mg/g]
b: Langmuir constant that indicates the affinity for the active sites

The specific constants can be obtained from the intercept and slope of the linearized eq. (2):

1/qe = 1/Qmax + (1/(b·Qmax))·(1/Ceq)   (3)

2.7.2. Freundlich isotherm

In this model a mixed monolayer is considered, in which the active sites are not independent or equivalent. The specific adsorption capacity is given by eq. (4):

qeq = Kf·Ceq^(1/n)   (4)

With:
Kf: Freundlich constant related to the adsorption capacity of the biomass
n: Freundlich constant that indicates the intensity of adsorption

From the slope and intercept of the linearized equation, eq. (5), the value of the constants can be determined:

ln qeq = ln Kf + (1/n)·ln Ceq   (5)

2.8. Kinetic studies

Kinetic studies were performed in order to determine whether the controlling steps are mass-transfer or chemical-reaction processes. Pseudo-first order and pseudo-second order equations were used, ignoring the movement of the dye ion from the liquid bulk to the liquid film or boundary layer surrounding the adsorbent. The first-order kinetics is based on the adsorbent capacity and is generally expressed by eq. (6):

dq/dt = k1·(qeq − q)   (6)

Where qeq and q are the amounts of dye adsorbed on the biosorbent at equilibrium and at time t, respectively (mg/g), and k1 is the rate constant of first-order biosorption (1/min). After integration and applying the boundary conditions t = 0 to t = t and q = 0 to q = q, the integrated form of eq. (6) becomes:

log(qeq − q) = log(qeq) − k1·t/2.303   (7)

A fit of the experimental data through the straight line of log(qeq − q) vs. t would imply the applicability of this kinetic model. In this case, the parameter qeq must be known. It might happen that qeq is unknown and the adsorption process becomes extremely slow, the amount adsorbed being significantly smaller than the equilibrium amount; for this reason, it is necessary to extrapolate the experimental data to t = ∞ or use trial and error. Moreover, the pseudo-first order model usually fits only over the first period of sorption (20-30 min) and does not describe the entire process well [23]. On the other hand, the pseudo-second order kinetics describes all stages of adsorption: external film diffusion, adsorption and internal particle diffusion. The model is based on the adsorbent capacity, assumes that the adsorption process involves a chemisorption mechanism, and considers adsorption to be the rate-controlling step. It is expressed as follows:

dq/dt = k2·(qeq − q)²   (8)

Where k2 is the rate constant of second-order biosorption (g/mg/min). For the boundary conditions t = 0 to t = t and q = 0 to q = q, the integrated form of eq. (8) becomes:

t/qt = 1/(k2·qeq²) + t/qeq   (9)

If the second-order kinetics is applicable, the plot of t/q against t should give a linear relationship; there is no need to know any parameter beforehand. The rate constant can also be expressed as a function of temperature by the following Arrhenius-type relation:

k2 = A0·exp(−EA/(R·T))   (10)

Where A0 is the frequency factor of sorption and EA is the activation energy of sorption. The magnitude of the activation energy may give an idea about the type of sorption.
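Eq. (9) allows qe and k2 to be read off a least-squares line through the points (t, t/q): the slope is 1/qe and the intercept is 1/(k2·qe²). The following is a minimal sketch of that fit with made-up data, not the authors' measurements:

    /** Sketch: estimating qe and k2 from the linearized pseudo-second order form, eq. (9):
     *  t/q = 1/(k2*qe^2) + t/qe. Least-squares line through (t, t/q); data are illustrative only. */
    public class PseudoSecondOrderFit {
        public static void main(String[] args) {
            double[] t = {2, 8, 12, 24};            // contact time (illustrative)
            double[] q = {1.10, 1.25, 1.32, 1.42};  // adsorbed dye [mg/g] (illustrative)
            int n = t.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                double y = t[i] / q[i];             // ordinate of the linearized plot
                sx += t[i]; sy += y; sxx += t[i] * t[i]; sxy += t[i] * y;
            }
            double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double intercept = (sy - slope * sx) / n;
            double qe = 1.0 / slope;                 // slope = 1/qe
            double k2 = slope * slope / intercept;   // intercept = 1/(k2*qe^2)
            System.out.printf("qe = %.3f mg/g, k2 = %.3f%n", qe, k2);
        }
    }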





3. Results and Discussion

3.1. Bromatological analysis

It is important to highlight the lack of reports on the bromatological composition of these three types of flowers and their mixture; this reflects how little flower stalks have been explored as an adsorbent material. Table 1 shows the chemical analysis for their major components. The percentages of the cellulose, hemicellulose and lignin polymers for the foliage mixture satisfy the conditions required of a potential adsorbent, as has been reported for other agricultural residues with considerable adsorption capacity [24,25]. The ash content is low when compared to the average value reported for other adsorbents [24–26].

Table 1. Chemical analysis for the flower stalks composition.
Composition      Percentage
ADF              67.80
NDF              77.90
Lignin           17.60
Starch           2.20
Ashes            1.54
Nitrogen         0.90
*Hemicellulose   10.1
*Cellulose       50.2
*calculated. Source: Authors

3.2. Effect of pH value on the adsorption process

Experimental results show that the adsorption of AB9 is highly dependent on pH. Maximum removal was observed at pH 2.0 with the three kinds of adsorbents (carnation, rose and daisy); by contrast, adsorption is minimal at higher pH values (3.0-10.0) (Fig. 2). The pH range used did not include extreme values, due to degradation of the dye at those points. The dye adsorption capacity at pH 2.0 was 2.08, 3.04 and 3.09 mg/g sorbent for rose, carnation and daisy, respectively. A similar phenomenon was reported for anionic dyes, with maximum adsorption at pH 2.0 [15,23]. This result can be explained by electrostatic interactions between active sites and dissolved molecules.


Figure 3. Effect of adsorbent dosage on the adsorption of AB9. Initial dye concentration: 15 mgL-1; volume: 10 mL; pH= 2.0; contact time: 24 h; stirring speed: 120 rpm. Source: Authors

At low pH values, active sites on the surface of the adsorbent are protonated and thus positively charged, which enhances the electrostatic attraction of anionic molecules such as AB9. When the pH is increased, the adsorbent becomes negatively charged and repulsion of the dye molecule therefore rises.

3.3. Effect of sorbent dose on the adsorption process

The experiments showed, as expected, that the adsorption of AB9 increased when a greater amount of adsorbent was used (Fig. 3). At low doses of adsorbent (1.0 gL-1) the quantity of dye removed was 5.04 mgL-1; nevertheless, when using doses higher than 5.0 gL-1, the amount of removed dye increased to approximately 14 mgL-1. Akar and coworkers [28] found similar results for the removal of acid blue 40 on Thuja orientalis biomass. It is clear that, with higher doses of adsorbent for a given dye concentration, the available sites on the surface increase and hence more dye molecules are retained by the sorbent. In Fig. 3 the influence of the sorbent mixture dosage on dye removal is plotted. Preliminary studies were performed at different contact times (Table 2); satisfactory removal of the dye at 24 hours was achieved, suggesting this interval as adequate for further studies.

3.4. Effect of initial dye concentration


It was determined that the initial dye concentration strongly influences the removal percentage. As the initial concentration of AB9 increases, more molecules are adsorbed per unit mass of adsorbent up to a constant value; thereafter all active sorption sites are saturated and no further transfer from the liquid phase is possible. The increase


Figure 2. Effect of pH on the adsorption of AB9 by three flower wastes; adsorbent dosage: 30 mg; initial dye concentration: 15 mgL-1; volume: 10 mL; contact time: 24 h; stirring speed: 120 rpm. Source: Authors

Table 2. Removal percentages at different contact times.
Time (h)   25 °C   42 °C   54 °C
2          73.9    80.6    88.1
8          81.1    87.7    94.9
12         84.0    94.0    95.2
24         96.7    95.5    96.0
pH = 2.0, adsorbent dosage 7.0 gL-1, stirring speed 120 rpm. Source: Authors





of the initial dye concentration from 1.0 to 18.0 mgL-1 improved the uptake capacity from 0.1 to 3.0 mg of dye adsorbed per g of adsorbent at the three assessed temperatures, 25, 42 and 54 °C (Fig. 4). The temperatures were selected to be consistent with a process with minimal energy demand; it is important to remark that we seek to design an economical and efficient process with a view to its possible scaling. As can be observed, temperature had a minimal effect on removal efficiency. Aksu and Dönmez [23] found similar trends for the adsorption of acid blue 161 by Trametes versicolor and of remazol reactive blue by various types of yeast. The initial increase of the adsorption process with dye concentration is due to the greater availability of dye molecules in solution to be adsorbed; the larger quantity of dye increases the driving force and decreases the resistance to mass transfer from the liquid phase to the solid phase. Higher dye concentrations were not used in order to avoid exceeding the limit of the absorbance measurements.

3.5. Equilibrium studies

The data obtained from the equilibrium study of the adsorption of AB9 were modeled according to the Langmuir and Freundlich isotherms at different temperatures (25, 42 and 54 °C). Taking into account the correlation coefficient, the Freundlich isotherm showed a better fit, with an average R² = 0.992 in the range of concentrations and temperatures evaluated, even though both models fitted very well (Table 3).


The linearized plots of the Freundlich isotherms at different temperatures are presented in Fig. 5. The Freundlich constant Kf increased from 1.22 to 1.29 with the rise in temperature from 25 to 54 °C, indicating its slight influence on the process: the higher the temperature, the better the uptake capacity. This might be due to a rise in the frequency of interactions and collisions between the active surface and the dye molecules. The value of n, however, shows an irregular pattern, reaching its highest value at 42 °C (n = 1.204); the intensity of biosorption seems to increase up to 42 °C and then registers a lower value at 54 °C (n = 1.034). This behavior could be explained by repulsive interactions at the surface, which reduce the binding force between sorbent and solute. These results suggest, first, that increasing temperature has a negligible effect on the performance of the process and, second, that the surface of the adsorbent is heterogeneous in nature, with different binding sites and non-equivalent adsorptive energies including electric interferences, rather than a homogeneous and equivalent monolayer. Osma and coworkers [15] reported a similar fit for the adsorption of reactive black 5 on sunflower waste. According to the Langmuir model, it was not possible to determine a clear relation between temperature and adsorption affinity, nor with adsorption capacity.

3.6. Kinetic studies

The adsorption process showed a better fit to the pseudo-second order kinetic model (Table 4 and Fig. 6). Therefore, the reaction takes place under heterogeneous conditions, because it depends on the amount of solute adsorbed at a time t and at equilibrium; this indicates that the rate-limiting step might be chemical biosorption, involving the exchange of electrons between the dye ions and the adsorbent. The results show a negligible effect of temperature. Similar results were obtained for the adsorption of remazol reactive dye by yeasts and of AB40 by cone biomass of T. orientalis [23,28]. In physical adsorption the energy requirements are usually low (no more than 4.2 kJmol-1), since the forces


Figure 4. Effect of initial AB9 concentration on its adsorption (x-axis: initial dye concentration, mg/L; y-axis: qe, mg dye/g adsorbent). Adsorbent dosage: 4.0 gL-1; pH = 2.0; volume: 10 mL; contact time: 24 h; stirring speed: 120 rpm. Source: Authors

Table 3. Equilibrium parameters for the adsorption of AB9 on flower stalks.
              Freundlich                  Langmuir
Temperature   Kf      n       R²          Qmax    b       R²
25 °C         1.223   1.056   0.992       18.55   0.042   0.989
42 °C         1.274   1.204   0.995       6.24    0.153   0.997
54 °C         1.298   1.034   0.980       40.16   0.018   0.988
pH = 2.0, adsorbent dose 4.0 gL-1, contact time 24 h, stirring speed 120 rpm. Source: Authors


Figure 5. Linearized Freundlich isotherm (ln qe vs. ln Ceq) at different temperatures. Source: Authors




involved are weak; chemical adsorption is specific and involves much stronger forces than physical adsorption [27]. Therefore, the kinetic parameters found, such as the activation energy (EA = 8838.8 Jmol-1) and the Arrhenius constant (A0 = 0.003), indicate a chemical nature for the adsorption of AB9 by flower stems (Table 4).

4. Conclusions

The adsorption experiments demonstrated the great potential of flower stalks as a low-cost and easily available adsorbent for the removal of AB9. It was observed that the extent of adsorption increased on lowering the initial pH down to 2.0, the uptake capacity being maximal at this value. It was also noted that the specific adsorption capacity decreases as the adsorbent-to-dye ratio increases. The process followed the Freundlich isotherm model, showing a slight increase in uptake capacity with temperature, and was better described by pseudo-second order kinetics. Although the potential of flower wastes as an adsorbent for the removal of synthetic dyes was proved, further research is required to improve the adsorptive capacity, for example through chemical modification of the surface of this material. We also suggest the evaluation of isotherms other than Freundlich and Langmuir in order to support the observations of this study. The structural characterization of the material through SEM and IR analysis is being carried out in order to gain a deeper understanding of the process.

Acknowledgements

The authors thank Universidad Nacional de Colombia, Sede Medellín, Dirección de Investigación, DIME, for the financial support through Project code 20101007696, as well as Universidad Nacional de Colombia, Vicerrectoría de Investigación, for financing Project GTI code 40000001102.

References

[1] Robinson, T., Chandran, B. and Nigam, P., Removal of dyes from a synthetic textile dye effluent by biosorption on apple pomace and wheat straw. Water Research, vol. 36 (11), pp. 2824–2830, 2002.
[2] Robinson, T., McMullan, G., Marchant, R. and Nigam, P., Remediation of dyes in textile effluent: a critical review on current treatment technologies with a proposed alternative. Bioresource Technology, vol. 77 (3), pp. 247–255, 2001.
[3] Zollinger, H., Color Chemistry: Syntheses, Properties and Applications of Organic Dyes and Pigments, 3rd ed. Zurich: Wiley-VCH, 2003, 637 P.
[4] Anliker, R. and Clarke, E. A., Organic dyes and pigments, in Part A. Anthropogenic compounds, The Handbook of Environmental Chemistry, vol. 3, O. Hutzinger, Ed. Springer, 1980, pp. 181–215.
[5] Kariminiaae-Hamedaani, H. R., Sakurai, A. and Sakakibara, M., Decolorization of synthetic dyes by a new manganese peroxidase-producing white rot fungus. Dyes and Pigments, 72 (2), pp. 157–162, 2007.
[6] Bhatnagar, A. and Sillanpää, M., Utilization of agro-industrial and municipal waste materials as potential adsorbents for water treatment: a review. Chemical Engineering Journal, vol. 157 (2–3), pp. 277–296, 2010.
[7] Rangabhashiyam, S., Anu, N. and Selvaraju, N., Sequestration of dye from textile industry wastewater using agricultural waste products as adsorbents. Journal of Environmental Chemical Engineering, vol. 1 (4), pp. 629–641, 2013.
[8] Nethaji, S., Sivasamy, A., Thennarasu, G. and Saravanan, S., Adsorption of Malachite Green dye onto activated carbon derived from Borassus aethiopum flower biomass. Journal of Hazardous Materials, vol. 181 (1–3), pp. 271–280, 2010.
[9] Rafatullah, M., Sulaiman, O., Hashim, R. and Ahmad, A., Adsorption of methylene blue on low-cost adsorbents: a review. Journal of Hazardous Materials, vol. 177 (1–3), pp. 70–80, 2010.
[10] Gupta, V. K. and Suhas, Application of low-cost adsorbents for dye removal: a review. Journal of Environmental Management, vol. 90 (8), pp. 2313–2342, 2009.
[11] Crini, G., Non-conventional low-cost adsorbents for dye removal: a review. Bioresource Technology, vol. 97 (9), pp. 1061–1085, 2006.
[12] Srinivasan, A. and Viraraghavan, T., Decolorization of dye wastewaters by biosorbents: a review. Journal of Environmental Management, vol. 91 (10), pp. 1915–1929, 2010.

[13] Hormaza, A. and Suárez García, E., Estudio del proceso de biosorción de dos colorantes estructuralmente diferentes sobre residuos avícolas. Revista de la Sociedad Química del Perú, vol. 75 (3), pp. 329–338, 2009.
[14] Hirata, M., Kawasaki, N., Nakamura, T., Matsumoto, K., Kabayama, M., Tamura, T. and Tanada, S., Adsorption of dyes onto carbonaceous materials produced from coffee grounds by microwave treatment. Journal of Colloid and Interface Science, vol. 254 (1), pp. 17–22, 2002.

[15] Osma, J. F., Saravia, V., Toca-Herrera, J. L. and Couto, S. R., Sunflower seed shells: a novel and effective low-cost adsorbent for the removal of the diazo dye Reactive Black 5 from aqueous solution. Journal of Hazardous Materials, vol. 147 (3), pp. 900–905, 2007.



[16] Hameed, B. H. and Ahmad, A., Batch adsorption of methylene blue from aqueous solution by garlic peel, an agricultural waste biomass. Journal of Hazardous Materials, vol. 164 (2–3), pp. 870–875, 2009.

[17] Aksu, Z., Application of biosorption for the removal of organic pollutants: a review. Process Biochemistry, vol. 40 (3–4), pp. 997–1026, 2005.
[18] Garg, V., Basic dye (methylene blue) removal from simulated wastewater by adsorption using Indian Rosewood sawdust: a timber industry waste. Dyes and Pigments, vol. 63 (3), pp. 243–250, 2004.
[19] Forgacs, E., Cserháti, T. and Oros, G., Removal of synthetic dyes from wastewaters: a review. Environment International, vol. 30 (7), pp. 953–971, 2004.
[20] Fernández, B. and Ayala, J., Evaluation of fly ashes for the removal of Cu, Ni and Cd from acidic waters. Dyna, vol. 77 (161), pp. 141–147, 2010.
[21] Bansal, M., Garg, U., Singh, D. and Garg, V. K., Removal of Cr(VI) from aqueous solutions using pre-consumer processing agricultural waste: a case study of rice husk. Journal of Hazardous Materials, vol. 162 (1), pp. 312–320, 2009.
[22] Van Soest, P. J. and Wine, R. H., Use of detergents in the analysis of fibrous feeds IV. Determination of plant cell-wall constituents. Journal of the Association of Official Analytical Chemists, 50, pp. 50–55, 1967.
[23] Aksu, Z. and Dönmez, G., A comparative study on the biosorption characteristics of some yeasts for Remazol Blue reactive dye. Chemosphere, vol. 50 (8), pp. 1075–1083, 2003.
[24] Chuah, T. G., Jumasiah, A., Azni, I., Katayon, S. and Thomas Choong, S. Y., Rice husk as a potentially low-cost biosorbent for heavy metal and dye removal: an overview. Desalination, vol. 175, pp. 305–316, 2005.
[25] Doria, G. M., Hormaza, A. and Gallego Suarez, D., Cascarilla de arroz: material alternativo y de bajo costo para el tratamiento de aguas contaminadas con Cromo (VI). Revista Gestión y Ambiente, vol. 14 (1), pp. 73–84, 2011.
[26] Valverde, A., Sarria, B. and Monteagudo, J., Análisis comparativo de las características fisicoquímicas de la cascarilla de arroz. Scientia et Technica, vol. 5 (37), pp. 255–260, 2007.
[27] Aksu, Z., Tatlı, A. I. and Tunç, Ö., A comparative adsorption/biosorption study of Acid Blue 161: effect of temperature on equilibrium and kinetic parameters. Chemical Engineering Journal, vol. 142 (1), pp. 23–39, 2008.
[28] Akar, T., Ozcan, A. S., Tunali, S. and Ozcan, A., Biosorption of a textile dye (Acid Blue 40) by cone biomass of Thuja orientalis: estimation of equilibrium, thermodynamic and kinetic parameters. Bioresource Technology, vol. 99 (8), pp. 3057–3065, 2008.

A. M. Echavarria-Alvarez, received her BSc in Biological Engineering from the Universidad Nacional de Colombia, Sede Medellín, in 2010, and her MSc in Bioprocess Engineering Design from the Delft University of Technology, Netherlands, in 2012. She was a very active member of the Research Group "Synthesis, Reactivity and Transformation of Organic Compounds", SIRYTCOR, and her research experience in environmental topics allowed her to participate in several projects. Currently she works as a researcher in the Department of Environmental Processes at the Delft University of Technology.

A. Hormaza-Anaguano, received her degree in Chemistry from the Universidad de Nariño in 1994, her MSc in Chemical Science from the Universidad del Valle in 1997, and her PhD in Natural Sciences from the Johannes Gutenberg University Mainz, Germany, in 2003. She began working at the Universidad Nacional de Colombia, Sede Medellín, in 1997 as an assistant professor, and she is currently a full-time Associate Professor in the School of Chemistry, Facultad de Ciencias, Universidad Nacional de Colombia, Sede Medellín. Since 2003 she has been the Director of the Research Group "Synthesis, Reactivity and Transformation of Organic Compounds", SIRYTCOR, whose research lines focus on the treatment of industrial effluents, the exploration and evaluation of alternative adsorbents, adsorption and desorption, and biological processes by solid-state fermentation.



DYNA http://dyna.medellin.unal.edu.co/

Discrete Particle Swarm Optimization in the numerical solution of a system of linear Diophantine equations
Optimización por Enjambre de Partículas Discreto en la Solución Numérica de un Sistema de Ecuaciones Diofánticas Lineales

Iván Amaya a, Luis Gómez b & Rodrigo Correa c

a PhD(c), Universidad Industrial de Santander, Colombia. ivan.amaya2@correo.uis.edu.co
b BSc in Electronics Engineering, Physicist, Universidad Industrial de Santander, Colombia. luisgomezardila@gmail.com
c Professor, PhD, School of Electrical, Electronic and Telecommunication Engineering, Universidad Industrial de Santander, Colombia. crcorrea@uis.edu.co

Received: February 25th, 2013. Received in revised form: January 31st, 2014. Accepted: April 3rd, 2014.

Abstract
This article proposes the use of a discrete version of the well-known Particle Swarm Optimization metaheuristic, DPSO, for numerically solving a system of linear Diophantine equations. Likewise, the transformation of this type of problem (i.e. solving a system of equations) into an optimization one is shown. The current algorithm is able to find all the integer roots in a given search domain, at least for the examples shown, and simple problems are used to demonstrate its efficacy. Moreover, aspects related to the processing time, as well as to the effect of increasing the population and the search space, are discussed. It was found that the strategy shown herein represents a good approach when dealing with systems that have more unknowns than equations, or with systems of considerable size, for which a big search domain is required.
Keywords: Linear Diophantine equations; objective function; optimization; particle swarm.
Resumen
El presente artículo propone utilizar una versión discreta del bien conocido algoritmo metaheurístico de optimización por enjambre de partículas, DPSO, para solucionar numéricamente un sistema de ecuaciones Diofánticas lineales. Así mismo, se muestra la transformación de este tipo de problema (es decir, la solución de un sistema de ecuaciones), en uno de optimización. El presente algoritmo es capaz de encontrar todas las raíces enteras en un dominio de búsqueda dado, al menos para los ejemplos mostrados. Se utilizan algunos problemas sencillos para verificar su eficacia. Además, se muestran algunos aspectos relacionados con el tiempo de procesamiento, así como con el efecto de incrementar la población y el dominio de búsqueda. Se encontró que la estrategia mostrada aquí representa una propuesta adecuada para trabajar con sistemas que tienen más incógnitas que ecuaciones, o cuando se tiene un tamaño considerable, debido a que se requiere un gran dominio de búsqueda.
Palabras clave: Ecuaciones Diofánticas lineales; enjambre de partículas; función objetivo; optimización.

1. Introduction

With each passing day it is easier to see the boom that the modeling and description of systems through Diophantine equations has generated in science and engineering. Areas such as cryptography, integer factorization, number theory, algebraic geometry, control theory, data dependence on supercomputers, communications, and so on, are some examples [1]. Moreover, there is a strong mathematical foundation for this type of equation and its solutions (both at a fundamental and at an applied level). Approaches vary from the most systematic to the most recursive ones, but it is evident that there is neither a unified solution process nor a single alternative for finding solutions. Furthermore, some equations may have a single solution, while others may have an infinite number or, possibly, may not even have a solution in the integer or rational domains. This also applies to linear systems of this kind of equations (i.e. Diophantine ones) [2]. Matiyasevich, during the early 90s, proved that it is not possible to have an analytic algorithm that foresees whether a given Diophantine equation has an integer solution or not [3]. This problem may have been one of the engines that boosted the search for numerical alternatives. In order to solve a system of linear Diophantine equations, a variable elimination method (quite similar to Gauss's) is a good approach for small systems, but it becomes demanding for bigger ones. The specialized literature reports some methods, like those based on the




theory of modules over principal ideal domains, which are somewhat more systematic when looking for all the solutions of a given system but, likewise, become too complex when dealing with big systems of equations [4], [5]. Some authors have previously proposed the solution of a Diophantine equation through artificial intelligence algorithms [6], [7]. This article proposes to solve a system of linear Diophantine equations, in case a solution exists in the given search domain. Initially, some basic and necessary related concepts are laid out, and then the viability of the numeric strategy is shown through some examples.

2. Fundamentals

A linear Diophantine equation with n unknowns is defined by eq. (1), where a1, a2, ..., an are known rational or integer numbers, x1, x2, ..., xn are the unknowns, i.e., the numbers that should satisfy it [8], and b is a known integer. The integers t1, ..., tn are a solution of eq. (1) if, and only if, a1 t1 + ... + an tn = b.

a1 x1 + a2 x2 + ... + an xn = b    (1)

One of the basic results of number theory that can be applied to a linear Diophantine equation is the following theorem, which allows determining whether it has a solution or not (even though it cannot calculate it):

Theorem 1. Let a1, ..., an, b be integers, with not all ai equal to zero, and let d = g.c.d.{a1, ..., an} be the greatest common divisor of a1, ..., an. Then d | b if, and only if, there exist integers t1, ..., tn such that a1 t1 + ... + an tn = b.

Thus, the problem of determining whether a linear Diophantine equation has a solution or not is reduced to showing whether the greatest common divisor of the ai coefficients divides b or not. Consider the case of two unknowns, for example, with an equation as shown in eq. (2), where a, b, c are known integers, and whose solution only exists if the g.c.d. of a and b is a divisor of c.

a*x + b*y = c    (2)

According to the previously mentioned theorem, if this equation has integer solutions and (x0, y0) is a particular one, then all of its solutions are given by eq. (3), where β is an integer and d is the g.c.d. of a and b.

x = x0 + β*(b/d)
y = y0 − β*(a/d)    (3)

Therefore, if a linear Diophantine equation with two unknowns has a solution in the integers, then it has infinitely many solutions of this kind. Even so, the problem now transforms into finding a particular solution, which can be done using the following method. Let X be a non-empty subset of R^n and consider eq. (4), where f: X → R is a function.

f(x) = 0,  x ∈ X    (4)

The problem of finding all the possible solutions of eq. (4) in the subset X can be transformed into a global optimization problem over X as follows. Let g: X → R be defined by

g(x) := [f(x)]²    (5)

Then, for every x ∈ X it holds that g(x) ≥ 0.

Theorem 2. Suppose that eq. (4) has a solution in X, and let a ∈ X. Then, a is a solution of eq. (4) if, and only if, a minimizes the function g defined in (5).

An immediate consequence of the previous theorem is that if eq. (4) has a solution in X, then the global minimum of g defined in (5) exists and is zero; even more, the following theorem holds:

Theorem 3. If the function g defined in (5) has a global minimum in X and this value is zero, then eq. (4) has a solution in X. Moreover, all global minimizers of g are solutions of eq. (4).

Then, if for f(x1, x2) = a1 x1 + a2 x2 − c a region of the plane can be determined where the function g defined in (5) has a global minimum whose value is zero, then any global minimizer with integer coordinates, should it exist, serves as a particular solution of eq. (2). Thus, the choice of the region is quite important in order to enclose at least one solution with integer coordinates.

2.1. System of linear equations

Consider the following system of m linear Diophantine equations with unknowns x1, ..., xn:

a11 x1 + ... + a1n xn = b1
...
am1 x1 + ... + amn xn = bm    (6)

According to theorem 1, in order for system (6) to have a solution it is necessary, but not sufficient, that each of the m equations has a solution; this is equivalent to establishing whether, for each i = 1, ..., m, it holds that g.c.d.{ai1, ..., ain} divides bi. To see why this condition is not sufficient, consider the system of Diophantine equations defined by

x + 3y = −1
x + y = 4    (7)

Each equation of this system has a solution in the integer domain, but the system does not have a solution as a whole: subtracting the second equation from the first one yields 2y = −5, which has no integer solution. Then, in the same way as with systems of equations in real variables, the fact that one of the equations of a system has a solution does not imply that the whole system does. Even so, a method that generalizes finding all the roots (in case they exist) of a system of equations over a given set is shown below.

Let X be a non-empty subset of R^n and consider the system of equations (8), where for each i = 1, ..., m, fi: X → R is a function.

f1(x) = 0
...
fm(x) = 0,  where x ∈ X    (8)

Let g: X → R be defined by

g(x) := [f1(x)]² + ... + [fm(x)]²    (9)

Then, for all x ∈ X it holds that g(x) ≥ 0, and the following result is achieved:

Theorem 4. Suppose that the system of equations (8) has a solution in X, and let a ∈ X. Then, a is a solution of system (8) if, and only if, a minimizes the function g defined in (9).

The general condition of theorem 4 about the feasibility of solving system (8) is important, since it is possible that the function g defined in (9) can be globally minimized even though system (8) does not have a solution. An immediate consequence of theorem 4 is that if system (8) has a solution in X, then the global minimum of g defined in (9) exists and is zero; moreover, the following result holds:

Theorem 5. If the function g defined in (9) has a global minimum in X and this value is zero, then system (8) has a solution in X. Moreover, all global minimizers of g are solutions of system (8).

Therefore, if the function g defined in (9) does not have a global minimum in X, or if it exists but is different from zero, then the system of equations (8) does not have a solution in X. A basic result of mathematical analysis establishes that if X is a compact set (i.e. closed and bounded) and g is continuous over X, then the global minimum exists; for g to be continuous in X it is enough that each fi is continuous in X. For the case of systems of Diophantine equations, unlike the particular case of an equation with two unknowns, the fact that a solution exists does not imply that others do, and even less that an infinite number exists. For the search of possible solutions of a system of Diophantine equations, the set X must contain points with integer coordinates, i.e. X ∩ Z^n ≠ ∅.
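To make the transformation of eqs. (8)-(9) concrete, the following Python sketch (ours, for illustration; it is not the authors' code) builds the fitness g(x) as the sum of squared residuals of A·x = b and evaluates it exhaustively for system (7) over a small integer box. The minimum found is positive, confirming that (7) has no integer solution in that domain.

    import itertools
    import numpy as np

    def g(x, A, b):
        # Fitness of eq. (9): sum of squared residuals of A x = b.
        r = A @ x - b
        return int(r @ r)

    # System (7): x + 3y = -1 and x + y = 4.
    A = np.array([[1, 3], [1, 1]])
    b = np.array([-1, 4])

    # Exhaustive evaluation over a small integer box.
    box = range(-10, 11)
    best = min((g(np.array(p), A, b), p) for p in itertools.product(box, box))
    print(best)  # minimum fitness is 1 > 0: no integer root in the box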

2.2. The algorithm

The implemented algorithm is built up from various interconnected blocks and is similar in structure to the traditional PSO (for real numbers) [9], [10]. A first stage is the random initialization of a swarm of user-defined integers; any swarm size can be used here. Likewise, the definition of these values is subject to previous knowledge of the objective (fitness) function, as well as to the presence of restrictions. Moreover, an initial speed of zero can be defined for the particles. After that, the algorithm evaluates the objective function in the given search space. With it, local and global best values are established, and both the speed and the position of each particle are reevaluated as shown below. This procedure is iterative and is repeated until the convergence criteria are met, or until all the solutions in the search domain are found.

An algorithm considered as a variant of the traditional PSO was used [9]. As in said PSO, its version for discrete solutions includes two vectors, Xi and Vi, related to the position and speed of each particle at every iteration. The first one is, initially, a vector of random numbers in a valid solution interval. The second one can also be a random vector, but it can be assumed as zero for the first iteration, in order to keep things simple. When the problems become multidimensional, the vectors transform into position and speed matrices, since there is a value for each unknown [9], [11]. Discrete PSO differs from its traditional version in that the new speed and position depend on both an equation and a decision rule, which chooses among the local and global best values for the next iteration. It is assumed that there is a vector yi = (yi1, yi2, ..., yin) that allows the transition between continuous and discrete PSO, and which takes the value −1, 1, or 0 according to eq. (10), where glo is the global optimum of the swarm and loc the local one [9].

yi = 1          if Xi = glo
yi = −1         if Xi = loc
yi = 0          if Xi ≠ glo and Xi ≠ loc
yi = −1 or 1    if Xi = glo = loc    (10)

Afterwards, the speed is updated according to eq. (11), where w is known as the inertia factor, used to limit the speed of the particles; c1, c2 are constants which are usually considered equal to two; and r1, r2 are random numbers between zero and one [10].

Vi+1 = Vi*w + c1*r1*(−1 − yi) + c2*r2*(1 − yi)    (11)

Then, the decision parameter, the vector Bi = (Bi1, Bi2, ..., Bin), is calculated according to eq. (12).

Bi = yi + Vi+1    (12)

This parameter decides whether the next position of the particle is chosen as the local or global best, or as a random number in the search domain. Thus, the position update is done according to eq. (13), where α is a constant that balances intensification (new position equal to the local or global best) and diversification (new position equal to a random number) [9].

Xi+1 = glo         if Bi > α
Xi+1 = loc         if Bi < −α
Xi+1 = Int Rand    if −α ≤ Bi ≤ α    (13)
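The update rules of eqs. (10)-(13) can be condensed into a short routine. The Python sketch below is our reading of the description above, not the authors' implementation: the name dpso, the fixed iteration budget and the personal-best bookkeeping are choices made here, and the routine stops at the first root found instead of collecting every solution in the domain.

    import random

    def dpso(fitness, n_vars, lo, hi, n_particles=100, w=0.75,
             c1=0.8, c2=0.2, alpha=0.3, max_iter=10000):
        """Sketch of the discrete PSO of eqs. (10)-(13) for integer roots."""
        X = [[random.randint(lo, hi) for _ in range(n_vars)]
             for _ in range(n_particles)]
        V = [[0.0] * n_vars for _ in range(n_particles)]   # zero initial speed
        P = [row[:] for row in X]                          # local (personal) bests
        G = min(X, key=fitness)[:]                         # global best

        for _ in range(max_iter):
            for i in range(n_particles):
                for d in range(n_vars):
                    # Eq. (10): transition variable y
                    if X[i][d] == G[d] == P[i][d]:
                        y = random.choice((-1, 1))
                    elif X[i][d] == G[d]:
                        y = 1
                    elif X[i][d] == P[i][d]:
                        y = -1
                    else:
                        y = 0
                    # Eq. (11): speed update
                    V[i][d] = (V[i][d] * w
                               + c1 * random.random() * (-1 - y)
                               + c2 * random.random() * (1 - y))
                    # Eqs. (12)-(13): decision parameter and position update
                    B = y + V[i][d]
                    if B > alpha:
                        X[i][d] = G[d]
                    elif B < -alpha:
                        X[i][d] = P[i][d]
                    else:
                        X[i][d] = random.randint(lo, hi)
                if fitness(X[i]) < fitness(P[i]):
                    P[i] = X[i][:]
            G = min(P, key=fitness)[:]
            if fitness(G) == 0:  # a root of the system was found
                break
        return G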



3. Results and Analysis

This section shows the results achieved after solving some systems of linear Diophantine equations, as examples of the method. A computer with an AMD Turion X2 Dual Core RM-72 processor, at 2.1 GHz, and with 4 GB of RAM, was used. In all the examples the following parameters were used: w = 0.75, c1 = 0.8, c2 = 0.2, and α = 0.3. These values were chosen based on some preliminary tests and on the information available in the literature [1], [9].

3.1. System of equations A

It is required to solve the system given by eq. (14), in the set of positive integers, which represents the number of animals bought by a farmer and their cost. The full statement of the problem is as follows: "A farmer spent 10,000,000 COP on 100 animals: chickens (x), pigs (y) and cows (z). If he bought the chickens at 5,000 COP, pigs at 100,000 COP and cows at 500,000 COP, and if he acquired animals of all three classes, how many did he buy of each one?" [12].

x + y + z = 100
x + 20y + 100z = 2000    (14)

This system is equivalent, by Gaussian reduction, to the system

x − (80/19)z = 0
y + (99/19)z = 100

Its general solution is given by x = (80/19)t, y = 100 − (99/19)t, z = t, which for t = 19 yields x = 80, y = 1, z = 19. In order to solve this problem with the discrete PSO algorithm, the following objective function is created:

F = (x + y + z − 100)² + (x + 20y + 100z − 2000)² = 0
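Assuming the hypothetical dpso routine sketched in Section 2.2, system A can then be attacked as follows; the swarm size and the values of w, c1, c2 and α are the ones reported above.

    # Objective function for system A (eq. 14): sum of squared residuals.
    def fitness(v):
        x, y, z = v
        return (x + y + z - 100) ** 2 + (x + 20 * y + 100 * z - 2000) ** 2

    # Positive integers only, so the search box starts at 1.
    root = dpso(fitness, n_vars=3, lo=1, hi=100, n_particles=1000,
                w=0.75, c1=0.8, c2=0.2, alpha=0.3)
    print(root)  # the known solution is [80, 1, 19]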

After 20 runs of the algorithm, with a swarm of 1000 particles, the same answer was always achieved. The duration, however, varied from 1.186 s, with 204 iterations, up to 474.043 s, with 66832 iterations. It can then be concluded that, for this system, the algorithm delivers an answer with excellent precision and accuracy, even though the number of iterations and the duration were variable. It was found that their relationship is quite close to linear (R2 = 0.9955).

3.2. System of equations B

Afterwards, the system of seven linear Diophantine equations shown in (15) was solved. It represents a closed-loop control system, with unitary feedback, where it is required to find the controller C(S), with six poles at S = −1, for the plant G(s) = (s² + s + 1)/(s³ + 3s² + 4s + 3).

X1 − 1 = 0
3X1 + X2 − 6 = 0
4X1 + 3X2 + X3 + X5 − 15 = 0
3X1 + 4X2 + 3X3 + X4 + X5 + X6 − 20 = 0
3X2 + 4X3 + 3X4 + X5 + X6 + X7 − 15 = 0
3X3 + 4X4 + X6 + X7 − 6 = 0
3X4 + X7 − 1 = 0    (15)

The solution of the system can be found to be X1 = 1, X2 = 3, X3 = 2, X4 = 2, X5 = 0, X6 = −3, X7 = −5. From the first equation, X1 = 1; and from the second one, X2 = 3. The third equation yields X5 = 2 − X3, while from the fifth and sixth equations, X6 + X7 = 6 − 3X3 − 4X4 = 6 − X5 − 3X4 − 4X3, which means that X4 = 2. Thus, the last equation provides X7 = −5. Subtracting the fourth and fifth equations, X3 = 2 is obtained, which means that X5 = 0. Finally, the sixth equation yields X6 = −3. In order to solve it through the algorithm, the following objective function was defined:

F = (X1 − 1)² + (3X1 + X2 − 6)² + (4X1 + 3X2 + X3 + X5 − 15)² + (3X1 + 4X2 + 3X3 + X4 + X5 + X6 − 20)² + (3X2 + 4X3 + 3X4 + X5 + X6 + X7 − 15)² + (3X3 + 4X4 + X6 + X7 − 6)² + (3X4 + X7 − 1)² = 0

Once again, 1000 particles were used and the algorithm was run 20 times. As a result, the same answer was always achieved, so it is important to remark the excellent quality of the results (in terms of accuracy and precision), as well as the variability in time and iterations when looking for all the solutions in the integer domain. When compared to the previous system, the convergence time increased, and an almost linear relation between iterations and time can be seen in Fig. 1.

Figure 1. Convergence time as a function of iterations for system B.

3.3. System of equations C

For this case a system of 12 linear Diophantine equations was selected:

5x1 − 6x2 + 8x4 − 5x5 + 6x6 + 10x7 − 9x9 + 3x10 + 11x11 − 15x12 + 17x13 = −1
7x1 + x2 − 4x4 + 6x7 − 9x8 + 5x9 − 12x10 + 3x11 − 7x12 + 8x13 = 26
5x1 − 24x2 + 32x3 − 49x4 + 3x5 + 19x6 − 21x7 − 17x8 + 33x9 + 9x10 − 12x11 − x13 = 475
20x1 + 27x2 − 23x4 − 30x5 + 34x6 + x7 − 7x9 + 11x10 − 28x11 + 4x12 − 36x13 = 103
5x1 − 10x3 + 2x5 − 6x7 − 13x9 + 34x11 − 9x13 = −352
x2 + 22x4 − 26x6 − 17x8 + 19x10 − 4x12 = −84
30x1 + 24x2 − 55x3 − 15x4 − 25x5 + 10x6 + 40x7 − 10x8 + 8x9 − 3x10 − 16x11 + 4x12 − 20x13 = 283
5x1 − 13x2 + 7x4 + x6 − 19x7 + 19x8 − 2x9 + 6x10 + 5x11 − 26x12 = −468
x1 + 28x2 + 33x3 − 100x5 + 5x6 + 13x7 − x8 − x9 + 11x10 − 7x11 − 3x12 + x13 = −100
7x3 − 21x4 + 35x5 − 42x6 + 7x7 + 14x8 − 35x9 + 28x10 − 7x11 + 14x12 + 56x13 = 329
5x7 + 5x8 + 10x9 − 50x10 + 20x11 − 25x12 + 30x13 = −345
2x1 − 4x2 + 4x3 − 2x4 − 6x5 + 8x6 + 10x7 + 9x8 − 12x9 + 20x10 + 6x11 − 30x12 + 16x13 = −78

whose solution is: x1 = 1; x2 = −3; x3 = 2; x4 = −1; x5 = 3; x6 = 7; x7 = 9; x8 = −4; x9 = 5; x10 = 5; x11 = −5; x12 = 10; x13 = 6.

The objective function is, once again, built using the squared sum of each equation. A search space between −10 and 10 was defined, and 100 particles were used. On the same computer, an excellent quality answer (in terms of accuracy and precision) was found, but it required an average time of 129632 s (around 36 hours) and 1026435 iterations. It is worth mentioning that it was not possible to find these roots by using commercial software or through traditional means. Fig. 2 shows the exponential increase in time when the search domain is expanded.

Figure 2. Convergence time as a function of the search domain.

3.4. System of equations D

In order to further test the algorithm's effectiveness, some other Diophantine systems were used which do not have a solution in the set of integers, e.g. the system given by eq. (16). For it, rank(A) = rank(A, C) = 2, and g.c.d.(2, 1, 3) = g.c.d.(8, −5, 3) = 1, which divides both 7 and 11; nevertheless, subtracting its equations yields 6x − 6y = 4, i.e. 3(x − y) = 2, which has no integer solution.

2x + y + 3z = 7
8x − 5y + 3z = 11    (16)

The discrete PSO algorithm reports that, after 20 or more runs with different swarm sizes and parameters, it was not possible to find an answer. It was also observed that if a system, e.g. the one given by eq. (17), has infinite solutions, a search domain must be defined, striving to locate the solutions over this given set.

10w + 3x + 3y + 8z = 1
6w − 7x − 5z = 2    (17)
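Since system (16) fails in the integers for the reason given above, its unsolvability can also be confirmed without any swarm at all. The following minimal Python sketch scans a test box by brute force and finds no root.

    import itertools

    # System (16): 2x + y + 3z = 7 and 8x - 5y + 3z = 11.
    roots = [(x, y, z)
             for x, y, z in itertools.product(range(-50, 51), repeat=3)
             if 2 * x + y + 3 * z == 7 and 8 * x - 5 * y + 3 * z == 11]
    print(roots)  # [] : no integer solution in the box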

4. Conclusions

This research proved that it is possible to numerically solve a system of linear Diophantine equations through an optimization algorithm. Also, it was observed that it is possible to solve this optimization problem without using conventional approaches. It was shown, through some simple examples, that, at least for these systems, solutions with high precision and accuracy are achieved. Moreover, it was found that the convergence time and the number of iterations are random variables that mainly depend on factors such as the algorithm parameters, the initial swarm and the size of the system. Obviously, when solving a small square system, traditional approaches, including the ones found in most commercial mathematical software, are far quicker, even those that find all the roots of the system. However, in case it is required to solve a system with more unknowns than equations, a typical situation, they are out of the question. Likewise, if the system is of a considerable size, the convergence time drastically increases, since a big search domain is required (a case found during the current research), so the numerical strategy proposed here gains importance as a possible solution alternative.

References

[1] Abraham, S., Sanyal, S. and Sanglikar, M., Particle Swarm Optimization based Diophantine equation solver, ArXiv, pp. 1–15, Mar. 2010.
[2] Bonilla E., M., Figueroa G., M. and Malabare, M., Solving the Diophantine equation by state space inversion techniques: an illustrative example, Proceedings of the 2006 American Control Conference, pp. 3731–3736, 2006.
[3] Matiyasevich, Y. V., Hilbert's Tenth Problem. MIT Press, 1993.
[4] Wu, S.-H., Time-varying feedback systems design via Diophantine equation order reduction, thesis (PhD in Electrical Engineering), The University of Texas at Arlington, United States, 2007, pp. 1–140.
[5] Cohen, H., Number Theory, Vol. I: Tools and Diophantine Equations; Vol. II: Analytic and Modern Tools. Springer-Verlag, New York, 2007.
[6] Luger, G., Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Addison-Wesley, Boston, 2006.
[7] Abraham, S. and Sanglikar, M., Finding numerical solution to a Diophantine equation: simulated annealing as a viable search strategy, Proceedings of the International Conference on Mathematical Sciences, 2, pp. 703–712, 2008.
[8] Contejean, E., An efficient incremental algorithm for solving systems of linear Diophantine equations, Information and Computation, (113), pp. 143–172, 1994.
[9] Jarboui, B., Damak, N., Siarry, P. and Rebai, A., A combinatorial particle swarm optimization for solving multi-mode resource-constrained project scheduling problems, Applied Mathematics and Computation, 195 (1), pp. 299–308, Jan. 2008.
[10] Amaya, I., Cruz, J. and Correa, R., Real roots of nonlinear systems of equations through a metaheuristic algorithm, Revista Dyna, 78 (170), pp. 15–23, 2011.
[11] Amaya, I., Cruz, J. and Correa, R., Solution of the mathematical model of a nonlinear direct current circuit using Particle Swarm Optimization, Revista Dyna, 79 (172), pp. 77–84, 2012.
[12] González, F. J., Ecuaciones Diofánticas, Apuntes de Matemática Discreta, Universidad de Cádiz, 2004, pp. 353–354.

I. Amaya, received his BSc in Mechatronics Engineering from Universidad Autónoma de Bucaramanga, Bucaramanga, Santander (Colombia). Currently, he is with the School of Electrical, Electronic and Telecommunication Engineering and is pursuing his PhD in Engineering at Universidad Industrial de Santander, Bucaramanga, Santander (Colombia). His research interests include global optimization and microwaves. ORCID: 0000-0002-8821-7137

L. Gómez, received his BSc in Physics from Universidad Industrial de Santander, Bucaramanga, Santander (Colombia), as well as a BSc in Electronics Engineering from the same university. His research interests include global optimization and Diophantine equations.

R. Correa, received his BSc in Chemical Engineering from Universidad Nacional de Colombia, Bogotá, Cundinamarca (Colombia), and his MSc degrees in Chemical Engineering from Lehigh University, Bethlehem, Pennsylvania (USA), and from Universidad Industrial de Santander, Bucaramanga, Santander (Colombia). He received his PhD in Polymer Science and Engineering from Lehigh University and is currently a professor at Universidad Industrial de Santander. His research interests include microwave heating, global optimization, heat transfer and polymers.



DYNA http://dyna.medellin.unal.edu.co/

Some recommendations for the construction of walls using adobe bricks
Algunas recomendaciones para la construcción de muros de adobe

Miguel Ángel Rodríguez-Díaz a, Belkis Saroza-Horta b, Pedro Nolasco Ruiz-Sánchez c, Ileana Julia Barroso-Valdés d, Fernando Ariznavarreta-Fernández e & Felipe González-Coto f

a University of Oviedo, Spain, mangelrd@uniovi.es
b Central University of Las Villas, Cuba, belkiss@fc.uclv.edu.cu
c Central University of Las Villas, Cuba
d University of Oviedo, Spain, ilema00@yahoo.es
e University of Oviedo, Spain, ariznaf@uniovi.es
f University of Oviedo, Spain, felipegc@hunosa.es

Received: March 22nd, 2013. Received in revised form: January 23rd, 2014. Accepted: January 30th, 2014.

Abstract
This paper shows some results of the analysis of wall construction with adobe bricks, carried out on a pilot building in Villa Clara, Cuba. Our main objective was to obtain construction recommendations to avoid humidity due to capillarity. The recommendations deal with the raising speed of the construction, the adequate wall length, the binding mortar between adobe bricks, the protection of the adobe from weathering, etc.
Keywords: Adobe; building materials; collar beams; lintels; opening of the wall.
Resumen
En el presente artículo se estudian las condiciones en las que deben ser levantados los muros de adobe en construcciones de tierra. Para ello, se construye una edificación piloto en Villa Clara, Cuba, que ha servido de base para probar distintas soluciones constructivas. Como resultado de esta investigación se dan recomendaciones para evitar el ascenso de la humedad por capilaridad, sobre la velocidad de levantamiento, la longitud de muro adecuada, el mortero de unión tanto de adobes entre sí como de adobe con otro material, el cerramento, los dinteles, la protección de vanos así como para el revestimiento adecuado para la protección del muro de adobe del intemperismo.
Palabras clave: Adobe; construcción de materiales; cerramento; dinteles; vanos en muros.

1. Introduction

Mud is one of the oldest materials ever used by man for construction purposes. Its use has been maintained for centuries, and even today it is of great importance, mainly in developing countries. In the case of Cuba, the use of adobe was a solution during the crisis of the 1990s but, due to the lack of systematic knowledge about its correct use, many adobe brick buildings today show a variety of pathologies. Table 1 shows the evaluated pathologies and the percentage of buildings affected by each one. Even if a wall is well designed, appropriate building work and a relatively large thickness are necessary to guarantee the right behavior during its lifespan. In the case of adobe walls, in order to get the highest quality, the construction stage is probably the most important one.

Table 1. Common pathologies observed in adobe buildings.
Observed pathology: Percent of damaged buildings
Humidity in walls due to capillarity: 80%
Wall inner coatings detached, with damages covering 40-60% of total area, mainly at the lower half of the wall: 70%
Wall inner coatings detached, with damages covering 5-10% of total area, mainly above the baseboards of the wall: 60%
Humidity in walls due to rain or splash: 40%
Oblique cracks, opening at almost 45°, below window spaces: 30%
Horizontal cracks in walls: 30%
General damages in lintels: 20%
Vertical crack above door lintels: 10%
Diagonal cracks near the lintel corner: 10%
Diagonal cracks between adjacent lintels: 10%
Vertical crack close to the door frame: 10%
Wall inner coatings detached, with damages covering 10-20% of total area, mainly in the lower half of the wall: 10%




Based on laboratory-scale studies carried out by Rodríguez Díaz and Saroza Horta [1], a pilot building was constructed in the village of Crescencio Valdés (Villa Clara, Cuba), Fig. 1. These studies allowed us to define the optimal composition that should characterize the adobe found in this zone, which was used in the construction process. According to Casagrande's classification, the employed soil was classified as SC, which is ideal for adobe, presenting 60% sand, 15% silt and 25% clay.

2.1. Kind of top-foundation

Adobe walls are usually affected by humidity due to capillarity. To avoid this, the lower part of the wall can be built with other materials; this lower part is called the top-foundation. To carry out this study, we considered the following variants:
• A wall completely built in adobe.
• An adobe wall with 40 cm of clay bricks as top-foundation.
• An adobe wall with 40 cm of concrete blocks as top-foundation.

2.2. Wall construction speed

Members of the Habiterra Network [2] recommend a construction speed lower than 1.5 m of height per day, due to the slow drying process of the material and the high own-weight of the wall. On the other hand, González Limón [3] and Rodríguez et al. [4] propose a construction speed lower than 1 m of height per day, to avoid the settlement of the fresh joints. To study the influence of this parameter, a 5 m long experimental wall was built using different speeds: 0.5 m, 0.7 m, 0.9 m, 1.1 m, 1.3 m and 1.5 m of height per day.

2.3. Wall length

It is known that walls that are too long, without intermediate pillars, can suffer vertical cracks due to bending. To quantify this limit, walls were built with lengths of 2.5 m, 5 m, 7.5 m and 10 m.

2.4. Kind of mortar

Figure 1. Map

The combination considered appropriate for processing the adobe employed in the building was the following: soil + 25% of organic matter + 2% of AVE asphalt, which offers a compression strength of 1.90 MPa and a capillary absorption of 0.81 g/cm2. Taking these results as the starting point, a further step was taken by working at real scale, focusing the research on improving the knowledge of the parameters that rule adobe wall construction. In this experimental building, different tests on the most problematic and relevant aspects of adobe wall construction were carried out. The specific objective was the study of some variables that influence adobe wall behavior, such as the kind of top-foundation, the wall construction speed, the wall length, the kind of mortar, the collar beams and lintels, and the final wall coating.

2. Methodology

To satisfy the objectives of our paper we used the following methodology which, after our experience, we strongly recommend for similar purposes.

Adobe joints are critical. Wall cracks can appear easily in these zones because of the lower strength of the mortar compared to adobe, and due to the fact that the adherence between mortar and adobe brick is low. Thus, the horizontal joints are the weak ones in the wall. In the Peruvian Standard (NTE E.080) [5], mortars are classified into type I (mixture of cement and sand) and type II (adobe mixture).

2.4.1. Mortar between adobe materials

This study was carried out with various mixtures of mortar type II. The authors consider that this is the adequate type of mortar to be used in this case, because it has strength properties similar to those of the adobe bricks, it achieves better adherence at the brick/mortar interface and, finally, it has a lower cost. The Habiterra Network [2] proposes the addition of organic fiber to the mixture, while others, like González Limón [3], refuse it. The authors of this study decided to avoid the use of any addition, because it is not traditional in the zone and it makes the workability of the mixture more difficult; in this case, the role of the organic fiber will be supplied by the sand. The soil used as raw material to obtain the mortar has the following composition: 0% gravel, 62% sand, 14% silt and 24% clay. Other characteristics are: liquid limit (38.7); plastic limit (19.41); plasticity index (19.29) and specific weight (26.3 kN/m3). Table 2 shows the proportions by volume of the soil and sand used to prepare the samples for a simple strength test, samples for adherence tests following Minke [6], and samples for crack tests following the Habiterra Network recommendations [2]. Table 2 also shows the composition and simple compression resistance for each dosage. Samples number 6, 7 and 8 gave no results because they were too soft to be tested: for these three samples, the amount of sand in the mixture was too high and the cohesive force of the clay particles was affected.

Table 2. Compression test

Nº    Soil   Sand   Rc (MPa)
1     1      0.25   1.22
2     1      0.50   1.45
3     1      0.75   1.54
4     1      1      1.38
5     1      1.25   1.12
6     1      1.50   -
7     1      1.75   -
8     1      2      -

2.4.2. Binding mortar between adobe and other materials

In this case, the authors decided to study the behavior of the adobe/clay-brick joints and of the adobe/concrete-block joints. Three types of mortar were tested:
• Soil-sand: mortar with the best tested dosage from Table 2.
• Cement-sand: mortar obtained by mixing cement and sand following the Peruvian standard, with a volumetric ratio of 1:5.
• Lime-sand: mortar with a volumetric proportion of 1:3.
The tests performed were the same as in the case of adobe/adobe joints; additional crack tests are recommended.

2.5. Collar-beam, lintel and span protection

The collar beam is the upper binding beam in a building. It is a key structural element for the stability and safety of adobe constructions. The best solution is to use a continuous reinforced concrete beam at the upper part of the wall; its rigidity in the horizontal plane improves the structural behavior of the whole building. This kind of solution increases cost, due to the use of more expensive materials and the need for wood framing systems, in a country like Cuba where wood is scarce. The specific objectives in this part of the study were the optimization of the collar beam support conditions, and also deciding on the adequate kind of lintel. To carry out the collar beam support study, the following three options were tested:
• Collar beam supported together by the adobe wall and clay brick or reinforced concrete pillars.
• Collar beam supported only by pillars.
• Collar beam supported only by adobe walls.
In the case of the lintels, three options were tested: wood lintels, pre-cast reinforced concrete, and in-place reinforced concrete lintels. To develop our research, we used two kinds of solutions for the adobe bricks under windows:
• Type one: with a linear disposition of vertical joints, called "junta corrida" in Spanish.
• Type two: with a discontinuous disposition of vertical joints, called "matajunta" in Spanish.
Guillaud et al. [7], Álvarez et al. [8] and Bernabeu [9] say that, if the part of the wall under the window is Type two, some 45-degree cracks could appear, whose path starts at both ends of the span. This cracking occurs because the vertical load in this zone is not able to equilibrate the vertical pressure of the soil. In order to study this phenomenon, and to obtain confident results, we built the lintel under the windows using the same dimensions as for the upper one and the collar beam. For the protection of the span we decided to apply two solutions, as follows:
• To surround the span using fired bricks.
• To apply a cement-sand coat.

2.6. Coatings

To avoid the problems arising from wind or rain erosion, it is necessary to use the correct kind of coating, able to protect the wall from these agents. In order to select the right one, it is necessary to take into account that soil walls need to transpire, due to the permeability of the material to water, steam and some other gases, which must be able to flow through the wall thickness. To achieve this, it is necessary to apply an incomplete impermeabilization; otherwise, the water released during the wetting and hardening of the wall will try to get out and, if it does not find its way out, it will push the coating mortar, detaching it from the wall and making it fall. This is the reason to refuse cement coatings and to promote the use of clay, sand, hydrated lime and just a small proportion of cement. Following the opinion of Houben and Guillaud [10], there is a big difference between the behavior of a material under laboratory conditions and under actual ones: many different aspects (change of scale, climatic influences, effects of the building use, etc.) can affect or modify durability. One of the most efficient methods to get closer to the actual behavior of an adobe wall is the construction of small prototypes, exposed to the natural environment at the same place where the future building is intended to be built, or in a similar one; this leads to a reliable coating composition. The authors decided to investigate the group of mixtures shown in Table 3, applying each of them to prototypical adobe walls, and measuring their behavior against cracking, erosion and impact; these aspects provide a clear definition of durability. They were analyzed during two months, which is a very short period to provide final conclusions, but it is certainly a first approach to the future behavior of these mixtures.

Table 3. Dosage for the coating study (in volume)

Dosage: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Coat: Thick Thin Thick Thin Thick Thin Thick Thin Thick Thin Thick Thin Thick Thin Thick Thin
Soil: 1 1 1 1 1/2 1 1 1 1 1 1 -
Sand: 3 3 3 3 3 3 3 3 4 4 1 3 3 3 1/2 3
Hydrated lime: 1 1 2 2 2 2 2 2 1 2 1 1 1 2
Cement (m3/m3 on mixing): 0.043 0.043 0.043 0.043 0.043 -

It is very important to mention that, in order to obtain good adherence, both wall surfaces were thoroughly moistened during four hours before applying the coating; the process has to be repeated until absorption becomes visible. The coating is applied in two steps: a plaster and a render.

3. Results

3.1. Top-foundation analysis

When the wall is completely built using adobe, its behavior against humidity is bad. A top-foundation consisting of a linear cord of fired brick protects the lower part of the wall against rain splash and environmental aggression but, as can be seen in Fig. 2, it is not efficient enough to avoid capillary humidity. A foot beam made of hollow cement blocks is a more efficient protection against all kinds of humidity, even against capillary effects, as Fig. 3 shows.

Figure 2: Fired bricks foot cord

Figure 3: Hollow blocks foot cords

3.2. Analysis of the wall construction speed

After experimenting with different walls, it can be asserted that an important settlement occurs when the wall construction speed is more than 50 cm per day, due to the effect of the bricks' own weight and the low resistance of the fresh mortar. Horizontal and vertical bending is also present; this additional pathology is caused by the mortar drying contraction being too fast, due to the high temperatures of the Cuban climate.

3.3. Wall length analysis

In walls longer than 5 m with no intermediate pillars, serious horizontal bending can occur and a vertical crack may appear near the middle of the length. It is caused by a drying stress higher than the allowable one for this material (see Fig. 4).

Figure 4: Cracks in a wall without a pillar


Figure 5: Wall with an intermediate pillar
Figure 7: Adobe wall, mortar number three and fired bricks

To avoid this pathology, we could follow the recommendation of the Habiterra Network [2] of keeping the length of the wall below 2.5 times its total height for walls without pillars. Our experience during this case study, following this recommendation, provided the best results, as shown in Fig. 5.

3.4. Mortar analysis

Tested mortar for walls built with adobe only. From Table 2 we can see that sample number 3 has the highest compression strength; therefore, sample number 3 is the most efficient. Fig. 6 shows a very good wall, built using mortar number 3 of Table 2.

Tested mortar for walls built mixing adobe with other materials. We also used combination number 3 of Table 2 for this purpose. Fig. 7 shows an adobe wall with some components of fired bricks, and mortar number 3.

3.5. Collar beams, lintels and span protection

3.5.1. Collar beams

When the collar beam is supported by the adobe wall and brick (or cement block) pillars, 45-degree cracks may appear at the point of contact between the pillar and the wall; Fig. 8 shows a clear example of this pathology. The crushing of the soft material shortens the adobe parts and sliding occurs at the interface between adobe and bricks. When the collar beam is supported only by pillars, the wall is free to make small movements, because its behavior is close to that of a cantilever wall. No vertical load acts on the wall and it is very sensitive to any pulling or pushing action, which could cause the collapse of the wall. Fig. 9 shows this case.

Figure 6: Binding mortar for adobe walls
Figure 8: Wall and pillar supporting the collar beam together


Figure 11: Pillar 10 cm shorter than the wall

Figure 9: Only pillars supporting the collar beam

In the last case, when the collar beam is supported only by the wall, the appearance of cracks is not probable, because the settlement of the wall is uniform. To reach this kind of solution, the pillars must be about 6 to 10 cm shorter than the wall; Figs. 10 and 11 show this situation. After the settlement has finished, the free space between wall and pillar must be filled using the same mortar. As long as the wall receives an important amount of load, its resistance against pulling or pushing is good enough for common situations.

3.5.2. Lintels

When there are no lintels at the lower part of the windows, it is very important to distribute the bricks so as to reach an independent small piece of wall under the window. In this case, the cracks will appear following the vertical joints, as Fig. 12 shows. This solution leads to serious aesthetic and maintenance problems.

In the second case, when a lower lintel is used, cracks do not appear, due to the stronger behavior of this part of the wall; obviously, this is the best solution (see Fig. 13). These upper and lower lintels must be built "in place", because for a pre-cast one the adherence between concrete and the adobe wall is too poor.

Figure 10: Only the wall supporting the collar beam
Figure 12: Fissure opening at the lower part of the wall, under the window

Figure 13: Solution for the lower lintel


A pre-cast lintel will never reach a correct structural behavior, because it acts like a pinned support, while an "in place" lintel works as a fixed element, due to the adherence and the length of the support. In pre-cast samples, the reaction against external forces is concentrated at a point, generating very high compression stresses, impossible to be managed by the adobe wall; moreover, the deflection of a pinned piece is 4 or 5 times bigger than that of a fixed one.


3.5.3. Protection of door or window openings
• Variant 1: Results are good, due to the use of fired bricks, which are stronger than adobe. Coating must be done using a cement-sand mortar, in order to obtain better protection of this weak area. Bricks must be placed using a linear disposition of vertical joints (called "junta corrida" in Spanish). The interface between adobe and fired bricks must be filled with a cement-sand-clay mortar ("tercio" in Spanish) to get better adherence between both materials.
• Variant 2: It does not work well, because the adobe rejects the cement coating, which falls down. This effect is especially strong near the door or window opening, because of the dynamic component of the loads there.


3.5.4. Wall coating analysis

Dosage number 11 of Table 3 (1:1 ratio of soil and sand) gives the best behavior for the thick (internal) coating. For the thin (external) coating, dosage number 12 of the same table (1:3 of lime and sand) is the best one. This combination of dosages, 11 and 12, gives good adherence after three months of a rainy season and maintains good condition against erosion; only a few fissure openings were observed at the end of the testing period. Fig. 14 shows three wall prototypes coated with dosage number 11 for the thick (internal) coating and different kinds of thin (external) coating: dosage number 2 of Table 3 was used for the left sample, number 14 for the central one and number 12 for the right one. Fig. 15 shows the big difference between an uncoated wall and a wall coated using dosage number 11 for the internal coat and a 1:3 coating for the external one; the difference between coated and uncoated walls is easily visible.

Figure 14: Different kinds of coating

4. Conclusions

The article shows the results of a practical process where some adobe wall samples were built in search of an optimal structural response, with a minimal presence of pathology. Not only are the results delivered, but a process methodology is also presented. Regarding the aspects we were trying to characterize, our conclusions are the following:
• Adobe walls must be built over two lines of hollow cement blocks, as an interface with the foundation. This gives efficient protection against capillarity, splashing or rain moisture.
• The faster the wall building, the higher the risk of pathology appearance (bending and cracking). For the Cuban (or tropical) environment, the wall lifting must stay below 50 cm per day.
• The use of pillars is absolutely necessary when the length of the wall is more than 2.5 times its vertical dimension.
• The binding mortar for the joints between adobe bricks should have a 1:0.75 ratio of soil to sand, which delivers the best results. Taking into account that the soil can vary from place to place, simple testing like the one shown in this paper should be done.
• For the interface between adobe and fired bricks or cement blocks, a mixture of soil, sand and lime must be used.
• The whole building, or each one of its parts, must be surrounded by a collar beam, and no combination of adobe walls and bricks should be made.
• The best protection for openings is to use fired bricks surrounding the span ring. The interface between adobe and fired bricks must be filled using a soil-sand-lime mortar.
• The coating has to be done with an internal thick coat and an external thin coat. Our best results were obtained using a 1:1 mixture of soil and sand for the internal coating and a mixture of sand and hydrated lime in a proportion of 3:1 for the external one.

Figure 15: Difference between coated and uncoated walls




Acknowledgements

We would like to thank the Spanish Agency of Cooperation for Development and the International Relationships Bureau of the University of Oviedo for supporting the authors' travel expenses.

M. A. Rodríguez Díaz holds a PhD in Mining Engineering from the University of Oviedo. Since 1994 he has been a professor at the School of Mining Engineering, University of Oviedo, in the Department of Mines Exploitation and Exploration, where he teaches topics related to soil mechanics, rock mechanics, soil behavior, and civil and mining geotechnics. He has directed and collaborated on several research projects with organizations from Latin American countries such as Cuba, Perú and México, has supervised several PhD theses and projects with satisfactory results, and has published several articles on different subjects. He is currently the director of a Master in Mining Geomechanics taught in Peru to mining professionals.

F. Arriznavarreta Fernández holds a PhD in Mining Engineering from the University of Oviedo. Since 1995 he has also been a professor at the School of Mining Engineering, University of Oviedo, in the Department of Mines Exploitation and Exploration. His specialist topics are related to soil mechanics, geotechnical engineering in general and the modeling of structures.

I. J. Barroso Valdes is a Hydraulics Engineer who graduated from the Higher Polytechnic Institute José Antonio Echeverría in Havana, Cuba, where she taught in the Department of Civil Engineering and was part of the University Group for Integrated Project Engineering; she also holds a Technical Works degree (Hydrology specialization) from the University of Ávila. She has lectured in various university extension and summer courses taught at the University of Oviedo, such as the Slope Course, the Earth Dams Course and the Risk Course, and has participated in several projects with Spanish and international entities.

B. Saroza Horta is an architect who holds a PhD in Technical Sciences from the Central University of Las Villas, Santa Clara, Cuba. He is a professor at the Department of Civil Engineering of the same University. He has participated in several joint projects between his University and the University of Oviedo, publishing several scientific papers together.

P. N. Ruiz Sánchez holds a PhD in Technical Sciences and Civil Engineering from the Central University of Las Villas, Santa Clara, Cuba. He is a professor at the Department of Civil Engineering of the same University and a researcher at the Center for Research and Development of Materials.

F. González Coto holds a PhD in Mining Engineering from the University of Oviedo. He is a mining manager at several coal mines of the Hunosa Group, with more than 10 years of experience in underground mining. He has lectured in various university extension and summer courses taught at the University of Oviedo, such as the Slope Course, the Earth Dams Course, the Course on Underground Mine Risks, and courses on ventilation, safety and security for underground mining, and has published several articles on mining, environmental and civil subjects.



DYNA http://dyna.medellin.unal.edu.co/

Thermodynamic analysis of R134a in an Organic Rankine Cycle for power generation from low temperature sources

Análisis termodinámico del R134a en un Ciclo Rankine Orgánico para la generación de energía a partir de fuentes de baja temperatura

Fredy Vélez a, Farid Chejne b & Ana Quijano c

a CARTIF Centro Tecnológico, España. frevel@cartif.es
b Universidad Nacional de Colombia, sede Medellín, Colombia. fchejne@unal.edu.co
c CARTIF Centro Tecnológico, España. anaqui@cartif.es

Received: March 26th, 2013. Received in revised form: December 23rd, 2013. Accepted: January 14th, 2014

Abstract
This paper reports the main results of a thermodynamic study carried out on the use of a low temperature heat source (150ºC as a maximum) for power generation through a subcritical Rankine power cycle with R134a as working fluid. The procedure for analyzing the behavior of the proposed cycle consisted of modifying the input pressure, the temperature and/or the discharge pressure of the turbine, with the working fluid at conditions of both saturation and overheating. Results show that the efficiency of the cycle for this fluid is a weak function of temperature, i.e., overheating the inlet fluid to the turbine does not cause a significant change in the efficiency. However, when the pressure ratio in the turbine increases, the cycle is much more efficient, and, as the input temperature to the turbine rises, the efficiency increases more sharply. Furthermore, the effect of adding an internal heat exchanger to the cycle was analyzed, giving as a result a maximum efficiency of 11% and 14% for the basic cycle and the cycle with an internal heat exchanger, respectively.
Keywords: Energy efficiency; organic Rankine cycle; power generation; waste heat; renewable energy.

Resumen
Este trabajo presenta los principales resultados del estudio termodinámico realizado sobre el uso de una fuente de calor de baja temperatura (150ºC como máximo) para la generación de energía a través de un ciclo Rankine subcrítico con R134a como fluido de trabajo. El procedimiento para analizar el comportamiento del ciclo propuesto consistió en modificar la presión y temperatura de entrada y/o descarga de la turbina, con el fluido de trabajo en condiciones tanto de saturación, como sobrecalentamiento. Como resultado, se puede indicar que la eficiencia del ciclo con este fluido es una débil función de la temperatura, es decir, sobrecalentar el fluido a la entrada de la turbina no causa un cambio significativo en la eficiencia. Sin embargo, cuando la relación de presión en la turbina aumenta, la eficiencia incrementa, y también, conforme la temperatura de entrada a la turbina aumenta, la eficiencia aumenta pronunciadamente. Además, se analizó el efecto de adicionar un intercambiador interno de calor que aumentó los valores de eficiencia obtenidos, dando como resultado, una eficiencia máxima del 11% y 14% para el ciclo básico y con el intercambiador interno de calor, respectivamente.
Palabras clave: Eficiencia energética; ciclo Rankine Orgánico; generación de energía; calor residual; energías renovables.

1. Introduction

The use of fossil fuels (e.g., oil and coal) as an energy source has many negative environmental impacts, such as the release of pollutants and resource depletion. A high consumption rate of fossil fuels will result in an increase in environmental pollution during the next century, due to the emission of CO2 and other gases that cause global warming through what is known as the greenhouse effect [1]. In order to reduce CO2 emissions and oil dependency, each country in the world is responsible for improving the quality of its energy sources [1]. One of these improvements is the use of waste heat or low temperature sources (such as some renewables) [2], the organic Rankine cycle (ORC) being a promising technology for their conversion into power [2-5].

The ORC principle of operation is the same as that of the conventional Rankine cycle, with the difference that an organic agent is used as working fluid. Unlike the conventional Rankine cycle, this change of fluid allows energy recovery from low enthalpy sources for work or electricity production. Thus, one of the main research lines on this issue is the selection of a suitable working fluid, owing to its great influence on the design of the process [2-4,6-10]. Depending on the application, the heat source and the temperature level, the fluid must have optimum thermodynamic properties at the lowest possible temperatures and pressures, and must also satisfy several criteria, such as being economical, nontoxic, nonflammable and environmentally friendly, while allowing a high utilization of the available energy from the heat source. If all these aspects

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 153-159 June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



are considered, a few fluids can be used [2,4-6] and therefore, after an extensive literature review and a preliminary comparison of the previous aspects and of the thermodynamic performance obtainable for heat sources up to 100ºC with different fluids, R134a was chosen as working fluid. This selection was made on the basis that R134a is a nontoxic and nonflammable fluid (belonging to group A1 based on ASHRAE 34) whose Ozone Depletion Potential (ODP) is zero; that there is wide experience with turbomachines and heat exchangers using this fluid; that R134a has a high molecular mass (chemical formula: CF3CH2F, MM = 102 kg/kmol), so that turbines work with low enthalpy drops and low mechanical stresses; and that it has a critical temperature and pressure of 101.1ºC and 40.6 bar, respectively, allowing its use in the temperature range of interest, while the condenser operates at a pressure higher than atmospheric, so that air in-leakages do not occur. Several researchers have investigated the application and performance of the ORC with R134a as working fluid. In [8], the efficiency of the ORC using benzene, ammonia, R134a, R113, R11 and R12 was analyzed; the last two fluids obtained greater efficiencies, but they are also substances of limited use [11]. An analysis of a regenerative ORC based on parametric optimization, using R12, R123, R134a and R717 as working fluids superheated at constant pressure, was carried out in [12-13]. Results revealed that a regenerative ORC with overheating using R123 as working fluid appears to be a good system for converting low-grade waste heat to power. In [14-15], a low-temperature solar organic Rankine cycle system with R134a as working fluid, operating between 35.0ºC and 75.8ºC, was designed and built for reverse osmosis desalination in Greece. The results showed system efficiencies of about 7% and 4%, respectively. Other studies that have analyzed the use of R134a as working fluid in ORC cycles for reverse osmosis desalination, at an experimental level [16] and theoretically [17-22], have presented similar efficiencies to the ones previously mentioned. In this same field, [23] presented a simulation to estimate the increase in the efficiency and the energy available for desalination of an upper ORC coupled with a lower ORC with R134a, obtaining an efficiency of 4.2% for the latter and a global efficiency of 3%. Other cycles with R134a are reported for geothermal sources in [24-26] and for bottoming cycles with internal combustion engines in [27]. A thermodynamic screening of 31 pure-component working fluids for the organic Rankine cycle is given in [4], achieving an efficiency of 7.7% with cycles operating with R134a at a temperature of 100.0ºC, whereas in [6], out of the 20 fluids studied and reported, R134a was found to be the most suitable in terms of yield. Other works that have analyzed the use of R134a as working fluid in low temperature ORC cycles have been carried out by [9] and [28-31]. In view of what has been stated, there is great interest in the use of this fluid for the energy exploitation of sources below 150ºC. However, we detected a discrepancy in the literature about the best thermodynamic conditions for its use: while some studies, such as those presented in [6,12,13], do not find its overheating interesting, others, such as [21], see a positive impact of working under this condition. Therefore, this paper has been developed to show, in an exhaustive manner, the effective thermodynamic difference in the use of this fluid at saturated and overheated conditions. In addition, the influence of the input temperature to the turbine (in the range of 60°C-150ºC), and therefore the influence of the energy source, on the performance and on the net specific work in a basic system and in a system with an Internal Heat Exchanger (IHX) has been studied.

The main contributions of the present paper stem from the scarcity of information and research showing the influence of the input temperature and pressure in the turbine (and therefore of the energy source), as well as of the inclusion of an IHX, for the power cycle with R134a as working fluid in low temperature heat sources for power generation. The results show that the efficiency for this fluid is a weak function of temperature, i.e., overheating the inlet fluid to the turbine does not cause a significant change in the efficiency. However, when the pressure ratio in the turbine increases, much larger values of efficiency are obtained, and, as the input temperature to the turbine rises, the efficiency increases more sharply. Furthermore, the effect of adding an IHX to the cycle was analyzed, giving as a result a maximum efficiency of 11% and 14% for the basic cycle and the cycle with IHX, respectively.

2. Description of the power cycle

The ORC operation principle is the same as that of the conventional Rankine cycle. A pump pressurizes the liquid fluid, which is injected into an evaporator (heat source) to produce a vapour that is expanded in a turbine connected to a generator. Finally, the output vapor is condensed and sucked up by the pump to start the new cycle. An IHX can also be included to make even better use of the energy from the expanded vapor, preheating the pumped fluid that will enter the evaporator, as shown in Fig. 1.

Figure 1. Schematic diagram of the process. (t) Turbine, (c) Condenser, (p) Pump, (e) Evaporator, (ex) Internal Heat Exchanger.



Figure 2. Typical T-s diagram for a Rankine power cycle.

According to the state points displayed in Fig. 1, Fig. 2 shows the power cycle in a T-s diagram plotted with data from [32]. As an example, an ideal cycle process is shown by segments built from the state points 1, 2is, 3 and 4is, marked with (○). The line segment 1-2is represents an isentropic expansion with production of output work. Heat is extracted from 2is to 3 along a constant subcritical pressure line. Then, an ideal compression takes the saturated liquid from the pressure at state point 3 to state point 4is. Finally, the segment 4is-1 represents the heat addition at constant subcritical pressure up to the highest temperature of the cycle at state point 1. The previous case, but operating under conditions in which both the expansion and the compression processes have a certain efficiency, is represented by the segments built from the state points 1, 2, 3 and 4 in the same Fig. 2, which are also related to Fig. 1. In order to increase the process efficiency, an IHX is introduced (as can be seen in Fig. 1), in which a portion of the rejected heat, represented by an enthalpy drop from 2 to 2IHX at constant subcritical pressure, is transferred back to the fluid, raising its enthalpy from 4 to 4IHX at constant subcritical pressure. Net heat rejection is indicated by the enthalpy drop from 2IHX to 3 at constant subcritical pressure. State point 3 is at the lowest temperature of the cycle, above the temperature of the heat sink. Net input heat to the cycle occurs from 4 (or 4IHX) to 1 at constant pressure. Net output work is the difference between the output work from state points 1 to 2 and the pump input work from state points 3 to 4.

3. Modelling of the process

The equations used to determine the performance of the different configurations are presented in this section. Using the first law of thermodynamics, the performance of a Rankine cycle can be evaluated under diverse working conditions. For both configurations, the analysis assumes steady state conditions, no pressure drop or heat loss in the evaporator, IHX, condenser or pipes, and constant isentropic efficiencies of 75% for both the pump and the turbine. The cycle's total energy efficiency is:

η = (Ẇt − Ẇp) / Q̇e     (1)

where Ẇt is the power output of the turbine and Ẇp is the power input to the pump, defined as:

Ẇt = ṁ (h1 − h2)     (2)

Ẇp = ṁ (h4 − h3)     (3)

and Q̇e is the heat input in the evaporator, defined as:

Q̇e = ṁ (h1 − h4)  or  Q̇e = ṁ (h1 − h4IHX)     (4)
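As a complementary illustration (a minimal sketch of eqs. (1)-(4), not the authors' HYSYS model), the basic cycle can be evaluated with the open-source CoolProp property library; the 75% isentropic efficiencies are those assumed above, the 25ºC condensation temperature is the one used in this analysis (stated below), and the example operating point of 20 bar and 90ºC is arbitrary:

# Minimal sketch of eqs. (1)-(4) for the basic subcritical ORC with R134a.
# Uses the open-source CoolProp library (pip install CoolProp), not HYSYS.
from CoolProp.CoolProp import PropsSI

FLUID = "R134a"
ETA_PUMP = ETA_TURBINE = 0.75   # isentropic efficiencies assumed in the paper

def basic_cycle_efficiency(T1_C, P1_bar, T3_C=25.0):
    """Energy efficiency (eq. 1) for a turbine inlet at T1 [degC] and P1 [bar],
    condensing to saturated liquid (state 3) at T3 [degC]."""
    T1, T3 = T1_C + 273.15, T3_C + 273.15
    P1 = P1_bar * 1e5
    P2 = PropsSI("P", "T", T3, "Q", 0, FLUID)    # discharge pressure = Psat(T3)
    h3 = PropsSI("H", "T", T3, "Q", 0, FLUID)    # state 3: saturated liquid
    s3 = PropsSI("S", "T", T3, "Q", 0, FLUID)
    h4s = PropsSI("H", "P", P1, "S", s3, FLUID)  # ideal pump outlet (4is)
    h4 = h3 + (h4s - h3) / ETA_PUMP              # real pump outlet; eq. (3)
    h1 = PropsSI("H", "P", P1, "T", T1, FLUID)   # state 1: turbine inlet
    s1 = PropsSI("S", "P", P1, "T", T1, FLUID)
    h2s = PropsSI("H", "P", P2, "S", s1, FLUID)  # ideal turbine outlet (2is)
    h2 = h1 - ETA_TURBINE * (h1 - h2s)           # real turbine outlet; eq. (2)
    return ((h1 - h2) - (h4 - h3)) / (h1 - h4)   # eq. (1) with eq. (4)

# Arbitrary example: overheated inlet at P1 = 20 bar and T1 = 90 degC
print(f"efficiency = {basic_cycle_efficiency(90.0, 20.0):.3f}")

Sweeping P1 or P2 in such a function reproduces the kind of P1/P2 parametric curves discussed in the following section.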

An input temperature of the condensation water T7 = 15ºC and a minimum working fluid condensation temperature of T3 = 25ºC have been considered. Likewise, a pinch point of 10ºC is maintained between T3 and the output temperature of the condensation water (T8) for both configurations. In the heating process, the overheating of the inlet fluid to the turbine (T1) is considered from the condition of saturated steam up to its critical temperature. The minimum discharge pressure of the turbine (P2) is equal to the saturation pressure of the fluid in the liquid state (P3) at the temperature T3 = 25ºC. The thermodynamic analysis of the cycle was performed using the process simulator HYSYS® (Hyprotech Co., Canada). This simulator is useful for thermodynamic analysis, especially at steady state, and has the advantage of including fluid properties and ready-to-use optimization tools. Its predictions have been compared with those from [32] and the results are very similar. The simulation flow diagram is the same as the one presented in Fig. 1, and the method for resolving all the components is widely described in [3,10].

4. Results and discussion

This section presents the results obtained in the simulations done with the R134a fluid using the method described in Section 3. As was commented in the introduction, this fluid is of interest for the temperature range under study because of its good environmental characteristics, safety and thermophysical properties (critical temperature and pressure, boiling point, etc.). Furthermore, it must be taken into account that the ideal working fluid for a Rankine cycle is one whose saturated vapor line is parallel to the expansion line of the turbine. As a consequence, a maximum efficiency is ensured in the




turbine when it works in the dry steam region (as shown in Fig. 2). This fluid has a slightly negative slope in its saturation curve, and therefore the expansion process can run very close to the dry steam line. The procedure for analyzing the behavior of this subcritical cycle consisted of varying the inlet temperature or pressure to the turbine and/or the discharge pressure of the turbine, within the limits for which the fluid remains gaseous both at the inlet and at the exit of the turbine. The results obtained for saturated and overheated conditions are presented in the sections below.

4.1. Saturated conditions

Figure 3. Influence of the P1/P2 ratio on the overall efficiency of the cycle under various conditions of saturation.

Figure 4. Influence of the P1/P2 ratio on the overall efficiency of the cycle for the overheated fluid at T1=101ºC.

Figure 5. Influence of the P1/P2 ratio on the overall efficiency of the cycle for the overheated fluid at T1=95ºC.

Fig. 3 was prepared to analyze the influence of the P1/P2 ratio (under various conditions of saturation) on the efficiency of the cycle for this fluid. The discharge pressure P2 was studied at five different saturated conditions (7, 10, 15, 20 or 30 bar), maintaining, for each curve, both the inlet temperature to the turbine (in saturated conditions) and the pressure P1 constant (the latter undoubtedly corresponding to the condition given by the saturation temperature). Fig. 3 shows that the highest efficiency is achieved when the inlet and discharge pressures are, respectively, the highest and the lowest, making the pressure ratio higher (i.e., making Δh greater and therefore producing more work). It is interesting to note that, for the same pressure ratio, higher efficiencies are obtained for lower temperatures (that is, lower pressure P1), especially in the range from 77ºC to 101ºC; e.g., for a T1 of 77ºC the efficiency was approximately 1.5% higher than for a T1 of 101ºC. For lower temperatures this influence becomes negligible.

4.2. Overheated conditions

Figure 6. Influence of the P1/P2 ratio on the overall efficiency of the cycle for the overheated fluid at T1=85ºC.

Figures 4 to 8 present the results obtained when the influence of the P1/P2 ratio is analyzed with an overheated fluid at constant temperatures of 101°C, 95°C, 80°C, 70°C and 60°C, respectively. The inlet temperature to the turbine, T1, was kept constant for all the curves, while varying the discharge pressure P2 for each of the inlet pressures to the turbine P1 of 15, 20, 25 and 35 bar (i.e., analyzing the influence of the P1/P2 ratio, with the fluid in overheated conditions, on the overall efficiency of the cycle). The behavior of this cycle under each of the saturation conditions presented in Fig. 3 was also compared to the new conditions of overheating. Firstly, it can be inferred from Figs. 4 to 8 that the behavior is similar to that discussed for Fig. 3, i.e., higher efficiency is achieved when the inlet and discharge pressures are, respectively, the highest and the lowest (higher pressure ratio, i.e., higher Δh, and therefore more work produced). In addition, for the same pressure ratio, higher efficiencies are obtained for lower P1 (especially in the range of 25 to 40 bar); e.g., the efficiency for a P1 of 25 bar was approximately 1.5% more than at saturated conditions for a




T1 of 101ºC. For pressures below 25 bar such influence was not very significant. Figs. 4 to 8 also show that the condition of overheating causes a slight increase in efficiency compared to that achieved at saturation conditions. This occurs because the inlet temperature to the turbine has a wide range of effects on the efficiency of the system, depending on the slope of the isobaric curve in the overheated steam region of the T-s diagram. If a fluid has a significantly steeper isobaric curve in the high pressure region than in the low pressure region, the system's efficiency increases as the inlet temperature to the turbine increases; otherwise it decreases. In our case, given that R134a shows a slightly negative slope in its saturation curve (Fig. 2), a slight overheating causes a slight increase in η, whereas, when the ratio P1/P2 increases, much higher values of this efficiency are obtained and, in addition, as T1 rises, η increases more sharply. Thus, the following section 4.3 discusses how the inlet temperature to the turbine T1 influences the efficiency of the cycle at various constant P1/P2 ratios.

Figure 7. Influence of the P1/P2 ratio on the overall efficiency of the cycle for the overheated fluid at T1=70ºC.

Figure 8. Influence of the P1/P2 ratio on the overall efficiency of the cycle for the overheated fluid at T1=60ºC.

4.3. Influence of the input temperature to the turbine T1 on the efficiency of the cycle

In Fig. 9 the results on the η of the cycle obtained by increasing T1 at a constant P1/P2 ratio are presented. It is obvious that when P1/P2 = 1.5 (which is the lowest of those studied, see Fig. 9) the efficiency, η, increases with T1. However, it should be noted that η is a weak function of temperature for the fluid studied (as reported in section 4.2), i.e., overheating the inlet fluid to the turbine does not cause a significant change in η. However, much higher values of η are obtained when the P1/P2 ratio increases and, as T1 rises, η increases more sharply, as shown in Fig. 9.

Figure 9. Influence of the input temperature to the turbine T1 on the efficiency of the cycle with constant P1/P2 ratio.

4.4. Comparison of the basic ORC vs. the cycle with IHX

Finally, Fig. 10 presents the results of simulations performed with the basic ORC and with the IHX ORC (Fig. 1). The inlet pressure varies from 7 bar up to the critical pressure at four constant inlet turbine temperatures: 150°C, 120°C, 90°C and 60ºC. In this case, a pinch point of 5ºC is maintained between T3 and the output temperature of the condensation water (T8), and a temperature difference (∆T) of at least 5ºC is kept between T2 and T4 for the cycle with IHX. In the heating process, the overheating of the inlet fluid to the turbine (T1) is considered from the condition of saturated steam up to its critical temperature. In Fig. 10, the blue tendency lines and open blue symbols indicate the energy efficiencies of the simple cycle, the green tendency lines and bold green symbols indicate the energy efficiencies of the cycle with IHX, and the discontinuous red lines represent the net specific work (wne). The triangle (▲), square (■), circle (●) and rhombus (♦) symbols correspond to the analyzed temperatures of 150ºC, 120ºC, 90ºC and 60ºC, respectively.

Figure 10. Energy efficiency (η) with IHX (bold symbols) and without IHX (open symbols), and net specific work (wne) produced (discontinuous lines) vs. pressure P1 for T1=150ºC (▲), T1=120ºC (■), 90ºC (●) and 60ºC (♦).
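As an illustration of this comparison (again a minimal sketch under the stated assumptions, not the authors' HYSYS model), the basic-cycle function given in Section 3 can be extended with an energy balance on the exchanger, taking the vapour to leave the IHX at T4 + 5ºC:

# Minimal sketch of the cycle with IHX (self-contained restatement of the
# constants used in the basic-cycle sketch of Section 3).
from CoolProp.CoolProp import PropsSI

FLUID = "R134a"
ETA_PUMP = ETA_TURBINE = 0.75

def ihx_cycle_efficiency(T1_C, P1_bar, T3_C=25.0, dT_ihx=5.0):
    """Efficiency of the cycle with IHX: the expanded vapour (state 2) preheats
    the pumped liquid (state 4) and leaves the exchanger at T4 + dT_ihx."""
    T1, T3 = T1_C + 273.15, T3_C + 273.15
    P1 = P1_bar * 1e5
    P2 = PropsSI("P", "T", T3, "Q", 0, FLUID)
    h3 = PropsSI("H", "T", T3, "Q", 0, FLUID)
    s3 = PropsSI("S", "T", T3, "Q", 0, FLUID)
    h4 = h3 + (PropsSI("H", "P", P1, "S", s3, FLUID) - h3) / ETA_PUMP
    T4 = PropsSI("T", "P", P1, "H", h4, FLUID)
    h1 = PropsSI("H", "P", P1, "T", T1, FLUID)
    s1 = PropsSI("S", "P", P1, "T", T1, FLUID)
    h2 = h1 - ETA_TURBINE * (h1 - PropsSI("H", "P", P2, "S", s1, FLUID))
    # Pinch assumed on the vapour side: the vapour leaves at T2IHX = T4 + dT_ihx;
    # the recovered heat raises the liquid from h4 to h4IHX (right form of eq. 4).
    h2ihx = PropsSI("H", "P", P2, "T", T4 + dT_ihx, FLUID)
    h4ihx = h4 + (h2 - h2ihx)      # feasibility checks omitted for brevity
    return ((h1 - h2) - (h4 - h3)) / (h1 - h4ihx)

print(f"efficiency with IHX = {ihx_cycle_efficiency(90.0, 20.0):.3f}")

Because the denominator h1 − h4IHX is smaller than h1 − h4 while the net work is unchanged, the IHX raises η, which is the behavior discussed below for Fig. 10.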




Fig. 10 shows that both η and wne increase with the inlet pressure P1 at all four temperatures. There are several reasons for this: first, the input temperature and the discharge pressure of the turbine are fixed; second, we assume that the work produced by this device is that given by equation (2); third, if the Mollier diagram of this fluid is analyzed, it can be seen that, for a constant temperature, Δh rises when the pressure increases, and Ẇt rises along with it, which ensures the increase of both η and wne of the cycle. It is interesting to note that, for the same inlet pressure to the turbine P1, there is no appreciable increase in η with increasing temperature in the case of the basic cycle. For the cycle with IHX, however, the difference is more noticeable, both within the cycle with IHX at different temperatures and compared to the basic cycle at the same T1, except, in the latter case, for temperatures of 60ºC and 90ºC, for which the inclusion of an IHX is barely noticeable. This occurs because, as mentioned in the preceding sections, in the basic cycle the overheating of the fluid does not cause an appreciable increase in η, except for pressures close to the maximum allowed by the T1 studied (i.e., at higher pressure ratios), where this effect starts to be noticeable. As an illustrative example, considering an inlet pressure to the turbine of 20 bar, an increase of the inlet temperature to the turbine from 90ºC to 150ºC caused a rise in the efficiency of 2.5% for the cycle with IHX. For this same pressure and an inlet temperature to the turbine of 90ºC, the inclusion of the IHX caused an increase in the efficiency of approximately 0.8% with respect to the basic cycle. On the other hand, for the same inlet pressure to the turbine P1, there is an appreciable increase in η with increasing temperature in the case of the cycle with IHX, due to the recovery of energy. As a consequence, the amount of energy required from the heat source decreases and the overall cycle efficiency increases, as can be seen in eq. (1).

5. Conclusions

Based on the simulations carried out, the proposed system's efficiency is a weak function of temperature, because overheating the inlet fluid to the turbine does not cause a significant change in the overall efficiency of the cycle. However, when the pressure ratio in the turbine increases (obviously limited by the temperature of the heat source), much larger values of efficiency are obtained (≈5% more as a maximum for the same temperature T1) and, as the inlet temperature to the turbine rises, the efficiency increases more sharply (≈1% more as a maximum for the same pressure ratio P1/P2). Furthermore, adding an internal heat exchanger to the cycle significantly increases the efficiency values obtained (≈3% more as a maximum). Moreover, considering the energy analysis carried out, it can be concluded that the ORC with R134a as working fluid is suitable for the production of useful energy from low enthalpy heat, as it is possible to operate in relatively low temperature ranges. In addition, many of the aspects taken

into account nowadays in these processes, such as environmental issues, safety, and the efficient and rational use of energy, have been satisfied.

Acknowledgements

The authors acknowledge the invaluable comments of Eng. Cecilia Sanz M. from CARTIF. Fredy Vélez thanks the scholarship awarded by the "Programa Iberoamericano de Ciencia y Tecnología para el Desarrollo", CYTED, the CARTIF Technological Center and the University of Valladolid in order to carry out his doctoral thesis, on which this paper is based.

References

[1] Realpe, A., Diaz-Granados, J.A. and Acevedo, M.T., Electricity generation and wind potential assessment in regions of Colombia. Dyna, vol. 171, pp. 116-122, 2012.
[2] Vélez, F., Segovia, J., Martín, M.C., Antolín, G., Chejne, F. and Quijano, A., A technical, economical and market review of organic Rankine cycles for the conversion of low-grade heat for power generation. Renewable & Sustainable Energy Reviews, vol. 16, pp. 4175-4189, 2012.
[3] Vélez, F., Chejne, F., Antolín, G. and Quijano, A., Theoretical analysis of a transcritical power cycle for power generation from waste energy at low temperature heat source. Energy Conversion and Management, vol. 60, pp. 188-195, 2012.
[4] Saleh, B., Koglbauer, G., Wendland, M. and Fischer, J., Working fluids for low temperature organic Rankine cycles. Energy, vol. 32, pp. 1210-1221, 2007.
[5] Quoilin, S., Declaye, S., Tchanche, B.F. and Lemort, V., Thermo-economic optimization of waste heat recovery organic Rankine cycles. Applied Thermal Engineering, vol. 31, pp. 2885-2893, 2011.
[6] Tchanche, B.F., Papadakis, G., Lambrinos, G. and Frangoudakis, A., Fluid selection for a low-temperature solar organic Rankine cycle. Applied Thermal Engineering, vol. 29, pp. 2468-2476, 2009.
[7] Quoilin, S., Aumann, R., Grill, A., Schuster, A., Lemort, V. and Spliethoff, H., Dynamic modeling and optimal control strategy of waste heat recovery organic Rankine cycles. Applied Energy, vol. 88, pp. 2183-2190, 2011.
[8] Hung, T.C., Shai, T.Y. and Wang, S.K., A review of organic Rankine cycles (ORCs) for the recovery of low-grade waste heat. Energy, vol. 22 (7), pp. 661-667, 1997.
[9] Chen, H., Goswami, Y. and Stefanakos, E., A review of thermodynamic cycles and working fluids for the conversion of low-grade heat. Renewable & Sustainable Energy Reviews, vol. 14, pp. 3059-3067, 2010.
[10] Vélez, F., Segovia, J., Martín, M.C., Antolín, G., Chejne, F. and Quijano, A., Comparative study of working fluids for a Rankine cycle operating at low temperature. Fuel Processing Technology, vol. 103, pp. 71-77, 2012.
[11] U.S. Environmental Protection Agency. Class I Ozone Depleting Substances. [Online]. [date of reference: March 11th of 2013]. Available at: www.epa.gov/ozone/science/ods/classone.html.
[12] Roy, J.P., Mishra, M.K. and Misra, A., Parametric optimization and performance analysis of a waste heat recovery system using organic Rankine cycle. Energy, vol. 35, pp. 5049-5062, 2010.
[13] Roy, J.P., Mishra, M.K. and Misra, A., Parametric optimization and performance analysis of a regenerative organic Rankine cycle using low-grade waste heat for power generation. International Journal of Green Energy, vol. 8 (2), pp. 173-196, 2011.



[14] Manolakos, D., Papadakis, G., Mohamed, E., Kyritsis, S. and Bouzianas, K., Design of an autonomous low-temperature solar Rankine cycle system for reverse osmosis desalination. Desalination, vol. 183, pp. 73-80, 2005.
[15] Manolakos, D., Papadakis, G., Kyritsis, S. and Bouzianas, K., Experimental evaluation of an autonomous low-temperature solar Rankine cycle system for reverse osmosis desalination. Desalination, vol. 203, pp. 366-374, 2007.
[16] Manolakos, D., Kosmadakis, G., Kyritsis, S. and Papadakis, G., On site experimental evaluation of a low-temperature solar organic Rankine cycle system for RO desalination. Solar Energy, vol. 83, pp. 646-656, 2009.
[17] Manolakos, D., Kosmadakis, G., Kyritsis, S. and Papadakis, G., Identification of behaviour and evaluation of performance of small scale, low-temperature organic Rankine cycle system coupled with a RO desalination unit. Energy, vol. 34, pp. 767-774, 2009.
[18] Bruno, J.C., López-Villada, J., Letelier, E., Romera, S. and Coronas, A., Modelling and optimisation of solar organic Rankine cycle engines for reverse osmosis desalination. Applied Thermal Engineering, vol. 28, pp. 2212-2226, 2008.
[19] Delgado-Torres, A. and García-Rodríguez, L., Analysis and optimization of the low-temperature solar organic Rankine cycle (ORC). Energy Conversion and Management, vol. 51, pp. 2846-2856, 2010.
[20] Tchanche, B.F., Lambrinos, G., Frangoudakis, A. and Papadakis, G., Exergy analysis of micro-organic Rankine power cycles for a small scale solar driven reverse osmosis desalination system. Applied Energy, vol. 87, pp. 1295-1306, 2010.
[21] Schuster, A. and Karl, J., Simulation of an innovative stand-alone solar desalination system using an organic Rankine cycle. Int. J. of Thermodynamics, vol. 10 (4), pp. 155-163, 2007.
[22] Karellas, S., Terzis, K. and Manolakos, D., Investigation of an autonomous hybrid solar thermal ORC-PV RO desalination system. The Chalki island case. Renewable Energy, vol. 36, pp. 583-590, 2011.
[23] Kosmadakis, G., Manolakos, D. and Papakakis, G., Parametric theoretical study of a two-stage solar organic Rankine cycle for RO desalination. Renewable Energy, vol. 35, pp. 989-996, 2010.
[24] Franco, A. and Villani, M., Optimal design of binary cycle power plants for water-dominated, medium-temperature geothermal fields. Geothermics, vol. 38, pp. 379-391, 2009.
[25] Aneke, M., Agnew, B. and Underwood, C., Performance analysis of the Chena binary geothermal power plant. Applied Thermal Engineering, vol. 31, pp. 1825-1832, 2011.
[26] Astolfi, M., Xodo, L., Romano, M. and Macchi, E., Technical and economical analysis of a solar-geothermal hybrid plant based on an organic Rankine cycle. Geothermics, vol. 40, pp. 58-68, 2011.
[27] Vaja, I. and Gambarotta, A., Internal combustion engine (ICE) bottoming with organic Rankine cycles (ORCs). Energy, vol. 34, pp. 767-774, 2009.
[28] Schuster, A., Karellas, S., Kakaras, E. and Spliethoff, H., Energetic and economic investigation of organic Rankine cycle applications. Applied Thermal Engineering, vol. 29, pp. 1809-1817, 2009.

[29] Chen, H., Goswami, Y., Rahman, M. and Stefanakos, E., A supercritical Rankine cycle using zeotropic mixture working fluids for the conversion of low-grade heat into power. Energy, vol. 36, pp. 549-555, 2011.
[30] Mikielewicz, D. and Mikielewicz, J., A thermodynamic criterion for selection of working fluid for subcritical and supercritical domestic micro CHP. Applied Thermal Engineering, vol. 30, pp. 2357-2362, 2010.
[31] Lakew, A. and Bolland, O., Working fluids for low temperature heat source. Applied Thermal Engineering, vol. 30, pp. 1262-1268, 2010.
[32] Lemmon, E.W., Huber, M.L. and McLinden, M.O., Reference fluid thermodynamic and transport properties (REFPROP). NIST Standard Reference Database 23, Version 8.0, 2007.

Fredy Vélez holds a PhD in Energetic and Fluid-Mechanical Engineering from the University of Valladolid (2011), an MSc in Engineering with emphasis in Chemical Engineering (2007) and a degree in Chemical Engineering (2004), the latter two from the Universidad Nacional de Colombia, where he was also an associate professor at the Energy and Process School. He started working as a researcher in the field of renewable energies in 2004. Since 2007 he has been actively working in the Energy Department of CARTIF on RTD projects about energy efficiency and the integration of renewable energy (solar, geothermal, biomass) for the production of heating, cooling and/or electricity in buildings (to achieve zero-emission buildings and a near-zero energy balance) and in industrial processes. He has experience in national and international (European and Latin American) projects, and has published many papers in peer-reviewed and technical journals, as well as conference contributions, on these themes. ORCID 0000-0003-0764-1321.

Farid Chejne was a post-doctoral researcher at the Université Libre de Bruxelles (1997). He earned a PhD in Energy Systems at the Technical University of Madrid (1991), a BSc in Physics (1989) from the University of Antioquia and a degree in Mechanical Engineering from the Universidad Pontificia Bolivariana (1983). From 1983 to 2002 he worked as a full professor in the Thermodynamics Department of the Universidad Pontificia Bolivariana, moving to the Universidad Nacional de Colombia in 2002, where he currently continues his work as a full professor. His research interests include: modeling and simulation of processes, analysis of energy systems, advanced thermodynamics, energy resources, optimization and rational use of energy, new technologies, combustion and gasification, and technology management. Furthermore, he has carried out projects for different companies within the power and industrial sectors, has participated in multiple national and international research projects, contributing to a large number of publications (papers in peer-reviewed and technical journals, conference contributions and books), has organized several international workshops, scientific sections and panels at international conferences, and has supervised various theses.

Ana Quijano holds a BSc in Environmental Science (2003) and an MSc in Research in Engineering for Agroforestry Development (2010), from the University of Salamanca and the University of Valladolid, respectively. She worked in the Biofuels Area of CARTIF during 2005-2012, leading and participating in several national and European R&D projects on technologies for the energy valorisation of biomass. She was nominated as a member of the COST Action "Innovative applications of regenerated wood cellulose fibres" in 2012 for her trajectory in the field of biorefineries. In 2013, she joined the Energy Area, specializing in the execution of FP7 projects in the field of efficiency and renewable energy generation systems in buildings.
Furthermore, she is co-author of 4 articles in scientific journals, 6 articles in dissemination journals and 12 contributions in conferences.




DYNA http://dyna.medellin.unal.edu.co/

Hydro-meteorological data analysis using OLAP techniques

Análisis de datos hidroclimatológicos usando técnicas OLAP

Néstor Darío Duque-Méndez a, Mauricio Orozco-Alzate b & Jorge Julián Vélez c

a Departamento de Informática y Computación, Universidad Nacional de Colombia Sede Manizales, Colombia. ndduqueme@unal.edu.co
b Departamento de Informática y Computación, Universidad Nacional de Colombia Sede Manizales, Colombia. morozcoa@unal.edu.co
c Departamento de Ingeniería Civil, Universidad Nacional de Colombia Sede Manizales, Colombia. jjvelezu@unal.edu.co

Received: April 8th, 2013. Received in revised form: November 1st, 2013. Accepted: November 25th, 2013.

Abstract
The wealth of data recorded by meteorological networks provides a great opportunity for analyzing and discovering knowledge. However, efficient data storage and its effective handling are prerequisites for meteorological and hydro-climatological research and require strategies for capturing, delivering, storing and processing that guarantee the quality and consistency of the data. The purpose of this work is to develop a conceptual model for a data warehouse in a star schema that allows the structured storage and multidimensional analysis of historical hydro-climatological data. Information registered by two telemetered networks of hydro-meteorological stations has been collected in the city of Manizales, Colombia. Based on the designed schema, the data warehouse exploits the data (in some cases extending back more than 50 years) in order to apply online analytical processing (OLAP) techniques and discover potentially high-value hidden relationships, in a region particularly affected by climate change and climate variability phenomena. The core contribution of this paper encompasses the exploration of alternatives to the traditional storage and analysis methods and the presentation of a number of cases showing the effectiveness of the proposed model in the evaluation of data quality and the visualization of relationships among diverse variables at different scales and for specific cases.
Keywords: data mining; OLAP techniques; hydro-climatological data analysis.

Resumen
La riqueza de los datos registrados por las redes de estaciones hidrometeorológicas ofrece una gran oportunidad para analizar, conocer y entender mejor las variables hidroclimatológicas. Por lo tanto, el almacenamiento eficiente de los datos y su tratamiento eficaz son un requisito previo para la investigación meteorológica e hidrológica que requiere de estrategias para que la captación, transmisión, almacenamiento y procesamiento de datos que garanticen su calidad y consistencia. El propósito de este trabajo es desarrollar un modelo conceptual para una bodega de datos diseñada en un esquema en estrella que permita el almacenamiento estructurado y el análisis multidimensional de series históricas de datos hidroclimatológicos. La información registrada por las redes telemétricas de estaciones hidrometeorológicas existentes en Manizales y en el Departamento de Caldas son la fuente de información. El esquema de bodega de datos propuesto aprovecha los datos disponibles (en algunos casos más de 50 años) con el fin de aplicar procesamiento analítico en línea (OLAP) para analizar la calidad de la información y descubrir relaciones ocultas entre las variables, en una región particularmente afectada por el cambio climático y especialmente por fenómenos de variabilidad climática. La principal contribución de este documento abarca la exploración de alternativas a los métodos tradicionales de almacenamiento y análisis de información y la presentación de un número de casos que demuestran la eficacia del modelo propuesto en la evaluación de la calidad de los datos y de la visualización de las relaciones entre las diversas variables a diferentes escalas y para casos específicos.
Palabras clave: minería de datos; técnicas OLAP; análisis de información hidro-climatológica.

1. Introduction

The large amount of data collected at the hydro-meteorological stations has previously only been used to describe the current or historical conditions of the variables. Nowadays, however, exploiting these physical measurements is an issue of high interest [1,2]. In order to profit from this high volume of data, coming from different hydro-climatological variables registered in real time at small sampling periods, it is necessary to design and implement suitable storage schemata and to apply tools that exploit the large amount of data, get information from

them, discover hidden knowledge through their relationships and trends and, hopefully, allow the forecasting of future behaviors. Over the last decades, multiple applications of artificial intelligence in hydrology and water resources research have been reported. These approaches include different techniques such as autoregressive moving-average (ARMA) models [3], genetic algorithms [4], adaptive neural-based fuzzy inference system (ANFIS) techniques [5], artificial neural network (ANN) approaches [6], genetic programming (GP) models [7], support vector machine (SVM) methods [8] and, most recently, data mining

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185). pp. 160-167. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online


Duque-MĂŠndez et al / DYNA 81 (185), pp. 160-167. June, 2014.

techniques such as On-Line Analytical Processing (OLAP) and data warehouses [9], which allow users to organize and query large hydrological data collections. Handling, analyzing and preparing climate and hydrological data, as well as producing information, are usually the most tedious stages of water resources management, but also the most important ones, because they reduce the uncertainty associated with the data. This paper aims to tackle this problem through an appropriate data management tool. Domínguez et al. [10] assessed the importance for society of being informed about the weather; in addition, the temperature changes experienced in Coahuila state (Mexico) are shown in a graphical, measurable and detailed way by using a data warehouse containing information about temperature, wind, hours and dates, which allows information to be analyzed by means of a tool called Power Play Transformer. Ma et al. [9] proposed the design and implementation of a meteorological data warehouse, along with a report schema based on Microsoft SQL Server; their data warehouse is aimed at meteorological analysis and research, and it uses online analytical processing (OLAP) and multidimensional reports to analyze the stored data, which is available online. Tan [11] described a data warehouse for climate forecasting and developed four analysis schemata based on multidimensional analysis, determining valid combinations between facts and dimensions. Bartok et al. [12] described a study on parameterized models and methods for the detection and forecasting of significant meteorological phenomena, including, firstly, methods for the integration of the distributed meteorological data required for the functioning of the prediction and model formation and, secondly, data extraction aimed at achieving a fast and efficient prediction of the phenomena, even at random. Williams and Cole [13] presented monitored data arranged in a consistently formatted database, from which a model could learn probabilistic relationships between model elements, and demonstrated the use of Bayesian networks for data mining and decision-making. Cortez and Morais [14] explored data mining techniques, supplied with meteorological data, to predict forest fires in areas of Portugal. Meanwhile, in Lemus et al. [15], a knowledge discovery in databases (KDD) process is presented in which attribute selection and regression tasks are performed in order to analyze dependencies between meteorological parameters and estimate secondary ones. In a previous work, Duque et al. [16] applied data mining techniques to historical data systematically collected by a network of telemetered hydro-meteorological stations; as a result, the authors obtained a first approach to the understanding of trends in the behavior of some variables at the study site. Chen et al. [17] collected data in a comprehensive database built by a water information process in China; taking the decision support of water as an example, they discuss how to build a data warehouse system based on a comprehensive database and design a general structure for the data warehouse. Wang et al. [18] have performed a comparison of several artificial

intelligence methods for the monthly forecasting of discharge time series. From the above review it can be seen that the central concerns of these articles are the need to define schemas, models and mechanisms for organizing and storing the physical measurements obtained and, besides, the techniques for data processing, oriented not only to descriptive results but also to the goal of discovering hidden knowledge. The proposed data management tool allows data assimilation in order to understand and improve inputs and outputs in geoscience models. The data analysis typically performed depends on the type of information. In the case of discharge time series, homogeneity tests are performed; for precipitation time series, double mass analyses are proposed, as well as nonparametric statistical tests for independence analysis. In the case of temperature, verification analysis is usually performed against other variables, such as temperature vs. precipitation plots, also called climographs [19]. For climate variability studies, it is also interesting to understand the day/night variability patterns and cycles. This approach aims to establish trends in the behavior of such variables and, insofar as possible, to predict them. The ultimate goal is building a data warehouse that allows the organization of historical information, the updating of data with the measurements gathered at the stations and the application of OLAP techniques, allowing the generation of multidimensional reports and meteorological forecasts. The remaining part of this paper is organized as follows. Section 2 presents the distribution of the deployed stations and describes the data collected. Related concepts and the description of the proposed model are presented in Section 3. Results are presented in the subsequent section and, finally, conclusions and future work are discussed in the last section.

2. Monitoring system and captured data

Currently, data from two telemetered networks of hydrometric and meteorological stations are available in Manizales and Caldas. Such stations transmit data, via radio and in real time, to a receiving base station located in Manizales city, at two specific places: the Corpocaldas headquarters and the Instituto de Estudios Ambientales (IDEA) of Universidad Nacional de Colombia - Sede Manizales. Table 1 details the sampling periods and recorded variables; a number of them are recorded at a rate of 1 sample/5 min. A more detailed description of this process can be found in Mejia and Botero [20] and Vélez et al. [21]. Due to the difficult climatic conditions in the monitoring zones, several setbacks have interrupted continuous data acquisition. In 2009-2010, a total rehabilitation process for all the stations was started. Currently, IDEA is installing new stations at strategic places, and agreements for the preventive maintenance and permanent surveillance of the data emission at the stations have been signed. Such endeavors are aimed at detecting failures at the stations in such a way that they can be repaired in time. Each station




collects, in real time, information about temperature, relative humidity, rainfall, wind direction and speed, and solar radiation, along with time stamps including the hour and date of the last transmission, which can be constantly updated. These data and a Meteorological Bulletin can be accessed through the IDEA web site (http://idea.manizales.unal.edu.co/), where daily reports for each station, as well as monthly and annual ones, can be found.

Table 1. Variables and periods obtained from the stations, where T: temperature, RH: relative humidity, Ppt: precipitation, SR: solar radiation, SB: solar brightness, WDV: wind direction and velocity, BP: barometric pressure, FL: flow level.

Station       Altitude (m.a.s.l.)   Variables                       Period
Agronomia     2150                  T, RH, SB, Ppt                  1956-2010
Cenicafé      1320                  T, RH, SB, Ppt                  1942-2010
Almacafé      3580                  T, RH, SB, Ppt                  2000-2010
Naranjal      1200                  T, RH, SB, Ppt                  1956-2010
Santágueda    1010                  T, RH, SB, Ppt                  1964-2010
Montevideo    1370                  FL                              1960-2010
Municipal     1750                  FL                              2006-2008
Sancancio     2000                  FL                              1979-2010
Posgrados     2180                  T, RH, WDV, SB, SR, BP, Ppt     1942-2010
Emas          2060                  T, RH, WDV, SB, SR, BP, Ppt     1942-2010

The objective of this paper is to build an integrated system that includes a data warehouse that allows historical information to be organized, updates the data with the measurements gathered at the stations, and applies KDD techniques in order to obtain underlying knowledge from the data.

3. KDD process for the hydro-climatological stations

Knowledge discovery in databases (KDD) is a field of computer science that tries to exploit the overwhelming amount of available information, extracting hidden knowledge that could assist humans in carrying out tasks in an efficient and satisfactory way. KDD can be defined as a non-trivial process to identify valid, novel, potentially useful and, ultimately, understandable patterns in data [22]. It is a process that covers different stages, as shown in Fig. 1. As seen in Fig. 1, building a data warehouse implies taking decisions about the architecture to be implemented and the previous Extract, Transform, Load (ETL) process from the original (operational) data to the data warehouse. ETL operations are specific to the data set to be considered.

Figure 1. Phases of the KDD process with multidimensional models of data warehouses.

There are two modes of storing data in data warehouses: relational databases and multidimensional databases. Efficient data storage and correct data manipulation are prerequisites for success in meteorology and climatology [23]. The core of OLAP is multidimensional analysis, where a dimension can be any aspect of the data and the main purpose is to explore the data in order to find relationships or patterns instead of just looking at their isolated behavior. In the business field, data warehouses have shown great benefits in exploiting large volumes of data, so the application of data warehouse technologies in the domain of hydro-climatology is also expected to be advantageous, given the manipulation of large amounts of data from different origins and the possibility of applying diverse data analysis techniques. In this work, we will show the benefits of storing hydro-meteorological data in a multidimensional model and an application of OLAP techniques, allowing the generation of multidimensional reports and, therefore, obtaining relevant information for historical analysis in real time and for meteorological forecasts.
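As a toy illustration of this multidimensional view (our own example with made-up rainfall values, not data from the networks described above), a pivot table in pandas performs the same dimension-by-dimension aggregation that OLAP tools provide:

# Toy example: roll up a daily rainfall fact along the time dimension (month)
# and the station dimension. Values are made up for illustration only.
import pandas as pd

df = pd.DataFrame({
    "station": ["Agronomia", "Agronomia", "Posgrados", "Posgrados"],
    "date": pd.to_datetime(["2010-01-05", "2010-02-10",
                            "2010-01-05", "2010-02-10"]),
    "rainfall": [12.0, 3.5, 8.2, 0.0],
})
df["month"] = df["date"].dt.month   # a coarser level of the time dimension

# Month x station view of total rainfall (a roll-up of the daily facts)
print(pd.pivot_table(df, values="rainfall", index="month",
                     columns="station", aggfunc="sum"))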

3.1. Multidimensional star model

The multidimensional model in data warehouses is a logical design technique that seeks to present data in an intuitive form and with high performance. Each multidimensional model is composed of a table with multiple foreign keys, called the fact table, and a set of smaller tables called dimension tables. Attributes of the dimension tables determine search restrictions in queries of the data warehouse and are typically used as row headings in the results of SQL queries. There are two approaches to the design: the star schema and the snowflake schema [24]. In the star schema, there is a fact table containing foreign keys to each of the dimension tables of the model, and each dimension table is directly related to the fact table. A simple star structure has only one fact table. Facts are measurements of the variables under consideration and are often associated with numerical values that support calculations. Dimensions, or text-type attributes that describe things, are used to define restrictions and serve as headings in the reports. By adding restrictions to a search, a drill-down is carried out; that is, a higher level of detail is achieved. An efficient drill-down mixes attributes from


Figure 2. Design of the data warehouse - star schema.

An efficient drill down mixes attributes from the different dimensions in order to build robust reports. Keys in data warehouses must be surrogate keys; that is, they carry no meaning in the source system, and keys from the original data sources are not reused. Granularity represents the level of detail of the data units in the warehouse [24]. Fig. 2 graphically shows the design of the data warehouse, with the components of the single fact table as well as those of the dimensions involved. The core of the model is the fact table, on which the dimensions depend. In this case study, the attributes of the fact table are: station_sk, date_sk, rainfall, temperature_min, temperature_max, temperature_med, brightness, hr, average flow, average level, wind speed and direction, barometric pressure, evapotranspiration and average solar radiation. The basic dimensions are station and time. There are several levels of granularity for the time dimension: year, semester, trimester, month, day, lustrum and decade; in this way, multidimensional views can be extended. For the station dimension, an identifier and the location (municipality and coordinates) are stored, and the data can be grouped into specific areas, regions, latitude, longitude, etc.

3.2. ETL process

Due to the amount of data, the different schemata used to store them and the different periods to which they belong, the presence of noise and of inconsistent or redundant data is common; in our particular case, a disturbing factor was that the data acquired at the stations had been organized in spreadsheets designed for better visual presentation. In order to store the data in the data warehouse, preprocessing techniques must be applied to the data sets; this process is known as ETL. The objective of ETL is to obtain data sets such that, when OLAP techniques are applied, the results represent reality and relevant views are delivered. It is implemented to reinforce the quality and consistency of the data sets and to adapt them to the formats required for processing and analysis.
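As an illustration, the star schema of Fig. 2 can be expressed in MySQL roughly as follows (a minimal sketch; the paper does not list the actual DDL, so names other than the fact-table attributes cited above, such as fact_measurement, are hypothetical):

-- Dimension tables (surrogate keys only; operational keys are not reused)
CREATE TABLE station_dim (
    station_sk   INT PRIMARY KEY,   -- surrogate key
    municipality VARCHAR(60),
    latitude     DECIMAL(9,6),
    longitude    DECIMAL(9,6)
);
CREATE TABLE date_dim (
    date_sk   INT PRIMARY KEY,      -- surrogate key
    year      SMALLINT,
    semester  TINYINT,
    trimester TINYINT,
    month     TINYINT,
    day       TINYINT,
    dayWeek   TINYINT,
    nameDay   VARCHAR(10),
    nameMonth VARCHAR(10),
    lustrum   SMALLINT,
    decade    SMALLINT
);
-- Fact table: one foreign key per dimension plus the measures
CREATE TABLE fact_measurement (
    station_sk      INT NOT NULL,
    date_sk         INT NOT NULL,
    rainfall        DECIMAL(7,2),
    temperature_min DECIMAL(5,2),
    temperature_max DECIMAL(5,2),
    temperature_med DECIMAL(5,2),
    brightness      DECIMAL(5,2),   -- solar brightness (hours)
    average_flow    DECIMAL(8,2),
    average_level   DECIMAL(8,2),
    -- further measures (hr, wind speed and direction, barometric pressure, ...) omitted
    FOREIGN KEY (station_sk) REFERENCES station_dim (station_sk),
    FOREIGN KEY (date_sk) REFERENCES date_dim (date_sk)
);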

The source data for this study exhibited drawbacks regarding organization and formats, since they come from stations that have been operating for many years. The data had been saved in different files and in templates adequate for human-based processing but inconvenient for automatic processing. This occurred in particular with the dates, because the climate monitoring network has had different administrators. On the other hand, over that long period, transmission conditions changed and, in some cases, there were no values for several of the variables under observation.

The free software Talend Open Studio was used to perform some of the extraction and load steps. Data in inappropriate forms were manually debugged in order to match them to the reported variables. Missing data were treated as NULL values, in order to guarantee that they were not considered in sum operations. Inconsistent data, such as negative flows or excessive temperatures, were either discarded or replaced by averages, according to the opinion of the experts. In order to obtain the date dimension, we generated a table including all the dates, from the oldest to the most recent one, within the given date intervals (e.g., day by day). The following queries were applied to generate the values of the other fields having different granularity:

1. update date_dim set trimester=1 where month between 1 and 3;
2. update date_dim set nameDay="Sunday" where dayWeek=1;
3. update date_dim set nameMonth="December" where month=12;
4. update date_dim set lustrum=13 where year between 2010 and 2014;

The process of migrating the original data to the fact table is not trivial and requires that the respective dates be maintained and the measured values be updated according to the dimensions involved. This was achieved by using specialized SQL queries. At the end, the model shown in Fig. 2 was fully populated and ready for the application of OLAP techniques.

4. Application of OLAP tools

OLAP techniques have been widely used in finance, sales and marketing; nonetheless, their application in scientific studies is relatively recent [25]. Consequently, the proposal presented here can be considered a novelty. The application of multidimensional analysis techniques from different approaches oriented to different tasks, in addition to validation with real data, is one of the contributions of this work. Following the proposal in Ma et al. [9], different multidimensional analyses were carried out, obtaining valuable results not just for assessing data quality but also for evaluating relationships among the variables. Some examples are given below:
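Reports like the ones discussed below reduce, under the schema sketched above, to aggregate queries over the fact table; for instance, a roll-up of flow and level by station and lustrum (again a sketch with hypothetical names, not the authors' exact code):

-- Average flow and level per station and lustrum (roll-up on the time dimension)
SELECT s.station_sk, d.lustrum,
       AVG(f.average_flow)  AS avg_flow,
       AVG(f.average_level) AS avg_level
FROM fact_measurement f
JOIN station_dim s ON s.station_sk = f.station_sk
JOIN date_dim d    ON d.date_sk = f.date_sk
GROUP BY s.station_sk, d.lustrum;
-- Drilling down is just adding finer attributes (d.year, d.month, ...) to the
-- GROUP BY; slicing is adding WHERE restrictions on dimension attributes.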


4.1. Data quality

Fig. 3 readily reveals that temperature data are missing at Santágueda station during the fourth trimester of 2008; it also shows the good behavior of the variables solar brightness and precipitation. A similar situation is observed in Fig. 4, where there is a significant decrease in the average temperature during 2008, which demands a revision to determine whether the data are erroneous or the change is due to a climatic phenomenon.

Figure 3. Detection of missing data. Max. daily precipitation (mm), mean daily temperature (°C) and mean daily solar brightness (hours).

Figure 4. Climograph for Santágueda station (1981-2010). Mean daily precipitation (mm).

It is worth mentioning that this initial quality analysis of the information must be complemented with rigorous statistical tests that demonstrate correlations, changes in the average values, trends and data consistency, which are also available in the model.

4.2. Relationships

Fig. 5 exhibits the behavior of the average flow and average level for three gauge stations, with data for 50, 3 and 30 years for the Montevideo, Municipal and Sancancio gauge stations, respectively. In this figure, the existing correlation between level and flow should be visible; thereby, a problem with the calibration of the Montevideo station is evidenced, indicating that experts must check and correct this situation. Something similar happens with the other stations in the network. On the other hand, Fig. 6 allows relationships and trends of flow and level during the last 15 lustra to be appreciated for three stations. Temporal aggregation can be performed easily within the model, as shown in Fig. 6, but it can affect the results and mask discontinuities and errors because of the aggregation process.

Figure 5. Relationship between average discharge (line) and average level (bars) at Montevideo, Municipal and Sancancio gauge stations.

Figure 6. Relationship between average flow and level during 15 lustra.

4.3. Multiscale analysis

Figure 7. Solar radiation and barometric pressure at Posgrados Station. Date: 2010/1/1, Interval: 5 Minutes.


Figure 10. Daily Temperature for Station/Time.

Figure 8. Cumulative precipitation at a year time scale from 2002-2010.

Multiple possibilities are offered by the model through the different levels of granularity, which allow analysis at different time scales, ranging from every five minutes to every five years. All these advantages are available to users by just making a few selections. Fig. 7 is an example of data obtained for a single day, with measurements every five minutes for two different variables. Fig. 7 captures the information related to the day/night cycles observed in climatological data, which can be exploited by researchers. Moreover, the model allows cumulative values to be obtained from instantaneous data. Fig. 8 shows the behavior of the cumulative rainfall for two stations in the period 2002-2010; this so-called double mass curve reflects the continuity of the registered data and the relationship between rain gauges. In order to analyze larger time periods, it is enough to group by larger time units, as shown in Fig. 9, where an increment in rainfall, mainly caused by the La Niña phenomenon of 1999-2001, can be observed. Therefore, climate variability analyses can be carried out satisfactorily.

4.4. Variability analysis

The possibilities offered to users and researchers who, with a few actions, can change the variables, the time scale, the stations and the visual display of the data are an added value that turns this proposal into an important tool, not just for analysis at a given level of detail but also

Figure 9. Average rainfall (mm) in the period of the ENSO phenomena (1997-2002).

for the application of summary operations over the stored measurements. Fig. 10 presents a mixture of results obtained by just changing the selection of the analysis parameters, demonstrating the versatility of the proposed model.

4.5. Trends

Fig. 11 registers, on a monthly basis, averaged values of temperature and precipitation at Cenicafé station during the last few years. A slight incremental trend in the temperature can be seen. The above-mentioned examples are just a sample of the possibilities offered by the implemented model. Practical results are already in use and are a valuable tool for data cleaning and consistency assessment. The facilities included in the proposed model allow researchers to interact in an easy way and obtain immediate results. Operations such as roll up, slice, dice and rotation (pivot) provide great versatility in usage.

5. Conclusions

The existence of a large volume of hydro-climatological data, with registers taken during many years, is not, by itself, a guarantee of obtaining valuable results. For such a purpose, storage and data analysis techniques that exploit the registers are needed in order to obtain information and knowledge.

Figure 11. Climograph trend at Cenicafé station from 1981 to 2010.


The good results obtained in the validation of the proposed model are due to the proper design of a multidimensional warehouse in star schema, with correctly defined dimensions, facts and measures, as well as to the proper application of OLAP techniques. This is reflected in the data quality assessment processes, in data aggregation for group analyses, in temporal multi-scale analysis, in the study of relationships among the measurements, and in a first approach to the underlying trends. The organization of the data in the data warehouse is, by itself, already an added value for the work of the researchers.

Automated processes are being implemented in order to update measurements coming from the stations. Data inconsistencies are currently being solved in order to get more reliable results, new stations are being installed, and the model is going to be enlarged to accommodate different dimension scales. Starting from the above-reported results, the research group has included new variables and precise geographical coordinates of the stations, which will allow spatial analysis. Climate and water resources data require exploring the quality of the available data through data mining techniques, which allows the researcher to understand not only the quality of the data itself but also relationships with other variables that may explain the over-parameterization, variable dependence and equifinality observed in geoscience conceptual models.

Acknowledgments

The authors would like to thank the financial support from "Convocatoria Nacional de Investigación y de Creación Artística de la Universidad Nacional de Colombia 2010-2012" to the "Programa de Fortalecimiento de Capacidades Conjuntas para el Procesamiento y Análisis de Información Ambiental (code 12677)". The information and data were supplied by Cenicafé, IDEA-UNAL, the environmental agency CORPOCALDAS and the Alcaldía de Manizales (OMPAD).

References

[1] Puertas, O., Carvajal, Y. and Quintero, M., Study of monthly rainfall trends in the upper and middle Cauca River basin, Colombia. DYNA, vol. 169, pp. 112-120, 2011.

[2] Hernández, Q., Espinosa, F., Saldaña, R. and Rivera, C., Assessment of wind power for electricity generation in the state of Veracruz (Mexico). DYNA, vol. 171, pp. 215-221, 2012.

[7] Whigam, P.A. and Crapper, P.F., Modelling rainfall-runoff relationships using genetic programming. Mathematical and Computer Modelling, vol. 33, pp. 707-721, 2001.

[8] Dibike, Y.B., Velickov, S., Solomatine, D. and Abbott, M.B., Model induction with support vector machines: introduction and applications. Journal of Computing in Civil Engineering, vol. 15 (3), pp. 208-216, 2001.

[9] Ma, N., Yuan, M., Bao, Y., Jin, Z. and Zhou, H., The design of meteorological data warehouse and multidimensional data report. Proceedings of the Second International Conference on Information Technology and Computer Science, pp. 280-283, 2010.

[10] Domínguez, A.J., Torres, S.S., Alba, D.M. and Silva, A.E., Medición y análisis de datos meteorológicos, utilizando bodega de datos [Measurement and analysis of meteorological data using a data warehouse]. Proceedings of Simposio de Metrología, 2008.

[11] Tan, X., Data warehousing and its potential using in weather forecast. Proceedings of the 22nd International Conference on Interactive Information Processing Systems for Meteorology, Oceanography, and Hydrology, Atlanta, GA, 2006.

[12] Bartok, J., Habala, O., Bednar, P., Gazak, M. and Hluchý, L., Data mining and integration for predicting significant meteorological phenomena. Procedia Computer Science, pp. 37-46, 2012.

[13] Williams, B.J. and Cole, B., Mining monitored data for decision-making with a Bayesian network model. Ecological Modelling, vol. 249, pp. 26-36, 2013.

[14] Cortez, P. and Morais, A., A data mining approach to predict forest fires using meteorological data. New Trends in Artificial Intelligence: Proceedings of the 13th Portuguese Conference on Artificial Intelligence (EPIA 2007), 2007.

[15] Lemus, C., Rosete, A., Turtós, L., Zerquera, R. and Morales, A., Estimación de parámetros meteorológicos secundarios aplicando minería de datos [Estimation of secondary meteorological parameters applying data mining]. Instituto Cujae, Cuba, 2009.

[16] Duque, N.D., Orozco, M. and Hincapié, L., Minería de datos para el análisis de datos meteorológicos [Data mining for the analysis of meteorological data]. Tendencias en Ingeniería de Software e Inteligencia Artificial, vol. 3, 2010.

[17] Chen, D.Q., Wang, W.Y. and Yang, H.K., Application research on data warehouse of hydrological data comprehensive analysis. Proceedings of the 3rd IEEE International Conference, vol. 9, pp. 140-143, 2010.

[18] Wang, W.C., Chau, K.W., Cheng, C.T. and Qiu, L., A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series. Journal of Hydrology, vol. 374, pp. 294-306, 2009.

[19] Oliver, J., The thermohyet diagram as a teaching aid in climatology. Journal of Geography, vol. 67 (9), pp. 554-563, 1968.

[20] Mejía, F. and Botero, B.A., Monitoreo hidrometeorológico de los glaciares del Parque Nacional Natural Los Nevados [Hydro-meteorological monitoring of the glaciers of Los Nevados National Natural Park]. In: López, C.D. and Ramírez, J. (Eds.), Glaciares, Nieves y Hielos de América Latina. Cambio Climático y Amenazas. Colección Glaciares, Nevados y Medio Ambiente. Instituto Colombiano de Geología y Minería, Bogotá, 2009.

[3] Carlson, R.F., Maccormick, A.J.A. and Watts, D.G., Application of linear random models to four annual streamflow series. Water Resources Research, vol. 6 (4), pp. 1070-1078, 1970.

[4] Wang, Q.J., The genetic algorithm and its application to calibrating conceptual rainfall-runoff models. Water Resources Research, vol. 27 (9), pp. 2467-2471, 1991.

[5] Jang, J.S.R., ANFIS: adaptive-network-based fuzzy inference systems. IEEE Transactions on Systems, Man and Cybernetics, vol. 23 (3), pp. 665-685, 1993.

[6] ASCE Task Committee, Artificial neural networks in hydrology - I: preliminary concepts. Journal of Hydrologic Engineering, ASCE, vol. 5, pp. 115-123, 2000.

[21] Vélez, J.J., Mejía, F., Pachón, A. and Vargas, D., An operative warning system of rainfall-triggered landslides at Manizales, Colombia. Proceedings of the World Water Congress and Exhibition IWA 2010, Montreal, Canada, Sept. 19-24, 2010.

[22] Hernández, J., Ramírez, M.J. and Ramírez, C., Introducción a la Minería de Datos [Introduction to Data Mining]. Pearson, Prentice Hall, Madrid, 2004.

[23] Dimri, P. and Gunwant, H., Conceptual model for developing meteorological data warehouse in Uttarakhand - a review. Journal of Information and Operations Management, vol. 3 (1), pp. 107-110, 2012.

[24] Darmawikarta, D., Dimensional Data Warehousing with MySQL: A Tutorial. BrainySoftware, 448 P., 2007.



Duque-Méndez et al / DYNA 81 (185), pp. 160-167. June, 2014. [25]Chaudhuri, S., Dayal, U. and Narasayya, V., An overview of business intelligence technology. Commun. ACM vol, 54, 8. pp. 88-98, 2011. Néstor Darío Duque-Méndez, Associate Professor from Universidad Nacional de Colombia, Manizales and head from the Research Group in Adaptive Intelligent Environments GAIA. He develops his master studies in Systems Engineering, and his PhD in Engineering from Universidad Nacional de Colombia. His PhD thesis with Cum Laude honors. Author of a number of articles in scientific journals and book chapters including topics on their research and academic work, speaker at major national and international events; head in the development process of national and international research projects, member of academic committees of a dozen national and international journals, academic review in post-graduate academic programs and special events. Hi as received some meritorious distinction for researching and teaching in the Faculty of Administration from Universidad Nacional de Colombia at Manizales. Mauricio Orozco-Alzate received his undergraduate degree in Electronic Engineering, his M.Eng. degree in Industrial Automation and his Dr.Eng.

degree in Automatics from Universidad Nacional de Colombia - Sede Manizales, in 2003, 2005 and 2008 respectively. Since August 2008, he has been with the Department of Informatics and Computing, Universidad Nacional de Colombia - Sede Manizales. His main research interests encompass pattern recognition, digital signal processing and their applications to analysis and classification of seismic, bioacoustic and hydro-meteorological signals. Jorge Julián Vélez, received the Bs. Eng in Civil Engineering in 1993, the Ph.D. degree in Water Resources Management in 2003, he worked in hydrology, hydraulics and hydro-climatological projects with emphasis on hydrology and environmental issues. His research interests include: hydrological modelling, distributed models, flood forecasting, water balance, rainfall-runoff process, GIS, flood analysis, fluvial analysis, climate change and ecohydrology. He is currently in charge of the Hydraulic Laboratory at Departamento de Ingeniería Civil of the Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia Sede Manizales.



DYNA http://dyna.medellin.unal.edu.co/

Monitoring and groundwater/gas sampling in sands densified with explosives

Monitoreo y muestreo de aguas subterráneas y gases en arenas densificadas con explosivos

Carlos A. Vega-Posada a, Edwin F. García-Aristizábal b & David G. Zapata-Medina c

a Facultad de Ingeniería, Universidad de Antioquia, Colombia. cvega@udea.edu.co
b Facultad de Ingeniería, Universidad de Antioquia, Colombia. egarcia@udea.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Colombia. dgzapata@unal.edu.co

Received: May 5th, 2013. Received in revised form: February 10th, 2014. Accepted: April 10th, 2014.

Abstract
This paper presents the results of a blast densification field study conducted at a waste disposal landfill located in South Carolina, United States, to determine the type of gases released and their in-situ concentrations in the ground after blast densification. The BAT probe system was used to collect groundwater and gas samples at the middle of the targeted layer and to measure the porewater pressure evolution during and after the detonation of the explosive charges. In addition, standard topographic surveys along the centerline of the tested zones were conducted after each blast event to quantify the effectiveness of the blast densification technique to densify loose sand deposits. The results of this study show that: a) the BAT probe system is a reliable in-situ technique to collect groundwater and gas samples before and after blasting; b) the soil mass affected by the detonation of the explosives fully liquefied over a period of 6 hours, while the in-situ vertical effective stresses returned to their initial values after about 3 days; and c) significant induced vertical strains were observed in the blasting area after each detonation, indicating that the soil mass was successfully densified.
Keywords: Blast densification, gassy soils, BAT probe, densification, loose sands, liquefaction.
Resumen
Este manuscrito presenta los resultados de un estudio de densificación de suelos en campo utilizando explosivos y realizado en un relleno sanitario localizado en Carolina del Sur, Estados Unidos; este estudio se realizó con el objeto de determinar los tipos de gases que se liberan y sus respectivas concentraciones in situ después del proceso de densificación. Se utilizó un sistema de sonda BAT para recolectar las muestras de aguas subterráneas y de gas en la mitad del estrato en estudio, así como para medir la evolución de las presiones del agua durante y después de la detonación de las cargas explosivas. Adicionalmente, se hicieron mediciones topográficas a través del eje central longitudinal de la zona de estudio después de cada explosión para medir la magnitud y la efectividad de esta técnica de densificación en depósitos de arena sueltas. Los resultados de este estudio mostraron que: a) el sistema de sonda BAT puede ser una técnica confiable para recolectar muestras de agua subterránea y gas en campo antes y después de la explosión; b) la masa de suelo afectada por la detonación de los explosivos se licuó por un periodo de 6 horas, mientras el esfuerzo vertical efectivo alcanzó sus valores iniciales después de 3 días; y c) se observaron deformaciones verticales significativas en el área de estudio después de cada explosión, lo cual indica que la masa de suelo fue exitosamente densificada.
Palabras clave: Densificación con explosivos, suelos gaseosos, sonda BAT, densificación, arenas sueltas, licuación.

1. Introduction

Blast densification has been used for more than 70 years to densify loose, saturated sand deposits. This technique is commonly used to densify large areas of loose sand deposits and thus increase the strength and liquefaction resistance of the soil. During this process, large amounts of gas are produced and released into the ground. These gases may remain trapped in the ground for months or even years [1-3]. Because gas, in free form or dissolved in the pore fluid, increases the pore fluid compressibility [4] and significantly

affects the mechanical response of the soil [5-9], it is important to determine the type of gases produced by typical explosives and to quantify their in-situ concentrations. For loose sands, which exhibit strain-softening responses during undrained shear and thus are susceptible to liquefaction and flow, the effect of gas is to change the response from softening to hardening as the amount of gas in the soil increases [8]. For dense sands, the presence of gas has the effect of reducing the "undrained" shearing resistance [7]; the amount of hardening decreases as the amount of free gas increases.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 168-174. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



This paper presents the results of a field investigation study conducted at a waste disposal landfill located in South Carolina, United States, to determine the types of gas present in the soil and to quantify their in-situ concentrations after the sand has been densified with explosives. For this study, a BAT sampling system was adopted to collect pressurized samples at the middle of the blasted layer and to monitor the porewater pressure evolution after blasting. The results show that the BAT probe system is a suitable technique for collecting groundwater and gas samples, and that blast densification is an effective technique to improve the density of a saturated loose sand deposit.

2. BAT probe system description

The BAT probe system has been used for more than 25 years in groundwater and offshore investigations to collect fluid and gas samples, and to measure the in-situ pore pressure, temperature, and hydraulic conductivity of soils. This device was originally designed for sampling in-situ pore fluid, but it was later modified to collect fluid/gas samples [10,11]. The system is manufactured and sold by BAT Geosystems AB, Sweden. The main components of the BAT probe are the BAT filter tip, the BAT/IS sensor, the battery unit, and the BAT/IS field unit. The filter tip is sealed at the top with a flexible septum that automatically reseals after sampling; the septum can be penetrated with a needle several times without losing its self-sealing function. The sensor is used for measuring/logging the pore pressure and temperature inside the filter tip. A hypodermic needle attached at the tip of the sensor is used to penetrate the filter tip. The battery unit is used to store the readings. The field unit is used to take real-time pressure and temperature readings and is also equipped with an internal atmospheric pressure sensor; using the field unit, the sensor can be programmed to take readings at pre-established intervals. A detailed description of the device components, installation procedures and testing sequences can be found in Christian and Cranston [11].

The in-situ testing technique presented herein was utilized to determine the type of gases released during blasting and their in-situ concentrations. However, this technique can also be implemented to collect gases trapped in marine sediments, to measure porewater pressure and temperature at given depths, to detect shallow gas pockets during offshore oil or gas field developments, to sample and identify contaminated soils, and to determine the coefficient of permeability of soil deposits.

Figure 1. BAT system (a) schematic diagram and (b) assembled and ready for testing. Legend: 1 filter tip; 2 quick coupling; 3 double ended needle; 4 container housing; 5 35 ml test container; 6 extension adapter; 7 blue needle; 8 transfer nipple; 9 BAT/IS sensor; 10 battery unit; 11 BAT/IS field unit; 1" extension pipe.

2.1. Groundwater and gas sampling

Fig. 1 shows the BAT probe configuration when used for groundwater and gas sampling. The BAT probe is assembled as shown in Fig. 1b and carefully lowered down the 1-inch extension pipe. The double ended needle mounted in a quick coupling simultaneously penetrates the septum in the filter tip and the septum in the bottom of the container, allowing the in-situ liquid/gas to enter the

container. Because the sensor is connected to the top of the container with a needle, it is also possible, using the field unit, to measure and monitor the pressure changes inside the container at any time during sampling. No change in pressure indicates that coupling was not achieved and sampling has not begun. Another advantage of this testing configuration is that pressurized samples can also be collected, if needed. To collect in-situ groundwater/gas samples, the BAT probe system must be assembled as shown in Fig. 1b. Before placing the test container in the container housing, the air inside the container is removed by either applying vacuum to the container or by flushing and pre-charging the container with an inert gas that is not found in the ground. The time needed to collect a sample may vary from a couple of minutes to up to 24 hours or more depending on the soil type, sample collection technique and the difference in pressure between the inside of the container and the in-situ


pore pressure. Pre-charging the container is desirable because it minimizes the uncertainties introduced by gases left inside the container when vacuum is applied [3].

3. Type and fate of gases produced during blast densification

The principal gases produced by typical explosives are water vapor (H2O), carbon dioxide (CO2), and nitrogen (N2) in a mole ratio of 1:2:5 [12]. Hryciw [13] calculated that 1 kg (2.2 lb) of Ammonium Nitrate Fuel Oil (ANFO) will produce approximately 43 moles of these gases, which corresponds to about 1.0 m3 (35 ft3) of gas at standard temperature and pressure. However, after blasting, some gas will escape to the surface, some will rapidly condense in the presence of cooling groundwater, and some will migrate and diffuse with time, making it difficult to predict a priori the exact amount of gas trapped in the soil. Fig. 2 illustrates the fate of these gases following detonation of ANFO. Of the released gases, nitrogen is the main gas that may remain trapped in the ground for a long period of time, because the absolute pressure acting on this gas, at depths where blast densification is applicable, is relatively low and nitrogen does not dissolve easily in the pore fluid at these pressures (solubility coefficient β = 0.015 mL of N2/mL of water).
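As a quick check of that figure, the ideal-gas molar volume at standard temperature and pressure (about 22.4 L/mol) gives:

$$V \approx 43\ \mathrm{mol} \times 22.4\ \mathrm{L/mol} \approx 963\ \mathrm{L} \approx 1.0\ \mathrm{m^3}$$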

4. Influence of gas on soil response

Previous studies have shown that the mechanical behavior of soil is significantly affected by the presence of gas in either dissolved or free form. Grozic et al. [8] conducted a series of monotonic triaxial compression tests on loose specimens of gassy sand. They found that the

stress-strain soil response is considerably affected by the amount of gas present in the soil mass. As shown in Fig. 3, the sample response changes from completely strain-softening to strain-hardening as the amount of gas increases, or as the degree of saturation (S) of the sample decreases. Rad et al. [7] showed that the shear strength of dense specimens of gassy sand is affected by the gas type, the gas amount, and the pore pressure level. In contrast to the case of loose gassy sands, the presence of gas in free form has the effect of reducing the globally undrained shearing resistance of dense sands, because the increase in shear strength is limited by the reduced development of negative pore water pressure. Because nitrogen is a significant component of the explosion-released gases and may remain trapped in the soil for a long period of time, it is important to determine the type of gases released during blasting and their concentrations. These data are needed to evaluate, through laboratory testing, the behavior of blast-densified sands at a particular ground improvement project.

5. Field experimental program

5.1. Description of the site


Figure 3. Stress-strain response for five representative loose gassy specimens. Source: (After Grozic et al. [8])


As part of an ongoing blast densification program, two zones were blasted in 2011 at a waste disposal landfill in South Carolina, United States, to densify a loose sand deposit located between 7.5 and 12 m below the ground surface and thus increase its resistance to liquefaction. Fig. 4 presents the results from the Cone Penetration Tests (CPT) performed before blasting to determine the position of the loose sand layer. On average, the depth to the top of the loose sand layer is 7.5 m and its thickness is 4.0 m. Only the portions of the sand deposited in a very loose to loose state, N-values < 10 or qc/Pa < 4 MPa [14], were considered to liquefy and contribute to ground surface settlements after blasting. The initial in-situ void ratio of the tested sand was e0 = 0.96, and the minimum and maximum void ratios were determined as emin = 0.62 and emax = 1.05, respectively.
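With the standard definition of relative density, these index values confirm the initially loose state of the deposit (a worked check using the reported void ratios):

$$D_{r0} = \frac{e_{max} - e_0}{e_{max} - e_{min}} = \frac{1.05 - 0.96}{1.05 - 0.62} \approx 0.21 = 21\%$$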

Figure 2. Fate of gases released by explosives.


Figure 4. CPT results in zones 16 and 18 before blasting.

5.2. Ground surface settlements

Prior to ground improvement, standard topographic surveys along the centerline of each zone were conducted to establish the initial ground surface elevation. Ground surface elevations were also surveyed after each blast event to measure the cumulative surface settlement at any stage during the blasting program. The monitoring of these surface settlements is essential to assess density changes as a result of blasting, and therefore to evaluate the effectiveness of the blasting program.

5.3. Blast configuration and instrumentation

As part of the field program, two areas termed zones 16 and 18, each measuring 30.5 m x 45.7 m, were blasted. A total of six BAT probes were installed at a depth of approximately 10 m to collect groundwater and gas samples before and after blasting. Fig. 5 shows the blasting configuration, geometry and location of the BAT probes in these two zones. A total of four blast coverages were implemented at each zone to achieve the desired ground surface settlement. The explosive charges were placed at a depth of approximately 10 m (middle of the loose sand layer) and spaced in a square grid pattern with a fixed spacing of 6.1 m. The explosive used for this project was Hydromite 860, and a total weight of approximately 15.4 kg was placed in each blast hole. More details of the soil conditions, soil properties, and blasting program at the site can be found in Vega-Posada [3] and GeoSyntec Consultants, Inc. [15]. The BAT probes were installed four days after the second blast event, and the initial reference values for the pore water pressure were measured two days later to ensure that the excess pore water pressure due to the second blast event and the installation of the BAT probes had dissipated at the time of the readings.

5.4. Preparation and installation of the BAT probes

The preparation and installation sequence of the BAT probes was as follows:
- A total of 40 ml of de-aired water, four times the volume the filter can hold, was flushed through the filter from the tip using a syringe. After saturation, the filter was kept in a bucket under de-aired water to prevent desaturation.
- Using a Geoprobe 8040DT, a drill pipe was pushed into the ground to approximately 1.5 m above the final depth of the filter tip. The drill pipe had a circular opening at the tip with a diameter of approximately 4 cm. An inner rod was placed inside the drill pipe to prevent soil from entering it during driving.
- After pushing the drill pipe to the desired depth, the inner rod was removed and the inside of the drill pipe was filled with water. The filter tip was screwed onto a 2.54 cm adapter pipe, while remaining submerged under de-aired water, and the first section of extension pipe was attached to the adapter pipe.
- Then, the bucket was quickly removed, the filter was placed inside the drill pipe, and installation began. Extension pipes were used to reach the desired depth and a thread sealing agent was applied at each connection to prevent leakage of water into the pipe.
- After lowering the filter tip through the drill pipe, it was pushed by the Geoprobe 8040DT approximately 1.5 m into the soil to reach its final depth.

Figure 5. Blasting configuration and location of BAT probes in zones 16 and 18.

5.5. Container's preparation and testing

Four sets of groundwater/gas samples were collected during this blast densification program. The first, second, and third sets of samples were collected one day after the


third blast event, immediately after the fourth blast event, and three days after the fourth blast event, respectively. For these sets of samples, the BAT probe was assembled as shown in Fig. 1, and then a vacuum of 85% to 90% was applied to the container from the bottom of the test container housing to remove the air trapped in the container and in the sensor cavity. The fourth and last set of samples was collected 27 days after the fourth blast event. For this set of samples, each container was flushed and pre-charged with helium to minimize the uncertainties in the in-situ gas concentration encountered when the vacuum method was used. The containers were pre-charged with a pressure slightly higher than atmospheric pressure to ensure that contamination with atmospheric gases would not occur at any time during the sampling process. Gas chromatography (GC) tests were conducted on all the pre-charged containers before sampling to verify that no air was left inside. Helium was chosen to pre-charge the containers because it is an inert gas that is not readily found in the ground, it is not a gas produced by typical explosives, and it is different from the gas used as the carrier gas (argon) in the gas chromatography test.

6. Results of field investigation

6.1. Groundwater/gas samples

After collecting each set of samples, the containers were immediately sent to a commercial laboratory for GC tests to analyze the free gas in the headspace of the containers. The concentrations of carbon dioxide (CO2), carbon monoxide (CO), oxygen (O2), nitrogen (N2) and methane (CH4) were determined. Tables 1 and 2 summarize the GC results from the vacuumed and pre-charged containers, respectively. For the pre-charged containers, the concentration of helium was not included in these tables since it was not part of the sampled gases. The concentrations of CO2, N2, and O2 are expressed in percent (%) and the concentrations of CO and CH4 in ppmv (parts per million by volume); 1% by volume corresponds to 10,000 ppmv. The concentration of nitrogen in the blasted layer ranged from 72.2% to 78.7% when vacuumed containers were used and from 5.0% to 8.5% when pre-charged containers were used. The concentrations obtained from these two techniques varied significantly. However, the results alone do not indicate whether the gases are present in the ground in dissolved or free form: the amount of gas that is sampled is highly dependent on the difference in pressure between the container and the in-situ pressure, and on the volume of water that enters the container. Rad and Lunne [10] proposed a set of equations to determine if the gases sampled with the BAT probe are present at the test location in dissolved and/or free form.

6.2. Porewater pressure measurements

The initial reference values for the in-situ pore pressures were recorded six days after the second blast event and one day before the third blast event. The excess pore water pressure due to the second blast event and the installation of the BAT probes had dissipated at the time of the readings. Table 3 summarizes the initial porewater pressure readings at all sample locations, measured at a depth of 10 m. At these points, the recorded temperature was constant and equal to 20.1 °C.

Table 1. Results from GC tests - vacuumed containers.

            First set of samples                 Second set of samples                Third set of samples
Borehole #  CO2   N2    O2    CO     CH4         CO2   N2    O2    CO     CH4        CO2   N2    O2    CO     CH4
            (%)   (%)   (%)   (ppmv) (ppmv)      (%)   (%)   (%)   (ppmv) (ppmv)     (%)   (%)   (%)   (ppmv) (ppmv)
P. 16-1     1.8   75.2  19.7  6      41          3.3   73.8  18.7  24     39         -     -     -     -      -
P. 16-2     1.4   77.1  13.8  >2250  3680        2.2   73.2  17.5  4400   3300       2.4   76.0  14.8  4800   3700
P. 16-3     1.4   77.6  18.5  34     208         2.8   72.8  19.7  57     300        2.3   73.2  21.4  10     244
P. 18-1     2.4   78.7  15.9  15     75          3.3   76.5  16.0  20     124        2.8   72.4  20.6  10     90
P. 18-2     1.5   76.9  17.8  231    140         2.5   75.2  18.5  51     123        2.4   72.2  20.8  14     144
P. 18-3     2.2   77.6  16.4  12     12          3.2   77.0  15.8  24     12         2.0   74.3  19.0  9      13

Table 2. Results from GC tests - precharged containers.

            Sample 1                      Sample 2                      Sample 3
Borehole #  CO2  N2   CO     CH4         CO2  N2   CO     CH4         CO2  N2   CO     CH4
            (%)  (%)  (ppmv) (ppmv)      (%)  (%)  (ppmv) (ppmv)      (%)  (%)  (ppmv) (ppmv)
P. 16-3     0.6  8.5  1      89          0.3  6.7  1.2    51          0.4  6.8  <1     83
P. 18-1     0.4  6.1  1.5    11          0.2  6.2  1.7    10          0.3  6.9  1.6    20
P. 18-2     0.3  6.3  1.2    13          0.3  6.2  1.0    22          0.3  6.4  1.1    26
P. 18-3     0.4  7.4  1.7    <1          0.2  5.0  1.3    <1          0.3  5.9  1.4    1.0


Table 3. Initial in-situ pore pressure after installation of the BAT probes (depth = 10 m).

Borehole (#)   Pore pressure (kPa)   Average pore pressure (kPa)
P. 16-1        95.4                  95.8
P. 16-2        97.5
P. 16-3        94.4
P. 18-1        92.0
P. 18-2        91.0                  92.1
P. 18-3        93.4

The BAT probe system was used to measure the dissipation of the excess pore pressure over time. Figs. 6a and 6b show the pore pressure dissipation after the third and fourth blast events, measured at boreholes P.16-3 and P.18-3, respectively. The pore water pressure was equivalent to the in-situ total vertical stress, indicating that initial liquefaction was induced in these zones after blasting. The effective in-situ vertical stress in these two zones was approximately 100 kPa. The excess pore pressure decreased to the pre-blasting value in approximately 70 hr, and the majority of the blast-induced settlements are expected to occur during this period of time [16].

6.3. Ground surface settlements

Fig. 7 shows the cumulative axial strains after each blast event in zones 16 and 18. The total settlement measured at the ground surface is expected to occur within the blasted loose layer [16,17], and the ground surface is expected to experience a one-dimensional consolidation response (εv ≈ εa) [18].


Figure 7. Cumulative axial strains in zones 16 and 18.

The axial strain measured in the targeted layer was 3.2%, 6.5%, 9.1% and 10.8% after the first, second, third, and fourth blast events, respectively. The decrease in void ratio after the fourth blast event was 0.22 (∆e = εv(1+e0)), and the resulting void ratio was 0.74. This void ratio corresponds to a relative density of 72%, at which a dilative response is expected under axial compressive loading (i.e., an embankment); hence, after densification, the soil is not considered susceptible to liquefaction and flow.
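As a consistency check of these values (small differences are due to rounding of the reported strains):

$$\Delta e = \varepsilon_v (1 + e_0) = 0.108 \times (1 + 0.96) \approx 0.21\text{-}0.22, \qquad e = e_0 - \Delta e \approx 0.74$$

$$D_r = \frac{e_{max} - e}{e_{max} - e_{min}} = \frac{1.05 - 0.74}{1.05 - 0.62} \approx 0.72 = 72\%$$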

Figure 6. Pore pressure dissipation over time after blasting measured at (a) zone 16 and (b) zone 18.

7. Summary and conclusions

The BAT probe testing technique was successfully implemented to collect groundwater and gas samples and to measure the porewater pressure response during and after blasting. From the groundwater/gas samples collected in the densified zones, and considering that the porewater pressure at the depth of sampling is low, nitrogen is most likely the only gas remaining in the ground in free form. The percentage of nitrogen detected in the headspace of the BAT containers ranged from 72.2% to 78.7% and from 5.0% to 8.5% when vacuumed and pre-charged containers were used, respectively.

The porewater pressure recorded after detonation showed that liquefaction was induced in the tested zones. The sand remained liquefied for a period of approximately 6 hours, and the excess pore pressure decreased to the pre-blasting value in approximately 70 hr.

The results obtained from the topographic surveys proved that blast densification is an effective technique to improve the density of the sand deposit. A total axial strain of 10.8% was achieved in the targeted layer after the fourth and last blast event.

Acknowledgments


Financial support for this work was provided by the Infrastructure Technology Institute (ITI) of the Northwestern University and the National Science Foundation.



References

[1] Okamura, M., Ishihara, M. and Tamura, K., Degree of saturation and liquefaction resistances of sand improved with sand compaction pile. Journal of Geotechnical and Geoenvironmental Engineering, 132 (2), pp. 258-264, 2006.

[2] Okamura, M., Takebayashi, M., Nishida, K., Fujii, N., Jinguji, M., Imasato, T., Yasuhara, H. and Nakagawa, E., In-situ desaturation test by air injection and its evaluation through field monitoring and multiphase flow simulation. Journal of Geotechnical and Geoenvironmental Engineering, 137 (7), pp. 643-652, 2011.

[3] Vega-Posada, C.A., Evaluation of liquefaction susceptibility of clean sands after blast densification. Ph.D. thesis, Northwestern University, Evanston, IL, 2012.

[4] Sparks, A.D.W., Theoretical considerations of stress equations for partly saturated soils. Proceedings of the 3rd Regional Conference for Africa on Soil Mechanics, Salisbury, Rhodesia, pp. 215-218, 1963.

[5] Thomas, S.D., The consolidation behaviour of gassy soil. Ph.D. thesis, Civil Engineering, University of Oxford, 1987.

[6] Alvarez, A.E., Macias, N. and Fuentes, L.G., Analysis of connected air voids in warm mix asphalt. DYNA, 79 (172), pp. 29-37, 2012.

[7] Rad, N.S., Vianna, A.J.D. and Berre, T., Gas in soil. II: effect of gas on undrained static and cyclic strength of sand. Journal of Geotechnical and Geoenvironmental Engineering, 120 (4), pp. 716-736, 1994.

[8] Grozic, J.L., Robertson, P.K. and Morgenstern, N.R., The behavior of loose gassy sand. Canadian Geotechnical Journal, 36 (3), pp. 482-492, 1999.

[9] Grozic, J.L.H., Robertson, P.K. and Morgenstern, N.R., Cyclic liquefaction of loose gassy sand. Canadian Geotechnical Journal, 37 (4), pp. 843-856, 2000.

[10] Rad, N.S. and Lunne, T., Gas in soil. I: detection and η-profiling. Journal of Geotechnical and Geoenvironmental Engineering, 120 (4), pp. 697-715, 1994.

[11] Christian, H.A. and Cranston, R.E., A methodology for detecting free gas in marine sediments. Canadian Geotechnical Journal, 34 (2), pp. 293-304, 1997.

[12] U.S. Army Corps of Engineers, Systematic drilling and blasting for surface excavations. Engineering Manual EM 110-2-3800, Office of the Chief, United States Army Corps of Engineers, 1972, 119 P.

[13] Hryciw, R.D., A study of the physical and chemical aspects of blast densification of sand. Ph.D. thesis, Civil Engineering, Northwestern University, Evanston, 1986.

[14] Kulhawy, F.H. and Mayne, P.W., Manual on estimating soil properties for foundation design. Cornell University, 1990.

[15] GeoSyntec Consultants, Inc., Ground improvement implementation phase II - Zones 4, 5, 19, 20, 21, and 22, 2007.

[16] Narsilio, G.A., Santamarina, J.C., Hebeler, T. and Bachus, R., Blast densification: multi-instrumented case history. Journal of Geotechnical and Geoenvironmental Engineering, 135 (6), pp. 723-734, 2009.

[17] Bachus, R.C., Hebeler, T.E., Santamarina, J.C., Othman, M.A. and Narsilio, G.A., Use of field instrumentation and monitoring to assess ground modification by blast densification. Proceedings of the 15th Great Lakes Geotechnical/Geoenvironmental Conference, 2008.

[18] Vega-Posada, C.A., Zapata-Medina, D.G. and García-Aristizábal, E.F., Ground surface settlement of loose sands densified with explosives. Rev. Fac. Ing. Univ. Antioquia, 70, pp. 224-232, 2014.

C.A. Vega-Posada, received a B.S. in Civil Engineering from the National University of Colombia, Medellín, in 2002, an M.S. in Structural/Geotechnical Engineering from Ohio University in 2008, and a Ph.D. in Geotechnics from Northwestern University in 2012. Currently, he is an Assistant Professor in the Civil and Environmental Department at the Universidad de Antioquia, Medellín. His research interests include: soil classification, soil-structure interaction problems, liquefaction, unsaturated soil mechanics, and laboratory and field instrumentation and testing.

E.F. García-Aristizábal, received a B.S. in Civil Engineering from the National University of Colombia in 1999, an M.Eng. in Civil Engineering from Tokyo University in 2005, and a Ph.D. in Geotechnics from Kyoto University in 2010. Currently, he is an Assistant Professor in the Civil and Environmental Department at the Universidad de Antioquia, Medellín. His research interests include: infiltration, unsaturated soils, slope stability, laboratory and field testing, and numerical analysis.

D.G. Zapata-Medina, received a B.S. in Civil Engineering from the National University of Colombia - Medellín campus in 2004, an M.S. in Geotechnical Engineering from the University of Kentucky in 2007, and a Ph.D. in Geotechnics from Northwestern University in 2012. Currently, he is an Assistant Professor in the Civil Engineering Department at the National University of Colombia - Medellín campus. His research interests include: soil characterization and constitutive soil modeling for geotechnical earthquake engineering applications; field instrumentation, numerical simulation and performance evaluation of earth retaining structures; and analytical solutions to calculate the static and dynamic stability of soil-structure interaction problems.



http://dyna.medellin.unal.edu.co/

Characterization of adherence for Ti6Al4V films RF magnetron sputter grown on stainless steels

Caracterización de la adherencia para películas de Ti6Al4V depositadas por medio de pulverización catódica magnetrón RF sobre acero inoxidable

Carlos Mario Garzón a, José Edgar Alfonso b & Edna Consuelo Corredor c

a Departamento de Física, Universidad Nacional de Colombia sede Bogotá, Colombia. cmgarzono@unal.edu.co
b Departamento de Física, Universidad Nacional de Colombia sede Bogotá, Colombia. jealfonsoo@unal.edu.co
c Facultad de Ingeniería, Universidad Libre de Colombia, Colombia. ednaconsueloc@gmail.com

Received: May 5th, 2013. Received in revised form: January 31st, 2014. Accepted: May 16th, 2014.

Abstract
Ti6Al4V films were grown on UNS S31600 austenitic stainless steel samples by RF magnetron sputtering. Macroindentation tests were carried out on top of the samples to characterize the film to substrate adherence according to the recommended VDI 3198 procedure. Sputter deposition experiments varying both the chamber pressure and the target applied power were carried out. All films displayed superior adhesion to the substrates. A non-monotonic relationship between adherence and chamber pressure or target applied power was observed; the lowest adherences were associated with intermediate pressures and powers. The outstanding film to substrate adherence was mainly attributed to the monophasic and nanometric character of the films (crystallite size was roughly 20 ± 10 nm) and to the rather high continuity and homogeneity of the films.
Keywords: Thin films, Nanostructures, Biomaterials, RF-Magnetron sputtering.
Resumen
El acero inoxidable UNS S31600 fue recubierto con películas de la aleación Ti6Al4V, por medio de la técnica pulverización catódica magnetrón rf. La superficie de los materiales obtenidos se sometió a ensayos de macroindentación, de acuerdo con el procedimiento VDI 3198, para caracterizar la adherencia película-sustrato. Se realizaron experimentos de depósito variando tanto la presión del gas al interior de la cámara de depósito como la potencia aplicada al blanco. La adherencia película-sustrato varió no monotónicamente con la presión y la potencia; la menor adherencia se observó para las presiones y las potencias intermedias. En términos generales se observó una excelente adherencia película-sustrato, la cual se atribuyó principalmente al carácter nanométrico de los cristalitos de las películas (diámetro promedio alrededor de 20 ± 10 nm), a su carácter monofásico y a su elevado grado de uniformidad tanto química como morfológica.
Palabras clave: Películas delgadas, Nanoestructuras, Biomateriales, Pulverización catódica magnetrón rf.

1. Introduction

Nowadays, the supply of low-cost prostheses for replacing bones in humans, like the ones in artificial hips, is deficient. That negatively impacts the quality of life of poor people in countries with weak public health systems, like many of the countries in Latin America. Consequently, several research groups in the region have already carried out studies targeting the microstructure optimization of materials for surgical appliances and prostheses [1,2]. Austenitic stainless steels, titanium alloys, and hydroxyapatite are widely employed for the production of prostheses due to their low cost, good levels of biological compatibility, good bioactive response or high mechanical strength [1-10].

However, manufacturing volumetric bone replacements, like the ones in artificial hips, using only one of the above-referenced materials (austenitic stainless steels, titanium alloys, and hydroxyapatite), while simultaneously attaining all the positive properties required in such uses, is not possible. Every one of these materials displays serious drawbacks among the required properties: stainless steels are not biologically active, and their biological compatibility is restricted; titanium alloys are rather expensive, and their levels of biological activity are rather restricted; finally, hydroxyapatite is not as tough as is necessary in that kind of surgical appliance and prosthesis. Recent progress in surface engineering and materials science has triggered the development of a wide variety of technical processes for manufacturing multi-structure materials, where these new materials have mechanical, tribological, or other functional properties that cannot be

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 175-181. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



achieved from the individual traditional materials alone. In these multi-structure materials, chemical and mechanical properties vary locally from the material surface toward the inner regions. The main example of such multi-structure materials are those obtained by deposition of customized films on traditional substrates [9-12]. Traditional materials coated with thin films can be tailored to exhibit high levels of overall toughness, high tribological performance in different tribo-systems, and high levels of biological compatibility. Unfortunately, these coated materials have sharp variations in their mechanical properties and consequently are prone to decohesion at the transition region between film and substrate due to local adherence failure.

Decohesion failures can be triggered by low levels of film to substrate adherence, high levels of residual stresses in the region near the surface, and deformation incompatibilities due to locally steep variations in the stress-strain relationships. Low levels of film to substrate adherence can be induced by inadequate preparation of the substrate surface during material processing, contamination of the substrate surface prior to film deposition, or low adherence levels regarding the crystal lattices of both film and substrate (the so-called intrinsic adherence). Thus, it is common that the same class of film grown on two different substrates presents different levels of adhesion, even if all the processing parameters are unmodified [13,14]. It has also been reported that very different levels of film to substrate adhesion can be obtained after growing TiN, Ti(C,N) and TiAlN films on the same steel but with different periods of pre-cleaning of the substrate surface with plasma ions [13].

There exist diverse standardized and non-standardized experimental procedures for assessing film to substrate adherence [14,15]. Unfortunately, these tests give results concerning film to substrate adherence that are actually performance parameters in which several other factors besides the intrinsic adherence are involved [14,15]. Thus, a priori, results on film to substrate adherence obtained from different types of tests are not totally interchangeable [15,16]. The following can be listed as the most frequently used adherence tests: scratch, four-point bending, cavitation due to ultrasonic vibration in a fluid (commonly water), impact, macroindentation, acoustic microscopy and laser-acoustic wave attenuation [15]. In adherence tests by macroindentation, a hardness indentation is made on top of the sample and the damaged regions are later evaluated in the microscope [16,17]. This is a destructive test, which is fast and reliable, as well as easy to perform in almost any materials science facility equipped for the evaluation of mechanical properties. The recommended procedure VDI 3198 gives guidelines for performing macroindentation adherence tests [16,17].

dealing with magnetron sputtering deposition of Ti6Al4V thin films (around 1 µm thick) on a conventional austenitic stainless steel and later characterization of film to substrate adherence by macroindentation, following the recommended procedure VDI 3198. The aim of this research was to study the effect of variations in both the target applied power and the pressure inside the vacuum chamber on the film to substrate adherence of the materials obtained. A discussion on the effects of film integrity on film to substrate adherence is also presented.

2. Materials

Samples of the austenitic stainless steel UNS S31600, in the annealed condition, were used as substrate material. For the target material the titanium alloy Ti6Al4V was used: a commercially available alloy with impurity levels lower than 0.01 wt-%. Table 1 gives the chemical composition of both the substrate and the target materials.

Table 1. Chemical composition (wt-%) of the materials.

                   Cr     Ni    Mn    Si     Mo    C      Fe     Ti     Al    V
UNS S31600 Steel   18.5   9.5   2.0   0.85   2.0   0.03   Bal.   -      -     -
Titanium target    -      -     -     -      -     -      -      Bal.   6.0   4.0

3. Experimental procedures

3.1. Growth of Ti6Al4V films by magnetron sputtering

Thin films (around 1 µm thick) of the titanium alloy Ti6Al4V were grown on samples of UNS S31600 steel through RF magnetron sputtering, using an Alcatel HS 2000 apparatus, which consists of an RF magnetron and a deposition chamber evacuated using both mechanical and turbo-molecular pumps. The equipment is fitted with a gas mixer, a gas flux controller, and a pressure controller. Deposition experiments varying both the pressure inside the vacuum chamber (between 0.002 and 0.05 mbar) and the target applied power (between 200 and 500 W) were carried out. In these experiments, neither external heating of the substrate nor an externally applied voltage (bias) was used. Prior to film deposition, steel substrates were sputter cleaned for 10 min using a plasma power of 200 W. This sputter-cleaning stage was performed with the aim of removing oxide layers at the steel surface, mainly the passive film.

3.2. Characterization of test samples

Both phase identification and determination of crystallite size were carried out by X-ray diffraction experiments, using a Philips PW 1710 conventional diffractometer. X-ray experiments were carried out with monochromatic Cu Kα radiation (λ = 1.54056 Å), a generator voltage of 40
kV and a generator current of 30 mA. The X-ray irradiation length was 5 mm. Phase identification was made using diffraction patterns obtained from XRD scans where the angular step was 0.02°, the counting time per step was 2 s and the scanning Bragg angles, 2θ, were varied from 20° to 90°. Collected XRD patterns were compared with theoretical X-ray polycrystalline patterns filed by the ICDD (International Centre for Diffraction Data). Crystallite size measurements were made following a Scherrer analysis, using diffraction patterns obtained from XRD scans where the angular step was 0.02°, the counting time per step was 20 s and the scanning Bragg angles, 2θ, were varied from 33° to 42°. The aim was to record the diffraction peaks corresponding to the crystallographic planes (110) of the BCC titanium phase. The morphology and spatial distribution of defects in the films were evaluated by examining the sample surfaces in a scanning electron microscope; a Philips XL30 conventional microscope was used. Electron microscopy experiments were carried out varying the acceleration voltage between 15 and 25 kV and the working distance between 8 and 15 mm. The chemical composition of the films was assessed by EDS microanalysis (energy-dispersive X-ray spectroscopy), using the same scanning electron microscope. Four EDS measurements on top of each sample were performed. Film thicknesses were assessed by profilometry measurements. Film hardness was measured by carrying out indentation hardness tests on top of the samples. These hardness experiments were carried out in a nanohardness tester coupled to an atomic force microscope. A diamond-tip Berkovich indenter was used with a maximum load of 5 mN. The low indentation load (5 mN) was chosen to avoid substrate effects on the film hardness results. Twelve indentations were made in each sample, and the results reported are the outcome of a statistical analysis of those twelve data per sample.

3.3. Film to substrate adherence

Film to substrate adherence was assessed by means of macroindentation tests, carried out according to the VDI 3198 recommended procedure [16,17]. Several indentation imprints were made on top of each sample and then the damage inside and around the imprints was evaluated using micrographs obtained in the scanning electron microscope. Indentation experiments were carried out using a diamond-tip Otto Wolpert-Werke indenter and an indentation load of 150 kg-f. Five indentations were made in each sample, with the distance between them carefully set to be larger than ten times the indentation length. It is worth noting that these indentation parameters are the ones used in Rockwell C hardness tests. VDI 3198 indentation tests allowed the assessment of a set of HF indexes that specify the level of film to substrate adherence. These adherence indexes are assessed by comparing the appearance of damage at the indented surfaces

with a series of standardized damage patterns presented in Fig. 1 of the paper by Vidakis et al. [17]. As a result of such comparison, an HF index is assigned to the specific film–substrate pair. These indexes range from HF1 to HF6, where lower HF values are associated with better film to substrate adherence. According to Vidakis et al. [17], indexes HF1 to HF4 are related to acceptable adhesion levels, while indexes HF5 and HF6 indicate poor film to substrate adherence.

4. Results

4.1. Characterization of test samples

All samples showed almost the same chemical composition. The chemical composition of the films was around 7.0 ± 1.0 wt-% Al, 3.0 ± 1.0 wt-% V and balance Ti. Heterogeneities in chemical composition at different regions of the sample, if they actually exist, were lower than the detection limit (1.0 wt-%) of EDS microanalysis for the conditions of this study. Profilometry measurements showed that film thicknesses were around 1 µm: film thicknesses ranging between around 0.8 µm and 1.5 µm were detected. An analysis of the effect of variations of the deposition parameters, namely target applied power and chamber pressure, on film thickness was not attempted because the scatter of the film thickness data was close to the thickness variations to be detected. Fig. 1 shows the diffraction patterns obtained for two representative samples. Patterns for the other samples are similar to these two and are not shown for simplicity. After visual juxtaposition of the obtained X-ray diffraction patterns (Fig. 1) with polycrystalline theoretical patterns filed by the ICDD, the Ti-β (BCC) phase was identified. On the other hand, a priori, the X-ray diffraction patterns allowed stating that the films were virtually free of both the Ti-α (HCP) phase and amorphous phase. Scanning electron microscopy (SEM) results showed that the films are rather continuous, despite some pores observed. Heterogeneous distributions of pores, with area fraction concentrations lower than 0.1 area-% and with cylindrical, polygonal, and irregular shapes, were observed, like those shown in Fig. 2. The size of these pores varied between 1.0 and 4.0 µm in equivalent diameter. The results on average crystallite size, average hardness, and HF adherence index are given in Tables 2 and 3. Crystallite sizes referenced in those tables are the average diameters of crystallites estimated by XRD with regard to the crystallographic planes (110) of the Ti-β phase. These crystallite sizes were computed following a Scherrer analysis. Average crystallite sizes between 10 and 25 nm were measured. Strong variations in crystallite size were induced when the power applied to the target was varied, while modifications in the chamber pressure induced only mild variations in crystallite size. Higher powers and lower pressures induced coarser crystallites.
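Where useful, the Scherrer estimate referred to above can be reproduced directly. The following is a minimal sketch in R, assuming the common shape factor K = 0.9 and purely illustrative peak values (the FWHM must be expressed in radians and corrected for instrumental broadening):

scherrer <- function(fwhm_deg, two_theta_deg, K = 0.9, lambda_nm = 0.154056) {
  # Scherrer equation: D = K * lambda / (beta * cos(theta))
  beta  <- fwhm_deg * pi / 180             # peak FWHM converted to radians
  theta <- (two_theta_deg / 2) * pi / 180  # Bragg angle in radians
  K * lambda_nm / (beta * cos(theta))      # crystallite size in nm
}
# illustrative values only: a (110) Ti-beta peak near 2-theta = 38.5 deg
scherrer(fwhm_deg = 0.55, two_theta_deg = 38.5)  # ~15 nm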



Table 3. Main features of samples processed with 350 W target applied power.

Pressure (mbar)   Crystallite size (nm)   Hardness (GPa)   Adherence index HF (dimensionless)
2×10⁻³            15.4±2                  10.1±0.3         HF1
5×10⁻³            16.6±2                  8.5±0.6          HF3
5×10⁻²            10.6±1                  10.4±0.5         HF3

Figure 1. Diffraction patterns for two representative samples.

Except for one set of samples, average film hardness was not sensitive to variations in the deposition parameters: average film hardness was around 10.0 GPa. In the samples obtained with a chamber pressure of 5×10⁻³ mbar and a target applied power of 350 W, the average hardness was 8.5 GPa. Hardness distribution histograms for the data collected in each sample are shown in Fig. 3. Even though average hardness was very close in all samples (except one set of samples), every sample displayed a distinctive hardness distribution histogram. That heterogeneity in hardness values inside each sample can be attributed not to actually different levels of hardness in different microregions but to variations in the distribution and size of defects. This relationship between the statistical distribution of hardness and the defect concentration will also be discussed in this article.
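As an illustration of the per-sample statistics behind Fig. 3, the sketch below (in R, with purely hypothetical hardness values) reproduces the kind of summary reported here, a mean ± standard deviation per sample, and the corresponding distribution histogram:

# twelve Berkovich indentations for one sample; values are hypothetical
h <- c(9.6, 10.2, 10.4, 9.9, 10.1, 10.6, 8.7, 10.3, 10.0, 10.5, 9.8, 10.2)
c(mean = mean(h), sd = sd(h))   # reported as average hardness +/- spread
hist(h, breaks = 6, xlab = "Hardness (GPa)",
     main = "Hardness distribution for one sample")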

Figure 2. General aspect of the top surface of a sample processed at 350 W – 0.005 mbar. SEM photograph.

Table 2. Main features of samples processed with 5×10⁻³ mbar chamber pressure.

Power (W)   Crystallite size (nm)   Hardness (GPa)   Adherence index HF (dimensionless)
200         11±1.5                  10.4±0.7         HF1
350         16.5±2                  8.5±0.6          HF3
500         24.1±3.5                10.3±0.6         HF1

Figure 3. Hardness distribution histograms for the different samples.

4.2. Film to substrate adherence

Fig. 4 shows, for each sample, three SEM micrographs corresponding to three different indentation imprints. SEM micrographs were taken at ×200 magnification, which is the magnification recommended in the VDI 3198 procedure (see Section 3.3). After comparing the five indentation imprints (only three of them are shown in Fig. 4) for each sample, no significant differences were detected, which is interpreted in this work as a high reproducibility level of the adherence tests. Comparison of the micrographs in Fig. 4 with the damage charts of the VDI 3198 procedure (Fig. 1 in the paper by Vidakis et al. [17]) allows stating that all samples exhibited high adherence levels: classifications HF1 and HF3. Fig. 4 also shows that variations in the deposition parameters, namely power and pressure, induced significant variations in adherence levels, which is interpreted in this work as a high detectability level of the adherence tests. In Fig. 4 a non-monotonic relationship between adherence and chamber pressure or target applied power can be pointed out, where the poorest adherences are associated with intermediate pressures and/or powers (insets b and c in Fig. 4). Fig. 5 shows SEM micrographs of damaged regions taken at ×1000 and ×5000 magnifications. Representative micrographs are shown; reproducibility levels were high, as mentioned.


The high magnification SEM micrographs in Fig. 5 allow pointing out that the main mechanisms of damage were crack propagation, plastic deformation, and debris detachment. Cracking patterns are a powerful qualitative result that encloses rich information on film to substrate adherence [14,17]; however, their interpretation is not straightforward. In Fig. 5, after analyzing the periphery of the imprints, one can see both crack propagation in the axial direction and debris detachment, these two damage traces being less intense the better the film to substrate adhesion. Samples with the highest levels of film adherence (Fig. 5, insets a and d) show mild cracking patterns and scarce debris detachment in both regions, the imprint periphery and the imprint inner regions.

(a) 2×10⁻³ mbar – 350 W; (b) 5×10⁻³ mbar – 350 W; (c) 5×10⁻² mbar – 350 W; (d) 5×10⁻³ mbar – 500 W; (e) 5×10⁻³ mbar – 200 W. Figure 4. SEM micrographs at ×200 magnification taken on top of samples after macroindentation tests.

Generally, all samples displayed an intense pattern of slip lines following the circular directions of the imprints. However, no significant crack growth in those circular directions was observed. The accentuated patterns of slip lines arise from intense plastic deformation of the substrate steel (which is ductile and soft). That intense plastic deformation of the substrate imposes a high stress level on the films, which are hard and do not display significant ductility. Thus, the fact that only minor cracking was observed in the circular directions of the imprints is key evidence of high film to substrate adherence.

(a) 2×10⁻³ mbar – 350 W; (b) 5×10⁻³ mbar – 350 W; (c) 5×10⁻² mbar – 350 W; (d) 5×10⁻³ mbar – 500 W; (e) 5×10⁻³ mbar – 200 W. Figure 5. SEM micrographs at ×1000 and ×5000 magnification taken on top of samples after macroindentation tests.


5. Discussion

Superior levels of film to substrate adherence were obtained, even though film hardness was very high (around 10 GPa) with regard to the hardness of the substrate (around 1.6 to 1.8 GPa). It can be deduced that high elasto-plastic constraints are imposed on the films during the indentation process, because of both the steep hardness variations in the region of transition from film to substrate and the high levels of plastic deformation of the substrate (substrates are ductile and soft). In spite of those high elasto-plastic constraints, a net brittle character of the film failure was not induced. In Fig. 5, there are even traces of ductile behavior of the films (i.e., propagation of deformation inside the films in a plastic mode), which is viable because of the metallic character of the films. This film ductility differs from the distinctive brittle behavior of hard films composed of titanium nitrides or carbides [14]. The observed combination of hard and tough character of the films can be attributed to their monophasic and metallic character as well as to the nanometric character of the film structures. It was pointed out in this work that crystallite size variations did not lead to significant variations in hardness, film hardness being almost insensitive to changes in the deposition parameters. In addition, the monophasic and metallic character of the films is not expected to vary notably among the diverse samples (on the basis of the experimental results). From those observations an apparently valid but wrong conclusion could be drawn: that variations in the deposition parameters should induce only very small changes in film to substrate adherence. However, the experimental results showed a sharp variation in adherence when the deposition parameters were varied, mainly when the target applied power was changed. To resolve this apparent contradiction, it is essential to correlate the results presented in Figs. 3 to 5. This correlation between the spread of individual hardness values inside each sample and the film to substrate adherence allows concluding that the films with the sharpest hardness distribution histograms (Fig. 3 – pressure 2×10⁻³ mbar and power 350 W) are the films displaying the best adherence levels in Figs. 4 and 5. The mentioned homogeneity of the hardness histograms inside some samples can be attributed to a low concentration of defects; conversely, defect concentrations are high in films which show large spreads in their individual hardness values, or even lowered hardness averages (pressure = 5×10⁻³ mbar and power = 350 W, and pressure = 5×10⁻² mbar and power = 350 W). As stated, when the deposition parameters were varied, the film to substrate adherence underwent significant variations. Both the high reproducibility and the high detectability of the tests allowed noticing even smooth variations in adherence. As a matter of fact, with regard to the six adherence levels reported in VDI 3198, in this work it was possible to detect smoother adherence variations among samples where the HF index was constant. In Fig. 4, one can see that the samples obtained with 200 W – 5×10⁻³ mbar,

500 W – 5×10⁻³ mbar and 350 W – 2×10⁻³ mbar exhibit the same adherence level HF1, whereas combining Figs. 4 and 5 it can be stated that, among these three sets of samples, the ones obtained with pressure = 2×10⁻³ mbar and power = 350 W are the samples with the highest adherence level, while the samples obtained with 500 W – 5×10⁻³ mbar exhibit the poorest adherence of this group of three. The lower the level of film to substrate adherence, the more pronounced the axial cracking and the more evident the debris detachment in the micrographs. After comparing the samples displaying the HF3 adherence index (pressure = 5×10⁻³ mbar and power = 350 W, and pressure = 5×10⁻² mbar and power = 350 W), it is also possible to state that, of the two groups, the samples processed with 350 W – 5×10⁻³ mbar display the lower adherence level. In conclusion, the indentation tests and the detailed SEM analysis of damage allowed ranking the film to substrate adherence in a very individualized way, with a finer classification than the one proposed in the VDI 3198 procedure. The results presented on hardness, on the morphological and chemical continuity of the films, and on film to substrate adherence, when analyzed in the light of results in the literature [10-14], show that the materials in this research are adequate for use in surgical appliances and prostheses where high wear resistance, high toughness, and high ductility are a must. Because the bulk of these materials is composed of a conventional stainless steel and the outermost part is composed of a titanium alloy, it can be expected that these materials will be inexpensive and will display moderate levels of biocompatibility. Although the core material does not have high levels of biocompatibility, both the high continuity of the films and their high level of wear resistance prevent body tissues and fluids from coming into contact with the steel, inhibiting the flux of dangerous metallic ions (e.g., nickel ions) from the material to the body.

6. Conclusions

Experiments assessing the macroindentation adherence of Ti6Al4V films grown on stainless steel by RF magnetron sputtering were carried out. Analysis of the results revealed that the deposition procedures depicted in this work are suitable for obtaining Ti6Al4V films highly adherent to steel substrates. Among the major factors inducing the high levels of film to substrate adherence, the following can be listed: the small size of the crystallites (around 20 ± 10 nm), the monophasic film character (BCC), the metallic character of both the film and the substrate and, finally, the rather high continuity and homogeneity of the films. The indentation experiments allowed detecting even small differences in film to substrate adherence produced by variations in chamber pressure or target applied power, the poorest adherences being associated with intermediate pressures and/or powers. The major factor inducing variations in film to substrate adherence was the hardness heterogeneity inside each sample,


that hardness heterogeneity being a result of the defect distributions in the films: the sharper the distribution of hardness values, the better the adherence.

References

[1] García, C., Paucar, C. and Gaviria, J., Estudio de algunos parámetros que determinan la síntesis de hidroxiapatita por la ruta de precipitación, Dyna, 148, pp. 9-15, 2006.

[2] Copete, H., López, M., Vargas, F., Echavarría, A. and Ríos, T., Evaluación del comportamiento in vitro de recubrimientos de hidroxiapatita depositados mediante proyección térmica por combustión oxiacetilénica sobre un sustrato de Ti6Al4V, Dyna, 177, pp. 101-107, 2013.

[3] Anselme, K., Noel, B. and Hardouin, P., Human osteoblast adhesion on titanium alloy, stainless steel, glass and plastic substrates with same surface topography, J. Materials Science: Materials in Medicine, 10, pp. 815-819, 1999.

[4] Balazic, M., Kopac, J., Jackson, J.M. and Ahmed, W., Titanium and titanium alloy applications in medicine, Int. J. of Nano and Biomaterials, 1, pp. 3-34, 2007.

[5] Dorozhkin, S.V., Bioceramics based on calcium orthophosphates, Glass and Ceramics, 64, pp. 442-447, 2007.

[6] Niinomi, M., Mechanical biocompatibilities of titanium alloys for biomedical applications, J. Mechanical Behaviour of Biomedical Materials, 1, pp. 30-42, 2008.

[7] Miura, K., Yamada, N., Hanada, S., Jung, T. and Itoi, E., The bone tissue compatibility of a new Ti-Nb-Sn alloy with a low Young's modulus, Acta Biomaterialia, 7, pp. 2320-2326, 2011.

[8] Yilmazer, H., Niinomi, M., Nakai, M., Cho, K., Hieda, J., Todaka, Y. and Miyazaki, T., Mechanical properties of a medical β-type titanium alloy with specific microstructural evolution through high-pressure torsion, Materials Science and Engineering C, 33, pp. 2499-2507, 2013.

[9] Lee, C., Chu, J., Chang, W., Lee, J., Jang, J. and Liaw, P., Fatigue property improvements of Ti-6Al-4V by thin film coatings of metallic glass and TiN: a comparison study, Thin Solid Films, in press, corrected proof, available online (http://dx.doi.org/10.1016/j.tsf.2013.08.027), 14 August 2013.

[10] Zalnezhad, E., Baradaran, S., Bushroa, R. and Sarhan, A., Mechanical property enhancement of Ti-6Al-4V by multilayer thin solid film Ti/TiO2 nanotubular array coating for biomedical application, Metallurgical and Materials Transactions A, 45A, pp. 785-797, 2014.

[11] De Souza, G., De Lima, G., Kuromoto, N., Soares, P., Lepienski, C., Foerster, C. and Mikowski, A., Tribo-mechanical characterization of rough, porous and bioactive Ti anodic layers, Journal of the Mechanical Behavior of Biomedical Materials, 4, pp. 796-806, 2011.

[12] Erişir, E., Suadiye, E. and Gümüş, S., Microstructural characterization of wear of Arc-PVD Ti and Ti6Al4V coated austenitic stainless steels in Ringer's solution, Proceedings of the 3rd International Conference of Engineering Against Failure (ICEAF III), pp. 421-427, 2013.

[13] Zoestbergen, E. and De Hosson, J., Crack resistance of PVD coatings: influence of surface treatment prior to deposition, Surface Engineering, 18, pp. 283-288, 2002.

[14] Heinke, W., Leyland, A., Matthews, A., Berg, G., Friedrich, C. and Broszeit, E., Evaluation of PVD nitride coatings, using impact, scratch and Rockwell-C adhesion tests, Thin Solid Films, 270, pp. 431-438, 1995.

[15] Ollendorf, H. and Schneider, D., A comparative study of adhesion test methods for hard coatings, Surface and Coatings Technology, 113, pp. 86-102, 1999.

[16] VDI - Verein Deutscher Ingenieure, Guideline 3198: Beschichten von Werkzeugen der Kaltmassivumformung; CVD- und PVD-Verfahren, Germany, 1992.

[17] Vidakis, N., Antoniadis, A. and Bilalis, N., The VDI 3198 indentation test evaluation of a reliable qualitative control for layered compounds, Journal of Materials Processing Technology, 143-144, pp. 481-485, 2003.

Acknowledgements

This work was financially supported by the "Dirección de Investigación, DIB" of the Universidad Nacional de Colombia and by "Colciencias". Structural support was also provided by the "Centro Internacional de Física, CIF", the Physics Department of the Universidad Nacional de Colombia and the Metallurgical and Materials Engineering Department of the University of São Paulo (Brazil).

C.M. Garzón received his BSc. in Mechanical Engineering in 1994 from the Universidad Nacional de Colombia. He earned an MSc degree (2001) and a PhD degree (2004), both in Metallurgical and Materials Engineering, from the Universidade de São Paulo, Brazil. He has held two full-time postdoctoral positions, one at the Brazilian Synchrotron Light Laboratory in Campinas (Brazil), 2005-2006, and the other at the Grenoble Institute of Technology in Grenoble (France), 2008-2009. He started his academic career as a full professor at the Universidad de Ibagué in 1998, where he worked for two years. Currently he is a full professor in the Physics Department of the Universidad Nacional de Colombia, Bogotá. His research work has been focused on materials science and engineering, in the realm of metallic materials processing and properties.

J.E. Alfonso completed his Bachelor's degree in Physics in 1987 and his Master's degree in Science - Physics in 1991, both at the Universidad Nacional de Colombia. In 1997 he completed his PhD in Science - Physics at the Universidad Autónoma de Madrid (Spain). He has been linked to the Universidad Nacional de Colombia as a full professor since 2000, where his research has focused on materials science, particularly on thin film processing as well as performance characterization, studying thin film optical, electrical and mechanical properties.

Edna Consuelo Corredor completed her Bachelor's degree in Physics at the Universidad Pedagógica y Tecnológica de Colombia in 2005. In 2009 she obtained her Master's degree in Physics and Physical Technologies and in 2012 completed her PhD in Physics, both at the University of Zaragoza, Zaragoza, Spain. Currently, she is a full professor in the Science Division, Department of Engineering, Universidad Libre de Colombia. Her research focuses on structural and corrosion studies of metallic thin films grown on stainless steel substrates and, on the other hand, on the preparation and characterization of magnetic nanostructures with potential applications in high density recording media, magnetic sensors and magnetic actuators.



http://dyna.medellin.unal.edu.co/

UV-vis in situ spectrometry data mining through linear and non-linear analysis methods Minería de datos UV-vis in situ con métodos de análisis lineales y no lineales Liliana López-Kleine a & Andrés Torres b

a Associate Professor, Statistics Department, Universidad Nacional de Colombia, llopezk@unal.edu.co
b Associate Professor, Civil Engineering Department, Pontificia Universidad Javeriana, andres.torres@javeriana.edu.co

Received: April 10th, 2013. Received in revised form: January 27th, 2014. Accepted: January 31st, 2014.

Abstract: UV-visible spectrometers are instruments that register the absorbance of emitted light by particles suspended in water for several wavelengths and deliver continuous measurements that can be interpreted as concentrations of parameters commonly used to evaluate the physico-chemical status of water bodies. Classical parameters that indicate the presence of pollutants are total suspended solids (TSS) and chemical oxygen demand (COD). Flexible and efficient methods to relate the instrument's multivariate registers to the classical measurements are needed in order to extract useful information for management and monitoring. Analysis methods such as Partial Least Squares (PLS) are used in order to calibrate an instrument for a water matrix taking into account cross-sensitivity. Several authors have shown that it is necessary to undertake specific instrument calibrations for the studied hydro-system and to explore linear and non-linear statistical methods for the UV-visible data analysis and its relationship with chemical and physical parameters. In this work we apply classical linear multivariate data analysis and non-linear kernel methods in order to mine UV-vis high dimensional data, which turn out to be useful for detecting relationships between UV-vis data and classical parameters, and outliers, as well as revealing non-linear data structures. Keywords: UV-visible spectrometer, water quality, multivariate data analysis, non-linear data analysis Resumen: Los espectrómetros UV-visibles son captores que registran la absorbancia de luz emitida por partículas suspendidas en el agua a diferentes longitudes de onda y proporcionan mediciones en continuo, las cuales pueden ser interpretadas como concentraciones de parámetros comúnmente usados para evaluar el estado físico-químico de cuerpos de agua. Parámetros clásicos usados para detectar la presencia de contaminación en el agua son los sólidos suspendidos totales (TSS) y la demanda química de oxígeno (CDO). Métodos de análisis flexibles y eficientes son necesarios para extraer información útil para fines de gestión y monitoreo a partir de los datos multivariados que proporcionan los captores. Se han usado métodos de calibración de tipo regresión parcial por mínimos cuadrados parciales (PLS). Varios autores han demostrado la necesidad de realizar la calibración para cada tipo de datos y cada cuerpo de agua, así como explorar métodos de análisis lineales y no lineales para el análisis de datos UV-visible y para determinar su relación con parámetros clásicos. En este trabajo se aplican métodos de análisis multivariado lineales y no lineales para la minería de datos UV-vis de alta dimensión, los cuales resultan útiles para la identificación de relaciones entre parámetros y longitudes de onda, la detección de muestras atípicas, así como la detección de estructuras no lineales en los datos. Palabras clave: Espectrómetro UV-visible, calidad de agua, análisis multivariado, análisis de datos no lineal.

1. Introduction

One of the most recent continuous water quality monitoring measurement techniques, which reduces the difficulties of traditional sampling and laboratory water quality analysis [20], is UV-Visible in situ spectrometry. UV-Visible spectrometers register the absorbance of emitted light by particles suspended in water. The light is emitted from UV (< 400 nm) to visible wavelengths (> 400 nm). These sensors deliver more or less continuous measurements (approx. one per minute) that can be interpreted as concentrations of parameters commonly used

to evaluate the physico-chemical quality of water bodies. Some of these parameters that indicate the presence of pollutants are Total Suspended Solids (TSS) and Chemical Oxygen Demand (COD). The usefulness of these sensors implies constructing functional relationships between the absorbance and classical measurements of pollutant concentrations, such as TSS and COD, in the studied water system, taking into account the different wavelengths. Due to the composition of water from urban drainage, which depends on specific properties of the urban zone drained (industrial, residential, etc.), the
pollutant concentrations vary spatially and temporally [5]. Moreover, the monitoring of residential waste water exhibits a simultaneous presence of several dissolved and suspended particles, which leads to an overlapping of absorbances that can induce cross-sensitivities and consequently incorrect results. This situation implies that the construction of a functional relationship between absorbance and classical pollutant measurements is especially challenging and could need the application of more sophisticated statistical tools. In order to find appropriate relationships, several linear statistical tools have been applied so far (see e.g. [16], [4]). Chemometric models such as Partial Least Squares (PLS) [5] are used in order to calibrate a sensor for a water matrix taking into account cross-sensitivity [9]. Nevertheless, direct chemometric models can only be used if all components are known and if the Lambert-Beer law is valid, which is not the case when a great number of unknown compounds are involved [9]. Therefore, several authors (see for example [6], [9], [19]) have shown that it is necessary to undertake specific sensor calibrations for the studied water system and to explore linear and non-linear statistical methods for the UV-Visible data analysis and its relationship with chemical and physical parameters. Some aspects that still need to be addressed are: the selection of informative wavelengths, outlier detection, and the calibration and validation of functional relationships. These aspects are especially relevant for urban drainage, which has particular characteristics [5]. In this work we explore the use of descriptive multivariate (linear) and data mining kernel (non-linear) methods in order to detect data structure and address the above mentioned issues for in situ UV-Vis data analysis. Results are obtained through the analysis of a real-world data set from the influent of the San Fernando Waste Water Treatment Plant, Medellín, Colombia. We found that most of the variable combinations (more than 90%) are linear and can therefore be explained using classical linear statistical methods. Nevertheless, the presence of a slight non-linearity can be revealed using non-linear kernel methods.

2. Materials and Methods

2.1. UV-Visible spectrometry

The spectrometer spectro::lyser, sold by the firm s::can, is a submergible cylindrical sensor (60 cm long, 44 mm diameter) that allows absorbances between 200 nm and 750 nm to be measured at intervals of 2.5 nm. It includes a programmable self-cleaning system (using air or water). This instrument has been used for real time monitoring of different water bodies [8], [5], [3]. Measurements are done in situ without the need to extract samples, which avoids errors due to standard laboratory procedures [9].

2.2. UV-Vis data used for the analysis

The Medellín river's source is in the Colombian department of Caldas and along its 100 km length (approx.)

has 64 tributaries that pass through densely populated urban areas. Before the implementation of the cleaning plan of the Medellín river, titled "Programa de Saneamiento del Río Medellín y sus Quebradas Afluentes" [2], these small rivers carried residential, industrial and commercial pollution into the Medellín river without any treatment. The plan has allowed the construction of the San Fernando WWTP (SF-WWTP) (Planta de Tratamiento de Aguas Residuales San Fernando) in the locality of Itagüí (South side of the city of Medellín). The SF-WWTP receives waste water from the industrial and residential localities of Envigado, Itagüí, Sabaneta, La Estrella and part of the South of the city of Medellín. The facility has been constructed for a maximum flow rate of 1.8 m3/s. Preliminary, primary and secondary treatment through activated sludge (thickened and stabilized in anaerobic digesters and then dehydrated and sent to a landfill) is undertaken at the WWTP facility [1]. As a result of the secondary treatment, between 80% and 85% of the pollution is eliminated before the water is returned to the Medellín river. During 2006 the facility treated 39.4 million m3 and produced approx. 36000 tonnes of bio-solids [2]. The data set was obtained from EPM (Empresas Públicas de Medellín), the public utility company in charge of Medellín's drainage and treatment systems, at the influent of the SF-WWTP, and corresponds to UV-Vis absorbance spectra as well as Total Suspended Solids (TSS), Chemical Oxygen Demand (COD) and filtered Chemical Oxygen Demand (fCOD) for 124 samples obtained during dry weather. These samples were obtained in order to get a local calibration of the spectro::lyser sensor at the inlet of the SF-WWTP.

2.3. Statistical analysis

2.3.1. Principal Component Analysis

Principal Component Analysis (PCA) consists in obtaining axes (PCs) that are linear combinations of all variables and that summarize in the first PCs as much information as possible. The amount of information in each PC is measured as the percentage of variance retained. These axes are constructed by solving an eigenvalue problem, and therefore each new PC is orthogonal to the other PCs generated, assuring that the information is not redundant [10]. This is important in the context of UV-Vis measurements, as close wavelengths should measure redundant information. So, highly similar wavelengths can be filtered and the information can be reduced to some PCs. Moreover, the correlation to response variables different from the UV-Vis measurements can be investigated.

2.3.2. Kernel Methods

Suppose that a group of n objects needs to be analyzed. These objects can be of any nature, for example images, texts, water samples, etc. The first step before an analysis is conducted is to represent the objects in a way that is useful for the analysis. Most analyses conduct
a transformation of the objects, which needs to be known. Kernel methods project objects into a high dimensional space (therefore allowing the determination of non-linear relationships) through a mapping φ: X → H that does not need to be explicitly known. This mapping is achieved through a kernel function k(xi, xj) = ⟨φ(xi), φ(xj)⟩ and results in a similarity matrix K = (k(xi, xj)) comparing all objects in pairs. The resulting matrix has to be positive semi-definite. This kernel matrix is then used as an entry to different kinds of kernel methods, such as Support Vector Machines and Kernel Canonical Correlation Analysis (KCCA). The flexibility of this kind of method is due to the data representation as pair comparisons, because this does not depend on the data's nature. Moreover, as the same objects are compared independently from the data type, several kernels can be combined by addition and heterogeneous data types can be integrated to conduct the analysis. The simplest kernels are linear kernels, constructed by obtaining the inner product. Other kernels like Gaussian, polynomial and diffusion kernels have been used for genomic data (Vert et al. 2004). The most common kernel, which will also be used here, is the Gaussian kernel k(xi, xj) = exp(−σ d(xi, xj)²), where d is the Euclidean distance between objects and sigma (σ) is a parameter that is chosen via cross-validation.

2.3.3. Kernel Principal Component Analysis (KPCA)

This is a Principal Component Analysis conducted on kernels (therefore abbreviated kernel-PCA or KPCA). The search for principal components in the high dimensional space H is based on the mapping φ. Objects are assumed to be centered in the original space and in the high dimensional feature space. The new components are found by finding v and lambda (λ) so that the equality λv = Tv holds, T being the variance-covariance matrix of the individuals in the space H: T = (1/n) Σi φ(xi)φ(xi)ᵀ. Writing v as v = Σi αi φ(xi), the system can be rewritten and therefore reduced, as in classical PCA, to the solution of the following eigenvalue problem: nλα = Kα, K being the kernel matrix. Due to the mapping φ, the solution of this eigenvalue problem provides non-linear principal components in the high dimensional space, without knowing φ explicitly. The amount of non-linearity is mainly tuned through the kernel hyperparameters, here the parameter σ of the Gaussian kernel.

2.3.4. Kernel-K-means for the detection of outliers

On the non-linear projection of the samples obtained after KPCA we conducted a k-means clustering algorithm on three data sets: 1) the UV-Vis spectrometry data, 2) the response variables (TSS, COD, fCOD) and 3) the UV-Vis and response variables together, in order to compare the clusterings and detect samples that behave differently across these three data sets. Different behaviors should point to samples that show contradictory information between the UV-Vis and the response variables. The K-means algorithm on the non-linear projection works the same way as in a linear space. It starts by creating k random clusters (here 3) and reorganizing individuals until the within-cluster distances to the centroid (mean) of each cluster are as low as possible. At each step individuals are rearranged and centroids recalculated to determine the distance of each individual in the cluster to the centroid [11].

2.3.5. Support Vector Machine Regression

Support Vector Machine (SVM) regression and classification is very useful for detecting patterns in complex and non-linear data. The main problem resolved by SVM is the adjustment of a function f that describes the relationship between the objects X and the answer Y (which is binary when classification is done) using S (the data set). If the objects are vectors of dimension P, the relationship is described by f: R^P → R. Primarily, SVM is used for classification into two categories, where Y is a vector of labels, but an extension to SVM regression exists [18]. SVM classification and regression allow a compromise between parametric and non-parametric approaches. They allow the adjustment of a linear classifier in a high dimensional feature space to the mapped sample (φ(x1), y1), ..., (φ(xn), yn). In this case, the classifier is a hyperplane f(x) = ⟨w, φ(x)⟩ + b. The hyperplane, equidistant to the nearest point of each class, can be rescaled so that yi(⟨w, φ(xi)⟩ + b) = 1 for the nearest points. After rescaling, the distance of the nearest point is 1/|w| and between the two groups is 2/|w|, which is called the margin. In order to maximize it, the following optimization problem has to be solved: min |w|² subject to yi(⟨w, φ(xi)⟩ + b) ≥ 1. In order to solve this optimization problem, which apparently has no local minima and leads to a unique solution, a loss function is introduced: L(y, f(x)) = max(0, 1 − y f(x)). This function penalizes large values of f(x) with the opposite sign of y. Points that are on the boundaries of the classifier, and therefore satisfy the equality yi f(xi) = 1, are called the support vectors. The solution is found by solving the following optimization problem: min over f ∈ HK of Σi L(yi, f(xi)) + µ ||f||²HK, where HK is the high dimensional space and µ > 0 controls the trade-off between the fit of f and the approximation capacity of the space to which f belongs. The larger µ is, the more restrictive the search space is [12], [18]. In SVM regression the loss function differs and a new parameter (ε) appears: Lε(y, f(x)) = max(0, |y − f(x)| − ε). This loss function ignores errors of size less than ε ≥ 0. In classification, when a pattern is far from the margin
it does not contribute to the loss function. The basic SV regression seeks to estimate the linear function f(x) = ⟨w, φ(x)⟩ + b, where w and b are elements of H, the high dimensional space. A function minimizing the error is adjusted, controlling training error and model complexity; a small w indicates a flat function in the space H. In nu-SVM regression, ε is calculated automatically, making a trade-off between model complexity and slack variables (variables added for the optimization to work). If nu > 1, necessarily ε = 0, which is not helpful, because no errors would be considered in the loss function; so nu should be lower than 1. Here we used SVM regression to adjust a model for the prediction of the response values. Therefore, we divided the samples randomly into approx. 2/3 for training and calibration (82 samples) and approx. 1/3 for validation (41 samples). For the SVM regression the R [15] kernlab [7] package was used. To adjust the sigma parameter based on differential evolution [14] we used the DEoptim package [13]. This procedure was done only on the calibration data, using as the objective function the quadratic differences between the observed and the SVM-regression estimated data for each water quality parameter (TSS, COD and fCOD) independently.

3. Results and Discussion

Independent intensity values of the data set show a decrease in intensity when the wavelength increases. Variability is similar for all wavelengths but is slightly higher for lower wavelengths. Few extreme values are present at the univariate level, but no evidence exists that they could be due to sampling errors, and therefore no data filtering was undertaken. The response variables (TSS, COD, fCOD) have different behaviors, COD being skewed (with a high frequency of lower values) and the other two showing a normal distribution. Extreme high values are also found at the univariate level, but were not filtered either, for the same reasons as before. PCA conducted on all UV-Vis data showed that these data are extremely correlated and therefore redundant, as they can be summarized in the first PC with 90.1% of the variance. The first two PCs summarize 99.5% of the variance. The most important variables on the first PC (contributing the most variance) are the following wavelengths (in nm): 435, 432.5, 437.5, 440, 430, 307.5, 305, 302.5, 300 and 297.5. This result indicates that with only these wavelengths, enough information on the samples could be obtained, because approx. 90% of the variance is explained using these variables. Most samples are very similar (showing projections with very similar coordinates in the PC space). Individuals (samples) that behave differently in this sense are 69 and 67, because they separate clearly from the other samples on the first PC (Figure 1). These samples have extreme values on the wavelengths 435 nm, 432.5 nm, 437.5 nm, 440 nm, 430 nm, 307.5 nm, 305 nm, 302.5 nm, 300 nm and 297.5 nm, as expected.

Figure 1: Scatter plot of individuals on the first two PCs (99.5% of variance) of the UV-Vis spectrometry data. Projections of the variables are indicated by arrows in the same space. The upper left corner shows the barplot of eigenvalues. D = 20 indicates the width of the grid squares.

This can be due to measurement errors or to real variations in water quality, observed with more sensitivity at these wavelengths, which could be confirmed by measuring the water quality parameters. PCA including the response variables (TSS, COD, fCOD) does not change the individual structure, meaning that most information is already contained in the UV-Vis data. The projection of the response variables shows that they are also strongly correlated to the UV-Vis variables, especially TSS, as this variable is highly correlated to the first PC (very low angle with the PC1 axis), where 90.1% of the variance is explained. Moreover, COD and fCOD are strongly correlated to the second PC, which explains only 9.4% of the variance. These two variables are strongly correlated to each other. The variables most highly correlated to the second PC, and therefore to COD and fCOD, are (in nm): 205, 207.5, 210, 212.5, 215, 217.5, 220, 222.5, 225, 227.5 (Figure 2). This result indicates clearly that chemical parameters detected at visible wavelengths are related to suspended solids and that parameters related to organic pollution are related to non-visible wavelengths. Moreover, it is possible to conclude that the difference between filtered and non-filtered COD is very low, both showing similar variance behaviors. This does not mean that the COD and fCOD values are similar, but that the information they provide (in terms of variance) is similar. The PCA also made it possible to determine precisely which wavelengths are most related to each of the chemical parameters. TSS is close to wavelengths 435.0 nm, 432.5 nm, 437.5 nm and 440.0 nm; COD to wavelengths 222.5 nm, 220.0 nm, 217.5 nm and 215.0 nm; fCOD differs most from the other response variables and is close to wavelengths 212.5 nm, 210.0 nm, 207.5 nm and 205.0 nm. These results are in accordance with the relationship of the wavelengths correlated to PC1 for TSS and to PC2 for COD and fCOD (Figure 2).
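A minimal R sketch of this PCA screening is given below; the object name absorbance (a 124 × 221 matrix of spectra, with columns named by wavelength) is an assumption for illustration:

pca <- prcomp(absorbance, center = TRUE)   # PCA of the raw spectra
summary(pca)$importance[2, 1:2]            # variance share of PC1 and PC2
# wavelengths loading most heavily on PC1 (cf. 435, 432.5, ..., 297.5 nm)
head(sort(abs(pca$rotation[, 1]), decreasing = TRUE), 10)
# samples lying far out on PC1 are linear-outlier candidates (cf. 67, 69)
head(sort(abs(pca$x[, 1]), decreasing = TRUE), 5)
plot(pca$x[, 1:2], xlab = "PC1", ylab = "PC2")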


Figure 2: Scatter plot of individuals on the first two PCs (99.5% of variance) of the UV-Vis and response variables. Projections of the variables are indicated by arrows in the same space. The upper left corner shows the barplot of eigenvalues.

The best K-K-means grouping structure was obtained for 3 clusters on all three data sets (UV-Vis, response variables, UV-Vis + response variables). Therefore, we worked with 3 clusters. For these data sets, the best Gaussian kernel parameter was sigma = 0.0001. This low value indicates that even though the variables are highly linearly correlated, as was shown through PCA, some non-linear structure in the data persists and is extracted through the kernel projection on a Hilbert space (Figure 3). In order to detect samples that behave differently with regard to the UV-Vis and the response variable data, we compared the three clusterings generated with K-K-means and detected that 6 samples (approx. 5%) were systematically placed in a different cluster (13, 14, 27, 52, 75 and 89). Of the rest of the individuals, most (approx. 80%) were placed

Figure 4: Scatter plots of response variables highlighting PCA-outliers ("o") and kernel-outliers ("*"). Outliers, both at the linear and the non-linear level, can be used to highlight samples in order to a) detect measurement or sampling errors or b) implement a water quality alert system.

in the same cluster and approx. 20% were placed in two different clusters when UV-Vis alone, UV-Vis and response variables, and response variables alone were used. These samples are outliers at a non-linear level, whereas PCA outliers 69 and 67 are outliers at a linear level, as had already been detected by PCA (Fig. 4). Notice that samples 67 and 69 have different absorbance fingerprints (Figure 1) but their TSS, COD and fCOD concentration values are exactly the same (931 mg/L, 956 mg/L and 150 mg/L, respectively, for both samples 67 and 69, see Fig. 4). This indicates a problem with the data set received and confirms the detection of these samples as outliers. The outliers could have originated from wrong measurements of the spectro::lyser spectrometer or from the laboratory measurements. Because of the low weight of the laboratory measurements (only 3 variables), the outliers are more likely due to the absorbance measurements. SVM regression is much more robust and reliable than linear regression, because it is less affected by the linear outliers that showed a very different behavior from the other samples at the linear level (Figs. 2, 4), as has been shown in other works [12]. Figs. 5-7 show the results of the SVM regression applied on the UV-Vis data in order to model the response variables. Predictions on the calibration data are very accurate, but predictions on the validation data are not satisfactory for TSS, COD and fCOD.
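A minimal sketch of this calibration in R, using the kernlab and DEoptim packages cited in Section 2.3.5, is shown below; the objects X (spectra) and y (one response, e.g. TSS) and the tuning range for sigma are assumptions for illustration:

library(kernlab)
library(DEoptim)

set.seed(1)
idx <- sample(nrow(X), round(2 / 3 * nrow(X)))   # calibration subset
# objective: squared error on the calibration data for a candidate sigma
obj <- function(p) {
  fit <- ksvm(X[idx, ], y[idx], type = "nu-svr",
              kernel = "rbfdot", kpar = list(sigma = p[1]))
  sum((y[idx] - predict(fit, X[idx, ]))^2)
}
de    <- DEoptim(obj, lower = 1e-4, upper = 2,
                 control = DEoptim.control(itermax = 50, trace = FALSE))
sigma <- as.numeric(de$optim$bestmem)
fit   <- ksvm(X[idx, ], y[idx], type = "nu-svr",
              kernel = "rbfdot", kpar = list(sigma = sigma))
cor(y[-idx], predict(fit, X[-idx, ]))^2          # validation performance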

Figure 3: Scatter plot of individuals and cluster representation on the first two PCs obtained by KPCA and k-means. Individuals of each cluster obtained by k-means are represented with different colors and symbols. Upper left: 1) UV-Vis data; upper right: 2) response variables; bottom: 3) UV-Vis + response variables.
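A minimal sketch of the projection-plus-clustering step behind Figure 3 (in R, with kernlab; sigma = 1e-4 as reported above; X and Y are assumed names for the spectra and the response matrix):

library(kernlab)

kp   <- kpca(X, kernel = "rbfdot", kpar = list(sigma = 1e-4), features = 2)
cl_x <- kmeans(rotated(kp), centers = 3)$cluster   # clusters in KPCA space
kpy  <- kpca(as.matrix(Y), kernel = "rbfdot",
             kpar = list(sigma = 1e-4), features = 2)
cl_y <- kmeans(rotated(kpy), centers = 3)$cluster
# cluster labels are arbitrary, so match them before comparing; samples
# that never co-cluster across the data sets are outlier candidates
table(cl_x, cl_y)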

Figure 5: nu-SVM calibration (left) and validation (right) results for TSS concentrations. Calibrated sigma parameter = 0.82.


Figure 6: nu-SVM calibration (left) and validation (right) results for COD concentrations. Calibrated sigma parameter = 0.294.


Figure 7: nu-SVM calibration (left) and validation (right) results for filtered COD concentrations. Calibrated sigma parameter = 0.697.

4. Conclusions

Similar wavelengths of the UV-Vis data are highly linearly correlated with each other and with the chemical parameters TSS and COD. This analysis showed that very few wavelengths summarize the behavior of all the water-quality measures (less than 5% of all measured wavelengths) and that the wavelengths with the highest variability are visible wavelengths (> 400 nm). This could mean that the spectrometer measures visible pollution in a more accurate way than non-visible pollution, or that visible pollution is more variable. The PCA performed on the spectrometry data coupled with the laboratory measured data (TSS, COD and fCOD) showed that it is possible to detect the wavelengths related to each pollutant in relationship with the variance of the UV-Vis data. Visible wavelengths (432.5 nm to 440.0 nm) were more correlated to TSS, and UV wavelengths were more correlated to COD (215.0 nm to 222.5 nm) and fCOD (205.0 nm to 212.5 nm). These results are in accordance with the information given by the manufacturer of the spectrometer used, spectro::lyser (http://www.s-can.at/). Moreover, a group of wavelengths to be used for prediction could be chosen with the help of a PCA. The PCA allowed detecting outliers, which is not a standard procedure for detecting them, but could be a helpful approach when monitoring water in real time, due to its simple and fast application. The use of kernel-k-means allowed the detection of non-linear outliers, which would have remained undetected using linear analysis methods (PCA, linear clustering, multivariate outlier detection, etc.). This approach opens a new possibility for the use of kernel methods in the advanced identification of outliers for future continuous water quality monitoring and control (detection of measurement or sampling errors, alerts in treatment facilities, valve operation, etc.) and its objective and automatic operation. Finally, it can be concluded that Support Vector Machine regression allowed a very accurate prediction on the calibration data, but predictions on the validation data are not satisfactory, especially for TSS. This means that, despite the robustness of the predictions using SVM regression, the prediction accuracy needs to be improved, especially for the organic pollution (the COD and fCOD variables in this work). In order to improve prediction, a filtering of wavelengths could be performed before calibration. Moreover, other machine learning methods like decision trees or neural networks, or even fuzzy classification techniques [17], could be explored.

Acknowledgements

The authors acknowledge the Medellín Water and Sewage Company (Empresas Públicas de Medellín – EPM) for providing the data used in this research.

References

[1] Empresas Públicas de Medellín (EPM). Planta de tratamiento de aguas residuales San Fernando premio nacional de ingeniería año 2000. http://xue.unalmed.edu.co/mdrojas/evaluacion/PLANTA%20DE%20TRATAMIENTO%20DE%20AGUAS%20RESIDUALES%20SAN%20FERNADO.pdf, 2007 (accessed 10 January 2012).

[2] Empresas Públicas de Medellín (EPM). EPM y su Programa de saneamiento del río Medellín. http://www.epm.com.co/docsbid/aguas/Proyectos_Saneamiento_Rio_Medell%C3%ADn_Espa%C3%B1ol.pdf (accessed 10 January 2012), 2009.

[3] Gamerith, V., High resolution online data in sewer water quality modelling. PhD thesis: Faculty of Civil Engineering, University of Technology Graz (Austria), May 2011, p. 236 + annexes, 2011.

[4] Gruber, G., Bertrand-Krajewski, J.-L., de Bénédittis, J., Hochedlinger, M. and Lettl, W., Practical aspects, experiences and strategies by using UV/VIS sensors for long-term sewer monitoring. Water Practice and Technology (doi: 10.2166/wpt.2006.020), 1(1), p. 8, ISSN 1751-231X, 2006.

[5] Hochedlinger, M., Assessment of combined sewer overflow emissions. PhD thesis: Faculty of Civil Engineering, University of Technology Graz (Austria), June 2005, p. 174 + annexes, 2005.

[6] Hofstaedter, F., Ertl, T., Langergraber, G., Lettl, W. and Weingartner, A., On-line nitrate monitoring in sewers using UV/VIS spectroscopy. In: Wanner, J., Sykora, V. (eds): Proceedings of the 5th International Conference of ACE CR "Odpadni vody – Wastewater 2003", 13–15 May 2003, Olomouc, Czech Republic, pp. 341–344, 2003.

[7] Karatzoglou, A., Smola, A., Hornik, K. and Zeileis, A., kernlab - An S4 package for kernel methods in R. Journal of Statistical Software, 11(9), pp. 1-20. URL http://www.jstatsoft.org/v11/i09/, 2011.

[8] Langergraber, G., Fleischmann, N., Hofstaedter, F. and Weingartner, A., Monitoring of a paper mill wastewater treatment plant using UV/VIS spectroscopy. Trends in Sustainable Production, 49(1), pp. 9–14, 2004.

[9] Langergraber, G., Fleischmann, N. and Hofstaedter, F., A multivariate calibration procedure for UV/VIS spectrometric quantification of organic
López-Kleine & Torres / DYNA 81 (185), pp. 182-188. June, 2014. matter and nitrate in wastewater. Water science & technology, 47(2), pp. 63–71, 2003. [10] Lebart, L., Piron, M. and Morineau, A., Statistique exploratoire multimensionnnelle. Dunod, Paris, 1995. [11] MacQueen, J. B., Some Methods for classification and Analysis of Multivariate Observations. 1. Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, 1967. [12] Moguerza, J.M. and Muñoz, A., Support Vector Machines with Applications. Statistical Science, 21, pp. 322-336, 2006. [13] Mullen, K., Ardia, D., Gil, D., Windover, D. and Cline, J., 'DEoptim': An R Package for Global Optimization by Differential Evolution. Journal of Statistical Software, 40 (6), 1-26. URL http://www.jstatsoft.org/v40/i06/, 2011. [14] Price, K.V., Storn, R.M. and Lampinen, J.A., Differential Evolution A Practical Approach to Global Optimization. Berlin Heidelberg: SpringerVerlag. ISBN 3540209506, 2006. [15] R Development Core Team R, A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/, 2012. [16] Rieger, L., Vanrolleghem, P., Langergraber, G., Kaelin, D. and Siegrist, H., Long-term evaluation of a spectral sensor for nitrite and nitrate. Water Science and Technology. Vol. 57 (10). pp. 1563–1569, 2008. [17] Soto, S. and Jimenez, C., Aprendizaje supervisado para la discriminación y clasificación difusa. Dyna. Vol. 169. pp. 26-33, 2011. [18] Schölkopf, B., Smola A. J. Learning with kernels. The MIT Press, Cambridge, Massachusetts, 2002.

L. López-Kleine is a biologist from the Universidad Nacional de Colombia – Sede Bogotá. She received her Master's degree in Ecology, Evolution and Biometry from the University of Lyon 1 (France) and her PhD in Biology and Applied Statistics from AgroParisTech in Paris (France). She has been an associate professor at the Statistics Department of the Universidad Nacional de Colombia since 2009. Her main research areas are systems biology and statistical genomics, in which she focuses on applying and developing multivariate and data mining analyses. She is director of the research group "Métodos en Bioestadística".

A. Torres received his civil engineering degree and a specialist degree in management systems in engineering from the Pontificia Universidad Javeriana, Sede Bogotá. He earned an MSc in civil engineering and a PhD in urban hydrology at the Institut National des Sciences Appliquées in Lyon, France. He is an associate professor at the Pontificia Universidad Javeriana, Sede Bogotá, and director of the research group "Ciencia e Ingeniería del Agua y el Ambiente" (http://190.216.132.131:8080/gruplac/jsp/visualiza/visualizagr.jsp?nro=00000000000048). His main research areas are urban hydrology, especially urban drainage systems, water quality measurements and the management of sewer systems.



DYNA http://dyna.medellin.unal.edu.co/

A refined protocol for calculating the air flow rate of naturally-ventilated broiler barns based on CO2 mass balance

Un protocolo refinado basado en balance de masa de CO2 para el cálculo de la tasa de flujo de aire en instalaciones avícolas naturalmente ventiladas

Luciano Barreto-Mendes a, Ilda de Fatima Ferreira-Tinoco b, Nico Ogink c, Robinson Osorio-Hernandez d & Jairo Alexander Osorio-Saraz e

a Facultad de Ciencias Agrarias, Universidad Federal Vicosa, Brasil. luciano.mendes@ufv.br
b Facultad de Ciencias Agrarias, Universidad Federal Vicosa, Brasil. iftinoco@ufv.br
c Livestock Research Institute, Wageningen University and Research Center, The Netherlands. nico.ogink@wur.nl
d Facultad de Ingeniería Agrícola, Universidad Federal Vicosa, Brasil. robinson0413@ufv.br
e Facultad de Ciencias Agrarias, Universidad Nacional de Colombia, Colombia. aosorio@unal.edu.co

Received: May 9th, 2013. Received in revised form: May 2nd, 2014. Accepted: May 22nd, 2014.

Abstract
This study was conducted to evaluate relatively simple protocols for monitoring the ventilation rate (VR) of naturally-ventilated barns (NVB). The test protocols were first applied to a mechanically-ventilated broiler barn (MVB), where the VR could be estimated more accurately, and were then used to calculate the VR in the NVB. CO2 concentrations were measured with two different sampling schemes: (S1) the average of indoor measurements along the length of the building at two heights, 0.5 m and 1.5 m from the litter floor; and (S2) the same as the previous scheme, but with concentration measurements taken only at 0.5 m from the litter. The dynamic metabolic CO2 production rate of the birds was predicted with two different algorithms: (A1) remaining constant throughout the dark and light periods, and (A2) varying with animal activity on an hourly basis. The results demonstrated that the combination of S2 with A1 or A2 yielded the best estimate of the building VR in the NVB.

Keywords: building ventilation rate; aerial emissions from broiler barns; CO2 mass balance; Bland-Altman chart.

Resumen
Este estudio se realizó para evaluar protocolos relativamente simples para el monitoreo de la tasa de ventilación (TV) en instalaciones pecuarias con ventilación natural (ENV). Los protocolos de ensayo se aplicaron primero a una instalación de pollos de engorde mecánicamente ventilada (EMV), donde la TV se estimó con mayor precisión, y luego se utilizaron para calcular la TV en la ENV. Las concentraciones de CO2 se midieron con dos esquemas de muestreo diferentes: (S1) la media de las mediciones al interior y a lo largo de la longitud de la instalación a dos alturas, de 0,5 m y 1,5 m del suelo; y (S2) igual que la anterior, pero con las mediciones de concentración realizadas únicamente a 0,5 m del suelo. La tasa de producción de CO2 metabólico dinámico de las aves se predijo con dos algoritmos diferentes: (A1) manteniéndola constante durante los periodos de luz y oscuridad, y (A2) variándola con la actividad de los animales sobre una base horaria. Los resultados demostraron que la combinación de S2 con A1 o A2 presentó la mejor estimación de la TV en la ENV.

Palabras clave: tasa de ventilación en instalaciones; emisiones aéreas de instalaciones de pollos de engorde; balance de masa de CO2; gráfico de Bland-Altman.

1. Introduction

A considerable amount of the published literature on ammonia (NH3) emissions from poultry production is devoted to the quantification of NH3 emissions from mechanically-ventilated, environmentally-controlled poultry houses [1]. This kind of installation, however, is not typical in Brazil, where poultry production systems feature open sidewalls without total control of the inside conditions. This practice takes advantage of the tropical climate, using natural ventilation to reduce production

costs [2]. However, there are technical difficulties associated with measuring the building ventilation rate (VR), and thus the aerial emissions, of such facilities [3]. Determination of the VR in mechanically-ventilated buildings (MVB) is relatively easy compared to naturally-ventilated buildings (NVB), because the VR of an MVB can be determined by directly measuring the airflow rates through all the openings or ventilation fans of the building, which are then totaled to obtain the overall building VR [4]. On the other hand, an NVB has large open contact areas with its outdoor




environment, which is difficult to delimit. Various approaches exist to determine the VR of an NVB. One of these methods is based on the use of a tracer gas and is applicable to both mechanically and naturally ventilated buildings. The technique is based on the mass balance of a tracer gas with a known release rate inside the building. Several gases can serve as tracers, with metabolically produced carbon dioxide (CO2) being the most widely used because of advantages such as its homogeneity in the air and reduced costs, since it is readily available from the housed animals [5].

The challenge in using CO2 as a tracer to determine the VR is the proper estimation of the metabolic CO2 production by the animals and by other sources such as manure. There are two major methods of determining metabolic CO2 production: (a) one requires information on the respiratory quotient (RQ) and simplifies animal activity into dark and light periods [5]; (b) the other approach includes the hourly variation of animal activity in the total metabolic CO2 produced [4,6]. The question of how these two approaches compare to one another still needs to be answered. In addition, [7] mentioned that a crucial challenge for the determination of the VR in an NVB is choosing the locations for measuring the CO2 concentrations. This reinforces the importance of developing sampling schemes that lead to monitoring representative average barn CO2 concentrations.

Hence, the objective of this study was to define and evaluate an effective protocol to determine the VR of an NVB from several alternative methods. The potential refinements (i.e., test methods) that were investigated included: (a) use of an algorithm that best represents the production and release of metabolic CO2 in the barn, considering either diurnal or constant animal activity; and (b) determination of the CO2 concentration sampling scheme that leads to the best average indoor concentration, amongst the tested schemes. Because no reference method exists for the determination of the VR in NVBs that could be used to challenge the test methods, we first applied these test methods to an MVB, where a comparison with a reference method (summation of the airflow from all exhaust fans) was possible. The test method that yielded the best fit to the reference value was then used for the determination of the VR in the NVB.

2. Material and methods

2.1. Characteristics of the broiler barns and flock management

The study was conducted in two commercial broiler barns, one naturally ventilated and the other mechanically ventilated, both located on the same farm in the state of Minas Gerais, Brazil. The mechanically ventilated barn (MVB) measured 120.0 m L × 14.0 m W × 2.5 m H, with a fiber-cement tile roof and a polyurethane drop ceiling (the same material as used for the sidewall curtains). The sidewall curtains were closed most of the time, and ventilation was provided by 8 newly installed exhaust fans (specified capacity of 39,329 m3 h-1 each, 1.5 HP, at a static pressure of 12.0 Pa) placed on the west end of the building. Fresh air was brought into the barn through air inlets located at the east end, and the inlet openings were adjusted manually as needed. The ventilation program was based on the indoor air temperature and consisted of 7 stages, including minimum ventilation at the early age of the birds (< 3 wk). The barn had an initial placement of 23,100 male Cobbs® chicks on freshly dried coffee husks serving as floor bedding, which had never been used to raise broilers before. Chicks were reared up to a marketing age of 45 d.

The naturally ventilated barn (NVB) that was used for the application of the best VR protocol measured 75 m L × 12 m W × 2.75 m H. It had ceramic roof tiles and a polyurethane drop ceiling (the same material as used for the sidewall curtains). Ventilation was provided through manual operation of the sidewall curtains (fully open, half open, or nearly closed). The initial bird placement was 10,000 female Cobbs® on the same kind of unused litter described for the MVB. This flock was also reared up to a marketing age of 45 d.

The lighting program was similar for both sexes and followed the management guidelines of [8], consisting of a 1-hour dark period between the ages of 2 and 10 d; the number of dark hours was then increased to 9, and then decreased again to 8, 7 and 6 hours of dark at the ages of 22, 23 and 24 days, respectively. The light schedule then remained the same until the bird age was 39 d, when the number of dark hours was set to 5, decreasing by one hour a night until pick-up day. Because the birds were being reared during the summer period in Brazil, the dark hours were synchronized to sunrise, and thus the time of minimum animal activity (hmin) in eq. (4) usually occurred between 2:00 AM and 5:00 AM.

2.2. Sampling scheme and data collection

Carbon dioxide (CO2) concentrations were measured with a hand-held sensor (model AZ 77535 CO2/Temp/RH Meter, AZ Instrument Corp., Taichung City, Taiwan) with a measuring range of 0 ~ 9999 ppm, a resolution of 1 ppm and an accuracy of ± 30 ppm ± 5% of the reading (according to specifications; factory calibrated). Data collection was done once every three hours, for a 48-hour period, performed weekly throughout the 7-week grow-out period.

Sampling scheme 1 (S1): Indoor CO2 concentrations were measured at three different distances along the central axis of the building (at 20, 60 and 100 m from the exhaust fans) and at two different heights (0.5 and 1.5 m above the litter). Outdoor CO2 concentrations were measured at three different points along the south side of the building, which was considered to be the air inlet, as during the experimental period the wind was consistently coming from the south.

Sampling scheme 2 (S2): Same as S1, except that the indoor concentrations were taken only at 0.5 m above the litter, i.e. closer to the CO2 plume generated by the birds.




2.3. Algorithms for calculating building ventilation rate (VR) and statistical analysis

2.3.1. Algorithms for calculating building ventilation rate (VR)

Building ventilation rate was estimated through eq. (1) [9]:

$VR = \dfrac{A \times \left[ (CO_2)_{metabolic} + (CO_2)_{litter} \right]}{\Delta[CO_2]}$    (1)

where VR is the building ventilation flow (m3 h-1 hpu-1); A is the relative animal activity (non-dimensional); $(CO_2)_{metabolic}$ is the metabolic CO2 production by the animals (m3 h-1 hpu-1); $(CO_2)_{litter}$ is the CO2 released by the litter (m3 h-1 hpu-1); and $\Delta[CO_2] = [CO_2]_{indoors} - [CO_2]_{outdoors}$ is the difference between the indoor and outdoor CO2 concentrations (ppm). CIGR [9] defines 1 hpu (heat production unit) as 1000 W of total heat at 20 °C. Two different algorithms were tested to estimate the building VR, as follows.

Varying metabolic CO2 production (algorithm A1): For this algorithm, A in eq. (1) was set constant and equal to 1, while the metabolic production of CO2 was calculated using eqs. (2) and (3), the same ones used by [5]. Here, animal activity was simplified to a dark- and light-period behavior, represented by two different respiratory quotients (RQ):

$(CO_2)_{metabolic} = \dfrac{THP}{\frac{16.18}{RQ} + 5.02}$    (2)

$THP = N \times 10.62 \times m^{0.75}$    (3)

where THP is the total heat production of the animals under thermoneutral conditions (W); RQ is the respiratory quotient (non-dimensional); N is the number of animals in the barn; and m is the animal body mass (kg). The RQ value for broilers used in this algorithm was 0.9 [5], for all ages. According to [5], RQ values of tom turkeys remained constant at around 1.0 for ages of 1–35 days, and since broilers and young turkeys are similar in their bioenergetics, we used a constant RQ value for the tested bird age range. In this algorithm, dark vs. light activity was taken into consideration by reducing THP during the night period by 25%, as recommended by [5].

Varying relative animal activity (algorithm A2): For this algorithm, metabolic CO2 production was set constant and equal to 0.180 m3 h-1 hpu-1 for chicks with body weight < 0.5 kg and equal to 0.185 m3 h-1 hpu-1 for body weight ≥ 0.5 kg [9], while the relative animal activity was calculated with the equations used by [10]; namely, the animal activity is represented by a sinusoidal model of the following form:

$A = 1 - a \times \sin\left[\dfrac{2\pi}{24}\left(h + 6 - h_{min}\right)\right]$    (4)

where a = 0.08 is a constant that expresses the amplitude with respect to the constant 1 [9]; h is the time of the day (hour); and hmin is the time of the day with minimum activity after midnight (hour).

Total heat production data for both algorithms were adjusted for environmental temperature deviations from neutrality by using eq. (5):

$THP_{corr,T} = 4 \times 10^{-5} (20 - T)^{3} + 1$    (5)

where THPcorr,T is the correction factor for temperature on the total heat production from the animals, based on the thermoneutral level of 20 °C (non-dimensional); and T is the indoor air temperature (°C). The contribution of CO2 production from the litter was accounted for by multiplying the metabolic CO2 production rate in both algorithms by a correction factor of 1.077, as suggested by [5].

In this study, the results obtained from the two sampling schemes were used as input to the two algorithms, resulting in four different test methods, which were compared to the reference method. The labels attributed to the four test methods are: Test method 1 - sampling scheme 1 and algorithm 1; Test method 2 - sampling scheme 1 and algorithm 2; Test method 3 - sampling scheme 2 and algorithm 1; Test method 4 - sampling scheme 2 and algorithm 2.

2.3.2. Reference method for the determination of building VR and statistical analysis

The reference procedure for calculating the VR of the MVB in this study consisted of the summation of the flow rates through all the running exhaust fans. The flow rate of the exhaust fans was calculated by measuring the upstream air velocity with a hot-wire anemometer (model 425, Testo®, São Paulo, Brazil), with a specified measurement range of 0 ~ 2 m s-1, a resolution of 0.01 m s-1 and an accuracy of ± (0.03 m s-1 + 5% of the reading), factory calibrated. Measurements were taken at 16 traverse points evenly distributed across the fan's entrance area. The mean air velocity was then multiplied by the fan area to obtain the fan VR. Because the flow rate through the exhaust fans can vary with the barn static pressure, which was not monitored, every time a new stage of fans was activated a new measurement of their flow rate was taken.
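To make the calculation chain of Section 2.3.1 concrete, the following sketch strings together eqs. (1), (4) and (5) using the constant metabolic rates of algorithm A2. It is a minimal illustration under our own assumptions (the function names, example inputs, ppm-to-volume-fraction conversion and the point at which the temperature correction is applied are ours), not the authors' computer code.

```python
import math

def relative_activity(hour, h_min, a=0.08):
    """Relative animal activity A from the sinusoidal model of eq. (4)."""
    return 1.0 - a * math.sin((2.0 * math.pi / 24.0) * (hour + 6.0 - h_min))

def thp_temperature_correction(t_indoor_c):
    """Non-dimensional temperature correction factor of eq. (5)."""
    return 4e-5 * (20.0 - t_indoor_c) ** 3 + 1.0

def ventilation_rate_a2(hour, h_min, t_indoor_c, co2_in_ppm, co2_out_ppm,
                        body_mass_kg, litter_factor=1.077):
    """Building VR (m3 air per h per hpu) from the CO2 balance of eq. (1),
    using the constant metabolic CO2 rates of algorithm A2 [9]. Applying
    the eq. (5) factor to the per-hpu rate is our simplifying assumption."""
    co2_metabolic = 0.180 if body_mass_kg < 0.5 else 0.185   # m3 h-1 hpu-1 [9]
    co2_metabolic *= thp_temperature_correction(t_indoor_c)
    co2_total = co2_metabolic * litter_factor                # litter adds 7.7% [5]
    activity = relative_activity(hour, h_min)
    delta_co2 = (co2_in_ppm - co2_out_ppm) * 1e-6            # ppm -> volume fraction
    return activity * co2_total / delta_co2

# Illustrative reading: 3 PM, minimum activity at 3 AM, 26 degC indoors,
# 1200 ppm indoors vs. 450 ppm outdoors, 1.8-kg birds.
print(ventilation_rate_a2(15, 3, 26.0, 1200.0, 450.0, 1.8))
```

With the S2 concentrations averaged over the sampling points, such a function can be evaluated reading by reading and the results averaged per bird age.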




In order to assess how the reference measurement method for the VR is related to each of the test methods, an analysis that delineates the agreement between each test method and the reference method was performed with the software SAS 9.2®. Specifically, agreement was assessed by regressing the difference between the reference and test methods (ΔVR = VRreference − VRtest) on the mean VR obtained by the two methods ($\overline{VR}$ = [VRreference + VRtest]/2); if the null hypothesis H0 of ΔVR = 0 cannot be rejected, there is no difference between the two methods. This method was proposed by [11] and improved by [12], in which ΔVR is regressed on $\overline{VR}$ (eq. 6):

$\Delta VR = \beta_0 + \beta_1 \times \overline{VR}_i + \varepsilon_i$    (6)

where ΔVR = (VRreference − VRtest) (m3 h-1 bird-1); β0 is the Y-intercept, a measure of systematic positive or negative bias (m3 h-1 bird-1); β1 is the slope, a measure of non-systematic heterogeneous bias (non-dimensional); $\overline{VR}_i$ is the average of the VR measured with the reference and the test method (m3 h-1 bird-1); and εi is the independent, normally distributed, homogeneous random error (m3 h-1 bird-1). In eq. (6), the intercept (β0) and the slope (β1) represent homogeneous and heterogeneous systematic bias, respectively. A test of significance for each coefficient was carried out with PROC REG in SAS to assess whether the systematic bias is statistically different from zero.
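The agreement regression of eq. (6) is straightforward to reproduce outside SAS; the sketch below is a minimal ordinary-least-squares version in Python (our illustration, with synthetic data). Note that it reports conventional standard errors, whereas the paper uses heteroscedasticity-consistent ("white") standard errors when a non-uniform error spread is detected.

```python
import numpy as np

def bland_altman_regression(vr_reference, vr_test):
    """Regress the method difference on the method mean (eq. 6):
    delta_vr = beta0 + beta1 * vr_bar + error. Returns the coefficient
    estimates and their conventional (homoscedastic) standard errors."""
    vr_reference = np.asarray(vr_reference, dtype=float)
    vr_test = np.asarray(vr_test, dtype=float)
    delta = vr_reference - vr_test                   # Delta VR
    vr_bar = 0.5 * (vr_reference + vr_test)          # mean of the two methods
    x = np.column_stack([np.ones_like(vr_bar), vr_bar])
    beta, *_ = np.linalg.lstsq(x, delta, rcond=None)
    resid = delta - x @ beta
    sigma2 = resid @ resid / (len(delta) - 2)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(x.T @ x)))
    return beta, se

# Synthetic usage example (illustrative values only): a test method that
# underestimates the reference by a constant offset.
rng = np.random.default_rng(0)
ref = rng.uniform(2.0, 8.0, 64)
test = ref - 1.5 + rng.normal(0.0, 0.5, 64)
beta, se = bland_altman_regression(ref, test)
print(f"beta0 = {beta[0]:.2f} +/- {se[0]:.2f}, beta1 = {beta[1]:.2f} +/- {se[1]:.2f}")
```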

After estimating the bias, attention was also given to the nature of the random error in the measurements provided by each test method. The magnitude of the error was calculated as the standard error (SE), and the data set was tested for a non-uniform error distribution [13] with the Heteroscedasticity Test (HCT) in SAS. When non-uniformity was detected, the adjusted SEs (also called "white" errors) for β0 and β1 were calculated through the option ACOV in the MODEL statement of PROC REG in SAS.

3. Results and discussion

3.1. Assessment of agreement between reference and test methods in the MVB

The regression lines for the comparison between the reference method and the four different test methods for the measurement of VR are presented in Fig. 1. The plots show heterogeneous patterns in the spread of the data points along the measurement range for all four test methods; the points tended to spread further apart in the mid-range of the tested values. Evidence, at the 95% confidence level, for the existence of a heterogeneous spread of the random error can be seen in Table 1. The results confirm the existence of a heterogeneous error distribution for most of the test methods. Consequently, the significance test for the coefficients in the regression represented by eq. (6) was done with the corrected SEs (SEβ0 and SEβ1, Table 1) [12]. Regression results for the model in eq. (6) for all four test methods can also be seen in Table 1. The significance test for β0 revealed that a systematic positive homogeneous error was present for all test methods.

Figure 1. Plots of the difference between each of the alternative CO2-balance methods for measuring the ventilation flow rate and the reference method (ΔVR), against the average of the two methods being compared ($\overline{VR}$). The four test methods are the combinations of: (1) sampling scheme 1 and algorithm 1; (2) sampling scheme 1 and algorithm 2; (3) sampling scheme 2 and algorithm 1; (4) sampling scheme 2 and algorithm 2.




Table 1. Summary of statistics for the evaluation of agreement between the reference method and the 4 test methods, obtained by regressing the data to the model $\Delta VR = \beta_0 + \beta_1 \overline{VR}$.

Test method | $\hat{\beta}_0$ (m3 h-1 bird-1) | $\hat{\beta}_1$ (non-dimensional) | Adj. R2 | p-value for regression | N
1 | 2.2* ± 0.5 | -0.2 ± 0.1 | 0.02 | 0.1270 | 64
2 | 2.9* ± 0.6 | -0.4 ± 0.2 | 0.08 | 0.0128 | 64
3 | 1.1* ± 0.8 | 0.2 ± 0.2 | 0.02 | 0.1405 | 64
4 | 1.4* ± 0.9 | 0.1 ± 0.2 | 0.0099 | 0.5389 | 64

* Estimated coefficient is significantly different from zero at the 95% probability level.

A systematic underestimation of CO2 production leads to an underestimation of the ventilation rate (eq. 1), which explains why the bias represented by β0 was positive; the lowest regressed intercept values were (1.1 ± 0.8) m3 h-1 bird-1 for test method 3 and (1.4 ± 0.9) m3 h-1 bird-1 for test method 4 (Table 1). A potential reason for the existence of systematic bias is that the models used to estimate metabolic CO2 production in the barn were based on empirical coefficients (either RQ for algorithm 1 or the constant CO2 production rate in algorithm 2) for broiler breeds tested during the 1990s or earlier, while modern broilers have growth rates that keep increasing over the years, with a consequently higher metabolic rate and thus more CO2 production [5]. Calvet et al. [14] suggested that the model for the determination of VR from the CO2 balance for broiler litters should be adjusted with updated coefficients that account for both the environmental temperature and the modern broiler strains obtained through advances in genetics and improved feed composition. Havenstein et al. [15] stated that the broiler growth rate in terms of body weight (BW) increased by approximately 73 g yr-1 from 1976 to 1991; increased BW means an increased metabolic rate, an improved feed conversion rate and, consequently, more CO2 released per animal over time. Hence, new experiments need to be designed and performed on the quantification of the bioenergetics of modern broiler breeds in order to update and improve the metabolic CO2 production estimates.

The significance test for β1 indicated that, for the majority of the test methods, the systematic heterogeneous bias did not statistically differ from 0, with values ranging from (-0.4 ± 0.2) to (0.2 ± 0.2). This outcome indicates that systematic heterogeneous bias was nearly nonexistent for all of the test methods.

Results of the non-linear regression analysis performed to fit the VR vs. Δ[CO2] data from all test methods to the model $VR = a \times \Delta[CO_2]^{b}$ can be seen in Table 2. The empirical values obtained for the coefficient b for test methods 3 and 4 were (-0.9 ± 0.1), not significantly different from the theoretical value of -1 (p < 0.0001), indicating that these test methods present the best fit to the model. Hence, because test methods 3 and 4 presented the lowest systematic bias while yielding the best fit to the theoretical model, they were regarded as the most appropriate for application to the NVB.

3.2. Calculations of VR for the NVB with the best selected test methods

Calculated mean VR values by age for the NVB, obtained with test methods 3 and 4, are presented in Table 3, along with those obtained by [16], who recommended minimum and maximum VR values related to bird body weight. The VR data obtained with test methods 3 and 4 in this study compare well with those from [16], especially for the maximum recommended values.

Table 2. Non-linear regression results for the fit of experimental and calculated data from the mechanically ventilated building (MVB) to the model $VR = a \times \Delta[CO_2]^{b}$.

Test method | Mean Δ[CO2] (ppm) | a ± SE | b ± SE | Adj. R2 | Regression P-value
1 | 977 | 6273 ± 4383 | -0.6 ± 0.1 | 0.33 | < 0.0001
2 | 977 | 7828 ± 5948 | -0.6 ± 0.1 | 0.31 | < 0.0001
3 | 1015 | 1109 ± 718 | -0.9* ± 0.1 | 0.51 | < 0.0001
4 | 1015 | 1297 ± 899 | -0.9* ± 0.1 | 0.48 | < 0.0001

* The parameter estimate is not significantly different from the theoretical value of -1 at a confidence level of 95%.
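A power-law fit of this form can be reproduced with standard non-linear least squares, as sketched below with synthetic data (our illustration; the authors performed the analysis in SAS).

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(delta_co2, a, b):
    """Model VR = a * (Delta[CO2])**b, as fitted in Table 2."""
    return a * np.power(delta_co2, b)

# Synthetic example data (illustrative only): a ventilation rate roughly
# inversely proportional to the indoor-outdoor CO2 difference, plus noise.
rng = np.random.default_rng(1)
delta = rng.uniform(200.0, 1600.0, 64)                   # ppm
vr = 1200.0 / delta * rng.lognormal(0.0, 0.2, 64)        # m3 h-1 bird-1

(a_hat, b_hat), cov = curve_fit(power_law, delta, vr, p0=(1000.0, -1.0))
a_se, b_se = np.sqrt(np.diag(cov))
print(f"a = {a_hat:.0f} +/- {a_se:.0f}, b = {b_hat:.2f} +/- {b_se:.2f}")
```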



Table 3. Bird body weight-related ventilation rates (VR, m3 h-1 bird-1) for broiler chickens as a function of bird age, as presented by [16] and as calculated in this study for the naturally ventilated barn (NVB) with test methods 3 and 4.

Bird age (wk) | Lacambra, 1997 | This study: Test method 3 | This study: Test method 4
1 | 0.3 | 0.8 ± 0.2 | 0.8 ± 0.2
2 | 0.8 | 2.0 ± 0.5 | 2.0 ± 0.4
3 | 2.1 | 1.6 ± 0.2 | 1.6 ± 0.2
4 | 4.3 | 2.1 ± 0.3 | 2.1 ± 0.3
5 | 7.5 | 4.1 ± 0.6 | 4.6 ± 0.7
6 | 11.5 | 8.1 ± 0.7 | 8.9 ± 0.8

The range of variability in VR by age for the NVB calculated with test methods 3 and 4 is also presented in Table 3 in terms of the standard error of the mean (SE), going from 0.2 m3 h-1 bird-1 to 0.7 m3 h-1 bird-1 for ages 1 and 6 wk, respectively, for test method 3, and from 0.2 m3 h-1 bird-1 to 0.8 m3 h-1 bird-1 for ages 1 and 6 wk, respectively, for test method 4. The uncertainty associated with the use of the CO2 mass balance for NVBs in livestock has been evaluated by [10] for pig barns and by [17] for dairy cow barns. One of the reasons for the relative variability reported in most of these studies is the dependence on wind forces, which cause the concentrations to vary to a large extent and, most importantly, cause the difference in CO2 concentration between the inside and the outside of the barn (ΔCO2) to be small; this is when the CO2 balance method should be used with care in naturally ventilated barns. Van Ouwerkerk and Pedersen [18] suggested that ΔCO2 values should not be lower than 200 ppm in order for the method to yield reliable results. In this study, more than 90% of the ΔCO2 values used in test methods 3 and 4 for the NVB met the > 200 ppm criterion of [18], ranging from 105 to 1596 ppm with a mean and SE of (577 ± 34) ppm for both test methods.

4. Conclusions

Four combinations of two sampling schemes and two sets of calculations were tested to measure the ventilation rate of a mechanically ventilated, negative-pressure broiler barn based on the metabolic CO2 mass balance method. The best test methods were selected in terms of the smallest systematic and heterogeneous bias, and were applied to calculate the ventilation rate of a naturally ventilated broiler house. The following conclusions can be drawn:

- Including either variable animal activity throughout the day or averaged dark and light periods in the calculations of metabolic CO2 production, combined with CO2 concentrations sampled at the influence zone of the birds, yielded the best estimates of VR for the negative-pressure barn as compared to the reference method;

- Estimates of the air flow rate for the naturally ventilated barn were calculated by considering variable animal activity throughout the day and CO2 concentrations measured at the bird influence zone, with values ranging from (0.44 ± 0.04) m3 h-1 bird-1 to (10 ± 1) m3 h-1 bird-1 at the ages of 1 and 7 weeks, respectively.

References

[1] Xin, H., Gates, R.S., Green, A.R., Mitloehner, F.M., Moore Jr., F.M. and Wathes, C.M., Environmental impacts and sustainability of egg production systems. In: Emerging Issues: Social Sustainability of Egg Production Symposium. Poultry Science, doi: 10.3382/ps.2010-00877, 2011.
[2] Tinôco, I.F.F., Novas tendências em projetos para a avicultura industrial. In: FACTA, Jaboticabal - SP, pp. 1-101, 2002.
[3] Ogink, N.W.M., Mosquera, J., Calvet, S. and Zhang, G., Methods for measuring gas emissions from naturally ventilated livestock buildings: developments over the last decade and perspectives for improvement. Accepted for publication in Biosystems Engineering, 2012.
[4] Calvet, S., Gates, R.S., Zhang, G., Estellés, F., Ogink, N.W.M., Pedersen, S. and Berckmans, D., Measuring gas emissions from livestock buildings: a review on uncertainty analysis and error sources. Accepted for publication in Biosystems Engineering, 2012.
[5] Xin, H., Li, H., Burns, R.T., Gates, R.S., Overhults, D.G. and Earnest, J.W., Use of CO2 concentration or CO2 balance to assess ventilation rate of commercial broiler houses. Transactions of the ASABE, 52 (4), pp. 1353-1361, 2009.
[6] Pedersen, S., Blanes-Vidal, V., Joergensen, H., Chwalibog, A., Haeussermann, A., Heetkamp, M.J.W. and Aarnink, A.J.A., Carbon dioxide production in animal houses: a literature review. Agricultural Engineering International, 10, Manuscript BC 08 008, 2008.
[7] Zhang, G., Pedersen, S. and Kai, P., Uncertainty analysis of using CO2 production models by cows to determine ventilation rate in naturally ventilated buildings. In: XVII World Congress on Agricultural Engineering (CIGR), pp. 1-10, 2011.
[8] Cobb-Vantress, Suplemento de crescimento e nutrição para frangos de corte, Cobb 500TM, 2012 [date of reference: August 23 of 2012]. Available at: http://www.cobb-vantress.com
[9] CIGR, Climatization of Animal Houses – Heat and Moisture Production at Animal and House Level. 4th Report of CIGR Working Group, Horsens, Denmark, 2002.
[10] Blanes, V. and Pedersen, S., Ventilation in pig houses measured and calculated by carbon dioxide, moisture and heat balance equations. Biosystems Engineering, 92 (4), pp. 483-493, 2005.
[11] Altman, D.G. and Bland, J.M., Measurement in medicine: the analysis of method comparison studies. The Statistician, 32 (3), pp. 307-317, 1983.
[12] Fernandez, R. and Fernandez, G., Validating the Bland-Altman method of agreement. Annual Conference of Western Users of SAS Software, San Jose, California, USA, pp. 1-17, 2009.
[13] Hopkins, W.G., Measures of reliability in sports medicine and science. Sports Medicine, 30 (1), pp. 1-15, 2000.
[14] Calvet, S., Estellés, F., Cambra-López, M., Torres, A.G. and Van den Weghe, H.F.A., The influence of broiler activity, growth rate, and litter on carbon dioxide balances for the determination of ventilation flow rates in broiler production. Poultry Science, 90 (11), pp. 2449-2485, 2011.



[15] Havenstein, G.B., Ferket, P.R. and Qureshi, M.A., Growth, livability and feed conversion of 1957 versus 2001 broilers when fed representative 1957 and 2001 broiler diets. Poultry Science, 82 (1), pp. 1500-1508, 2003.
[16] Lacambra, J.M.C., Sistemas de ventilación y refrigeración en avicultura. Selecciones Avícolas, 39 (6), pp. 347-357, 1997.
[17] Samer, M., Berg, W., Müller, H.-J., Fiedler, M., Gläser, M., Ammon, C., Sanftleben, P. and Brunsch, R., Radioactive 85Kr and CO2-balance for ventilation rate measurements and gaseous emissions quantification through naturally ventilated barns. Transactions of the ASABE, 54 (3), pp. 1137-1148, 2011.
[18] Van Ouwerkerk, E.N.J. and Pedersen, S., Application of the carbon dioxide mass balance method to evaluate ventilation rates in livestock buildings. In: Proc. XII World Congress on Agricultural Engineering (CIGR), pp. 516-529, 1994.

Luciano Barreto Mendes received a B.S. degree in Agricultural Engineering in April 2008 from the Federal University of Campina Grande (Paraíba State, Brazil), an M.Sc. degree in Agricultural and Biosystems Engineering in September 2010 from Iowa State University (Iowa, U.S.A.), and a D.Sc. degree in March 2014 from the Federal University of Viçosa (Minas Gerais State, Brazil). He works in the area of protected environments for animals and has experience in: (1) instrumentation and monitoring of aerial emissions from poultry manure; (2) animal bioenergetics; (3) mitigation of emissions based on the management of poultry systems; and (4) the use of CFD tools to predict air motion patterns in animal barns. Dr. Mendes is currently a post-doctoral researcher at Wageningen UR Livestock Research (Wageningen, the Netherlands).

Ilda de Fátima Ferreira Tinôco holds a Bachelor's degree in Agricultural Engineering (1980) and an M.Sc. degree in Animal Structures (1988), both from the Federal University of Lavras (Minas Gerais State, Brazil), and a D.Sc. degree in Animal Science from the Federal University of Minas Gerais (Minas Gerais State, Brazil, 1996). She is currently an Associate Professor at the Department of Agricultural Engineering (DEA) of the Federal University of Viçosa (UFV) and coordinates the UFV branch of the following scientific exchange programs: (1) the CAPES/FIPSE Agreement involving the University of Illinois and Purdue University, in the U.S.A.; (2) the Umbrella Agreement between UFV, Iowa State University and the University of Kentucky (both in the U.S.A.); and (3) the Scientific and Technical Agreement between UFV and the University of Évora (Portugal), the National University of Colombia (Colombia), Iowa State University and the University of Kentucky (U.S.A.). Professor Tinôco also coordinates the Center for Research in Animal Microenvironment and Agri-Industrial Systems Engineering (AMBIAGRO) and is the President of the DEA-UFV International Relations Committee. Her research areas are the design of rural structures and evaporative cooling systems, thermal comfort for animals, air quality and animal welfare.

Nico Ogink received a B.S. degree in Animal Science (1981), an M.Sc. degree in Animal Production (1985) and a PhD degree in Tropical Animal Sciences (1993), all from Wageningen University (Wageningen, the Netherlands). Dr. Ogink is a Research Program Leader and Senior Scientist at Wageningen UR Livestock Research. He is a member of the advisory group of the Dutch Ministry of Environment and the Dutch Ministry of Agriculture for the evaluation of low-emission housing systems in the Netherlands, an advisor to both ministries on selecting BAT options, a member of the national Environmental Assessment Committee, and a member of the National Health Advisory committee (the Netherlands) that reports on livestock production and public health. He is also involved in the international VERA initiative, aimed at developing international measurement protocols for the assessment of the ecological performance of livestock production systems within the EU. Dr. Ogink is involved in exchanges with several research groups in Europe, the U.S.A. and Brazil.

Robinson Osorio Hernandez holds a B.S. in Agricultural Engineering from the National University of Colombia (2006) and an M.Sc. degree in Design of Rural Constructions from the Federal University of Viçosa (2012). From 2007 to 2010 he worked on processing and quality control for coffee at the Colombian Federation of Coffee Farmers (Antioquia), and was also a Professor of Electrotechnics and Design of Animal Structures at the National University of Colombia. Since August 2014 he has been pursuing a D.Sc. degree in Livestock Production at the Federal University of Viçosa (Minas Gerais, Brazil). His research areas are rural constructions, the thermal environment for livestock production, grain processing and storage, and computational modeling.

Jairo Alexander Osorio Saraz holds a Bachelor's degree in Agricultural Engineering (1998), a specialization in Environmental Legislation (2001) and an M.Sc. degree in Materials Engineering (2006), all from the National University of Colombia (UNAL, Medellín, Colombia). In 2011 he received a D.Sc. degree in Rural Constructions from the Federal University of Viçosa (Minas Gerais State, Brazil). Since 2003 Dr. Osorio has been a Professor at UNAL, teaching and advising in the areas of design of livestock housing, materials technology for livestock housing, air quality and animal welfare. He has been the Dean of the Faculty of Agricultural Sciences at UNAL since 2012.



http://dyna.medellin.unal.edu.co/

Use of a multi-objective teaching-learning algorithm for reduction of power losses in a power test system

Uso de un algoritmo de enseñanza-aprendizaje multi-objetivo para la reducción de pérdidas de energía en un sistema de potencia de prueba

Miguel A. Medina a, Juan M. Ramirez a, Carlos A. Coello b & Swagatam Das c

a CINVESTAV del IPN Unidad Guadalajara, Jal., México. Departamento de Ingeniería Eléctrica, mmedina[jramirez]@gdl.cinvestav.mx
b CINVESTAV del IPN Unidad Zacatenco, D.F., México. Departamento de Computación, ccoello@cs.cinvestav.mx
c Indian Statistical Institute, Electronics and Communication Sciences Unit, Kolkata, India. swagatam.das@isical.ac.in

Received: June 1st, 2013. Received in revised form: January 23rd, 2014. Accepted: February 26th, 2014.

Abstract
This paper presents a multi-objective teaching-learning algorithm based on decomposition for solving the optimal reactive power dispatch problem (ORPD). The effectiveness and performance of the proposed algorithm are compared with a multi-objective evolutionary algorithm based on decomposition (MOEA/D) and with NSGA-II. A benchmark power system model is used to test the algorithms' performance. The results of the power loss reduction, as well as the performance metrics, indicate that the proposed algorithm is a reliable choice for solving the problem.

Keywords: Multi-objective evolutionary algorithm based on decomposition (MOEA/D); Multi-objective teaching-learning algorithm; Optimal reactive power dispatch.

Resumen
Este artículo presenta un algoritmo de enseñanza-aprendizaje multi-objetivo basado en descomposición para resolver el problema del despacho óptimo de potencia reactiva (ORPD). La efectividad y el desempeño del algoritmo propuesto son comparados con respecto a un algoritmo evolutivo multi-objetivo basado en descomposición (MOEA/D) y con el NSGA-II. Un modelo de sistema de potencia de referencia se utiliza para probar el desempeño de los algoritmos. Los resultados de la reducción de las pérdidas de energía, así como las métricas de desempeño, indican que el algoritmo propuesto es una opción fiable para resolver el problema.

Palabras clave: Algoritmo evolutivo multi-objetivo basado en descomposición (MOEA/D); Algoritmo de enseñanza-aprendizaje multi-objetivo; Despacho óptimo de potencia reactiva.

1. Introduction

Optimal reactive power dispatch (ORPD) is a useful tool in modern energy management systems, and it plays a significant role in the safe operation of power systems. One of the main tasks of a power system operator is to manage the system in such a way that its operation is safe and reliable. Its main aim is to determine the optimal operating capacity and the physical distribution of the compensation devices, such as the voltage ratings of generators, the reactive power injection of shunt capacitors/reactors and the tap ratios of the tap-setting transformers, so as to ensure a satisfactory voltage profile while minimizing the transmission losses. Due to the continuous growth in the demand for electricity with unmatched generation and transmission capacity expansion, voltage instability is emerging as a new challenge to power system planning and operation. Therefore, a voltage stability index should also be considered as an objective of the ORPD problem. Many classical optimization techniques, such as gradient search (GS) [1], linear programming (LP) [2], the Lagrangian approach (LA)

[3], quadratic programming (QP) [4], interior point methods (IP) [5], etc., have been successfully applied to solve ORPD problems. However, the specialized literature shows that such classical optimization methods exhibit several drawbacks, such as insecure convergence properties and algorithmic complexity. Due to the non-differentiable, non-linear and non-convex nature of the optimal reactive power dispatch problem, the majority of these techniques converge to a local optimum [8].

In recent years, many meta-heuristics, such as evolutionary methods, have been implemented to solve the ORPD problem. The advantages of evolutionary algorithms in terms of modeling and search capabilities have encouraged their application to the ORPD problem. The Non-dominated Sorting Genetic Algorithm (NSGA-II) [6], particle swarm optimization (PSO) [7], differential evolution (DE) [8], the harmony search algorithm (HSA) [9] and the general quantum genetic algorithm (GQ-GA) [10] are some of the meta-heuristic methods that have been used to solve the ORPD problem. The majority of the existing




multi-objective evolutionary algorithms (MOEAs) aim at producing a set of Pareto-optimal solutions as diverse as possible in order to approximate the whole Pareto front. These methods therefore need additional techniques for ranking solutions (e.g., crowding distance, fitness sharing, niching). However, it has been empirically found that these methods cannot always provide good results, especially when the multi-objective optimization problem is very complicated [11]. Recently, a new framework called multi-objective evolutionary algorithm based on decomposition (MOEA/D) [11] was proposed. This framework decomposes a multi-objective optimization problem into several single-objective optimization sub-problems. In this way, a set of approximate solutions to the Pareto front is obtained by minimizing each sub-problem instead of using Pareto ranking. This has given rise to a new generation of MOEAs.

The teaching-learning based optimization (TLBO) algorithm was recently proposed by Rao et al. [12] as a new meta-heuristic inspired by the philosophy of the teaching-learning process in a class between the teacher and the learners (students). This algorithm has emerged as a simple and efficient technique for solving single-objective complex benchmark problems and real-world problems. TLBO does not require any specific parameter to be tuned, which facilitates its implementation and use. These characteristics make TLBO suitable for dealing with multi-objective optimization problems. Therefore, in this research a teaching-learning algorithm is proposed to deal with multi-objective optimization problems, employing a decomposition framework similar to the one adopted by MOEA/D. This framework decomposes a multi-objective problem into several single-objective optimization sub-problems; a set of Pareto-optimal solutions is thus obtained by minimizing each sub-problem through neighborhood relationships, instead of using a Pareto ranking method. The effectiveness of the proposed approach is assessed and compared to the multi-objective evolutionary algorithm based on decomposition MOEA/D [11], which is representative of the state-of-the-art on the subject. Results are also compared to the NSGA-II algorithm [6], which remains the most popular method based on Pareto ranking. The performance of the methods is investigated on an IEEE 14-bus test system to solve the optimal reactive power dispatch problem by minimizing the transmission losses and a voltage stability index.

The rest of the paper is organized as follows: Section 2 presents the fundamentals and the general framework of the proposed approach. In Section 3, the problem formulation is summarized. Section 4 presents a brief description of the performance measures. Simulation results and a comparative study are presented in Section 5. Finally, conclusions are provided in Section 6.

2. Fundamentals: methods

2.1. Teaching-learning based optimization

The original teaching-learning based optimization (TLBO) algorithm was proposed by Rao et al. [12] to obtain

global solutions for continuous non-linear functions. In optimization algorithms, the population consists of different design variables. In TLBO, the design variables are analogous to the different subjects offered to learners, the learners' grades are analogous to the 'fitness' in any other evolutionary algorithm, and the teacher is considered to be the best solution obtained so far [12]. Hence, the performance of TLBO is based on two main phases: the teacher phase, which involves learning from the teacher, and the learner phase, which involves learning through the interaction among learners. The pseudo-code of the TLBO algorithm can be summarized in the following way:

1: Initialization
2: Evaluation
3: iteration = 1
4: Repeat
5:   Teacher Phase
6:   Keep the best solutions
7:   Learner Phase
8:   Keep the best solutions
9:   iteration = iteration + 1
10: Until Maximum number of iterations

Many optimization methods require parameters that affect the performance of the algorithm. For example, differential evolution (DE) primarily depends on the mutation strategy and its intrinsic control parameters, such as the scale factor (Fs) and the crossover rate (Pcr); particle swarm optimizers (PSO) require learning factors, a schedule for the variation of the inertia weight, and the maximum value of the velocity. Unlike other optimization techniques, TLBO does not require any algorithm-specific parameters to be tuned, which makes its implementation simpler.
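As an illustration of the two phases, the sketch below implements one TLBO iteration for a single-objective minimization problem; the objective function, population size and variable names are our own illustrative choices, not part of the paper.

```python
import numpy as np

def sphere(x):
    """Illustrative single-objective function to minimize."""
    return float(np.sum(x**2))

def tlbo_iteration(pop, f, rng):
    """One TLBO iteration: a teacher phase followed by a learner phase.
    pop is an (n_learners, n_vars) array; f is the objective."""
    n, d = pop.shape
    fitness = np.array([f(x) for x in pop])

    # Teacher phase: move every learner toward the best solution (the teacher).
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    for i in range(n):
        tf = rng.integers(1, 3)                      # teaching factor, 1 or 2
        new = pop[i] + rng.random(d) * (teacher - tf * mean)
        if f(new) < fitness[i]:                      # greedy acceptance
            pop[i], fitness[i] = new, f(new)

    # Learner phase: learn from a randomly chosen peer.
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        if fitness[j] < fitness[i]:
            new = pop[i] + rng.random(d) * (pop[j] - pop[i])
        else:
            new = pop[i] + rng.random(d) * (pop[i] - pop[j])
        if f(new) < fitness[i]:
            pop[i], fitness[i] = new, f(new)
    return pop

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(20, 4))
for _ in range(50):
    pop = tlbo_iteration(pop, sphere, rng)
print(min(sphere(x) for x in pop))
```

Note that no algorithm-specific parameter appears beyond the population size and the iteration budget, which is the property the paragraph above emphasizes.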

2.2. Decomposition of a multi-objective optimization problem

A multi-objective optimization problem (MOP) may be formulated as follows:

$\min \; F(\vec{x}) = \{f_1(\vec{x}), \ldots, f_k(\vec{x})\}$, subject to $\vec{x} \in \Omega$    (1)

where $\vec{x}$ is the vector of decision variables and Ω is the feasible region within the decision space. $F: \Omega \rightarrow \mathbb{R}^k$ is defined as the mapping of the k objective functions. In multi-objective optimization, the goal is to find the best possible trade-off among the objectives since, frequently, one objective can be improved only at the expense of worsening another. To describe the concept of optimality for problem (1), the following definitions are provided.

Definition 1. Let $\vec{x}, \vec{y} \in \Omega$, such that $\vec{x} \neq \vec{y}$; we say that $\vec{x}$ dominates $\vec{y}$ (denoted by $\vec{x} \prec \vec{y}$) if and only if $f_i(\vec{x}) \leq f_i(\vec{y})$ for all i = 1, ..., k.
*

Definition 2. Let x  , we say that x is a Pareto  optimal solution, if there is no other solution y  such

C jth

*

that y  x .



Definition 3. The Pareto Optimal Set ( PS ) is defined by

   PS  {x *   x * is Pareto Optimal Solution} while its     image PF  {F ( x * ) x *  PS } is called the Pareto Optimal Front. There are several approaches for transforming a MOP into a number of scalar optimization problems, which have been described in detail in [13]. Usually, these methods use a weighted vector to define a scalar function and, under certain assumptions, a Pareto optimal solution is achieved by minimizing such function. In this paper, the Tchebycheff approach is used to decompose a MOP. In this approach, the scalar optimization problem can be stated as [13]:

Minimize g  x w, z*   max wi fi ( x)  zi* i{1,..,k } Subject to x 

represents

k i 1

reference point, i.e., zi  min  f i ( x ) x   , for i  1,..., k . For each Pareto

optimal solution x * there exists a weighting vector w such that x * is the optimal solution of (2), and each optimal solution is a Pareto optimal solution for (1). Therefore, it is possible to obtain different Pareto optimal solutions using different weighting vectors w [13]. based

on

The proposed Multi-Objective Teaching Learning Algorithm based on Decomposition (MOTLA/D) utilizes the Tchebycheff approach, (2), to decompose the MOP into N scalar optimization sub-problems. Hence, using (2), the objective function of the j-th sub-problem becomes: with g ( x w j , z * )  max {wij f i ( x )  zi * } ,

(4)

(5)

The teacher will try to improve the mean of the class (Mclass) taking it towards its own level (Mnew). The difference between the class (Mclass) and the teacher (Mnew) modifies the j-th learner (xj) in order to generate a new solution ( xnew ). Hence, the new solutions generated in this phase are as follows, for j  1 to number of sub-problems xnew j  x j ,i  ri ( M new,i  TF M class ,i )

(6)

end

where index j corresponds to the current index of j-th sub-problem, ri is a random number within the range [0, 1]. TF is the teaching factor which value can be either 1 or 2, which is decided randomly with equal probability as TF = round [1 + rand (0, 1)]. The new solution (xnew) is accepted if it gives a better function value. Learner Phase: The learner phase generates a new solution (xnew) by randomly selecting two learners xi, and xk such that i ≠ k ≠ j. The new solutions generated in this phase are as follows,

i{1,..,k }

w  {w ,..., wkj } and j = 1… N. The proposed approach j

(3)

The teacher (Mnew) for the j-th sub-problem represents the best learner of the class Cjth. Thus,

T

wi  1 ; z *   z1* ,..., zk * 

teaching-learning

M class  [m1 , m2 ,..., mD ]

x j 

the

Multi-objective decomposition

 x1, D   x2, D        xTsize , D 

M new  {x j min g ( x j w j , z* )}

*

2.3.

x1,2 x2,2  xTsize ,2

where the subscript D is the number of design variables, and Tsize is the size of the neighborhood ΩT. The main steps of the proposed MOTLA/D can be summarized as follows: Teacher phase: Within the teacher phase, the mean of the class for each design variable is evaluated,

(2)

where w   w1 ,..., wk  is a weighting vector and

wi  0 for all i  1,..., k ,

 x1,1  x 2,1      xTsize ,1

j 1

looks for the sequential minimization of these subproblems. Similar to MOEA/D [11], neighborhood relationships among these sub-problems are defined by computing Euclidean distances between weighting vectors. In MOTLA/D, for the j-th sub-problem, the size of the neighborhood becomes the number of learners in the class. This class can be expressed as,

for j  1 to number of sub-problems Randomly select two learners xi and xk , such that xi  xk  x j f ( xk )  f ( xi )

if

xnew j  x j  ri ( xk  xi ) else xnew j  x j  ri ( xi  xk ) end end

198

(7)



Additionally, a polynomial mutation operator is applied to maintain the diversity of the solutions. If one or more variables within the new solution ($x_{new}$) lie outside Ω, the i-th value of $x_{new}$ is reset as follows:

$x_{new,i} = \begin{cases} x_{lb,i}, & \text{if } x_{new,i} < x_{lb,i} \\ x_{ub,i}, & \text{if } x_{new,i} > x_{ub,i} \end{cases}$    (8)

The new solution ($x_{new}$) is accepted if it improves the function value, and it then replaces the old solution ($x_j$). The procedure for the implementation of MOTLA/D may be summarized as follows:

Step 1) Initialization.
Step 1.1. Generate a well-distributed set of N weighting vectors $w^j = (w_1^j, \ldots, w_m^j)$, $j = 1, \ldots, N$, and find the neighborhood of each sub-problem, $B(j)$.
Step 1.2. Generate the initial population and evaluate its fitness.
Step 1.3. Initialize the reference point z*.
Step 2) For j = 1 to N do:
Step 2.1. Determine the class: C = B(j) if rand < δ, and C = {1, ..., N} otherwise, where rand is a random number within [0, 1] and δ is the probability of selecting the neighborhood as the class.
Step 2.2. Teacher phase.
Step 2.3. Update the reference point z*.
Step 2.4. Update (Sr) solutions, where (Sr) is the maximal number of solutions replaced by each new solution obtained.
Step 3) For j = 1 to N do:
Step 3.1. Determine the class: C = B(j) if rand < δ, and C = {1, ..., N} otherwise.
Step 3.2. Learner phase.
Step 3.3. Update the reference point z*.
Step 3.4. Update (Sr) solutions.
Step 4) Stop: if the stop condition is satisfied, then stop MOTLA/D; otherwise go to Step 2.

3. Problem formulation

In this paper, a reactive power dispatch problem is tackled, which may be stated as an optimization problem where two objective functions are minimized while satisfying a number of equality and inequality constraints. The following objective functions are minimized: a) the reactive power losses; and b) the voltage stability index Lindex [14].

3.1. Objective functions

3.1.1. Reactive power losses

One important issue in power transmission is the high reactive power losses on highly loaded lines, with the consequent reduction of the transmission capacity. Therefore, the minimization of the reactive power losses is selected as one objective function. The losses are evaluated by the following expression:

$Q_{VAR,i} = X_i |I_i|^2 = X_i \dfrac{(V_{ei} - V_{ri})^2}{X_i^2}$    (9)

where $V_{ei}$ and $V_{ri}$ are the sending and receiving voltages, respectively; $X_i$ is the line reactance; and $I_i$ is the current through the transmission line. Therefore, the objective function for the reactive power losses is expressed as:

$f_{Loss} = \sum_{i=1}^{nl} Q_{VAR,i}$    (10)

where nl is the number of lines. Reducing the reactive power losses enables more active power to be transferred over a single line.

3.1.2. Voltage stability index

There is a variety of indexes that help to assess steady-state voltage stability. In this case, the voltage stability index Lindex is used [14]. This index is able to evaluate the steady-state voltage stability margin of each bus. The Lindex value lies between 0 (no load) and 1 (voltage collapse), and it implicitly includes the load effect. The bus with the highest Lindex value is the most vulnerable one; therefore, this method helps to identify weak areas that require critical reactive power support. For the j-th load bus, the voltage stability index is defined by [14]:

$L_j = \left| 1 - \dfrac{\sum_{i \in \Omega_G} F_{ji} V_i}{V_j} \right|$    (11)

where $\Omega_G$ represents the set of generator buses; $F_{ji}$ is a term evaluated through the partial inversion of the admittance matrix $Y_{bus}$; and V represents the complex voltages. For stable conditions, $0 \leq L_j \leq 1$ must not be violated for any j. Hence, a global indicator Lindex describing the stability of the whole system is defined by [14]:

$L_{index} = \max_{j \in \Omega_L} (L_j)$    (12)

where $\Omega_L$ is the set of load buses. Pragmatically, Lindex must be lower than a given threshold value. The predetermined threshold value is specified depending on the system configuration and on the utility policy regarding service quality and the allowable margin. The Lindex in (12) is associated with the worst bus in the sense of voltage stability. Therefore, Lindex is considered as the second objective function for the reactive power dispatch problem

where ( L ) is the set of load buses. Pragmatically, Lindex must be lower than a given threshold value. The predetermined threshold value is specified depending on the system configuration and on the utility policy regarding service quality and allowable margin. The Lindex in (12) is associated with the worst bus in the sense of voltage stability. Therefore Lindex is considered as the second objective function for the reactive power dispatch problem




addressed in this paper. The minimization of Lindex implies moving such a bus toward a less stressed condition.

3.2. Equality constraints

These constraints are the power flow equations, as follows:

$P_{Gi} - P_{Di} = V_i \sum_{j=1}^{nb} V_j (G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij})$    (13)

$Q_{Gi} - Q_{Di} = V_i \sum_{j=1}^{nb} V_j (G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij})$    (14)

where i = 1, ..., nb, and nb is the number of buses. $P_{Gi}$ and $Q_{Gi}$ are the active and reactive generated powers at the i-th bus, and $P_{Di}$ and $Q_{Di}$ are the active and reactive power demands at the i-th bus, respectively. $V_i$ is the voltage magnitude at the i-th bus, $\theta_{ij}$ is the voltage angle difference between buses i and j, and $G_{ij}$ and $B_{ij}$ are the mutual conductance and susceptance between buses i and j, respectively.
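These balance equations are what a load-flow solver drives to zero during the fitness evaluation of each candidate solution; a compact sketch of the mismatch computation (our illustration) is:

```python
import numpy as np

def power_mismatch(v_mag, theta, g, b, p_net, q_net):
    """Residuals of eqs. (13)-(14): scheduled minus injected power.
    p_net and q_net hold P_G - P_D and Q_G - Q_D for every bus; g and b
    are the conductance and susceptance matrices of the network."""
    nb = len(v_mag)
    dp = np.empty(nb)
    dq = np.empty(nb)
    for i in range(nb):
        th = theta[i] - theta                         # theta_ij for all j
        p_i = v_mag[i] * np.sum(v_mag * (g[i] * np.cos(th) + b[i] * np.sin(th)))
        q_i = v_mag[i] * np.sum(v_mag * (g[i] * np.sin(th) - b[i] * np.cos(th)))
        dp[i] = p_net[i] - p_i
        dq[i] = q_net[i] - q_i
    return dp, dq
```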

Figure 1. IEEE 14 bus test system Table 1. Parameters used by the algorithms

Parameter Spop Tsize Sr δ Cr ηm F

3.3. Inequality constraints Generator constraints: Generator reactive power outputs and voltage magnitudes are restricted by their upper and lower bounds as follows:

QG imin  QG i  QG imax for i  1, 2,..., ng

(15)

VG imin  VG i  VG imax

(16)

for i  1, 2,..., ng

for i  1,2,..., nt

(17)

where nt is the number of transformers. Shunt VAR compensator constraints: The setting of the shunt VAR compensation devices is restricted as follows:

QCimin  QCi  QCimax for i  1,2,..., nc

(18)

where nc is the number of VAR compensation devices. 3.4. Decision variables The decision variables include the generator voltages VG, and the tap ratio of the transformers (T),

x  [Vg1 ,Vg2 ...,VgNg , T1 , T2 ..., TNt ]

MOEA/D 100 30 3 0.9 1 20 0.5

NSGA-II 100 0.9 20 -

3.5. Case study

where ng is the number of generating units. Transformer’s tap constraints: Transformer taps are bounded by their related minimum and maximum limits as follows:

Ti min  Ti  Ti max

MOTLA/D 100 30 3 0.9 20 -

(19)

It is worth noting that the decision variables are selfconstrained by the optimization algorithm.

This paper compares the effectiveness and performance of the proposed algorithm with respect to MOEA/D and NSGAII. Therefore, the algorithms have been applied to a test system composed of 14-buses, 5-generating units, and 20-lines in which three lines have tap changing transformers. The system model and data are summarized in [15]. In this study, 20 independent runs were performed by each algorithm. Fig. 1 shows the bus code diagram of the test power system. The control parameter settings utilized by the algorithms are summarized in Table 1 where: Spop is the population size, Tsize is the neighborhood size, Sr is the maximum number of solutions replaced, and δ is the probability of selecting solutions from the neighborhood. ηm is the distribution index used in the polynomial mutation. Cr is the Crossover rate, which determines the quantity of elements to be exchanged by the crossover operator. The parameter Fs is the scale factor associated with MOEA/D, which represents the amount of perturbation added to the main parent. Finally, a mutation rate Pm=1/n is taken into account, where n is the number of decision variables. This parameter indicates the probability that each decision variable has of being changed. It is worth mentioning that the stop condition of each algorithm is the number of function evaluations (25000 for each algorithm). 4. Performance measures In order to assess the algorithms’ performance, two indicators are utilized.

4. Performance measures

In order to assess the algorithms' performance, two indicators are used.

4.1. Coverage of two sets

This performance measure compares two sets of non-dominated solutions (A, B) and outputs the fraction of individuals in one set dominated by individuals of the other set. It is defined as [16],

C(A,B) = |{ b ∈ B : ∃ a ∈ A, a ⪯ b }| / |B|   (20)

The value C(A,B) = 1 means that all points in B are dominated by, or equal to, points in A. The opposite, C(A,B) = 0, represents the situation in which none of the solutions in B is covered by the set A. Note that both C(A,B) and C(B,A) have to be considered, since C(A,B) is not necessarily equal to 1 − C(B,A). When C(A,B) = 1 and C(B,A) = 0, we say that the solutions in A completely dominate the solutions in B (i.e., this is the best possible performance for A).
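A minimal sketch of eq. (20) in Python (ours, for illustration; objective vectors are assumed to be minimized):

def weakly_dominates(a, b):
    """True if objective vector a is at least as good as b everywhere
    (minimization), i.e. a covers b in the sense of eq. (20)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def coverage(A, B):
    """C(A,B): fraction of solutions in B dominated by, or equal to,
    some solution in A."""
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)

Since the measure is not symmetric, both coverage(A, B) and coverage(B, A) must be reported, as is done in Table 5.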

4.2. Spacing metric

This performance measure quantifies the spread of the solutions (i.e., how uniformly they are distributed) along a Pareto front approximation. It is defined by [17],

S = sqrt( (1/(n−1)) Σ_{i=1}^{n} ( d̄ − d_i )² )   (21)

where n is the number of non-dominated solutions, d_i is the minimum Euclidean distance from the i-th solution to the other solutions in objective space, and d̄ is the mean of all d_i. A value of zero implies that all solutions are uniformly spread (i.e., the best possible performance).
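Eq. (21) also reduces to a few lines of Python (again an illustrative sketch of ours, taking a list of objective vectors):

import math

def spacing(front):
    """Spacing metric S of eq. (21) for a list of objective vectors
    (at least two solutions are required)."""
    n = len(front)
    # d_i: minimum Euclidean distance from solution i to any other solution
    d = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / n
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))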

5. Results

The values of the reactive power losses (floss) and of the voltage stability index (Lindex) in the base case, without optimization, are 0.4393 p.u. and 0.0767, respectively. Because of the stochastic nature of the tested algorithms, their performance cannot be judged from a single run. Therefore, the averages of the optimal values obtained by the algorithms are shown in Table 2.

Table 2. Average of the optimal values obtained by the algorithms

  Objective function   MOTLA/D average   MOEA/D average   NSGA-II average
  floss (p.u)          0.3434            0.3437           0.3539
  Lindex               0.0716            0.0717           0.0717

According to Table 2, the proposed MOTLA/D attains on average a 21.83% reduction in reactive power losses and a 6.65% reduction in the voltage stability index with respect to the base case. The MOEA/D algorithm attains on average a 21.76% reduction in reactive power losses and a 6.52% reduction in the voltage stability index with respect to the base case. Finally, NSGA-II attains on average a 19.44% reduction in reactive power losses and a 6.52% reduction in the voltage stability index with respect to the base case.

Figure 2. Pareto front obtained by the algorithms (floss vs. Lindex; the plot also marks the solutions reached only by MOTLA/D)

Table 3. Average optimal values for power losses

  Variables   MOTLA/D   MOEA/D   NSGA-II
  Vg1 (p.u)   1.0250    1.0250   1.0250
  Vg2 (p.u)   1.0215    1.0219   1.0246
  Vg3 (p.u)   0.9962    0.9975   1.0172
  Vg6 (p.u)   1.0250    1.0249   1.0250
  Vg8 (p.u)   1.0167    1.0182   1.0248
  T4-7        1.0061    1.0058   0.9965
  T4-9        1.0147    1.0148   0.9874
  T5-6        1.0083    1.0084   1.0178

Table 4. Average optimal values for the voltage stability index

  Variables   MOTLA/D   MOEA/D   NSGA-II
  Vg1 (p.u)   1.0250    1.0250   1.0250
  Vg2 (p.u)   1.0250    1.0250   1.0250
  Vg3 (p.u)   1.0250    1.0250   1.0240
  Vg6 (p.u)   1.0250    1.0249   1.0250
  Vg8 (p.u)   1.0250    1.0250   1.0250
  T4-7        0.9770    0.9773   0.9766
  T4-9        0.9750    0.9740   0.9770
  T5-6        1.0248    1.0247   1.0243

Table 3 shows the average optimal values of the generator voltages and transformer tap positions for the optimal solution of the objective function floss achieved by the algorithms. Likewise, Table 4 shows the average optimal values of the decision variables (voltages and tap positions) for the optimal solution of the objective function Lindex. Moving the corresponding elements toward the optimal values specified in Tables 3 and 4 brings the power system to an improved operating condition: more secure, economical, and efficient.

The Pareto fronts obtained by the algorithms are shown in Fig. 2. The figure shows the final set of non-dominated solutions found by each algorithm and corresponds to the run closest to the average value of the performance measure (20). It can clearly be seen in Fig. 2 that MOEA/D and NSGA-II cover only a portion of the solutions achieved by MOTLA/D; the proposed algorithm is therefore able to achieve more widely distributed solutions. It is noteworthy that a distribution of the optimal solutions that is as uniform as possible along the Pareto front avoids large gaps in the front, so that all the different types of trade-off solutions are generated. This is relevant because, if large gaps occur, the trade-off solution of interest may not be produced (i.e., the optimal solution of concern may be located in the missing portion of the Pareto front).

The average results of the coverage of two sets metric obtained by each algorithm are summarized in Table 5. The best results are displayed in boldface.

Table 5. Coverage of two sets C(A,B) performance measure

  A         B         C(A,B) average   C(B,A) average
  MOTLA/D   MOEA/D    0.14             0.08
  MOTLA/D   NSGA-II   0.56             0.01
  MOEA/D    NSGA-II   0.51             0.03

As noted in Table 5, the proposed approach (MOTLA/D) outperformed MOEA/D and NSGA-II with regard to the coverage of two sets indicator. Moreover, it can be seen that the algorithms based on decomposition have better convergence than the traditional evolutionary technique based on Pareto ranking, NSGA-II. Regarding the spacing metric, the proposed MOTLA/D obtained, on average, a value of 0.0150, while MOEA/D and NSGA-II achieved average values of 0.0156 and 0.0073, respectively; NSGA-II therefore attains relatively better results for the spacing metric. However, since convergence takes precedence over spread, we conclude that our proposed MOTLA/D outperformed MOEA/D and NSGA-II in the analyzed case study. The convergence times of MOTLA/D, MOEA/D and NSGA-II are 109.10 s, 246.03 s, and 109.79 s, respectively. Therefore, the proposed MOTLA/D clearly outperformed MOEA/D, and its convergence time is on average slightly shorter than that of NSGA-II.

6. Conclusions

This paper presented a multi-objective teaching-learning algorithm based on decomposition for solving the complex problem of optimal reactive power dispatch in power systems. The mechanism of MOTLA/D is as effective as that of MOEA/D, and it has the advantage of being easy to understand and simple to implement, so that it can be applied to a wide variety of optimization problems. The efficiency of the proposed algorithm is illustrated by solving the IEEE 14 bus test system. Likewise, the effectiveness and performance of MOTLA/D were compared with those of MOEA/D and NSGA-II. The results of the coverage of two sets metric indicate that the proposed algorithm was able to obtain better solutions than MOEA/D and NSGA-II: according to this metric, the optimal solutions obtained by MOTLA/D dominate about 16% of the solutions produced by MOEA/D and 58% of the solutions generated by NSGA-II. In addition, the convergence times show that MOTLA/D is about twice as fast as MOEA/D and slightly faster than NSGA-II. MOTLA/D is therefore able to achieve better handling of reactive power by optimizing the reactive power losses and the voltage stability index. Thus, it may be concluded that the proposed algorithm is a reliable choice for the power test system considered in this study, and a promising choice for other test systems, because of its simplicity, effectiveness and lower computational effort.

References

[1] Yu, D. C., Fagan, J. E., Foote, B. and Aly, A. A., "An optimal load flow study by the generalized reduced gradient approach." Elect Power Syst Res, vol. 10 (1), pp. 47-53, January 1986.

[2] Aoki, K., Fan, M. and Nishikori, A., "Optimal VAR planning by approximation method for recursive mixed integer linear programming." IEEE Trans Power Syst, vol. 3 (4), pp. 1741-1747, Nov. 1988.

[3] Sousa, V. A., Baptista, E. C. and Costa, G. R. M., "Optimal reactive power flow via the modified barrier Lagrangian function approach." Elect Power Syst Res, vol. 84 (1), pp. 159-164, March 2012.

[4] Lo, K. L. and Zhu, S. P., "A decoupled quadratic programming approach for optimal power dispatch." Elect Power Syst Res, vol. 22 (1), pp. 47-60, September 1991.

[5] Granada, M., Rider, M. J., Mantovani, J. R. S. and Shahidehpour, M., "A decentralized approach for optimal reactive power dispatch using a Lagrangian decomposition method." Elect Power Syst Res, vol. 89, pp. 148-156, August 2012.

[6] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T., "A fast and elitist multiobjective genetic algorithm: NSGA-II." IEEE Transactions on Evolutionary Computation, vol. 6 (2), pp. 182-197, April 2002.

[7] Yoshida, H., Fukuyama, Y., Kawata, K., Takayama, S. and Nakanishi, Y., "A particle swarm optimization for reactive power and voltage control considering voltage security assessment." IEEE Trans. Power Syst., vol. 15, pp. 1232-1239, 2001.

[8] Abou El Ela, A. A., Abido, M. A. and Spea, S. R., "Differential evolution algorithm for optimal reactive power dispatch." Electr. Power Syst. Res., vol. 81, pp. 458-464, 2011.

[9] Khazali, A. and Kalantar, M., "Optimal reactive power dispatch based on harmony search algorithm." Electr. Power & Energy Syst., vol. 33, pp. 684-692, 2011.

[10] Vlachogiannis, J. and Ostergaard, J., "Reactive power and voltage control based on general quantum genetic algorithms." Expert Syst. Appl., vol. 36, pp. 6118-6126, 2009.



[11] Li, H. and Zhang, Q., "Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II." IEEE Transactions on Evolutionary Computation, vol. 13 (2), pp. 284-302, April 2009.

[12] Rao, R. V., Savsani, V. J. and Vakharia, D. P., "Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems." Computer-Aided Design, vol. 43 (3), pp. 303-315, March 2011.

[13] Miettinen, K., "Nonlinear Multiobjective Optimization." Kluwer Academic Publishers, Boston, Massachusetts, 1999.

[14] Kessel, P. and Glavitsch, H., "Estimating the voltage stability of a power system." IEEE Transactions on Power Delivery, vol. 1 (3), pp. 346-354, July 1986.

[15] System data (2006, December). Available from: <www.ee.washington.edu/research/pstca>.

[16] Zitzler, E., Deb, K. and Thiele, L., "Comparison of multiobjective evolutionary algorithms: Empirical results." Evolutionary Computation, vol. 8 (2), pp. 173-195, Summer 2000.

[17] Schott, J. R., "Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization." Master's Thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, May 1995.

Miguel A. Medina was born in Merida, Yuc., Mexico, on February 5, 1984. He received the BSc degree in engineering physics from the Universidad de Yucatán, Mexico, in 2008 and the MSc degree in electrical engineering from CINVESTAV, Guadalajara, Mexico, in 2010, where he is currently pursuing the PhD degree. His primary area of interest is optimization.

Juan M. Ramirez received the BSc degree in electrical engineering from the Universidad de Guanajuato, Guanajuato, México, in 1984, the MSc degree in electrical engineering from UNAM, D.F., México, in 1987, and the PhD degree in electrical engineering from UANL, Monterrey, México, in 1992. He joined the Department of Electrical Engineering of CINVESTAV, Guadalajara, in 1999, where he is currently a full-time professor. His areas of interest are power electronics and smart grid applications.

Carlos A. Coello Coello received the PhD degree in computer science from Tulane University, USA, in 1996. He is a full professor and Chair of the Computer Science Department of CINVESTAV-IPN, Mexico City, Mexico. He is an associate editor of the IEEE Transactions on Evolutionary Computation. His main research interests are evolutionary multiobjective optimization and constraint-handling techniques for evolutionary algorithms.

Swagatam Das received the PhD degree in engineering from Jadavpur University, India, in 2009. He is currently an assistant professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata. His research interests include evolutionary computing, pattern recognition, multi-agent systems, and wireless communication. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier, and serves as an associate editor of the IEEE Transactions on Systems, Man, and Cybernetics: Systems and of Information Sciences (Elsevier). He is an editorial board member of Progress in Artificial Intelligence (Springer), Mathematical Problems in Engineering, the International Journal of Artificial Intelligence and Soft Computing, and the International Journal of Adaptive and Autonomous Communication Systems. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE).



http://dyna.medellin.unal.edu.co/

Bioethanol production by fermentation of hemicellulosic hydrolysates of African palm residues using an adapted strain of Scheffersomyces stipitis

Producción de bioetanol por fermentación de hidrolizados hemicelulósicos de residuos de palma africana usando una cepa de Scheffersomyces stipitis adaptada

Frank Carlos Herrera-Ruales a & Mario Arias-Zabala b

a Facultad de Ciencias, Universidad Nacional de Colombia, Colombia. fcherrerar@unal.edu.co
b Facultad de Ciencias, Universidad Nacional de Colombia, Colombia. marioari@unal.edu.co

Received: June 25th, 2013. Received in revised form: January 22nd, 2014. Accepted: February 20th, 2014.

Abstract
Ethanol production using a strain of Scheffersomyces stipitis (Pichia stipitis) adapted to the inhibitors of African palm hydrolysates was evaluated. Strain adaptation was achieved after 20 subcultures in media progressively more concentrated in inhibitors. The variables orbital agitation, medium volume and inoculum volume were then studied for ethanol production, finding that orbital agitation and culture medium volume had a significant effect on the maximum ethanol concentration, while culture medium volume and inoculum volume had a significant effect on ethanol productivity. The maximum ethanol concentration and yield were 8.48 g l-1 and 0.39 g g-1, achieved with 125 rpm, 6.75x107 cells ml-1 and 140 ml of medium. The maximum ethanol productivity was 0.062 g l-1 h-1, achieved with 125 rpm, an inoculum of 99.63x107 cells ml-1 and 90 ml of culture medium. Yeast adaptation proved to be a good strategy for producing ethanol from hemicellulosic residues of the African palm tree, avoiding detoxification processes.

Keywords: ethanol; African palm tree; hemicellulosic hydrolysate; Scheffersomyces stipitis.

Resumen
Se evaluó la producción de etanol a escala matraz usando una cepa de Scheffersomyces stipitis (Pichia stipitis) adaptada a inhibidores presentes en hidrolizados hemicelulósicos de palma africana. La adaptación se logró luego de 20 subcultivos en medios progresivamente concentrados en inhibidores. La evaluación de la producción de etanol mostró que la agitación orbital y el volumen de medio influyen significativamente sobre la concentración máxima de etanol, mientras que el volumen del medio y la concentración del inóculo influyen sobre la productividad máxima de etanol. La máxima concentración y rendimiento de etanol fueron 8.48 gl-1 y 0,39 gg-1, respectivamente, alcanzados con 125 rpm, inóculo de 6.75x107 células ml-1 y 140 ml de medio. La productividad máxima fue 0.062 gl-1h-1 alcanzada con 125 rpm, inóculo de 99,63x107 células ml-1 y 90 ml de medio, mostrando que es posible producir etanol a partir de hemicelulosa de palma africana usando la adaptación de cepas.

Palabras clave: etanol; palma africana; hidrolizado hemicelulósico; Scheffersomyces stipitis.

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 204-210. June, 2014. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

1. Introduction

There is currently a marked increase in world energy demand, 87% of which is met by fossil fuels, which are responsible for most of the greenhouse gas emissions [1]. For some time, therefore, studies have focused on finding new energy alternatives from environmentally friendly renewable sources; among these are fuels derived from biomass, such as bioethanol and biodiesel [2,3]. The production of biofuels has been questioned when they are obtained from food feedstocks. Because of this, second-generation biofuels have gained importance over traditional forms of biofuel production, since they optimize the use of raw materials and minimize the impact on food sectors. In this sense, the use of agricultural residues from the African palm oil extraction process is attractive. This process generates a large quantity of empty fruit bunches, known as rachis, whose hemicellulose content reaches 24% of the dry weight [4]. This hemicellulose is composed mainly of xylose which, because of its abundance, has great potential for ethanol production by fermentation. Among the microorganisms capable of transforming xylose into ethanol with good fermentative potential is Scheffersomyces stipitis (Pichia stipitis), which belongs to a group of yeasts isolated from wood that are able to use many of the sugars present there. This microorganism has a promising industrial application, since it ferments xylose with a high ethanol yield and can ferment a wide range of sugars, including glucose. The way in which S. stipitis produces ethanol or cell mass from xylose depends on a variety of factors, notably the presence of cell growth inhibitors and the amount of O2 available in the growth media. The negative effect of growth inhibitors such as furfural, hydroxymethylfurfural (HMF), acetic acid and lignin derivatives can be reduced by different strategies, among them the adaptation of microbial strains to such inhibitors through prolonged exposure to them [5,6,7]. With respect to the effect of oxygen, it has been determined that a high oxygen supply only produces cell growth, whereas ethanol is produced under low oxygen supply, known as microaeration [8,9]. Several studies have used S. stipitis for ethanol production from lignocellulosic materials, showing a wide variability of results depending on the nature of the material [10,11]. This study explores ethanol production from hemicellulosic hydrolysates of African palm rachis residues, determining the effectiveness of a process of adaptation of an S. stipitis yeast strain to the inhibitors, as well as the effect of the cell concentration and of the variables involved in oxygen transfer to the culture medium, such as orbital agitation and the volume of medium.

2. Methods

2.1. Microorganism and inocula

A strain of S. stipitis donated by the Federal University of Rio de Janeiro was used in this study. It was kept refrigerated at 4ºC on a solid medium composed of xylose 15 g l-1, yeast extract 4 g l-1, peptone 3 g l-1, (NH4)2SO4 2 g l-1, MgSO4.7H2O 0.5 g l-1 and agar 20 g l-1. After being subjected to the process of adaptation to the hemicellulosic hydrolysate of African palm rachis, the strain was kept refrigerated at 4°C in solid and liquid media composed of palm rachis hydrolysates enriched with yeast extract 4 g l-1, peptone 3 g l-1, (NH4)2SO4 2 g l-1 and MgSO4.7H2O 0.5 g l-1 (with agar 20 g l-1 in the solid media). Pre-inocula of the microorganism were prepared by transferring a loopful of the yeast to 100 ml flasks containing 10 ml of medium composed of enriched neutralized rachis hydrolysate. Incubation was carried out for 48 hours at 30ºC with an agitation speed of 150 rpm. From these media, inocula were prepared using 10% v/v of pre-inoculum, transferring it to 250 ml flasks with 30 ml of enriched hydrolysate and culturing at 30ºC and 150 rpm until concentrations of approximately 9.0 x 108 cells ml-1 were reached.

2.2. Lignocellulosic material

Empty fruit bunches (rachis) of African palm were provided by the CENIPALMA Research Center located in Barrancabermeja, Santander. Once in the laboratory, they were sterilized at 121ºC for 20 minutes. The material was then wet-cut in a mill equipped with steel blades, which opens the rachis fibers and facilitates drying; drying was carried out for 24 hours in a tray dryer with an air flow at 60ºC. Once dry, the material was ground in a blade mill equipped with a sieve with 3 mm holes.

2.3. Acid hydrolysis

The milled lignocellulosic material was subjected to acid hydrolysis with 2% v/v H2SO4 at 121ºC for 30 minutes, using a solid/liquid loading of 0.2 g ml-1, in 1 liter flasks [12]. The acid hydrolysate was filtered to remove cellulose debris and the filtrate was neutralized by adding NaOH and Ca(OH)2 to a pH of approximately 6.0. It was filtered again and its final pH was set at 6.00 ± 0.05 by adding H2SO4 or NaOH.

2.4. Strain adaptation

Strain adaptation was carried out by continuous subculturing of the yeast in liquid media aerated by orbital shaking, in which the concentration of hemicellulosic hydrolysate was progressively increased. The adaptation fermentations were performed in 100 ml flasks with 20 ml of medium at pH 6.00 and 150 rpm agitation at room temperature. The rachis hemicellulosic hydrolysate concentration was increased progressively to 25, 40, 60, 75, 85 and 100% v/v. The medium was enriched with yeast extract 4 g l-1, peptone 3 g l-1, (NH4)2SO4 2 g l-1 and MgSO4.7H2O 0.5 g l-1. The initial reducing sugar content of the medium was set at 30 g l-1 by adding analytical grade xylose. The strain was subcultured 3 to 4 times at each hydrolysate concentration before being transferred to a higher concentration. After the successive subcultures at each concentration, the strain was grown in liquid and solid media of the same composition as the adaptation medium for 48 hours at room temperature and then stored refrigerated at 4ºC.


2.5. Flask fermentation

Fermentations were carried out in 250 ml Erlenmeyer flasks at a constant temperature of 30ºC, using inoculum volumes between 5% and 10% v/v with respect to the fermentation medium. The effect of the oxygen transfer conditions was evaluated by setting orbital agitation values between 100 and 150 rpm and culture medium volumes between 40 and 140 ml.

2.6. Analytical methods

2.6.1. Biomass quantification

Cell concentration was determined by counting in a Neubauer chamber; the count was made by averaging the number of cells found in the 25 squares that make up the 0.1 mm2 quadrant; the average cell count was multiplied by the dilution factor and expressed as cells ml-1 of culture medium.

2.6.2. Quantification of ethanol, xylose, glucose, acetic acid, furfural and hydroxymethylfurfural

The concentrations of these compounds were determined by HPLC in a Shimadzu chromatograph with an Aminex HPX-87H column (300×7.8 mm); the oven temperature was 60ºC and the mobile phase was 0.005 M sulfuric acid at a flow of 0.6 ml min-1. Prior to injection, the samples were centrifuged and filtered through a membrane with a pore size of 0.22 µm. Ethanol, xylose, glucose and acetic acid were detected with a refractive index detector (RID), while furfural and HMF were detected with a PDA detector at a wavelength of 210 nm [11,13].

2.7. Statistical methods

Ethanol production as a function of inoculum volume, orbital agitation and culture medium volume was evaluated through a central composite (cube-star) design with two center points. The significance of the variables and of their interactions was determined at a significance level of 5%.
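Before moving to the results, note that the cell-count arithmetic of Section 2.6.1 reduces to a one-line calculation. The following illustrative Python sketch is ours; it assumes the standard improved-Neubauer chamber geometry (0.1 mm depth under the central grid), which is consistent with, but not explicitly stated in, the text:

def cells_per_ml(counts_25_squares, dilution_factor):
    """Cell concentration from a Neubauer chamber count.

    counts_25_squares: cells counted in each of the 25 small squares of
    the central grid. Standard improved-Neubauer geometry is assumed
    (0.1 mm depth, so the whole central grid holds 1e-4 ml of sample).
    """
    total = sum(counts_25_squares)          # cells in 1e-4 ml of diluted sample
    return total * dilution_factor * 1.0e4  # cells per ml of culture

# e.g. an average of 36 cells per square at a 1:10 dilution -> 9.0e7 cells/ml
print(cells_per_ml([36] * 25, 10))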

3. Results and Discussion

Table 1 shows the composition of the African palm rachis hemicellulosic hydrolysate used in the adaptation and in the fermentations with S. stipitis.

Table 1. Composition of the African palm rachis hemicellulosic hydrolysate

  Compound       Concentration in crude hydrolysate (g l-1)
  Xylose         22.04
  Glucose        1.18
  Acetic acid    7.91
  Furfural       0.14
  HMF            0.15

Adaptation of the yeast to the inhibitors of the medium was based on literature reports describing the successful development of yeast strains of the genus Scheffersomyces tolerant to different concentrations of furfural, HMF, acetic acid and levulinic acid, in media composed of different concentrations of hydrolysates from various lignocellulosic sources [5,6,7]. For the adaptation, liquid media under orbital shaking were used instead of solid media; the latter have been reported to be ineffective for the adaptation of yeasts of the genus Scheffersomyces [6]. A satisfactory adaptation of S. stipitis to the African palm rachis hemicellulosic hydrolysate was obtained after subculturing the parent strain 3 to 4 times at each hydrolysate concentration (25, 40, 60, 75, 85 and 100% v/v), for a total of 20 consecutive adaptation subcultures.

Fig. 1 shows the growth kinetics of the parent and adapted strains growing under aerobic conditions. The parent strain showed a maximum growth rate, µmax, of 0.013 h-1, 3 times lower than the maximum growth rate of the adapted strain, which grew at a µmax of 0.040 h-1. The adapted strain presented a lag phase of 12 hours, after which growth accelerated, consuming 95% of the sugars in 72 hours at an average consumption rate of 0.30 g l-1 h-1, while the parent yeast strain had a lag phase of 48 hours and assimilated only 17 grams of reducing sugars after 6 days of fermentation, at an average assimilation rate of 0.10 g l-1 h-1, showing a low tolerance to the inhibitors.

Figure 1. Growth kinetics of the parent and adapted Scheffersomyces stipitis yeast strains in 250 ml flasks, 150 rpm, 30ºC, 30 ml of enriched African palm rachis hemicellulosic hydrolysate.

Liu et al. [14], using synthetic media, evaluated the effect of 10, 30, 60 and 120 mM furfural and HMF on the growth of S. stipitis, finding that these compounds, acting separately and depending on their concentration, can extend the lag phase by up to 48 hours at concentrations of up to 30 mM. Moreover, acting in combination they show inhibitory synergism, with 1.26 and 0.96 g/l being the concentrations of HMF and furfural, respectively, tolerable by the yeast. Although these concentrations are higher than those of furfural and HMF (0.14 and 0.15 g l-1) determined in the neutralized African palm hydrolysates of this study, the presence of other inhibitory compounds such as acetic acid, 7.91 g l-1, and unquantified lignin derivatives would explain the low assimilation rate of the parent strain. Nigam [15] showed that acetic acid, furfural and lignin-derived compounds have a synergistic inhibitory effect on the growth, volumetric productivity and substrate-based yield of S. stipitis; the inhibitor concentrations used in that study are similar to those expected in the palm rachis hydrolysate of this study, so we can say that the inhibitor concentrations of the rachis hydrolysate reduce the metabolic capacity of S. stipitis and that this effect can be partly reduced through adaptation of the yeast to culture media composed of hemicellulosic hydrolysates containing progressively higher amounts of inhibitors. This adaptation is not yet fully understood, but it appears to be determined by a change in the enzymatic activity of the yeast, even though furfural and HMF can break DNA, inhibit protein synthesis and affect the activity of yeast enzymes [14,16,17]. S. stipitis has proven able to biotransform these compounds to reduce their inhibitory effect. Palmqvist et al. [18] report that furfural is reduced to furfuryl alcohol, increasing the sugar assimilation capacity; more recently, the conversion of HMF to 2,5-bis-hydroxymethylfuran, with a corresponding reduction in growth inhibition, has been described [14]; xylose reductase appears to be the enzyme responsible for this conversion, since the expression of the Pichia stipitis xylose reductase enzyme in Saccharomyces cerevisiae strains has been shown to increase the latter's tolerance to high HMF concentrations [19]. In summary, a strain was obtained that is able to grow in media composed of African palm rachis hemicellulosic hydrolysate whose average concentrations of xylose, furfural, HMF and acetic acid are 22.04, 0.14, 0.15 and 7.91 g l-1, respectively.

Once the S. stipitis strain capable of growing in a medium composed of African palm rachis hydrolysate was obtained, it was evaluated to determine its ethanol production capacity under different culture conditions at flask scale. Different orbital agitation conditions and culture medium volumes, which determine oxygen transfer, were evaluated with a view to establishing those that could favor ethanol production in the palm rachis hydrolysate. Likewise, the influence of the initial inoculum concentration on ethanol production and its possible interaction with agitation and medium volume were studied. The results are presented in Table 2.

Table 2. Results of the central composite design for ethanol production at flask scale from African palm rachis hemicellulosic hydrolysate using Scheffersomyces stipitis

  No.   Agitation (rpm)   Medium volume (ml)   Inoculum volume (%)   Maximum ethanol (g l-1)   Qp (g l-1 h-1)
  1     167               90                   7.5                   5.74                      0.044
  2     150               60                   10.0                  1.33                      0.018
  3     150               120                  10.0                  6.37                      0.053
  4     150               120                  5.0                   5.85                      0.030
  5     150               60                   5.0                   0.29                      0.003
  6     125               39                   7.5                   0.88                      0.0012
  7     125               90                   7.5                   6.81                      0.050
  8     125               140                  7.5                   8.48                      0.050
  9     125               90                   7.5                   6.94                      0.051
  10    125               90                   3.3                   6.80                      0.035
  11    125               90                   11.7                  7.44                      0.062
  12    100               60                   5.0                   4.12                      0.025
  13    100               120                  5.0                   7.31                      0.038
  14    100               120                  10.0                  8.28                      0.057
  15    100               60                   10.0                  4.84                      0.040
  16    82                90                   7.5                   8.14                      0.037

The analysis of variance for the response variable, ethanol g l-1, showed that agitation and medium volume were significant and that there are no interactions between them, so that the effect each exerts on ethanol production is independent of the values of the others within the levels evaluated. Fig. 2 shows the response surface for the ethanol concentration as a function of the oxygen transfer conditions determined by agitation and culture medium volume; it can be seen how the ethanol concentration increases as agitation is reduced from 150 rpm to 100 rpm. This increase is more noticeable at a culture medium volume of 60 ml than at 120 ml. At 120 ml, the ethanol concentration appears more stable across agitation levels; this is corroborated by the data in Table 2, where it can be seen that between 150 and 100 rpm the ethanol values at 120 ml vary between 5.85 and 8.28 g l-1, while at 60 ml the ethanol values between 150 and 100 rpm vary from 0.29 to 4.84 g l-1. From Fig. 2 we can also conclude that, although ethanol concentrations are higher at low agitation speeds and high culture medium volumes, they tend to stabilize at a maximum close to 8 g l-1 when agitation is below 125 rpm and medium volumes are above 90 ml, showing that setting oxygen transfer conditions that excessively reduce the dissolved oxygen of the medium is no guarantee of high ethanol concentrations when S. stipitis is used; in fact, when subjected to near-anaerobic conditions, S. stipitis consumes xylose only marginally, negatively affecting ethanol production.

The results for the maximum ethanol concentration as a function of agitation and medium volume show great variability, because these two variables determine the concentration of oxygen available in the culture medium to be assimilated by the yeast. According to Du Preez [8], aeration is the most important factor in xylose fermentation processes with yeasts of the genus Scheffersomyces, since it determines the partition of the carbon flux between ethanol formation and cell growth. It can be said that S. stipitis, using as nutrient medium xylose from African palm hemicellulosic hydrolysates, has its metabolism stimulated toward production when the oxygen transfer conditions are set by volumes ≥ 90 ml and agitations ≤ 125 rpm in 250 ml flasks. In general, S. stipitis proved able to produce larger amounts of ethanol as the combination of agitation and volume favored low oxygen transfer conditions. Table 2 and Fig. 3 show that the highest ethanol yields were 0.39 g g-1, 0.38 g g-1 and 0.37 g g-1, corresponding to agitation-medium volume combinations of 125 rpm-140 ml, 100 rpm-120 ml and 82.96 rpm-90 ml, respectively. These results contrast with the low ethanol concentrations reached with the combinations 150 rpm-60 ml and 125 rpm-39.55 ml, for which the substrate-based ethanol yields were 0.06 g g-1 and 0.04 g g-1, showing that such combinations transfer oxygen in excess, discouraging the carbon flux toward ethanol production and directing it toward biomass production. In this type of bioprocess, substrate-based ethanol yields above 0.40 g g-1 are generally obtained with lignocellulosic hydrolysates that have been detoxified by overliming with Ca(OH)2 in combination with other techniques such as activated carbon addition, ion exchange membranes and rotary evaporation. In contrast, most studies carried out with neutralized crude hydrolysates reach yields below 0.40 g g-1, the best being close to 0.36 g g-1. In this sense, the substrate-based ethanol yields of 0.35, 0.37, 0.38 and 0.39 g g-1 of experiments 11, 16, 14 and 8 found in this study show that it is possible to produce ethanol with good yields from non-detoxified African palm rachis hemicellulosic hydrolysates using an adapted S. stipitis strain. The fact that no detoxification methods were used in producing the hydrolysates used in the fermentation is of great importance, since these would represent a significant increase in ethanol production costs from the perspective of larger-scale production.

With respect to the volumetric ethanol productivity, Qp, the results show that medium volume and inoculum volume are the significant variables, with no interactions between them. According to the results obtained for the productivity and ethanol concentration variables, we can say that using high cell concentrations of S. stipitis as inoculum increases ethanol productivity through a reduction of fermentation times rather than through an increase in ethanol concentrations and yields. This time reduction may result from higher sugar assimilation and ethanol production rates inherent to a larger number of cells in the medium, or it could also be closely related to the ability of yeast cells of the genus Scheffersomyces to reduce the furfural and HMF concentrations present in lignocellulosic hydrolysates through their biotransformation to furfuryl alcohol and 2,5-bis-hydroxymethylfuran, respectively [18,14]. In this sense, a higher concentration of adapted cells would reduce the inhibitor concentrations of the culture medium faster and to a greater extent, thereby facilitating sugar metabolism and shortening the lag phase and the fermentation in general. The response surface for the volumetric ethanol productivity shown in Fig. 4 reveals the marked influence of the medium volume variable on productivity. Productivity is favored as the culture medium volume varies from 60 to 120 ml, which is even more noticeable at higher inoculum concentrations. Likewise, the inoculum shows a positive effect on productivity as its concentration increases from 5 to 10%; these values correspond to 4.50 and 9.00 x 107 cells ml-1. In general, it can be seen that at inoculum concentrations above 7.5% (6.75 x 107 cells ml-1) and medium volumes above 90 ml, the highest productivities are reached, exceeding 0.050 g l-1 h-1. Taking into account that both Qp and the ethanol concentration are important in determining the economic viability of ethanol production from palm rachis hydrolysates, a multivariate optimization of both was performed, finding that for optimal values of ethanol concentration and volumetric productivity of 7.50 g l-1 and 0.06 g l-1 h-1, the variables orbital agitation, medium volume and inoculum concentration should be set at 116.90 rpm, 92.44 ml and 10.7% v/v (9.63x107 cells ml-1), respectively.

Figure 2. Response surface for maximum ethanol production as a function of agitation and medium volume.

Figure 3. Substrate-based ethanol yields for the experiments of the central composite design.
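For readers reproducing Table 2, the yield and productivity figures reduce to simple ratios. The sketch below is ours; the sugar consumption and fermentation time used in the examples are back-calculated, hypothetical values chosen only to reproduce the order of magnitude of the reported results:

def ethanol_yield(ethanol_g_l, sugars_consumed_g_l):
    """Substrate-based yield Y (g of ethanol per g of sugars consumed)."""
    return ethanol_g_l / sugars_consumed_g_l

def volumetric_productivity(ethanol_g_l, hours):
    """Qp (g l-1 h-1): maximum ethanol concentration over elapsed time."""
    return ethanol_g_l / hours

# Hypothetical inputs: ~21.7 g/l of sugars consumed, ~120 h of fermentation
print(round(ethanol_yield(8.48, 21.7), 2))            # ~0.39 g/g, order of run 8
print(round(volumetric_productivity(7.44, 120.0), 3)) # ~0.062 g/l/h, order of run 11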



Figure 4. Response surface for volumetric ethanol productivity as a function of medium volume and inoculum concentration.

4. Conclusions

It was found to be feasible to adapt an S. stipitis strain to the inhibitors of a non-detoxified African palm rachis hemicellulosic hydrolysate through an acclimatization process in hydrolysates progressively more concentrated in inhibitors, opening the possibility of exploring ethanol production while avoiding the high costs of detoxification processes. Taking into account the importance of yield and productivity for the economic viability of the bioprocess, multivariate optimization determined that setting the fermentation conditions at 116.90 rpm, 96.44 ml and 10.70% v/v (9.63x107 cells ml-1) of inoculum would optimize the ethanol production bioprocess, reaching up to 8.00 g l-1 and 0.065 g l-1 h-1.

References

[1] BP British Petroleum. Statistical Review of World Energy. [consulted June 21, 2010]. Available at: http://www.bp.com/liveassets/bp_internet/globalbp/globalbp_uk_english/reports_and_publications/statistical_energy_review_2008/STAGING/local_assets/2010_downloads/statistical_review_of_world_energy_full_report_2010.pdf

[2] Saxena, R., Adhikari, D. and Goyal, H., Biomass-based energy fuel through biochemical routes: A review. Renewable and Sustainable Energy Reviews, 13 (1), pp. 167-178, 2009.

[3] Monsalve, J., Medina, V. y Ruiz, A., Producción de etanol a partir de la cáscara de banano y de almidón de yuca. Revista Dyna, 73 (150), pp. 21-27, 2006.

[4] Rahman, S., Choudhury, J. and Ahmad, A., Production of xylose from oil palm empty fruit bunch fiber using sulfuric acid. Biochemical Engineering Journal, 30, pp. 97-103, 2006.

[5] Huang, C., Lin, T., Guo, G. and Hwang, W., Enhanced ethanol production by fermentation of rice straw hydrolysate without detoxification using a newly adapted strain of Pichia stipitis. Bioresource Technology, 100 (17), pp. 3914-3920, 2009.

[6] Nigam, J., Development of xylose-fermenting yeast Pichia stipitis for ethanol production through adaptation on hardwood hemicellulose acid prehydrolysate. Journal of Applied Microbiology, 90 (2), pp. 208-215, 2001.

[7] Amartey, S. and Jeffries, T., An improvement in Pichia stipitis fermentation of acid-hydrolysed hemicellulose achieved by overliming (calcium hydroxide treatment) and strain adaptation. World Journal of Microbiology and Biotechnology, 12 (3), pp. 281-283, 1996.

[8] Du Preez, J., Process parameters and environmental factors affecting D-xylose fermentation by yeasts. Enzyme and Microbial Technology, 16 (11), pp. 944-956, 1994.

[9] Agbogbo, F., Coward, G., Torry, M. and Wenger, K., Fermentation of glucose/xylose mixtures using Pichia stipitis. Process Biochemistry, 41 (11), pp. 2333-2336, 2006.

[10] Hande, A., Mahajan, S. and Prabhune, A., Evaluation of ethanol production by a new isolate of yeast during fermentation in synthetic medium and sugarcane bagasse hemicellulosic hydrolysate. Annals of Microbiology, 63 (1), pp. 63-70, 2012.

[11] Canilha, L., Carvalho, W., Silva, F. and Giulietti, M., Ethanol production from sugarcane bagasse hydrolysate using Pichia stipitis. Applied Biochemistry and Biotechnology, 161, pp. 84-92, 2010.

[12] Llano, J., Gómez, N. y Santamaría, M., Recuperación de xilosa a partir de material lignocelulósico procedente de la extracción de aceite de palma. Memorias IIV Simposio sobre Biofábricas, los Grupos de Investigación en Biotecnología y la Formación de Investigadores. Medellín, Colombia, 2009.

[13] Walton, S., Heiningen, A. and Walsum, P., Fermentation of near-neutral pH extracted hemicellulose derived from northern hardwood. 2013 [consulted August 18, 2013]. Available at http://archivos.labcontrol.cl/

[14] Liu, Z., Slininger, P., Dien, B., Berhow, M., Kurtzman, C. and Gorsich, S., Adaptive response of yeasts to furfural and 5-hydroxymethylfurfural and new chemical evidence for HMF conversion to 2,5-bis-hydroxymethylfuran. Journal of Industrial Microbiology and Biotechnology, 31 (8), pp. 345-352, 2004.

[15] Nigam, J., Ethanol production from wheat straw hemicellulose hydrolysate by Pichia stipitis. Journal of Biotechnology, 87 (1), pp. 17-27, 2001.

[16] Tian, S., Zhu, J. and Yang, X., Evaluation of an adapted inhibitor-tolerant yeast strain for ethanol production from combined hydrolysate of softwood. Applied Energy, 88 (5), pp. 1792-1796, 2011.

[17] Taherzadeh, M., Ethanol from lignocellulose: Physiological effects of inhibitors and fermentation strategies. Doktorsavhandlingar vid Chalmers Tekniska Högskola, (1485), pp. 1-57, 1999.

[18] Palmqvist, E. and Hahn-Hägerdal, B., Fermentation of lignocellulosic hydrolysates. II: Inhibitors and mechanisms of inhibition. Bioresource Technology, 74 (1), pp. 25-33, 2000.



F. C. Herrera-Ruales graduated as an Agroindustrial Engineer from the Universidad de Nariño (Pasto, Nariño) in 2007. He obtained his MSc degree in Sciences-Biotechnology from the Universidad Nacional de Colombia, Medellín campus, in 2013. He has worked on the use of agroindustrial residues via fermentation for bioethanol production. He is currently pursuing doctoral studies at the Universidad de Antioquia (Medellín, Colombia).

M. Arias-Zabala graduated as a Chemical Engineer in 1983 from the Universidad de Antioquia (Medellín, Colombia). He obtained his MSc degree in Biochemical Process Technology from the Federal University of Rio de Janeiro (Brazil) in 1990 and his PhD in Chemical Engineering in 2005 from the Universidad Nacional de Colombia (Bogotá, Colombia). He is currently an Associate Professor in the Facultad de Ciencias of the Universidad Nacional de Colombia, Medellín campus, and director of the Biotechnology area of the same faculty. He is the founding leader of the Industrial Biotechnology research group (Category A, Colciencias-2014) and founding director of the Bioconversions Laboratory. His areas of work are fermentation processes, both microbial and using plant cell suspensions, and the biotechnological use of agroindustrial residues.



http://dyna.medellin.unal.edu.co/

Making Use of Coastal Wave Energy: A proposal to improve oscillating water column converters

Aprovechamiento Energético de las Olas Costeras: Una propuesta de mejora de los convertidores de columna de agua oscilante

Ramón Borrás-Formoso a, Ramón Ferreiro-García b, Fernanda Miguélez-Pose c & Cándido Fernández-Ameal d

a Escuela Técnica Superior de Náutica y Máquinas, Universidad de La Coruña, España. ramon.borras@udc.es
b Escuela Técnica Superior de Náutica y Máquinas, Universidad de La Coruña, España. ferreiro@udc.es
c Escuela Técnica Superior de Náutica y Máquinas, Universidad de La Coruña, España. fermigue@udc.es
d Escuela Técnica Superior de Náutica y Máquinas, Universidad de La Coruña, España. caameal@udc.es

Received: June 27th, 2013. Received in revised form: November 5th, 2013. Accepted: November 8th, 2013

Abstract
This paper describes an alternative design (protected by patent) for an onshore wave-based energy converter, specifically an oscillating water column, capable of providing increased efficiency. In order to compare the various alternative designs, a theoretical model that describes the physical behavior with certain restrictions is proposed. The converter incorporates a rectifying barrier that creates a large pool of water between the sea and the converter. In order to estimate the theoretical increase in achievable power, a theoretical cycle model is assumed for the simulation of its operational dynamics using a simplified ideal behavior: regular waves, with the air assumed to be an ideal gas under adiabatic compression and expansion. The results obtained show that a correct adjustment of the turbine differential pressure contributes to an increase in the available power output. We conclude that the proposed innovations can lead to an improved technology applicable to this type of converter.

Keywords: Ocean energy; Renewable energy; Alternative energy; Wave energy converters (WEC); Oscillating water column (OWC); OWC-DPST.

Resumen
El objetivo de este artículo es describir un diseño alternativo (protegido por patente) de un convertidor undimotriz, para ubicar en la línea de costa, del tipo de columna de agua oscilante, capaz de alcanzar un mayor rendimiento. Para hacer una comparación de los distintos convertidores se propone un modelo teórico que describe el comportamiento físico, con ciertas restricciones. El convertidor incorpora una barrera rectificadora formando una gran balsa de agua entre el mar y el convertidor y para estimar teóricamente la potencia obtenible se supone un comportamiento ideal simplificado: olas regulares, comportamiento del aire como gas ideal, compresión y expansión adiabática. Los resultados obtenidos muestran que un ajuste de la presión diferencial en la turbina contribuye al aumento de la potencia de salida aprovechable. Se concluye que la innovación propuesta puede ayudar al avance de la tecnología aplicable a este tipo de convertidor.

Palabras clave: Energía del mar; Energías renovables; Convertidor undimotriz (WEC); Columna de agua oscilante (OWC); OWC-DPST.

Nomenclature

g: Acceleration of gravity [9.81 m/s2]
Hi: Initial upstroke [m]
Ho: Minimum height of the air [m]
HS: Initial downstroke [m]
Hw: Wave height [m]
P: Power [W]
Patm: Atmospheric pressure [Pa]
PCh: Pressure in chamber [Pa]
P*Ch: Relative pressure in chamber [Pa]
PH: Pressure in high pressure tank [Pa]
P*H: Relative pressure in high pressure tank [Pa]
PL: Pressure in low pressure tank [Pa]
P*L: Relative pressure in low pressure tank [Pa]
T: Period of wave [s]
U: Internal energy [J]
W: Energy [J]
WD: Energy in the downstroke [J]
WU: Energy in the upstroke [J]

Greek letters:
ρ: Seawater density [1025 kg/m3]
γ: Specific heat ratio
Δ: Level difference in the upstroke [m]
Γ: Level difference in the downstroke [m]

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 211-218. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



1. Introduction

There is a pressing need for the gradual substitution of energy sources based on fossil fuels, because of the continuous depletion of these resources and in order to protect the environment. Thus, an increased reliance upon alternative sources of energy is a logical imperative. The automobile industry is one in which many technical and design limitations complicate the use of alternative fuels. It is therefore in the generation of electricity from renewable sources where R&D can lead to practical results in the near term. Some renewable sources of energy can generate a significant amount of CO2, whereas others, not based on fossil fuels, such as the one set forth in this article, are free of these emissions. At present, over 1000 Wave Energy Converters (WEC) are patented worldwide [1]. All WECs of the oscillating water column (OWC) type currently in operation or under construction, whether off-shore or on-shore, have a structure that can be represented schematically as shown in Fig. 1. The turbine shaft may be arranged vertically, as in the plant installed in Mutriku (Guipúzcoa, Spain), or horizontally, as in the wave power plant of Pico (in the Azores) or that of the island of Islay in Scotland [2]. The operational mode is as follows: when the wave reaches the converter, the water level within the chamber is lower than outside. When the water level within the chamber starts to rise, it compresses the air during the upstroke of the water column, and this compressed air drives an air turbine which in turn drives an electric generator. This process continues until the water column reaches its highest level. When the wave outside the chamber begins to recede, the level of the water in the chamber falls (a downstroke), increasing the air volume within the chamber and reducing its pressure to below atmospheric. This drop in pressure reverses the direction of air flow, and this once again drives the turbine. It is customary to use self-rectifying turbines, such as Wells, Dennis-Auld or McCormick turbines, or reaction self-rectifying turbines, which maintain a constant direction of rotation regardless of the direction of airflow. The use of self-rectifying turbines is a satisfactory solution to the problem of rotation, converting the bi-directional flow within the water column into energy. These air turbines have proven to be a tried-and-tested technology based upon years of experience in the field [3]. However, there are several weak points in the way these converters work, given that the flows are characterized by: a) variable amplitude, b) variable rotation direction, c) intermittency, and d) lack of control. Let us briefly analyze the implications of each of these characteristics. a) The compression and suction cycle phases are carried out at variable pressure. This leads to highly variable flows

in amplitude, because the pressure in the chamber changes significantly. The effect is that the rotating torque developed by the turbine is also variable, being zero during most of the cycle. Furthermore, in addition to other drawbacks, this leads to sub-optimum operating conditions (away from the maximum efficiency condition), causing low conversion performance. According to [3], the operating range is too restricted, which has a significant effect on efficiency. The abrupt increase in pressure within the chamber may damage the turbine blades [4], while the "stalling" phenomenon reduces efficiency as flow rates surpass certain limits [5]. On the other hand, a considerable safety margin must be taken into account by assuming significant design parameterization values for the turbine and generator. b) The variable condition of the air flow requires the adoption of one of two constructive options: 1) using a structure based on rectified flow, which requires a conventional (unidirectional flow) turbine, exhibiting greater performance but requiring several valves and piping accessories, leading to pressure drops and energy losses; or 2) using a self-rectifying turbine, leading to a significantly lower level of efficiency [6]. c) The fact that there is a change in the direction of air flow in the duct of the chamber means, mathematically speaking, that the flow must pass through zero twice per cycle, and during those two moments it is not transferring energy to the turbine. Furthermore, after the level of the water column reaches its lowest point and the air pressure in the chamber equals atmospheric pressure, it takes time for the column to begin its upward movement, and yet more time until the air in the chamber is compressed to a pressure sufficient to produce an air flow through the turbine and create torque. Until that point, no energy is transferred to the turbine. Something similar occurs when the water column is at its top. Hence it is clear that, during much of the cycle, power is not being transferred to the turbine. The energy converted is therefore intermittent [7]. d) The conclusion to be drawn from the three previous points is that the air flowing through the turbine is insufficient, under normal conditions, to drive a 3-phase alternator, which requires a synchronized speed. One option is to use a doubly fed induction generator; another is to use dual conversion (rectification-inversion) with static elements. One possibility is to use a turbine regulation system acting on a variable-blade distributor turbine, partly to mitigate the effects of highly variable flows. In any case, control in this type of converter raises important theoretical and practical problems. Furthermore, saltwater particles will inevitably impact the turbine blades, resulting in adverse effects such as corrosion and fouling. Another drawback, from the environmental point of view, is the very loud noise emanating from the air outlet of the turbine at higher speed


peaks, which prevents these converters from being located near residential areas.

Figure 1. Conventional OWC converter

The proposed WEC-OWC with differential pressure storage tanks (DPST) is designed to avoid many of the disadvantages of conventional OWC converters. The double oscillating water column, or "twin-OWC", wave energy converter [8,9] can be considered, from a technological perspective, as an intermediate design between the OWC and the OWC-DPST.

2. Description of the proposed converter

Fig. 1 shows a simplified diagram of a conventional OWC converter. The extraordinary simplicity of its operating principle can be observed: the water level inside the chamber rises and falls in consonance with the external level, after a certain delay. Fig. 2 shows the OWC-DPST converter in simplified form. Number 5 represents the high pressure storage tank (HPT), to which the air compressed in the chamber during the upstroke of the oscillating water column is transferred through a remotely operated valve 3 (HPV); the pressure gauge of this tank indicates a positive relative pressure. The low pressure storage tank (LPT) is represented by the number 12. Air is removed from this tank by suction through a remotely operated valve 14 (LPV) during the downstroke of the water column; in this tank, the gauge pressure is negative. The vent valve (VV), denoted as 4, allows the process of filling and emptying the storage tanks. It is a remotely operated valve which opens before the water column reaches its top level, once the charging period is completed, allowing the pressurized air still in the chamber to be evacuated into the atmosphere, thus permitting the water level to continue to rise. This valve should also open at the end of the period of air suction from the LPT tank, allowing air to enter the chamber until the pressure equals the outside atmospheric pressure. Between one storage tank and the other there is a pressure drop (differential pressure) which, if the volume is sufficient, allows a nearly constant and continuous airflow over the conventional high performance turbine. This turbine drives the electric generator, for example a 3-phase alternator, which may be coupled directly to the mains. Another possibility would be to use a doubly-fed induction generator (DFIG), which can work as a synchronous machine and is also commonly used with wind turbines [10]. The volumes of the storage tanks are much greater than the volume of the chamber (Fig. 2 is not drawn to scale).

Figure 2. OWC-DPST converter with rectifier barrier

2.1. Theoretical model for study

In order to compare the various alternative converter designs, we propose a theoretical model that describes the physical behavior with certain restrictions. The converters incorporate a rectifying barrier that creates a large pool of water between the sea and the converter. This rectifying barrier offers little resistance to movement in the sea-to-converter direction, but blocks the passage of water in the opposite direction, which causes the water level in the pool to remain at its highest level long enough for the work cycle to approach the theoretical cycle. The rectifying barrier, labeled 17 in Fig. 2, has little influence on the behavior of the converter, and is therefore not strictly necessary, but it facilitates the study and analysis of the converter's behavior, since it helps the level outside the chamber remain constant during the upstroke. This is the model analyzed in this article.

2.2. Operating principle

Like existing OWCs, the proposed plant, located either off-shore or on-shore, is a converter which harnesses the energy of ocean waves and converts it into electricity. Let us assume that this plant is located on the coast, because of the amplitude of the waves, easy accessibility, lower maintenance costs and high structural strength at lower cost. Once a partially submerged structure has been constructed,


partially underwater and with a closed upper part, open to the sea below the water level, in the presence of waves the water level inside the structure will approximately follow the varying height of the waves, by the principle of communicating vessels. During the upward movement of the waves, the air trapped within the chamber undergoes a decrease in volume and hence an increase in pressure, since the water column acts as a "rigid piston" [11]. When this pressure slightly exceeds the pressure in the high pressure accumulator tank, air flows naturally through an open valve, HPV (preferably, but not necessarily, remotely operated), which connects the chamber to the high pressure accumulator tank. Similarly, when the wave level starts to fall, the water level within the chamber also descends, producing a progressive increase in chamber volume and consequently a reduction in pressure. When the pressure in the chamber falls slightly below that in the low pressure accumulator tank, air flows naturally from the low pressure tank through another open remotely operated valve, LPV, towards the chamber, slightly decreasing the pressure in the low pressure accumulator tank. As described, there are two tanks, one with air at positive relative pressure and another at negative relative pressure with respect to the atmospheric pressure, P_atm. There is therefore a pressure difference between the two tanks, and a duct can be used to connect them. Within this duct a turbine is placed, driving an electric generator. In conventional OWCs, during the upstroke the turbine works between the pressure within the compression chamber and atmospheric pressure, whereas during the downstroke it works between atmospheric pressure and the suction pressure. This is not the case in the proposed OWC-DPST. The gauge pressure in the high pressure accumulator tank could be, for example, 10 kPa positive; the gauge pressure in the low pressure accumulator tank could be, for example, 10 kPa negative. In this case the differential pressure across the OWC-DPST converter turbine approaches 20 kPa, providing a steady flow to the turbine.
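As a quick worked check of the figures just quoted (an editorial illustration using the paper's own example values):

    \Delta P_{turbine} = P_H^{*} - P_L^{*} = 10\,\mathrm{kPa} - (-10\,\mathrm{kPa}) = 20\,\mathrm{kPa}

so the turbine sees an almost constant 20 kPa head, whereas a conventional OWC turbine would alternate between roughly +10 kPa and -10 kPa single-sided heads, passing through zero twice per cycle.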

In this type of converter, for a given wave height and tide, we can choose the value of P and the integration limits for each of the upward and downward movements so as to operate within a range of values that maximizes the value of the function in Eq. (1). When the water level reaches its lowest position (the wave trough), level "O", the vent valve is closed, the relative air pressure in the chamber is zero, and the LPV and HPV valves are closed.

2.2.1. Pressure inside the tank

Figure 3. a) Upstroke oscillating column, b) Downstroke oscillating column

This section analyzes the process of energy storage in the tanks, starting with the upward movement, with the help of Fig. 3 a), an enlarged view of the chamber area with a design similar to that of Fig. 2, where the vent valve is now positioned along the wall separating the two tanks. Relative (gauge) pressures are indicated by an asterisk (*) superscript; symbols without the superscript denote absolute pressures. Let us first analyze the upstroke of the water column. The work converted, and hence the absorbed energy, is given by:

    W = \int_{H_1}^{H_2} P \, A \, ds    (1)

The oncoming wave introduces a volume of water into the chamber, driving up the level without difficulty. Since all valves are closed, the air pressure in the chamber will rise. The absolute air pressure in the chamber, as a function of the gap \Delta h between the inner and outer water levels, is given by

    P_{Ch} = P_{atm} + \rho g \, \Delta h    (2)

As the gauge pressure in the high pressure accumulator tank, HPT, is P_H^{*} (P_H in absolute pressure), let us determine how much the water level inside the chamber must rise for the air pressure to reach this value. Assuming an adiabatic compression, it follows that

    P_H (H_0 + H_W - H_i)^{\gamma} = P_{atm} (H_0 + H_W)^{\gamma}

which yields H_i as

    H_i = (H_0 + H_W) \left[ 1 - \left( P_{atm} / P_H \right)^{1/\gamma} \right]    (3)

The gap between the water levels inside and outside the chamber required to maintain the pressure P_{Ch} = P_H in the chamber, \Delta, follows from Eq. (2):

    \Delta = \frac{P_{Ch}^{*}}{\rho g} = \frac{P_{Ch} - P_{atm}}{\rho g}    (4)

After the high pressure P_H is reached within the chamber, the valve HPV opens, so that as the water column keeps rising, air is driven into the accumulator tank HPT. The portion of the upward movement of the water column during which air is introduced is called the "useful upward stroke", S_{UU}, defined as:

    S_{UU} = H_W - H_i - \Delta    (5)

The energy stored in the accumulator tank HPT during this movement, per m^2 of horizontal surface of the chamber, is:

    U_H = P_H^{*} \, S_{UU} = P_H^{*} (H_W - H_i - \Delta)  [J/m^2]    (6)

Substituting Eqs. (4) and (3) into Eq. (6) yields:

    U_H = P_H^{*} H_W - \frac{(P_H^{*})^2}{\rho g} - P_H^{*} (H_0 + H_W) \left[ 1 - \left( \frac{P_{atm}}{P_H^{*} + P_{atm}} \right)^{1/\gamma} \right]    (7)

which, expressed in absolute pressure, becomes

    U_H = (P_H - P_{atm}) H_W - \frac{(P_H - P_{atm})^2}{\rho g} - (P_H - P_{atm})(H_0 + H_W) \left[ 1 - \left( \frac{P_{atm}}{P_H} \right)^{1/\gamma} \right]    (8)

Once the rise of the water column ends, the HPV valve closes and the VV valve then opens; the compressed air remaining in the chamber is evacuated to the atmosphere, restoring the atmospheric pressure P_{atm}, and the water column rises without back pressure up to the outer level. When it reaches this point, the valve VV is closed. Now, with all chamber valves closed, the downstroke begins, following the descent of the water outside the chamber, through steps analogous to those described above but with negative relative pressures.

The maximum energy that can be accumulated during each upstroke of the water column is obtained by taking the partial derivative of Eq. (8) with respect to the high pressure and equating it to zero:

    \frac{\partial U_H}{\partial P_H} = H_W - \frac{2 (P_H - P_{atm})}{\rho g} - (H_0 + H_W) \left[ 1 - \left( \frac{P_{atm}}{P_H} \right)^{1/\gamma} \right] - \frac{(P_H - P_{atm})(H_0 + H_W)}{\gamma} \, P_{atm}^{1/\gamma} \, P_H^{-(1+\gamma)/\gamma} = 0    (9)

Since \gamma, g, P_{atm} and \rho can be considered constant, the value of P_H that maximizes the accumulated energy per cycle can be obtained as a function of the wave height H_W and the minimum air-column height H_0. For a constant wave height, the value of H_0 will vary continuously with the tide level, so the pressure chosen for the high pressure tank, P_H, has a direct effect on the useful energy.
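The chain (2)-(6) reduces to a short computation. The following minimal Python sketch is an editorial illustration, not part of the original paper: the constants rho = 1025 kg/m^3 and g = 9.81 m/s^2 are assumptions consistent with the tabulated results of Section 4, and gamma = 1.39 is the value stated there.

    # Illustrative sketch (not from the paper): evaluates Eqs. (3)-(6) for
    # one upstroke. Assumed constants: P_atm = 101325 Pa, rho = 1025 kg/m^3
    # (seawater), g = 9.81 m/s^2; gamma = 1.39 as stated in Section 4.

    P_ATM = 101325.0   # atmospheric pressure [Pa]
    RHO   = 1025.0     # seawater density [kg/m^3] (assumed)
    G     = 9.81       # gravitational acceleration [m/s^2]
    GAMMA = 1.39       # specific heat ratio of air (Section 4)

    def upstroke_energy(h0: float, hw: float, ph_gauge: float) -> dict:
        """Stored energy per m^2 of chamber surface during one upstroke.

        h0: minimum air-column height over the water [m]
        hw: wave height [m]
        ph_gauge: HPT gauge pressure P*_H [Pa]
        """
        ph = P_ATM + ph_gauge                                   # absolute P_H
        hi = (h0 + hw) * (1.0 - (P_ATM / ph) ** (1.0 / GAMMA))  # Eq. (3)
        delta = ph_gauge / (RHO * G)                            # Eq. (4)
        suu = hw - hi - delta                                   # Eq. (5)
        uh = ph_gauge * suu                                     # Eq. (6) [J/m^2]
        return {"Hi": hi, "Delta": delta, "SUU": suu, "UH": uh}

    # Case 1 of Table 1 (H0 = 1 m, HW = 1 m, P*_H = 4000 Pa) gives
    # Hi ~ 0.055 m, Delta ~ 0.398 m, SUU ~ 0.547 m, UH ~ 2189 J/m^2,
    # matching Table 2 (0.055, 0.397, 0.548, 2192) to within rounding.
    print(upstroke_energy(1.0, 1.0, 4000.0))

Under these assumptions the sketch reproduces the upstroke rows of Table 2 to within a few tenths of a percent.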

Using Fig. 3 b), we can now study the downstroke of the oscillating water column, starting from the maximum water level "E", with the minimum air volume over the water column, H_0, all valves closed, and the chamber pressure equal to atmospheric. As the water level outside the chamber decreases, the level within the chamber also descends, causing the air volume to increase and the pressure to drop, until the pressure in the chamber equals that in the low pressure accumulator, P_L, according to the scheme of Fig. 3 b). At this point of the cycle a command is given to open the valve LPV. Assuming an adiabatic expansion, it follows that

    P_L (H_0 + H_S)^{\gamma} = P_{atm} H_0^{\gamma}

and solving for H_S yields:

    H_S = H_0 \left[ \left( P_{atm} / P_L \right)^{1/\gamma} - 1 \right]    (10)

In this situation there will be a gap between the outside water level and the water level within the chamber, \Gamma, given as



P_L = P_{atm} - \Gamma \rho g, so that \Gamma is given by:

    \Gamma = \frac{P_{atm} - P_L}{\rho g}    (11)

As shown in Fig. 3 b), the useful stroke during the downstroke, S_{UD}, will be

    S_{UD} = H_W - H_S - \Gamma    (12)

The energy converted from the LPT, W_L = \Delta U_L, during the useful downstroke (part F to J of the cycle), will be:

    U_L = P_L^{*} (H_W - H_S - \Gamma)    (13)

After the useful stroke, with the chamber water level at "J", the LPV valve is ordered to close and an opening command is given to the VV until the level reaches "O", allowing the water column to descend until the outside and inside levels are equal. Substituting Eqs. (10) and (11) into (13), we obtain the specific energy (energy per square meter) converted during the downstroke:

    U_L = (P_{atm} - P_L) \left\{ H_W - H_0 \left[ \left( \frac{P_{atm}}{P_L} \right)^{1/\gamma} - 1 \right] - \frac{P_{atm} - P_L}{\rho g} \right\}    (14)

Following the same strategy used for the upstroke, the pressure in the low pressure tank, P_L, that maximizes the accumulated energy for the portion of the cycle corresponding to negative pressures is obtained by taking the partial derivative of Eq. (14) with respect to P_L and equating it to zero:

    \frac{\partial U_L}{\partial P_L} = -H_W + H_0 \left[ \left( \frac{P_{atm}}{P_L} \right)^{1/\gamma} - 1 \right] + \frac{2 (P_{atm} - P_L)}{\rho g} + \frac{H_0 (P_{atm} - P_L)}{\gamma} \, P_{atm}^{1/\gamma} \, P_L^{-(1+\gamma)/\gamma} = 0    (15)

Solving Eq. (15) for P_L gives the pressure that maximizes the converted energy.
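Eq. (15) is transcendental in P_L, so a numerical search is a practical stand-in for a closed-form solution. The sketch below is again an editorial illustration with the same assumed constants (rho = 1025 kg/m^3, g = 9.81 m/s^2), not the authors' procedure: it evaluates Eq. (14) directly and scans gauge pressures for its maximizer.

    # Illustrative sketch (not from the paper): evaluates Eq. (14) and
    # locates the LPT pressure that maximizes it by a simple scan, as a
    # numerical stand-in for solving Eq. (15). Constants as assumed above.

    P_ATM, RHO, G, GAMMA = 101325.0, 1025.0, 9.81, 1.39

    def downstroke_energy(h0: float, hw: float, pl_gauge: float) -> float:
        """Energy per m^2 converted during one downstroke, Eq. (14).

        pl_gauge: magnitude of the negative LPT gauge pressure,
                  i.e. P_atm - P_L [Pa]
        """
        pl = P_ATM - pl_gauge                                    # absolute P_L
        hs = h0 * ((P_ATM / pl) ** (1.0 / GAMMA) - 1.0)          # Eq. (10)
        gap = pl_gauge / (RHO * G)                               # Eq. (11)
        return pl_gauge * (hw - hs - gap)                        # Eqs. (12)-(14)

    def best_pl_gauge(h0: float, hw: float) -> float:
        """Gauge pressure [Pa] maximizing Eq. (14), i.e. a zero of Eq. (15)."""
        candidates = [10.0 * k for k in range(1, 2000)]          # 10 Pa ... 20 kPa
        return max(candidates, key=lambda p: downstroke_energy(h0, hw, p))

    # Case 1 (H0 = HW = 1 m): the scan puts the optimum somewhat above the
    # 4000 Pa used in Table 3; the paper deliberately picks tank pressures
    # close to, not exactly at, the optimum so that upstroke and downstroke
    # mass flows stay equal. At 4000 Pa, Eq. (14) gives ~2291 J/m^2
    # (Table 3 lists 2294).
    print(best_pl_gauge(1.0, 1.0), downstroke_energy(1.0, 1.0, 4000.0))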

3. Alternative designs

Fig. 2 shows the basic idea of the OWC converter with differential pressure storage tanks (OWC-DPST). Other designs can be made with the same operating principles but with enhanced practical aspects. Such is the case of Fig. 4, where the turbine is located outside the tanks, which avoids passing the shaft through a tank wall and the consequent shaft-sealing problem. Also, instead of a single turbine, a second, smaller turbine-alternator could be mounted in parallel and used when the wave height is small, thereby avoiding running the larger turbine at a power well below its rated value. Another improvement could be to mount the turbine between two large storage tanks fed by several tanks 5 and 12 according to the represented structure. A rectifying barrier, labeled 17 in Fig. 2, could also be incorporated upstream; in this case the actual behavior would approach the theoretical behavior.

Figure 4. OWC-DPST. Alternative design

4. Case studies and results

To assess the energy performance that could be expected from a converter such as the one proposed, we consider ideal conditions that allow the various parameters to be quantified. First, the following assumptions are made: regular waves, with a series of wave fronts approaching the converter with constant amplitude and period, which is the theoretical case. With regard to the behavior of the seawater within the chamber, it is assumed that, if the chamber were permanently connected to the atmosphere, the vertical oscillation amplitude would equal the wave height [12], as shown in Fig. 5. As for the air, since it works at low pressures, it is assumed to behave as a nearly ideal diatomic gas with a specific heat ratio \gamma of 1.39. The compression and expansion phases of the air in the chamber are assumed to be adiabatic, as they occur within a short time interval [13]. Pressure drops in valves LPV and HPV are neglected because they are remotely



Figure 5. Wave height and vertical oscillation amplitude (OWC image courtesy of Voith Siemens Hydro Wavegen).

controlled, and will be closed when the pressures in the tank and the chamber are equal. It is also assumed that the horizontal dimension of the chamber is small relative to the wavelength.

Under the above assumptions, the three cases listed in Table 1 are analyzed, all with moderate wave amplitudes. Case 1 corresponds to a small wave height, only 1 m from crest to trough, with a minimum air column over the water, H_0 = 1 m. Case 2 considers a medium wave height, H_W = 2 m, to show the effect of doubling the wave height while keeping the same air column over the water. Case 3 doubles the height of the air column over the water, H_0 = 2 m, representing a 1 m transition of the tide from high to low; the wave period is kept constant at 10 seconds.

Table 1. Input parameters for the three case studies
                     Case 1   Case 2   Case 3
H_0 (m)              1.00     1.00     2.00
H_W (m)              1.00     2.00     2.00
T (s)                10       10       10

Table 2. Results for upstroke
                     Case 1   Case 2   Case 3
P_H^{*} (Pa)         4000     8500     8000
H_i (m)              0.055    0.169    0.213
\Delta (m)           0.397    0.844    0.794
H_W-\Delta-H_i (m)   0.548    0.987    0.993
W_U (J/m^2)          2192     8393     7945

Table 3. Results for downstroke
                     Case 1   Case 2   Case 3
P_L^{*} (Pa)         4000     8300     7800
H_S (m)              0.029    0.063    0.119
\Gamma (m)           0.397    0.824    0.774
H_W-\Gamma-H_S (m)   0.574    1.113    1.107
W_D (J/m^2)          2294     9236     8636

Table 4. Pneumatic and electrical power
                     Case 1   Case 2   Case 3
P_Max (W/m^2)        449      1763     1658
P_Elect (W/m^2)      362      1423     1339

The results presented in Tables 2 and 3 are obtained from the input data of Table 1, using the equations derived above in a spreadsheet. Table 2 corresponds to the upward stroke of the water column in the chamber, and Table 3 to the downstroke. Row 1 of Tables 2 and 3 gives the values chosen for the pressures in the accumulator tanks, expressed as relative (gauge) values, close to the maximum-performance values and taking into account equal mass flow rates. Note that, for each case analyzed, i.e. each particular pair of values H_0 and H_W, the tank pressures deduced from Eqs. (9) and (15) are not symmetrical with respect to the atmospheric pressure P_atm. The values of H_S and H_i, obtained from Eqs. (10) and (3), are given in the 2nd row. The values of \Delta and \Gamma, obtained from Eqs. (4) and (11), are presented in the 3rd row. The useful stroke during the upstroke, from Eq. (5), and during the downstroke, from Eq. (12), is given in row 4. The converted energy in the upstroke, W_U, follows from Eq. (6), and in the downstroke, W_D, from Eq. (13); both, expressed in J/m^2, are given in the 5th row. With the wave period given as input data in seconds, the accumulated energy per unit time is obtained as the ratio of the energy per cycle to the period; the result, expressed in watts, is given in the first row of Table 4. To estimate the available electrical power, efficiencies of 85% for the turbine and 95% for the alternator [14] are considered. It is also assumed that the system uses a constant-flow unidirectional pneumatic turbine, chosen for its higher efficiency, giving an overall genset efficiency of approximately 81%. Multiplying the first row of Table 4 by this efficiency yields the second row, the electrical power in W/m^2. As shown, the achievable pneumatic power for the three cases ranges from a minimum of about 449 W/m^2 up to 1.76 kW/m^2, leading to an electrical power output between 362 W/m^2 and 1.42 kW/m^2.
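The spreadsheet computation is straightforward to replicate. The following Python sketch is an editorial reconstruction, not the authors' spreadsheet; it assumes P_atm = 101325 Pa, rho = 1025 kg/m^3 and g = 9.81 m/s^2, and chains Eqs. (3)-(6) and (10)-(13) with the efficiency chain of this section.

    # Illustrative sketch (not from the paper): reproduces Tables 2-4 from
    # the inputs of Table 1. Assumed constants as in the earlier sketches.

    P_ATM, RHO, G, GAMMA = 101325.0, 1025.0, 9.81, 1.39
    ETA = 0.85 * 0.95          # genset efficiency used in the paper (~0.81)

    cases = [                  # (H0 [m], HW [m], T [s], P*_H [Pa], P*_L [Pa])
        (1.0, 1.0, 10.0, 4000.0, 4000.0),   # Case 1
        (1.0, 2.0, 10.0, 8500.0, 8300.0),   # Case 2
        (2.0, 2.0, 10.0, 8000.0, 7800.0),   # Case 3
    ]

    for h0, hw, t, ph_g, pl_g in cases:
        ph, pl = P_ATM + ph_g, P_ATM - pl_g
        hi = (h0 + hw) * (1.0 - (P_ATM / ph) ** (1.0 / GAMMA))  # Eq. (3)
        hs = h0 * ((P_ATM / pl) ** (1.0 / GAMMA) - 1.0)         # Eq. (10)
        wu = ph_g * (hw - hi - ph_g / (RHO * G))                # Eqs. (4)-(6)
        wd = pl_g * (hw - hs - pl_g / (RHO * G))                # Eqs. (11)-(13)
        p_max = (wu + wd) / t            # pneumatic power [W/m^2]
        print(f"WU={wu:.0f}  WD={wd:.0f}  "
              f"Pmax={p_max:.0f}  Pelect={ETA * p_max:.0f}")

    # Output under these assumed constants, within ~0.5% of Tables 2-4:
    #   WU=2189  WD=2291  Pmax=448   Pelect=362    (Case 1)
    #   WU=8379  WD=9223  Pmax=1760  Pelect=1421   (Case 2)
    #   WU=7933  WD=8624  Pmax=1656  Pelect=1337   (Case 3)

The small residual differences with respect to the published tables come from the assumed values of rho and g, which the paper does not state explicitly.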

5. Conclusions

This paper presents a novel design alternative with regard to the state of the art in OWC technology. The proposed structure of the OWC-DPST and its operating modes, including the duty cycle and the mathematical modeling, have been described. The energy-conversion performance is a function of the accumulated pressure, and the pressure in the accumulation tanks may be adjusted in order to achieve the maximum efficiency of the converter. Analyzing the results, it is clear that the desired objective, maximizing the available energy, has been achieved. In summary, several improvements can be highlighted:

- The proposed OWC-DPST can use unidirectional-flow Francis turbines, with their higher performance and efficiency, instead of bidirectional-flow Wells turbines.
- The turbine can operate continuously within its range of maximum efficiency.
- As the converter works at a steady, stable and smooth speed, no special devices designed for peaks and troughs of power are required; for a given energy production, much smaller and less powerful machinery can be used.
- Synchronous generators directly connected to the grid may be used.
- A small wave height, insufficient to be useful in a conventional OWC, would still generate electricity in an OWC-DPST.
- Erosion and fouling of the turbine blades by the impact of entrained water particles would be prevented, leading to lower maintenance costs.
- Turbine noise emanating from the air outlet is inherently attenuated, reducing the environmental impact.

Some disadvantages can also be highlighted:

- Larger converter size and greater cost.
- A more complex operating principle, requiring operating adjustments updated according to the sea state, and hence the unavoidable need for an adaptive control system.
- Increased maintenance costs.

References

[1] Vicinanza, D. et al., The SSG wave energy converter: Performance, status and recent developments, Energies, 5, pp. 193-226, 2012. doi:10.3390/en5020193
[2] Ibañez, P. and Miguélez, F., La energía que viene del mar, Instituto Universitario de Estudios Marítimos, Netbiblo, Coruña, 2009.
[3] Takao, M. and Setoguchi, T., Air turbines for wave energy conversion, International Journal of Rotating Machinery, 2012, Article ID 717398. doi:10.1155/2012/717398
[4] Brito-Melo, A. et al., Analysis of Wells turbine design parameters by numerical simulation of the OWC performance, Ocean Engineering, 29, pp. 1463-1477, 2002.
[5] Alberdi, M. et al., Complementary control of oscillating water column-based wave energy conversion plants to improve the instantaneous power output, IEEE Transactions on Energy Conversion, 26-4, pp. 1021-1032, 2011.
[6] Maeda, H., Performance of an impulse turbine with fixed guide vanes for wave power conversion, Renewable Energy, 17, pp. 533-547, 1999.
[7] Muthukumar, S. et al., On minimizing the fluctuations in the power generated from a wave energy plant, Proceedings IEEE International Electric Machines and Drives Conference, San Antonio, Texas, USA, pp. 178-185, 2005.
[8] Falnes, J. et al., Simulation studies of a double oscillating water column, Fourth International Workshop on Water Waves and Floating Bodies, University of Oslo, Norway, pp. 65-68, 1989.
[9] Falnes, J., Optimum control of oscillation of wave-energy converters, International Journal of Offshore and Polar Engineering, 12-2, pp. 147-155, 2002.
[10] Giménez, J. and Gómez, J., Wind generation using different generators considering their impact on power system, DYNA-Colombia, 169, pp. 95-104, 2011.
[11] Evans, D., Wave-power absorption by oscillating surface pressure distributions, Journal of Fluid Mechanics, 114, pp. 481-499, 1982.
[12] Bakar, B. et al., Mathematical model of sea wave energy in electricity generation, 5th International Power Engineering and Optimization Conference (PEOCO2011), Malaysia, 2011.
[13] Gervelas, R., Trarieux, F. and Patel, M., A time-domain simulator for an oscillating water column in irregular waves at model scale, Ocean Engineering, 38, pp. 1007-1013, 2011.
[14] Thorpe, T., A brief review of wave energy, UK Department of Trade and Industry, May 1999.

R. Borrás-Formoso received his BSc in Marine Engineering in 1984 from the University of A Coruña, his BSc in Electrical Engineering in 1988 from the University of Vigo, Spain, and his PhD in Maritime Engineering in 1997. He is currently a full professor in the Electrical Engineering area at the ETSNM of the University of A Coruña. His research interests are centered on simulation, modeling and forecasting in ocean energies, and on electrical ship propulsion simulation.

R. Ferreiro-García received his BSc in Marine Engineering in 1975 and his PhD in Maritime Sciences in 1986. He has worked as a teacher at the University of A Coruña and as a researcher in fields related to systems engineering and automation applied to renewable energies, with emphasis on ocean energies. He is currently coordinator of research group G0087 at the University of A Coruña.

F. Miguélez-Pose obtained her Licentiateship in Physics in Santiago de Compostela and her Doctor of Science degree in 1990. She became Professor of General Physics in the Faculty of Sciences and is now at the Marine Engineering School. She has worked on programs and projects in condensed matter physics, with emphasis on superconductivity and energy, and has several publications in scientific journals. She is currently Director of the Maritime Studies Institute, Universidade da Coruña.

C. Fernández-Ameal received his BSc in Nautical Science in 1984 and his MSc in Nautical Science in 1989, with postgraduate PhD studies in fracture mechanics and plasticity of structures in 1989-2000. He is currently an Associate Professor of Naval Construction and Ship Theory at the ETSNM of the University of A Coruña, working on the dynamics and stability of vessels and floating bodies in waves.


Enterprise architecture as tool for managing operational complexity in organizations

Arquitectura empresarial como instrumento para gestionar la complejidad operativa en las organizaciones

Martín Dario Arango-Serna a, John Willian Branch-Bedoya b & Jesús Enrique Londoño-Salazar c

a Facultad de Minas, Universidad Nacional de Colombia, Colombia. mdarango@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Colombia. jwbranch@unal.edu.co
c Doctorando en Ingeniería, Facultad de Minas, Universidad Nacional de Colombia, Sede Medellín. jelondono@ucn.edu.co

Received: February 5th, 2014. Received in revised form: March 7th, 2014. Accepted: March 13th, 2014.

Abstract
Different types of companies and organizations around the world, regardless of their size, economic activity, nature or the capital they manage, among other aspects, face diverse daily challenges that must be overcome with agility in order to compete successfully in a globalized world operating under highly dynamic environments. In this sense, it is increasingly relevant that companies, in their internal operation, be well enough prepared to respond efficiently, nimbly and innovatively to the challenges and needs they face. This paper states the importance of information technologies and of the development of an enterprise architecture model as instruments that allow companies to face the challenges related to the complexity present in the organization's operating environment.
Keywords: Enterprise architecture; Organizational complexity; Information technology; Technological capabilities; Business capabilities; Strategic alignment; Management models.

Resumen
Diferentes tipos de empresas alrededor del mundo, sin importar su tamaño, ni su actividad económica, ni su naturaleza y el capital que manejan, entre otros aspectos, enfrentan en su día a día, retos de diferente índole que deben ser atendidos de manera ágil para poder competir de forma exitosa en un mundo globalizado y que opera bajo entornos altamente dinámicos. En este sentido, cada vez toma mayor relevancia el hecho de que, a nivel del funcionamiento interno, las organizaciones deben estar lo suficientemente preparadas para dar respuesta de forma eficiente, ágil e innovadora a los retos y necesidades que se presentan. Este artículo plantea la importancia que representan las tecnologías de información y el desarrollo de un modelo de arquitectura empresarial, como instrumentos que permiten a las empresas afrontar los retos asociados con la complejidad que se presenta en una organización en su entorno operativo.
Palabras clave: Arquitectura empresarial; Complejidad organizacional; Tecnologías de la información; Capacidades tecnológicas; Capacidades de negocio; Alineación estratégica; Modelos de gestión.

1. Introduction

The challenges associated with the complexity that organizations must face in their internal functioning have been studied by several authors, and relate to the concepts of simplicity and agility, which have been gaining relevance as means of dealing with complexity [26]. A company is considered agile when it has the capacity to respond quickly to the challenges it faces, is resourceful, and is able to adapt to its environment [1]. The concept of simplicity in a company, in turn, is associated with processes and technological solutions whose functionalities and procedures are strictly those necessary to cover the specific needs of a

requirement, that are easy to implement, maintain and use, and that can be developed within the established timeframes. In order to adapt and respond effectively to the challenges that complexity implies, organizations need to constantly review the orientation of their business strategies and make the required adjustments with greater agility and effectiveness; these adjustments must be reflected integrally in the relationship between strategy, the business model, operational processes and information technologies (hereinafter IT), a task that traditionally involves great difficulty and complexity. Enterprise architecture (hereinafter EA) can be used as a tool to help consolidate the business strategy, through the

© The authors; licensee Universidad Nacional de Colombia. DYNA 81 (185), pp. 219-226. June, 2014 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online



materialization and putting into practice of the changes the company undergoes and that affect its entire operational structure. The importance of addressing complexity and enterprise architecture in this article rests on the fact that EA is one of the tools or best practices that industries and companies around the world have been adopting in recent years to manage their operational processes. Operational processes bring together different roles and functions of the company: strategy, business processes, the handling of information, information systems and technological infrastructure; all these aspects need to be managed in an integrated manner, and that is where EA has its field of action. From another perspective, the level of knowledge and development regarding EA in companies has not reached the desired degree of penetration and adoption; indeed, because of the different concepts and disciplines it incorporates, it is a subject that presents some degree of difficulty, both in its conception and in implementation initiatives; hence the importance of addressing this topic and proposing different approaches and perspectives. This article begins with an initial approach to the phenomenon of complexity in companies, together with the implications and challenges it poses for their operation (Section 2). Sections 3 and 4 then develop the topics of information technology strategies and strategic alignment, as mechanisms that allow companies to counteract the effects of managing complexity so that they can operate with greater agility and efficiency. Finally, emphasis is placed on enterprise architecture as an instrument that allows companies to face, in an articulated and business-oriented way, the challenges of achieving operational efficiency, by facilitating the alignment between strategy, business aspects, information technologies and operational capabilities. It should also be noted that this article is not an isolated piece of work; on the contrary, it derives from a research process addressing the problem of operational complexity in organizations, a study supported by a systematic literature review and the application of practical cases at the enterprise level.

2. Organizational complexity

The growing and rapid challenges imposed by the market and, in general, by the world economy force companies to operate in a complex, dynamic environment governed by globalization, together with the need to maintain high levels of competitiveness. In addition, organizations must face the challenges and difficulties of their internal operation, which is largely influenced

by the corporate identity and by the established organizational and management model. Authors such as Aiken and Hage (in Hall [2]) state that organizations tend to become increasingly complex due to both internal and external pressures, especially the latter, driven by the dynamism of their changing environment. It thus becomes unavoidable for organizations to consider, within their strategies and management models, how they handle complexity. As Sáez et al. put it, "intellectual and operational resources are insufficient to take charge of complexity, to mitigate it and manage it; this fully affects companies which, in order to fulfill their mission, must confront and manage complexity" (p. 1) [3]. In this sense, organizations must be flexible by nature, so that they can absorb the speed of change imposed by the environment and the challenges inherent in their internal functioning. To face the challenges and demands arising from the phenomenon of complexity, companies must implement strategies and appropriate instruments that allow them to achieve greater business agility. As Londoño [4] states, this can be achieved by fostering the rapid implementation of new business models and by improving business efficiency through correctly managed processes, via a more natural, reliable and timely integration. For Cuenca, Ortiz and Boza, "it is necessary to understand the nature and composition of the business operations that cross the boundaries of the organization, as a fundamental element for initiating and maintaining business relationships" (p. 1) [5]. Each organization tries to be unique and to distinguish itself from its competitors; however, the complexity managed within each of them is not unique, and in general affects them all [6,7]. It is associated with alignment and functionality phenomena involving strategies and the business model, operational processes, the management of information systems (hereinafter IS), technology (van der Raadt et al. [8]; Smith, Watson and Sullivan [9]) and management schemes (Lankhorst [10]). Fig. 1 shows, on its left side, some of the most relevant variables that exert pressure on an organization and generate some level of complexity in its functioning; on the right side, it highlights some of the strategies and approaches a company must undertake to counteract the effects associated with complexity. This effect grows as these variables intertwine and materialize simultaneously, something that happens constantly in everyday business. For an organization to take sound actions against the variables that generate complexity and prevent it from being efficient and dynamic, it is important, almost obligatory, to address the concept of "simplicity or simplification".


Figure 1. Complexity variables vs. business efficiency in an organization

As Sáez [11] states, once the existence of complexity in any system is recognized, ways of dealing with it must be established, understanding that simplification is a concept inseparable from complexity. The challenges associated with simplification are perhaps the most important topics in the study of systems [12]; indeed, the essence of analyzing a system and its complexity has mainly to do with simplification [13]. In a complementary approach, Lankhorst [14] calls this notion of simplicity compositionality, meaning that the most commonly used, almost natural method for dealing with the complexity of systems is the one that distinguishes between their parts and their relationships. When these concepts are brought into the business context, questions and challenges arise about how things could be made simpler across the whole organization, so that greater agility can be achieved in the company's operation.

3. Influence of information technologies

From the perspective of information technologies and the role they play within organizations, it is worth highlighting that, in circumstances of high complexity of the environment and of the organization itself, the proper management of IT becomes increasingly relevant [15,50]. Business processes and the information systems supported by IT play a decisive role in this context [16,18,51]. Authors such as Porter and Millar (in [17]) state that "information technologies have acquired a strategic value for

organizations, since they make it possible to modify the structure of a sector, create new competitive advantages and even originate new businesses that were previously not viable". According to these authors, IT provides competitive advantages by driving leadership strategies, reflected in cost optimization and differentiation. Vernadat [21] states that, in a competitive environment, companies must integrate the efforts of their various functional areas within a common, dynamic framework in which key skills and resources allow the organization to obtain sustainable competitive advantages. Since IT is one of the most significant resources for companies, it should not be conceived as a resource to be aligned, in the traditional sense, with the organization's strategy, but as a fundamental element of the business itself. This highlights the importance for organizations of IT strategies that support the company in achieving its business objectives, becoming a key element in its development and growth [16]. However, IT strategies must remain aligned with the organizational and business strategy, conceived as a permanent balancing act in which IT accompanies and supports the business, and is even capable of anticipating it, becoming a strategic factor that drives the development of the organization. Contrary to what many companies, and people in general, believe, technology strategies go beyond mere fashion or having the latest innovations; rather, they follow (or at least should follow) a logic based on thorough, knowledge-based analysis of the functioning of the organization and its environment, aiming at a technology strategy that helps the company be agile and efficient internally and stand out in the market. The alignment between IT and the business must be seen as a permanent process (a regulating ideal), an idealistic objective that is always present in the organization and on which one must work permanently, but which is never fully achieved [14]. This opens the way to addressing alignment from the enterprise architecture approach.

4. Strategic alignment

Strategic alignment must be understood as the process that guarantees the integration of the company's different areas and projects in order to achieve common objectives [20]. This integration process can be approached from several perspectives and levels [21], for example: (i) process integration (understood as the coordination of business functions and the operational management, control and monitoring of business processes); (ii) integration between applications and information systems (applications, information sources, databases, etc.); (iii) physical integration (in terms of technology


components: equipment, devices, networks, etc.) [22]. Other approaches also consider: (1) integration under a methodological approach that supports decision-making across the whole company, and (2) integration through enterprise modeling (for example, through the use of reference frameworks and the modeling of business services and capabilities) [23]. In the context of this article, the business alignment and integration approach is framed mainly by the relationship between aspects associated with strategy (business and IT) and the operational side (operational and technological capabilities). From a strategic alignment perspective, the following aspects come together: (i) those related to the business strategy and the technology strategy which, in an integrated way, enable enterprise strategic alignment [19]; and (ii) the alignment between operational and technological capabilities [24]. Both aspects, operating in an articulated manner, enable a desirable scenario in which the organization carries out its operational processes efficiently and oriented in the same direction. Fig. 2 represents this scenario, where a natural alignment can be observed between the vertically oriented components: the business context (formed by the business strategy and the operational capability) and, on the other side, the IT context (represented by the IT strategy and the technological capability). The challenge that has always existed for organizations, and on which significant resources are invested every day, is to close the gap in "business-IT alignment", where technology areas are expected to become an integral part of the business strategy, with the corresponding recognition, but also with the commitments, responsibilities and value generation this entails [25,26].
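The four-quadrant reading of Fig. 2 can be made concrete with a small, purely illustrative data sketch. This is an editorial example, not from the paper, and all capability names in it are hypothetical: a business capability is taken as "aligned" only when every technological capability it requires actually exists.

    # Purely illustrative sketch of the Fig. 2 alignment reading (not from
    # the paper; capability names are hypothetical).
    from dataclasses import dataclass, field

    @dataclass
    class BusinessCapability:
        name: str
        required_tech: list[str] = field(default_factory=list)

    available_tech = {"CRM platform", "payments API", "data warehouse"}

    portfolio = [
        BusinessCapability("customer management", ["CRM platform"]),
        BusinessCapability("online sales", ["payments API", "fraud scoring"]),
    ]

    for cap in portfolio:
        missing = [t for t in cap.required_tech if t not in available_tech]
        status = "aligned" if not missing else f"gap: missing {missing}"
        print(f"{cap.name}: {status}")
    # -> customer management: aligned
    # -> online sales: gap: missing ['fraud scoring']

Capability maps of this kind are one simple way to surface business-IT alignment gaps of the type discussed above.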

Regarding management models and industry best practices, authors such as Arango S., Londoño S. and Zapata [27] state that a large part of these models and best practices originate in the transformations and the emergence of new disciplines supported by administrative and management models, such as organizational theory and systems theory, while Scott [28] indicates that in recent decades new fields of knowledge have gained considerable strength, giving rise to new disciplines, concepts and organizational best practices oriented toward information management and the growing evolution of information technologies. Between 1980 and 1995, management models and practices emerged in the market and gradually established themselves within organizations; many of them remain valid and applicable today. Although many variables determined the origin of these best practices, the concepts associated with information and knowledge management were the main drivers of their development and evolution. Among the most representative models are: (i) ABM - Activity Based Management and ABC - Activity Based Costing; (ii) Porter's value chain (structuring the company around satisfying the end customer); and (iii) the theory of constraints (focused on production optimization). Other management tools also emerged in the business world, such as total quality, just-in-time and Kaizen (continuous improvement) [16,28], as well as reengineering, risk management, process management, etc. [29,52]. Fig. 3 displays some of the management models and best practices that companies currently use to support the development of different areas and business processes, although, depending on the sector to which the company belongs, it is normal for them also to use other existing best practices in a complementary way.

[Figure 3 groups these practices into a business focus (RM - Risk Management, PMI - Project Management, QM - Quality Management, BSC - Balanced Scorecard, 6σ - Six Sigma, BPM - Business Process Management, among others), a technology focus (COBIT, CMMI, RISKIT, ITIL, ISO 27000, among others), and EA - Enterprise Architecture spanning both.]

Figure 2. Strategic alignment approach: strategy, business, operations and IT. Source: own elaboration, based on Henderson & Venkatraman [24].

Figure 3. Industry best practices and management models used at the business and IT levels. Source: own elaboration.



It is worth noting that, although all these models emerged to help companies in business and operational management, they also introduce a high degree of complexity, because many of them coexist within the same organization simultaneously, even within the same area. Although models and best practices make significant contributions to improving the efficiency of business management, each has its own field of action, coverage and applicability in specific topics centered on the essence of its conception and the business needs it intends to support. Hence the need for mechanisms to analyze the coexistence, relationship and complementarity schemes among the different models adopted. In Fig. 3, EA is displayed as a management best practice whose scope covers both the business context and IT governance, which enables it to interact with other best practices from both perspectives. In the central part of the same figure, enterprise architecture is represented as a management best practice whose approach has made it possible to address, in an integrated manner, the problems that arise in companies concerning the proper management of information technologies and their relationship with the business. Indeed, it is from the conceptions proposed by this model that other best practices have emerged and some existing ones have been refocused.

Table 1. Enterprise architecture approaches according to different authors

Author / Approach to enterprise architecture
ISO/IEC/IEEE [31]: Fundamental concepts or properties of a system in its environment, embodied in its elements, relationships, and in the principles governing its design and evolution.
TOGAF [32]: A coherent set of principles, methods and models used in the design and representation of the organizational structure, business processes, information systems and infrastructure.
GARTNER [33]: A discipline that proactively and holistically leads the enterprise, responding to disruptive forces by identifying and analyzing the execution of change, with a focus on the vision and the results expected by the business.
LANKHORST [10]: A practice that seeks to describe and control the structure of an organization, its processes, applications, systems and technology in an integrated way.
BERNARD [34]: The analysis, description and documentation of an enterprise in its current and future state, from the perspectives of strategy, business and technology.
SCHEKKERMAN [35]: EA is a complete expression of the enterprise, a master plan that acts as a collaborating force between business planning aspects (vision, strategies, goals, governance principles), business operation aspects (organizational structure, processes, products and services, information) and technology aspects (information systems, databases and technological infrastructure).
ROSS [29]: The organizing logic of business processes and IT capabilities that reflects the integration and standardization of the requirements of the company's operating model.

5. Enterprise architecture

The concept of enterprise architecture emerged with an article by Zachman [30], in which the author referred to the concept of an information system, regardless of its size, that would make it possible to align and justify investments in technology. For Zachman, in Arango et al., "business success and the costs it entails depend increasingly on information systems, which require an approach and a discipline for their management" (p. 1) [27]. Zachman's vision of the agility and value that IT could bring to the business can be developed more effectively through the concept of a holistic systems architecture. Continuing with Arango et al. [27], the concept of enterprise architecture emerges as a discipline that, within a company, addresses business and IT aspects in an integrated manner, with the purpose of guaranteeing alignment between initiatives, objectives, strategic goals, business processes and their supporting systems. From the earliest stages of the development and evolution of EA, its focus was associated more intensely with the technological domains and to a lesser extent with the business and organizational domains [36,40,41]; at the technological level the emphasis is

on information systems, information itself and the underlying technologies that support those resources [37,38,39]. Table 1 describes some of the definitions and approaches to the concept of EA proposed by different authors; from these definitions, a clear idea of the conception and scope of enterprise architecture can be formed. The objective EA has always pursued is the alignment of IT strategies and objectives with business strategies and objectives. It is from this conception that the investments companies made in IT areas began to change, ceasing to be conceived as a cost and beginning to be seen as an investment [1,42]. This is compounded by the continuous and rapid development of information technologies in their different fields: greater data processing and storage capacities, reductions in hardware and software costs [43], the rise and penetration of the internet in all spheres of society, the expansion of mobile telephony, the evolution of personal computers


toward laptops, tablets and smartphones, cloud processing schemes and new techniques for processing information (data mining, business intelligence, etc.), bringing about a total transformation in the functioning of companies, reflected in greater demands in terms of process optimization [44], knowledge management, systems and information integration [45] and, in general, the provision of much more specialized services offered to the market. The current conception of enterprise architecture has been shifting with respect to its original formulation; today, EA participates more in relevant aspects of the business perspective (strategy, processes, efficiency and innovation) and in how alignment with IT aspects is established. Business architecture (one of the domains or views of enterprise architecture) has been gaining strength and evolving in recent years, becoming a propitious space for EA to play a more active role through different actions: accompanying the execution of strategic programs and projects, defining organizational capability maps, fostering a better relationship and integration among "strategy-projects-processes-technology", supporting the orientation of IT governance and, above all, accompanying the development and execution of the business strategy in its continuous transformation process. Some of the most significant challenges companies currently face and which, according to Gravesen [46], must be attended to regardless of the degree of complexity involved, are: (i) the speed of change is ever faster; (ii) an exponential increase in information density; (iii) demands for increasingly personalized goods and services; (iv) the traditional barriers between industries keep crumbling; and (v) changes in the concept of diversification and organizational growth. Complementing this list, and depending on the sector in which companies operate, there are also regulatory issues (new laws and standards, or modifications to existing ones) that are generally mandatory [47,10], the need to maximize the efficiency of business operations while reducing risk, and the exploitation of innovation capabilities based on existing resources and capabilities [49]. Mergers and acquisitions are another frequent event nowadays and have become one of the most difficult challenges for an organization to address, seen from different perspectives but especially from the regulatory, operational and technological ones [48], which together must guarantee the functioning and start-up of a new business structure.

In light of how companies address the challenges they face, enterprise architecture supports the organization beyond circumstantial needs, allowing business processes and capabilities to be progressively strengthened so as to be better prepared for new and constant challenges.

6. Conclusions

Through the analysis and interpretation of the positions of the different authors studied, there is a widespread conception of the phenomenon of growing complexity that companies increasingly face in carrying out their functions, and of the need for them to adopt strategies and best practices that allow them to deal with this phenomenon in order to remain competitive. In this sense, the strategies and instruments companies adopt must allow the organization to advance progressively toward an efficient operating scheme, with a close relationship and synergy between strategic business aspects and those associated with operational capabilities (processes, organizational structure and information technologies). The adoption of an enterprise architecture model, together with other best practices, is considered by many authors and industries around the world to be a necessary tool for companies to face the challenge of managing operational processes with agility, efficiency and in an integrated fashion. Relevant future work that could complement the subject of this article from other perspectives includes: (i) developing studies associated with the concepts of simplicity and business efficiency; (ii) developing an enterprise architecture model for complex organizations (e.g., a business group composed of 'n' companies); (iii) studying the effects on a company of the coexistence of multiple management models and best practices; and (iv) proposing a solution architecture approach as a mechanism to reduce the gap between enterprise architecture and the implementation of technological solutions.

Acknowledgements

This article derives from the results of the research project "Modelo funcional de integración de la arquitectura empresarial de N entidades alrededor de un grupo empresarial. Un enfoque de orientación a servicios y modelado de redes de capacidades", research line: Enterprise Modeling, funded by the R&D&I Group in Industrial-Organizational Logistics "GICO" of the Universidad Nacional de Colombia - Medellín Campus, Facultad de Minas.

224



References

[1] Sena, J., Coget, J.-F. and Shani, A., "Designing for Agility as an Organizational Capability: Learning from a Software Development Firm", The International Journal of Knowledge, Culture and Change Management, vol. 9 (5), pp. 1-24, 2009.
[2] Hall, R. H., "Organizaciones, estructura y procesos", Mexico, Prentice Hall, 1983.
[3] Sáez, V. F., Garcia, O., Palao, J. and Rojo, P., "Innovación Tecnológica en las empresas - Temas básicos [online]", Madrid, E.T.S. de Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, 2003 [accessed December 2013], ch. 9, Gestión de la complejidad en la empresa. Available at: http://www.gsi.dit.upm.es/~fsaez/intl/indicecontenidos.html.
[4] Londoño, J., "Arquitectura de Tecnología en la mira. La arquitectura empresarial, un doble reto [online]", Sistemas - ACIS, ed. 93, Jul.-Sep. 2005 [accessed March 2013]. Available at: http://www.acis.org.co/index.php?id=539.
[5] Cuenca, G. L., Ortiz, B. A. and Boza, G. A., "Arquitectura de Empresa. Visión General", in: Congreso de Ingeniería de Organización (IX, 2005, Gijón, Spain), Sistemas de información, Gijón, ADINGOR, p. 10, 2005.
[6] Wilbanks, L., "Using Enterprise Architecture to Upgrade Old IT Systems", IT Professional - CIO Corner, vol. 10 (2), pp. 63-64, March-April, 2008.
[7] Niemi, E. and Pekkola, S., "Enterprise Architecture Quality Attributes: A Case Study", proceedings of the 46th Hawaii International Conference on System Sciences (HICSS), pp. 3878-3887, 2013.
[8] van der Raadt, B., Bonnet, M., Schouten, S. and van Vliet, H., "The relation between EA effectiveness and stakeholder satisfaction", The Journal of Systems and Software, vol. 83 (10), pp. 1954-1969, 2010.
[9] Smith, H. A., Watson, R. T. and Sullivan, P., "Delivering an Effective Enterprise Architecture at Chubb Insurance", MIS Quarterly Executive, vol. 11 (2), pp. 75-85, 2012.
[10] Lankhorst, M., "Enterprise Architecture at Work: Modelling, Communication and Analysis", 1st ed., Berlin, Springer-Verlag, 2009.
[11] Sáez, V. F., "Complejidad y Tecnologías de Información [online]", 1st ed., Madrid, F.R.S. para el Desarrollo de las Telecomunicaciones, fundetel ETSIT-UPM, 2009 [accessed December 2013]. Available at: http://www.gsi.dit.upm.es/~fsaez/intl/libro_complejidad.pdf.
[12] Klir, G. J., "Complexity: Some General Observations", Systems Research and Behavioral Science, vol. 2 (2), pp. 131-140, 2011.
[13] Weinberg, G. M., "An Introduction to General Systems Thinking", New York, Dorset House Publishing, p. 320, 2011.
[14] Lankhorst, M., "Enterprise Architecture at Work: Modelling, Communication and Analysis", 3rd ed., Berlin-Heidelberg, Springer-Verlag, p. 356, 2013.
[15] Porter, M. E. and Millar, V. E., "How Information Gives You Competitive Advantage", Harvard Business Review, vol. 63 (4), pp. 149-160, 1985.
[16] Davenport, T. H., Harris, J. G., De Long, D. W. and Jacobson, A. L., "Data to Knowledge to Results: Building an Analytic Capability", California Management Review, vol. 43 (2), pp. 117-138, 2001.
[17] Arango, S. M., Londoño, J. E. and Alvarez, U. K., "Capacidades de negocio en el contexto empresarial", Revista Virtual Universidad Católica del Norte [online], vol. 35, pp. 5-27, 2012 [accessed December 12, 2013]. Available at: http://revistavirtual.ucn.edu.co/.
[18] Pulkkinen, M., "Systemic Management of Architectural Decisions in Enterprise Architecture Planning: Four Dimensions and Three Abstraction Levels", proceedings of the 39th Hawaii International Conference on System Sciences, pp. 1-9, 2006.
[19] Grant, R. M., "The Resource-Based Theory of Competitive Advantage: Implications for Strategy Formulation", California Management Review, vol. 33 (3), pp. 114-136, 1991.
[20] Chen, D., Doumeingts, G. and Vernadat, F., "Architectures for enterprise integration and interoperability: Past, present and future", Computers in Industry, vol. 59 (7), pp. 647-659, 2008.
[21] Vernadat, F. B., "Enterprise Modeling and Integration: Principles and Applications", London, Chapman & Hall, p. 510, 1996.
[22] Kosanke, K. and Nell, J. G., "Enterprise Engineering and Integration: Building International Consensus", proceedings of ICEIMT '97, International Conference on Enterprise Integration and Modeling Technology, Springer, Torino, pp. 235-243, 1997.
[23] Noran, O., "Building a support framework for enterprise integration", Computers in Industry, vol. 64 (1), pp. 29-40, 2013.
[24] Henderson, J. C. and Venkatraman, N., "Strategic alignment: leveraging information technology for transforming organizations", IBM Systems Journal, vol. 32 (1), pp. 4-16, 1993.
[25] Hedman, J. and Kalling, T., "IT and Business Models: Concepts and Theories", Malmö, Liber Ekonomi, p. 288, 2002.
[26] Mustafa, R. and Werthner, H., "A Knowledge Management Perspective on Business Models", The International Journal of Knowledge, Culture and Change Management, vol. 8 (5), pp. 3-14, 2008.
[27] Arango, S. M., Londoño, J. E. and Zapata, C. J., "Arquitectura Empresarial - Una Visión General", Revista Ingenierías Universidad de Medellín, vol. 9 (16), pp. 101-111, 2010.
[28] Bernard, S., "An Introduction to Enterprise Architecture", 2nd ed., Bloomington, AuthorHouse, p. 356, 2005.
[29] Ross, J. W., Weill, P. and Robertson, D. C., "Enterprise Architecture as Strategy: Creating a Foundation for Business Execution", Massachusetts, Harvard Business School Press, p. 256, 2006.
[30] Zachman, J., "A Framework for Information Systems Architecture", IBM Systems Journal, vol. 26 (3), pp. 454-470, 1987.
[31] ISO/IEC/IEEE, "Systems and software engineering - Architecture description", ISO/IEC/IEEE FDIS 42010, pp. 1-46, 2011.
[32] The Open Group, "The Open Group Architecture Framework (TOGAF), Version 9.1 [online]", 2012 [accessed October 2013]. Available at: http://www.opengroup.org/togaf/.
[33] Gartner, "IT Glossary [online]", Gartner Research [accessed December 2013]. Available at: http://www.gartner.com/itglossary/enterprise-architecture-ea/.
[34] Bernard, S., "An Introduction to Enterprise Architecture", 2nd ed., Bloomington, AuthorHouse, p. 351, 2005.
[35] Schekkerman, J., "Enterprise Architecture Good Practices Guide: How to Manage the Enterprise Architecture Practice", Bloomington, Trafford Publishing, p. 388, 2008.
[36] Zachman, J. A., "Enterprise architecture: The issue of the century", Database Programming and Design, vol. 10, pp. 44-53, 1997.
[37] Niemi, E. and Pekkola, S., "Enterprise Architecture Quality Attributes: A Case Study", proceedings of the 46th Hawaii International Conference on System Sciences (HICSS), pp. 3878-3887, 2013.
[38] Armour, F., Kaisler, S. and Huizinga, E., "Business and Enterprise Architecture: Processes, Approaches and Challenges", proceedings of the 46th Hawaii International Conference on System Sciences, pp. 1-12, 2012.
[39] Giachetti, R. E., "A Flexible Approach to Realize an Enterprise Architecture", Procedia Computer Science, proceedings of the Conference on Systems Engineering Research (CSER), vol. 8, pp. 147-152, 2012.
[40] Alter, S., "A General, Yet Useful Theory of Information Systems", Communications of the Association for Information Systems, vol. 1 (13), pp. 2-70, 1999.



[41] Lagerström, R., Sommestad, T., Buschle, M. and Ekstedt, M., "Enterprise Architecture Management's Impact on Information Technology Success", proceedings of the 44th Hawaii International Conference on System Sciences, pp. 1-10, 2011.
[42] Mathiassen, L. and Pries-Heje, J., "Business agility and diffusion of information technology", European Journal of Information Systems, vol. 15 (2), pp. 116-122, 2006.
[43] Sessions, R., "The IT Complexity Crisis: Danger and Opportunity", ObjectWatch, pp. 1-24, 2009.
[44] Hugos, M. H., "The value of IT agility", Computerworld, vol. 40 (28), pp. 22-24, 2010.
[45] Overby, E., Bharadwaj, A. and Sambamurthy, V., "Enterprise agility and the enabling role of Information Technology", European Journal of Information Systems, vol. 15 (2), pp. 120-131, 2006.
[46] Gravesen, J. K., "Reasons for resistance to enterprise architecture and ways to overcome it", IBM developerWorks, pp. 1-18, 2012.
[47] BIAN - Banking Industry Architecture Network, "Why standards are key [online]", BIAN Newsletter, pp. 1-9, April 2012 [accessed December 5, 2013]. Available at: http://bian.org/wpcontent/uploads/2012/12/BIAN_Newsletter_December_2012.pdf.
[48] de Vries, M. and van Rensburg, A. C., "Enterprise Architecture - New Business Value Perspectives", South African Journal of Industrial Engineering, vol. 19 (1), pp. 1-16, 2008.
[49] Smith, H. A., Watson, R. T. and Sullivan, P., "Delivering an Effective Enterprise Architecture at Chubb Insurance", MIS Quarterly Executive, vol. 11 (2), pp. 75-85, 2012.
[50] Ross, J. W. and Beath, C. M., "New approaches to IT investment", Sloan Management Review, vol. 43 (2), pp. 51-59, 2002.
[51] Chan, Y. E., "IT value: the great divide between qualitative and quantitative and individual and organizational measures", Journal of Management Information Systems, vol. 16 (4), pp. 225-261, 2000.

[52] Fraguela, J. A., Carral, L., Iglesias, G., Castro, A. and Rodríguez, M. J., "La integración de los sistemas de gestión. Necesidad de una nueva cultura empresarial", Revista DYNA, Universidad Nacional de Colombia, vol. 78 (167), pp. 44-49, 2011.

Martín Darío Arango-Serna graduated as an Industrial Engineer in 1991 and as a Specialist in Finance, Project Formulation and Evaluation in 1993 from the Universidad de Antioquia; he became a Specialist in University Teaching in 2007 at the Universidad Politécnica de Valencia (Spain), received an MSc in Systems Engineering in 1997 from the Universidad Nacional de Colombia, Medellín Campus, and a PhD in Industrial Engineering in 2001 from the Universidad Politécnica de Valencia (Spain). He is a full professor with exclusive dedication in the Departamento de Ingeniería de la Organización, Facultad de Minas, Universidad Nacional de Colombia, a Senior Researcher under the 2013 Colciencias classification, and director of the Industrial-Organizational Logistics R&D&I Group "GICO", an A1-rated group.

John Willian Branch-Bedoya graduated as a Mining and Metallurgy Engineer and received an MSc in Systems Engineering and a PhD in Engineering from the Universidad Nacional de Colombia, Medellín Campus. He is currently an associate professor with exclusive dedication in the Departamento de Ciencias de la Computación y la Decisión, Facultad de Minas, Universidad Nacional de Colombia, and has served as Dean of the Facultad de Minas since June 2010. He is a Senior Researcher under the 2013 Colciencias classification.

Jesús Enrique Londoño-Salazar graduated as a Systems Engineer in 1994 and as a Specialist in University Quality Management in 1999, received a Master's in Electronic Commerce in 2004, and became a Specialist in Business Administration in 2011. He is currently a doctoral candidate in Engineering (Systems and Informatics) at the Universidad Nacional de Colombia, Medellín, Colombia. Since 1995 he has worked for the Bancolombia S.A. group, Colombia, in several information technology areas (infrastructure and development); for the last eight years he has served there as an Enterprise and Solution Architect. He holds industry certifications in ITIL v3, COBIT 4.0, CISA and TOGAF 9 (Parts 1 and 2).



DYNA 81 (185), June 2014, is an edition consisting of 350 printed issues and 300 CDs (electronic versions), which finished printing in June 2014 at Todograficas Ltda., Medellín, Colombia. The cover was printed on Propalcote C1S 250 g, the interior pages on Hanno Mate 90 g. The fonts used are Times New Roman and Imprint MT Shadow.


CONTENTS

Editorial: Correct citation in DYNA and anti-plagiarism editorial policy
Juan D. Velásquez

Letter to the Editor: Review Papers
Carlos Andrés Ramos Paja

Phase transformations in air plasma-sprayed yttria-stabilized zirconia thermal barrier coatings
Julián D. Osorio, Adrián Lopera-Valle, Alejandro Toro & Juan P. Hernández-Ortiz

Remote laboratory prototype for automation of industrial processes and communications tests
Sebastián Castrillón-Ospina, Luz Inés Hincapie & Germán Zapata-Madrigal

Potential for geologically active faults, Department of Antioquia, Colombia
Luis Hernán Sánchez-Arredondo & Orlando Giraldo-Bolivar

Static and dynamic task mapping onto network on chip multiprocessors
Freddy Bolaños-Martínez, José Edison Aedo & Fredy Rivera-Vélez

Effect of heating systems in litter quality in broiler facilities in winter conditions
Ricardo Brauer-Vigoderis, Ilda de Fátima Ferreira-Tinôco, Héliton Pandorfi, Marcelo Bastos-Cordeiro, Jalmir Pinheiro de Souza-Júnior & Maria Clara de Carvalho-Guimarães

The dynamic model of a four control moment gyroscope system
Eugenio Yime-Rodríguez, César Augusto Peña-Cortés & William Mauricio Rojas-Contreras

Design and construction of low shear laminar transport system for the production of nixtamal corn dough
Jorge Alberto Ortega-Moody, Eduardo Morales-Sánchez, Evelina Berenice Mercado-Pedraza, José Gabriel Ríos-Moreno & Mario Trejo-Perea

Dynamic stability of slender columns with semi-rigid connections under periodic axial load: theory
Oliver Giraldo-Londoño & J. Darío Aristizábal-Ochoa

Dynamic stability of slender columns with semi-rigid connections under periodic axial load: verification and examples
Oliver Giraldo-Londoño & J. Darío Aristizábal-Ochoa

Polyhydroxyalkanoate production from unexplored sugar substrates
Alejandro Salazar, María Yepes, Guillermo Correa & Amanda Mora

Straight-Line Conventional Transient Pressure Analysis for Horizontal Wells with Isolated Zones
Freddy Humberto Escobar, Alba Rolanda Meneses & Liliana Marcela Losada

Creative experience in engineering design: the island exercise
Vicente Chulvi, Javier Rivera & Rosario Vidal

Calibrating a photogrammetric digital frame sensor using a test field
Benjamín Arias-Pérez, Óscar Cuadrado-Méndez, Pilar Quintanilla, Javier Gómez-Lahoz & Diego González-Aguilera

Testing the efficiency market hypothesis for the Colombian stock market
Juan Benjamín Duarte-Duarte, Juan Manuel Mascareñas Pérez-Iñigo & Katherine Julieth Sierra-Suárez

Modeling of a simultaneous saccharification and fermentation process for ethanol production from lignocellulosic wastes by Kluyveromyces marxianus
Juan Esteban Vásquez, Juan Carlos Quintero & Silvia Ochoa-Cáceres

Simulation of a stand-alone renewable hydrogen system for residential supply
Martín Hervello, Víctor Alfonsín, Ángel Sánchez, Ángeles Cancela & Guillermo Rey

Conceptual framework language - CFL -
Sandro J. Bolaños-Castro, Rubén González-Crespo, Victor H. Medina-García & Julio Barón-Velandia

Flower wastes as a low-cost adsorbent for the removal of acid blue 9
Ana María Echavarria-Alvarez & Angelina Hormaza-Anaguano

Discrete Particle Swarm Optimization in the numerical solution of a system of linear Diophantine equations
Iván Amaya, Luis Gómez & Rodrigo Correa

Some recommendations for the construction of walls using adobe bricks
Miguel Ángel Rodríguez-Díaz, Belkis Saroza-Horta, Pedro Nolasco Ruiz-Sánchez, Ileana Julia Barroso-Valdés, Fernando Ariznavarreta-Fernández & Felipe González-Coto

Thermodynamic analysis of R134a in an Organic Rankine Cycle for power generation from low temperature sources
Fredy Vélez, Farid Chejne & Ana Quijano

Hydro-meteorological data analysis using OLAP techniques
Néstor Darío Duque-Méndez, Mauricio Orozco-Alzate & Jorge Julián Vélez

Monitoring and groundwater/gas sampling in sands densified with explosives
Carlos A. Vega-Posada, Edwin F. García-Aristizábal & David G. Zapata-Medina

Characterization of adherence for Ti6Al4V films RF magnetron sputter grown on stainless steels
Carlos Mario Garzón, José Edgar Alfonso & Edna Consuelo Corredor

UV-vis in situ spectrometry data mining through linear and non linear analysis methods
Liliana López-Kleine & Andrés Torres

A refined protocol for calculating air flow rate of naturally-ventilated broiler barns based on CO2 mass balance
Luciano Barreto-Mendes, Ilda de Fatima Ferreira-Tinoco, Nico Ogink, Robinson Osorio-Hernandez & Jairo Alexander Osorio-Saraz

Use of a multi-objective teaching-learning algorithm for reduction of power losses in a power test system
Miguel A. Medina, Juan M. Ramirez, Carlos A. Coello & Swagatam Das

Bioethanol production by fermentation of hemicellulosic hydrolysates of african palm residues using an adapted strain of Scheffersomyces stipitis
Frank Carlos Herrera-Ruales & Mario Arias-Zabala

Making Use of Coastal Wave Energy: A proposal to improve oscillating water column converters
Ramón Borrás-Formoso, Ramón Ferreiro-García, Fernanda Miguélez-Pose & Cándido Fernández-Ameal

Enterprise Architecture as Tool For Managing Operational Complexity in Organizations
Martín Dario Arango-Serna, John Willian Branch-Bedoya & Jesús Enrique Londoño-Salazar

DYNA

Publication admitted to the Sistema Nacional de Indexación y Homologación de Revistas Especializadas CT+I - PUBLINDEX, Category A1

"If this journal awakens in some readers a curiosity for scientific research, our work will be fulfilled." Joaquín Vallejo, 1933. Founder

