DYNA Edition 192 - August 2015


DYNA Journal of the Facultad de Minas, Universidad Nacional de Colombia - Medellin Campus



DYNA

http://dyna.medellin.unal.edu.co/

DYNA is an international journal published since 1933 by the Facultad de Minas, Universidad Nacional de Colombia, Medellín Campus. DYNA publishes peer-reviewed scientific articles covering all aspects of engineering. Our objective is the dissemination of original, useful and relevant research presenting new knowledge about theoretical or practical aspects of methodologies and methods used in engineering, or leading to improvements in professional practice. All conclusions presented in the articles must be based on the current state of the art and supported by a rigorous analysis and a balanced appraisal. The journal publishes scientific and technological research articles, review articles and case studies. DYNA publishes articles in the following areas: Organizational Engineering; Geosciences and the Environment; Mechatronics; Civil Engineering; Systems and Informatics; Bio-engineering; Materials and Mines Engineering; Chemistry and Petroleum; and other areas related to engineering.

Publication Information

DYNA (ISSN 0012-7353, printed; ISSN 2346-2183, online) is published by the Facultad de Minas, Universidad Nacional de Colombia, with a bimonthly periodicity (February, April, June, August, October, and December). Circulation License Resolution 000584 of 1976 from the Ministry of the Government.

Contact information
Web page: http://dyna.unalmed.edu.co
E-mail: dyna@unal.edu.co
Mail address: Revista DYNA, Facultad de Minas, Universidad Nacional de Colombia, Medellín Campus, Carrera 80 No. 65-223, Bloque M9 - Of. 107, Medellín - Colombia
Telephone: (574) 4255068
Fax: (574) 4255343

© Copyright 2014. Universidad Nacional de Colombia. The complete or partial reproduction of texts for educational purposes is permitted, provided that the source is duly cited, unless indicated otherwise.

Indexing and Databases

DYNA is admitted in:
The National System of Indexation and Homologation of Specialized Journals CT+I - PUBLINDEX, Category A1
Science Citation Index Expanded
Journal Citation Reports - JCR
Science Direct
SCOPUS
Chemical Abstracts - CAS
Scientific Electronic Library Online - SciELO
GEOREF
PERIÓDICA Data Base
Latindex
Actualidad Iberoamericana
RedALyC - Scientific Information System
Directory of Open Access Journals - DOAJ
PASCAL
CAPES
UN Digital Library - SINAB
EBSCOhost Research Databases

Notice
All statements, methods, instructions and ideas are the sole responsibility of the authors and do not necessarily represent the view of the Universidad Nacional de Colombia. The publisher does not accept responsibility for any injury and/or damage arising from the use of the content of this journal. The concepts and opinions expressed in the articles are the exclusive responsibility of the authors.

Publisher's Office
Juan David Velásquez Henao, Director
Mónica del Pilar Rada T., Editorial Coordinator
Catalina Cardona A., Editorial Assistant
Byron Llano V., Editorial Assistant
Amilkar Álvarez C., Diagrammer
Landsoft S.A., IT

Institutional Exchange Request
DYNA may be requested as an institutional exchange through the e-mail canjebib_med@unal.edu.co or at the postal address: Biblioteca Central "Efe Gómez", Universidad Nacional de Colombia, Sede Medellín, Calle 59A No 63-20, Teléfono: (57+4) 430 97 86, Medellín - Colombia.

Reduced Postal Fee: Tarifa Postal Reducida No. 2014-287, 4-72 La Red Postal de Colombia, expires Dec. 31st, 2015.




COUNCIL OF THE FACULTAD DE MINAS

Dean: Pedro Nel Benjumea Hernández, PhD
Vice-Dean: Ernesto Pérez González, PhD
Vice-Dean of Research and Extension: Santiago Arango Aramburo, PhD
Director of University Services: Carlos Alberto Graciano, PhD
Academic Secretary: Francisco Javier Díaz Serna, PhD
Representatives of the Curricular Area Directors: Luis Augusto Lara Valencia, PhD; Wilfredo Montealegre Rubio, PhD
Representative of the Basic Units of Academic-Administrative Management: Gloria Inés Jiménez Gutiérrez, MSc
Professor Representative: Jaime Ignacio Vélez Upegui, PhD
Delegate of the University Council: León Restrepo Mejía, PhD

FACULTY EDITORIAL BOARD

Dean: Pedro Nel Benjumea Hernández, PhD
Vice-Dean of Research and Extension: Santiago Arango Aramburo, PhD
Members: Oscar Jaime Restrepo Baena, PhD; Juan David Velásquez Henao, PhD; Jaime Aguirre Cardona, PhD; Mónica del Pilar Rada Tobón, MSc

JOURNAL EDITORIAL BOARD

Editor-in-Chief: Juan David Velásquez Henao, PhD, Universidad Nacional de Colombia, Colombia

Editors:
George Barbastathis, PhD, Massachusetts Institute of Technology, USA
Tim A. Osswald, PhD, University of Wisconsin, USA
Juan De Pablo, PhD, University of Wisconsin, USA
Hans Christian Öttinger, PhD, Swiss Federal Institute of Technology (ETH), Switzerland
Patrick D. Anderson, PhD, Eindhoven University of Technology, The Netherlands
Igor Emri, PhD, Associate Professor, University of Ljubljana, Slovenia
Dietmar Drummer, PhD, Institute of Polymer Technology, University Erlangen-Nürnberg, Germany
Ting-Chung Poon, PhD, Virginia Polytechnic Institute and State University, USA
Pierre Boulanger, PhD, University of Alberta, Canada
Jordi Payá Bernabeu, PhD, Instituto de Ciencia y Tecnología del Hormigón (ICITECH), Universitat Politècnica de València, España
Javier Belzunce Varela, PhD, Universidad de Oviedo, España
Luis Gonzaga Santos Sobral, PhD, Centro de Tecnología Mineral - CETEM, Brasil
Agustín Bueno, PhD, Universidad de Alicante, España
Henrique Lorenzo Cimadevila, PhD, Universidad de Vigo, España
Mauricio Trujillo, PhD, Universidad Nacional Autónoma de México, México
Carlos Palacio, PhD, Universidad de Antioquia, Colombia
Jorge Garcia-Sucerquia, PhD, Universidad Nacional de Colombia, Colombia
Juan Pablo Hernández, PhD, Universidad Nacional de Colombia, Colombia
John Willian Branch Bedoya, PhD, Universidad Nacional de Colombia, Colombia
Enrique Posada, MSc, INDISA S.A., Colombia
Oscar Jaime Restrepo Baena, PhD, Universidad Nacional de Colombia, Colombia
Moisés Oswaldo Bustamante Rúa, PhD, Universidad Nacional de Colombia, Colombia
Hernán Darío Álvarez, PhD, Universidad Nacional de Colombia, Colombia
Jaime Aguirre Cardona, PhD, Universidad Nacional de Colombia, Colombia



DYNA 82 (192), August, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

CONTENTS

Editorial: Indicators of the DYNA journal
Juan David Velásquez Henao

How to promote distributed resource supply in a Colombian microgrid with economic mechanism?: System dynamics approach
Adriana Arango-Manrique, Sandra Ximena Carvajal-Quintero & Camilo Younes-Velosa

Loads characterization using the instantaneous power tensor theory

Odair Augusto Trujillo-Orozco, Yeison Alberto Garcés-Gómez, Armando Jaime Ustariz-Farfán & Eduardo Antonio Cano-Plata

Analysis of voltage sag compensation in distribution systems using a multilevel DSTATCOM in ATP/EMTP

Herbert Enrique Rojas-Cubides, Audrey Soley Cruz-Bernal & Harvey David Rojas-Cubides

Estimating the produced power by photovoltaic installations in shaded environments

Carlos Andrés Ramos-Paja, Andrés Julián Saavedra-Montes & Luz Adriana Trejos

Effects on lifetime of low voltage conductors due to stationary power quality disturbances

Ivan Camilo Duran-Tovar, Fabio Andrés Pavas-Martínez & Oscar German Duarte-Velasco

Voltage regulation in a power inverter using a quasi-sliding control technique

Nicolás Toro-García, Yeison Alberto Garcés-Gómez & Fredy Edimer Hoyos-Velasco

Distributed generation placement in radial distribution networks using a bat-inspired algorithm

John Edwin Candelo-Becerra & Helman Enrique Hernández-Riaño

Representation of electric power systems by complex networks with applications to risk vulnerability assessment

Gabriel Jaime Correa-Henao & José María Yusta-Loyo

Vulnerability assessment of power systems to intentional attacks using a specialized genetic algorithm

Laura Agudelo, Jesús María López-Lezama & Nicolás Muñoz-Galeano

Spinning reserve analysis in a microgrid

Luis Ernesto Luna-Ramírez, Horacio Torres-Sánchez & Fabio Andrés Pavas-Martínez

Methodology for relative location of voltage sag source using voltage measurements only
Jairo Blanco-Solano, Johann F. Petit-Suárez & Gabriel Ordóñez-Plata

An epidemiological approach to assess risk factors and current distortion incidence on distribution networks

Miguel Fernando Romero, Luis Eduardo Gallego & Fabio Andrés Pavas

Generalities about design and operation of microgrids

Juan Manuel Rey-López, Pedro Pablo Vergara-Barrios, Germán Alfonso Osma-Pinto & Gabriel Ordóñez-Plata

Energy considerations of social dwellings in Colombia according to the NZEB concept

Germán Alfonso Osma-Pinto, David Andrés Sarmiento-Nova, Nelly Catherine Barbosa-Calderón & Gabriel Ordóñez-Plata

Measurement-based exponential recovery load model: Development and validation
Luis Rodríguez-García, Sandra Pérez-Londoño & Juan Mora-Flórez

Complete power distribution system representation and state determination for fault location

Andres Felipe Panesso-Hernández, Juan Mora-Flórez & Sandra Pérez-Londoño

The impact of supply voltage distortion on the harmonic current emission of non-linear loads

Ana Maria Blanco, Sergey Yanchenko, Jan Meyer & Peter Schegner

Design and construction of a reduced scale model to measure lightning induced voltages over inclined terrain

Edison Soto-Ríos, Ernesto Pérez-González & Javier Herrera-Murcia

Modeling dynamic procurement auctions of standardized supply contracts in electricity markets including bidders adaptation
Henry Camilo Torres-Valderrama & Luis Eduardo Gallego-Vega

A new method to characterize power quality disturbances resulting from expulsion and current-limiting fuse operation

Neil A. Ortiz-Silva, Jairo Blanco-Solano, Johann F. Petit-Suárez, Gabriel Ordóñez-Plata & Víctor Barrera-Núñez



Time-varying harmonic analysis of electric power systems with wind farms, by employing the possibility theory

Andrés Arturo Romero-Quete, Gastón Orlando Suvire, Humberto Cassiano Zini & Giuseppe Rattá

Water quality modeling of the Medellin river in the Aburrá Valley

Lina Claudia Giraldo-B, Carlos Alberto Palacio, Rubén Molina & Rubén Alberto Agudelo

Compressive sensing: A methodological approach to an efficient signal processing

Evelio Astaiza-Hoyos, Pablo Emilio Jojoa-Gómez & Héctor Fabio Bermúdez-Orozco

Settlement analysis of friction piles in consolidating soft soils

Juan F. Rodríguez-Rebolledo, Gabriel Y. Auvinet Guichard & Hernán E. Martínez-Carvajal

Influence of biomineralization on a profile of a tropical soil affected by erosive processes
Yamile Valencia-González, José de Carvalho-Camapum & Luis Augusto Lara-Valencia

Analysis of the investment costs in municipal wastewater treatment plants in Cundinamarca

Juan Pablo Rodríguez-Miranda, César Augusto García-Ubaque & Juan Carlos Penagos-Londoño

Fast pyrolysis of biomass: A review of relevant aspects. Part I: Parametric study

Jorge Iván Montoya, Farid Chejne-Janna & Manuel Garcia-Pérez

Fortran application to solve systems from NLA using compact storage

Wilson Rodríguez-Calderón & Myriam Rocío Pallares-Muñoz

Factors involved in the perceived risk of construction workers

Ignacio Rodríguez-Garzón, Myriam Martínez-Fiestas, Antonio Delgado-Padial & Valeriano Lucas-Ruiz

Cost estimation in software engineering projects with web components development

Javier de Andrés, Daniel Fernández-Lanvin & Pedro Lorca

Our cover
Image alluding to the article: Influence of biomineralization on a profile of a tropical soil affected by erosive processes
Authors: Yamile Valencia-González, José de Carvalho-Camapum & Luis Augusto Lara-Valencia



DYNA 82 (192), August, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

CONTENIDO

Editorial: Indicadores de la Revista DYNA
Juan David Velásquez Henao

Cómo promover el suministro de los recursos distribuidos en una microrred colombiana con mecanismos económicos?: Enfoque con dinámica de sistemas
Adriana Arango-Manrique, Sandra Ximena Carvajal-Quintero & Camilo Younes-Velosa

Caracterización de cargas usando la teoría instantánea del tensor de potencia

Odair Augusto Trujillo-Orozco, Yeison Alberto Garcés-Gómez, Armando Jaime Ustariz-Farfán & Eduardo Antonio Cano-Plata


Análisis de la compensación de hundimientos de tensión en sistemas de distribución usando un DSTATCOM multinivel en ATP/EMTP


Herbert Enrique Rojas-Cubides, Audrey Soley Cruz-Bernal & Harvey David Rojas-Cubides

Estimación de la potencia producida por instalaciones fotovoltaicas en entornos sombreados

Carlos Andrés Ramos-Paja, Andrés Julián Saavedra-Montes & Luz Adriana Trejos

Efectos en conductores de baja tensión debido a perturbaciones estacionarias de calidad de potencia

Ivan Camilo Duran-Tovar, Fabio Andrés Pavas-Martínez & Oscar German Duarte-Velasco

Regulación de tensión en un inversor de potencia usando técnica de control cuasi deslizante

Nicolás Toro-García, Yeison Alberto Garcés-Gómez & Fredy Edimer Hoyos-Velasco

Ubicación de generación distribuida en redes de distribución radiales usando un algoritmo inspirado en murciélagos


John Edwin Candelo-Becerra & Helman Enrique Hernández-Riaño


Representación de los sistemas eléctricos de potencia mediante redes complejas con aplicación a la evaluación de vulnerabilidad de riesgos
Gabriel Jaime Correa-Henao & José María Yusta-Loyo

Valoración de la vulnerabilidad de sistemas de potencia ante ataques intencionales usando un algoritmo genético especializado
Laura Agudelo, Jesús María López-Lezama & Nicolás Muñoz-Galeano

Análisis de reserva rodante en una microred

Luis Ernesto Luna-Ramírez, Horacio Torres-Sánchez & Fabio Andrés Pavas-Martínez

Metodología para la localización relativa de la fuente de hundimientos de tensión usando únicamente mediciones de tensión

Jairo Blanco-Solano, Johann F. Petit-Suárez & Gabriel Ordóñez-Plata

Aproximación epidemiológica para evaluar factores de riesgo e incidencia de distorsión armónica en redes de distribución

Miguel Fernando Romero, Luis Eduardo Gallego & Fabio Andrés Pavas

Generalidades sobre diseño y operación de micro-redes

Juan Manuel Rey-López, Pedro Pablo Vergara-Barrios, Germán Alfonso Osma-Pinto & Gabriel Ordóñez-Plata

Consideraciones energéticas de viviendas de interés social en Colombia según el concepto NZEB
Germán Alfonso Osma-Pinto, David Andrés Sarmiento-Nova, Nelly Catherine Barbosa-Calderón & Gabriel Ordóñez-Plata

Modelo de carga de recuperación exponencial basado en mediciones: Desarrollo y validación

Luis Rodríguez-García, Sandra Pérez-Londoño & Juan Mora-Flórez

Representación completa del sistema de distribución de energía y determinación del estado para localización de fallas

Andres Felipe Panesso-Hernández, Juan Mora-Flórez & Sandra Pérez-Londoño

Efecto de la distorsión de tensión sobre la emisión de armónicos de corriente de cargas no lineales

Ana Maria Blanco, Sergey Yanchenko, Jan Meyer & Peter Schegner

Diseño y construcción de un modelo a escala reducida para medición de tensiones inducidas en un terreno inclinado

Edison Soto-Ríos, Ernesto Pérez-González & Javier Herrera-Murcia




Modelado de subastas dinámicas para compra de contratos estandarizados en el mercado eléctrico incluyendo la adaptación de los participantes
Henry Camilo Torres-Valderrama & Luis Eduardo Gallego-Vega

Nuevo método para la caracterización de perturbaciones de la calidad de la potencia producto de la operación de fusibles de expulsión y limitadores de corriente
Neil A. Ortiz-Silva, Jairo Blanco-Solano, Johann F. Petit-Suárez, Gabriel Ordóñez-Plata & Víctor Barrera-Núñez

Análisis de armónicos variando en el tiempo en sistemas eléctricos de potencia con parques eólicos, a través de la teoría de la posibilidad
Andrés Arturo Romero-Quete, Gastón Orlando Suvire, Humberto Cassiano Zini & Giuseppe Rattá

Modelación de la calidad de agua del Río Medellín en el Valle de Aburrá
Lina Claudia Giraldo-B, Carlos Alberto Palacio, Rubén Molina & Rubén Alberto Agudelo

Sensado compresivo: Una aproximación metodológica para un procesamiento de señales eficiente

Evelio Astaiza-Hoyos, Pablo Emilio Jojoa-Gómez & Héctor Fabio Bermúdez-Orozco

Análisis de asentamientos de pilas de fricción en suelos blandos compresibles

Juan F. Rodríguez-Rebolledo, Gabriel Y. Auvinet Guichard & Hernán E. Martínez-Carvajal

Influencia de la biomineralización en un perfil de suelo tropical afectado por procesos erosivos

Yamile Valencia-González, José de Carvalho-Camapum & Luis Augusto Lara-Valencia

Análisis de los costos de inversión en plantas de tratamiento de aguas residuales municipales en Cundinamarca

Juan Pablo Rodríguez-Miranda, César Augusto García-Ubaque & Juan Carlos Penagos-Londoño

Pirólisis rápida de biomasas: Una revisión de los aspectos relevantes. Parte I: Estudio paramétrico

Jorge Iván Montoya, Farid Chejne-Janna & Manuel Garcia-Pérez

Aplicativo en Fortran para resolver sistemas de ALN usando formatos comprimidos

Wilson Rodríguez-Calderón & Myriam Rocío Pallares-Muñoz

Factores conformantes del riesgo percibido en los trabajadores de la construcción

Ignacio Rodríguez-Garzón, Myriam Martínez-Fiestas, Antonio Delgado-Padial & Valeriano Lucas-Ruiz

Estimación de costes en proyectos de ingeniería software con desarrollo de componentes web
Javier de Andrés, Daniel Fernández-Lanvin & Pedro Lorca

Nuestra carátula
Imagen alusiva al artículo: Influencia de la biomineralización en un perfil de suelo tropical afectado por procesos erosivos
Autores: Yamile Valencia-González, José de Carvalho-Camapum & Luis Augusto Lara-Valencia



Editorial

Indicators of the DYNA Journal

In this editorial I analyze the evolution of our journal over the last five years, based on the information compiled by the Scimago Journal Ranking. Citations of the DYNA journal have been measured since 2009; over this period the SJR factor has grown steadily, as can be seen in Table 1: the SJR index has risen from 0.130 to 0.228 over a span of six years. DYNA thus has the highest SJR factor among engineering journals in Colombia, as shown in Table 2. Although this has been a very important achievement, it has not brought a significant improvement in the journal's position in the Latin American ranking of engineering journals, where it occupies position 11 (2014), as can be seen in Table 3; historically the journal has occupied positions 12 to 9 in that ranking. It is worth noting, however, that almost all the journals in the top positions of that ranking only accept articles in English, whereas DYNA publishes a significant number of its articles in Spanish, and only in 2014 did the number of articles published in English increase considerably (see Fig. 1).

With respect to all scientific journals published in Latin America, DYNA has been improving its position, reaching place 189 among 715 Latin American journals in 2014. Our journal currently occupies the sixth position among the 75 scientific journals edited and published in Colombia.

Another noteworthy factor is the high volume of articles published per year, which doubled between 2009 and 2014; this is consistent with the increase in the number of articles submitted to the journal: in 2014 the journal received 450 manuscripts for evaluation. According to these figures, the journal rejects approximately 50% of the manuscripts submitted.

Table 2. SJR for different engineering journals published in Colombia.

Journal                       2009   2010   2011   2012   2013   2014
DYNA                          0.130  0.157  0.179  0.208  0.216  0.228
Ingeniería y Universidad      0.102  0.111  0.126  0.121  0.120  0.111
Rev. Facultad de Ingeniería   0.101  0.120  0.159  0.132  0.139  0.124
Ingeniería e Investigación    -      0.104  0.132  0.106  0.179  0.151

Source: Scimago Journal and Country Rank

Table 3. Position of the DYNA journal in different rankings.

Indicator                               2009  2010  2011  2012  2013  2014
Engineering journals (Latin America)     12    10     9     9    10    11
Engineering journals (Colombia)           1     1     1     1     1     1
Latin American journals                 262   245   236   205   212   189
Colombian journals                       15    12    11     6     8     6

Source: Scimago Journal and Country Rank

It is also important to mention that 35% of the authors come from foreign institutions, mainly from Spain, Mexico and Brazil (see Fig. 2). Historically, the share of articles authored by researchers of the Universidad Nacional de Colombia has remained around 30%.

Table 1. Bibliometric information for the DYNA journal.

Indicator                  2009   2010   2011   2012   2013   2014
SJR                        0.130  0.157  0.179  0.208  0.216  0.228
Documents published        106    112    143    164    131    197
Cumulative documents       106    218    361    525    656    853
Total citations (3 years)  5      22     57     117    115    160
Total documents cited      4      11     42     83     87     110

Source: Scimago Journal and Country Rank
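Treating Table 1 as a small dataset makes the arithmetic behind these claims easy to verify. The following is a minimal sketch; the dictionary simply re-types the table above, and the 450-submissions figure comes from the text, so nothing here is an independent data source:

# Sketch: re-derive the editorial's headline figures from Table 1 (values re-typed here).
published = {2009: 106, 2010: 112, 2011: 143, 2012: 164, 2013: 131, 2014: 197}

cumulative = sum(published.values())
print(cumulative)                                      # 853, matching the table

growth = published[2014] / published[2009]
print(round(growth, 2))                                # ~1.86: output roughly doubled

submitted_2014 = 450                                   # manuscripts received in 2014 (text above)
rejection = 1 - published[2014] / submitted_2014
print(f"approximate rejection rate: {rejection:.0%}")  # ~56%, i.e. roughly half rejected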

Figure 1. Number of articles published per language in the DYNA journal. Source: Centro Editorial de la Facultad de Minas

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 9-10. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online http://dx.doi.org/10.15446/dyna.v82n192.52712



Figure 2. Percentage distribution of the authors' institutions of origin for the issues published between 2010 and 2015: UNAL 30.5%, other Colombian institutions 34.7%, Spain 14.4%, Mexico 6.4%, Brazil 5.0%, Chile 2.9%, Cuba 2.8%, others 3.3%. Source: Centro Editorial de la Facultad de Minas

According to the report provided by Web of Science, between January 2009 and December 2014 the journal received a total of 215 citations (88 for articles in Spanish and 127 for articles in English), and a total of 145 articles were cited (69 published in Spanish and 76 published in English). The percentage distribution of citations by publisher or index is as follows: Elsevier, 38%; SciELO, 21%; Springer, 9%; Taylor & Francis, 3%; Wiley, 3%; others, 26%.

Finally, I wish to invite all our readers and authors to keep submitting their contributions to our journal.

Juan D. Velásquez, MSc, PhD
Full Professor
Editor, Revista DYNA
Editor, Revista Boletín de Ciencias de la Tierra
Universidad Nacional de Colombia, Sede Medellín
E-mail: jdvelasq@unal.edu.co
http://orcid.org/0000-0003-3043-3037



How to promote distributed resource supply in a Colombian microgrid with economic mechanism?: System dynamics approach

Adriana Arango-Manrique a, Sandra Ximena Carvajal-Quintero b & Camilo Younes-Velosa c

a Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, Manizales, Colombia. aarangoma@unal.edu.co
b Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, Manizales, Colombia. sxcarvajalq@unal.edu.co
c Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, Manizales, Colombia. cyounesv@unal.edu.co

Received: May 28th, 2014. Received in revised form: May 5th, 2015. Accepted: June 17th, 2015.

Abstract
Distributed generation has caused an increase in the installation of renewable energy generation resources near consumption centers, together with the ability to operate in isolated mode in the event of a failure of the interconnected system. However, this type of generation and operation has not progressed significantly in Colombia due to the lack of financial mechanisms. This paper presents a System Dynamics model that proposes a mechanism for the promotion of distributed resources by including regulatory incentives known as the Renewable Energy Premium Tariff, as well as incentives for providing technical support to the distribution and transmission system. The proposed mechanisms help to promote the use of renewable energy in Colombia and, furthermore, enhance the tools with which grid operators can avoid accidental disconnections.
Keywords: Distributed Resources; Micro Grid Operation; Renewable Premium Tariff; Support Services; Small Hydropower Plants.

Cómo promover el suministro de los recursos distribuidos en una microrred colombiana con mecanismos económicos?: Enfoque con dinámica de sistemas

Resumen
La generación distribuida actualmente ha provocado un aumento en la instalación de recursos renovables cerca a los centros de consumo con la posibilidad de operar, en caso de una falla en el sistema interconectado, en modo aislado. Sin embargo, este tipo de generación y operación no ha progresado de manera significativa en Colombia debido a la falta de mecanismos financieros. En este trabajo se presenta un modelo en Dinámica de Sistemas que propone la integración de un mecanismo para la promoción de recursos distribuidos como incentivos regulatorios conocidos como Renewable Premium Tariff e incentivos por prestar servicios de soporte al sistema de distribución y de transmisión. Los mecanismos propuestos permiten promover el uso de energías renovables en Colombia y adicionalmente aumentar las herramientas para que los operadores del sistema puedan evitar desconexiones fortuitas.
Palabras clave: Operación por Microrredes; Pequeñas Centrales Hidroeléctricas; Recursos Distribuidos; Renewable Premium Tariff; Servicios de Soporte.

1. Introduction

Microgrids (µG) allow the integration of new power generation technologies in distribution networks, presenting operational advantages such as the injection of active and reactive power at different points of the distribution network, thereby converting distribution networks into active networks [1].

This new mode of operation ensures cleaner energy production at a relatively low cost, with technical benefits associated with local reliability, improved quality, security of supply through reduced outages, and reduced emissions due to the use of renewable resources [2,3].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 11-18. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48564



According to [4], µG are smart grids of low and/or medium voltage (LV/MV), which include various types of renewable sources, such as distributed resources (DR), and loads connected near the DR, which operate connected in parallel with the distribution network and which, during maintenance or emergencies, have the ability to work autonomously or in isolation. However, unplanned operation of the DR in an isolated µG causes voltage and frequency problems in the µG, leading to failures in elements connected to it and increasing the µG losses, resulting in reduced quality and reliability indices, which increases the probability of a widespread blackout in the µG and in the distribution system [5]. Correct planning of the µG increases the resilience of the operation of the distribution system, preventing disconnections of current demand and degradation of the quality indices and security of supply due to random events such as atmospheric discharges, among others [6,7].

For the integration of the µG in a deregulated system it is necessary to implement economic mechanisms that promote the correct integration of the DR and active demand response programs, so as to provide technical operation and market dynamics in the µG [8]. This integration should guarantee a sustainable supply of energy for the growing demand at a relatively low cost, taking into account the location, technologies used and dimensions of the DR [2]. Additionally, the costs incurred in the integration of the DR and the operation of the µG should be considered, namely investment costs or Capital Expenditures (CAPEX) and operation and maintenance costs or Operational Expenditures (OPEX), to determine the actual economic benefit by analyzing the income of the operation by µG [9,10].

The aim of this paper is to determine, through economic mechanisms, how to promote µG operation under the conditions of the distribution network in the Caldas, Quindío and Risaralda (CQR) area, considering DR and a reduction in peak demand. There are mechanisms that facilitate the adaptation of the DR and of demand to market dynamics, reducing long-term costs, user tariffs and the losses in the transmission and distribution systems, which implies long-term postponement of investments in infrastructure [11,12]. It is important to consider that the technical operation and the economic and market behavior depend on the dynamics of the µG itself and on the optimal integration of the DR and the demand response programs [6,8]. However, one must consider that each µG has specific characteristics depending on the geographic location, the primary source of generation, the implemented technologies and the behavior of the demand [4].

It is proposed to analyze the possibility of implementing a µG in the CQR area of the Colombian electric power system (EPS), which is characterized by the use of Small Hydroelectric Plants (SHPs) owing to the large number and great hydro potential of rivers with high waterfalls in this area, a product of the Andes Mountains that cross this region [13]. A simulation model in System Dynamics (SD) is used to analyze the long-term contribution of support services and economic mechanisms such as the Renewable Energy Premium Tariff (RPT), and to determine the behavior and growth of the operation by µG. The most important contribution of this research is the analysis of investment in µG, combining the benefits inherent in the implementation of the µG to the distribution and transmission networks with economic mechanisms that facilitate the participation of DR and demand response programs in providing reliability and security of supply.

This paper is structured as follows: in Section 2, a model of the diffusion of µG in SD and an analysis of different features for its implementation are presented, taking into account the income and costs incurred in this type of operation. In Section 3, the results of the simulations carried out using the diffusion model in different scenarios are presented. Finally, the conclusions are presented in Section 4.

2. System Dynamics

System Dynamics (SD) is the methodology used to analyze the dynamic behavior of the economic mechanisms in the µG diffusion model [14]. SD is based on a group of elements constantly interacting with one another, with the aim of finding the structural causes that bring about the behavior of the system being analyzed [15]. The models are composed of differential equations that are solved using numerical integration [16]. The standard system dynamics approach to modeling starts by establishing the feedbacks and delays in the system to be investigated, followed by a stock and flow diagram, and finally the formal equations, which are differential equations. This formalization allows the system to be simulated. For a detailed description and discussion of system dynamics modeling, formalization, and verification, see Sterman [14].

The model is characterized by the conditions of the CQR area, and is used for the planning of µG with SHPs. The proposed planning model can be used for decision-making, taking into account not only the criterion of lowest cost, but also new evaluation criteria such as profitability [17] and technical support for the safe operation of the µG.

Figure 1. Causal diagram of the dynamics of µG operation and its diffusion in the CQR sub-network of the Colombian power system (main variables: µG Operation, Reduction of peak loading, Resilience, Investment, Costs, Blackout, Non-delivered Energy, Profitability, Economic Mechanisms, and Renewable Energy Premium Tariff (RPT)). Source: The authors




2.1. Causal diagram

The causal diagram or feedback diagram shown in Fig. 1 presents the causal relationships between the main variables of the system and the relationships that allow us to study the dynamics of the µG. The causal diagram consists of three positive cycles, which are associated with the benefits accompanying operation by µG.

First, the contribution to the distribution system is associated with the reduction of peak demand on the EPS as seen from the centralized generators, by reducing demand fluctuations: the DR provides energy directly to the consumption centers, ensuring a constant supply to users connected to the µG operating in isolation, decreasing the levels of non-delivered energy and reducing blackouts that may occur. This condition therefore has economic rewards [18]. Likewise, operation by µG increases the resilience of the operation of electric power systems, associated with the ability to expand by taking into account alternative and renewable generation, adaptation to different geographical locations, control strategies for centralized coordination between the substations and control centers, support for multiple markets, and connection or disconnection of DR and loads [19]. The reduction in costs has a delay, shown in Fig. 1 by "=", because methodologies are used to quantify the behavior patterns of cost over time; one of the most common techniques is known as learning curves.

The diagram in Fig. 1 shows how the operation of an isolated µG enhances the level of investment in operation by µG, increasing the capacity of isolated µG operation to support both the distribution and the transmission system.

2.2. Formulation of the simulation model

The causal diagram shown in Fig. 1 translates into a formal diagram which provides a detailed analysis of the variables of the model in specialized software [16]. The model is composed of level variables, which accumulate system information and change through the flow rates, which determine changes in the system and give rise to the dynamic behavior [20,21]. The simulation model consists of the level variables associated with increases in the operation by µG; these variables are associated with the growth rate of the µG, related to the level of investment or profitability.

2.2.1. Distribution system benefits

The growth in demand leads to system failures, power outages and rising electricity prices [18]. Also, predicting the behavior of demand is difficult and complex, and the data available for implementing forecasting methodologies are limited, which hinders the making of operational decisions and strategies regarding behavior and growth [22]. However, the presence of DR in the µG operation can reduce peak demand (as seen from the distribution system), extending the useful life of distribution assets. In general, DR is dispatched continuously, i.e., not intermittently, which ensures that the reduction occurs at peak demand [18]. This reduction in peak demand directly influences security of supply by improving the reliability of the µG through the contribution of the DR installed close to the demand; in this case, the µG can reduce the indices of duration and frequency of the interruptions experienced by users connected to the µG [18]. These indices are SAIFI and SAIDI, represented in (1) and (2), respectively.

\mathrm{SAIFI} = \frac{\sum_{i} N_i}{N_T} \qquad (1)

\mathrm{SAIDI} = \frac{\sum_{i} r_i N_i}{N_T} \qquad (2)

where N_i is the number of users affected by each interruption i, determined from an annual analysis of interruptions using the system operator's data [23], N_T is the total number of users connected to the µG, and r_i is the duration of each interruption, recorded annually. Equation (3) expresses the energy that is not delivered, in terms of the reliability indices (adapted from [18]):

\mathrm{NDE} = \frac{\mathrm{SAIDI}}{\mathrm{SAIFI}} \cdot G_{\mathrm{Demand}} \qquad (3)

In the case of an isolated µG, sufficient dispatchable generation should be ensured to accommodate variations in demand [18]. However, the model assumes a reduction in peak demand, so that the reliability indices SAIFI and SAIDI improve and security of supply to the µG is ensured [19]. In Colombia, if the percentage of non-delivered energy exceeds 2%, a detailed analysis must be performed of the situation that caused the partial or total unavailability and of whether it occurred in a scheduled or unscheduled manner [24].
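As a worked illustration of eqs. (1)-(3), the sketch below computes the three indices from a list of interruption records. The record format, and the reading of eq. (3) as average interruption duration (SAIDI/SAIFI) times average demand, follow the reconstruction above; the exact operator in the garbled original is uncertain, and the event data and demand are invented for the example.

# Sketch: SAIFI, SAIDI and NDE per eqs. (1)-(3); all input data are illustrative.
def reliability_indices(interruptions, n_total, g_demand_kw):
    """interruptions: list of (users_affected, duration_hours) pairs, one per event i."""
    saifi = sum(n for n, _ in interruptions) / n_total      # eq. (1): interruptions/user-year
    saidi = sum(n * r for n, r in interruptions) / n_total  # eq. (2): hours/user-year
    nde = (saidi / saifi) * g_demand_kw                     # eq. (3) as reconstructed above
    return saifi, saidi, nde

# One year of events in a hypothetical µG with 1,200 users and 800 kW average demand
events = [(300, 1.5), (1200, 0.5), (450, 2.0)]
saifi, saidi, nde = reliability_indices(events, n_total=1200, g_demand_kw=800.0)
print(f"SAIFI={saifi:.2f}, SAIDI={saidi:.2f} h, NDE={nde:.0f} kWh")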

2.2.2. Transmission system benefits

Existing EPS were designed for centralized generation, which is increasingly challenged by the negative effects it causes on the environment. In addition, centralized generation cannot, in case of a failure, continue to operate in subsections or operational areas in order to avoid disconnection of the end user. Smart grids allow operation in intentional island mode within an operational area through the creation of a µG [1,4]. The operating philosophy from this point of view is based on a restoration process whose main objective is to achieve total EPS interconnection in the shortest time possible. With the inclusion of smart grids, the key concept should be to prevent the end user from experiencing an interrupted supply of electricity. The term that refers to the non-interruption of electricity in case of a failure in the EPS is self-healing, and it is related to the possibility that the restoration process may be performed using multiple distribution networks with active data acquisition systems. Real-time data can be synchronized and thus ensure security of supply to the end user [25].



In the future, a fast self-healing capability will enable the electric power system to operate as multiple intentional islands at any voltage level and thus avoid costly blackouts. The customer plays a key role in creating stable electrical islands, because success in creating them is based on the proper management and control of loads. Such controllable loads must be connected in active distribution networks, which requires changes to the mode of operation, such as the inclusion of DR that causes bidirectional flows, which must be measured and controlled in real time by checking constraints. Such constraints relate to the need to maintain the frequency and voltage within allowable values in the operation of the multiple µG that may form in the distribution system, and which will assist the restoration strategy through strategic modules created by the operator of the national power system, who will be able to start large generation plants and restore the interconnection and unity of the overall (bulk) power system.
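As a toy illustration of the constraint checking described above, a sketch like the following can flag whether an intentional island is viable at a given instant. The limits and readings are invented, and a real self-healing scheme would act on live telemetry rather than static values.

# Toy island-mode constraint check; limits and readings are illustrative only.
def island_feasible(load_kw, dispatchable_kw, freq_hz, v_pu,
                    f_band=(59.8, 60.2), v_band=(0.95, 1.05)):
    checks = {
        "generation covers load": dispatchable_kw >= load_kw,
        "frequency within band": f_band[0] <= freq_hz <= f_band[1],
        "voltage within band": v_band[0] <= v_pu <= v_band[1],
    }
    return all(checks.values()), checks

ok, detail = island_feasible(load_kw=750.0, dispatchable_kw=820.0,
                             freq_hz=59.95, v_pu=1.02)
print(ok, detail)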

2.2.3. Economic mechanisms

The decision criterion under which the µG is implemented depends on the regulatory conditions prevailing in Colombia concerning µG. Currently, distribution agents in Colombia may own generation assets [26]; for this reason, if the regulatory conditions do not change, the implementation of µG would be governed by a monopoly model of distribution agents. In a monopoly model, the µG is built primarily to address problems of electrical power quality or to postpone expansions of the distribution network, given the reactive power compensation that the µG can deliver [27]. Therefore, in a monopoly model the technical conditions are a bonus, because the current regulations do not allow a unique local market for µG that could compete with the wholesale market [26]. Also, in the monopoly model, the active integration of demand is not a priority, because the broker-dealer would have to implement differential rates, and income could be reduced if the customer could choose between buying energy locally from the µG or from the wholesale energy market [28].

The diagram shows that the integration and operation of the µG increases the level of investment in this technology, as it increases the capacity of the DR and the active participation of demand to ensure the reduction in peak consumption. This investment will reduce costs over a period of time, as mentioned above, through learning curves.

To encourage renewable energy penetration in developing countries it is necessary to implement a scheme that is financially and technically sustainable in the long term and that requires minimal intervention from governments or regulators [11,29]. Traditional schemes have been "tropicalized" to analyze their behavior and possible integration in developing countries. For this purpose, the implementation of a sustainable scheme called the Renewable Energy Premium Tariff (RPT) has been proposed [11,29-32]. This scheme includes demand as a major player in the energy market, especially for the financial support and for ensuring the sustainability of the scheme for the growth of the DR, representing a change in price elasticity with the acceptance of this change in technologies [11]. This mechanism is developed around the Feed-in Tariff (FIT), with local adaptations that depend on technological and socio-economic development and on the operation of the isolated µG [32]. The implementation of the RPT is not intended to finance capital investments in renewable energy projects, as would be the case with the FIT. The aim is to ensure high levels of performance, because the payments are based on the power generated. The RPT was designed to provide a cost-benefit mechanism intended to introduce renewable energy technologies in isolated µG and provide sustainable, affordable electricity to the users connected to them [31]. In [32], the equation for income is defined as shown in (4):

\mathrm{Income} = \mathrm{User\ tariff} + \mathrm{RPT} \qquad (4)

However, the proposal of this work, among other sustainable options, is to also consider the support services that operation by µG can provide to the distribution and transmission system. The equation that describes the behavior of the proposed economic mechanisms, which benefit from the operation by µG, is defined in (5):

\mathrm{Income} = \mathrm{User\ tariff} + \mathrm{RPT} + \mathrm{Support\ services} \qquad (5)

where User tariff is the rate that the user pays for the electricity service; for the Colombian case this rate can be compared to the price paid by users in Non-Interconnected Zones (ZNI), defined by Resolution CREG 091 of 2007 [33], as given in (6):

\mathrm{User\ tariff} = \frac{G_m + D_{mn} + C_m}{1 - p} \qquad (6)

where G_m is the maximum available capacity charge, p is the losses in the system, D_{mn} is the charge for use of the distribution system and C_m is the marketing fee. These data are defined in [33]. Support services are the benefits that the µG provides to both the distribution system and the transmission system, the latter being an incentive both for the µG and for the EPS. The RPT price is recognized by the government; as an example, the prices recognized by Ecuador's FIT for different technologies are shown in Table 1 [31].

Table 1. Prices recognized by Ecuador's FIT (Regulation 09/06, CONELEC, 2008).

Technology               Price (cUSD/kWh),  Price (cUSD/kWh),
                         Mainland           Galápagos Islands
Wind                     9.39               12.21
Photovoltaic             52.04              57.24
Biomass                  9.67               10.64
Geothermal               9.28               10.21
Small hydro up to 5 MW   5.80               6.38
Small hydro 5-10 MW      5.00               5.50

Source: [31]
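To make eqs. (4)-(6) concrete, here is a minimal sketch that builds the user tariff and the income per kWh. The RPT value is the small-hydro price from Table 1, while the G_m, D_mn, C_m charges and the support-services payment are placeholders, not CREG figures.

# Sketch: income per eqs. (4)-(6). Charge values are placeholders, not CREG data.
def user_tariff(gm, dmn, cm, p):
    """Eq. (6): capacity (Gm), distribution (Dmn) and marketing (Cm) charges,
    grossed up by the system losses p (charges in cUSD/kWh, p as a fraction)."""
    return (gm + dmn + cm) / (1 - p)

def income(tariff, rpt, support_services=0.0):
    """Eq. (4) when support_services is 0; eq. (5) otherwise (cUSD/kWh)."""
    return tariff + rpt + support_services

RPT_SMALL_HYDRO = 5.80                                   # cUSD/kWh, small hydro up to 5 MW (Table 1)
t = user_tariff(gm=4.0, dmn=2.5, cm=1.0, p=0.10)         # placeholder charges
print(income(t, RPT_SMALL_HYDRO))                        # eq. (4): tariff plus RPT only
print(income(t, RPT_SMALL_HYDRO, support_services=0.9))  # eq. (5) with a placeholder payment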



2.2.4. Costs

Learning curves describe how the investment costs of a technology are reduced through one or more factors that represent the accumulation of knowledge and experience related to Research and Development (R&D), production and use of that technology [34,35]. The cost of DR is reduced, as mentioned above, due to new technologies, asset management programs and scheduled maintenance; investment would also be conditioned by the profitability of the plant and by the ratio of income to the support costs of the operation of the µG.

The costs included in the simulation model are defined as CAPEX, the investment costs associated with installing plants, taking into account the rate defined by the CREG using a calculation method known as the Weighted Average Cost of Capital (WACC). This method aims to remunerate shareholders' investments in the same way that a company functioning efficiently in a sector characterized by comparable risk would be remunerated [23], as shown in (7). CAPEX follows a behavior pattern similar to that of a learning curve, meaning that CAPEX is reduced over time owing to the fact that it becomes increasingly easier to implement DR due to the experience gained on each project.

\mathrm{CAPEX} = \sum_{i} \frac{\mathrm{Capital\ costs}_i}{(1 + \mathrm{Rate})^i} \qquad (7)

The learning rate of SHPs in the world is very low (1.4%), mainly because it is considered a technology that has not changed in the last 50 years [36]. However, this theory must be weighed against the topic of smart grids and the possibility of functioning in an interconnected network and a grid with acceptable levels of quality, reliability and security of supply.

2.2.5. Profitability

In this research we propose µG operation for a distribution sub-network in Colombia, the CQR zone; the final decision on the selected operation alternative must consider reliability, quality indices and benefit/cost, among others. Regardless of the selection, the operation of the studied µG shows good technical performance and the system variables remain within acceptable operating ranges; thus this type of operation appears to be feasible. The analysis of the profitability of the implementation model of economic mechanisms, as discussed in Section 2.2.3, determines the level of investment for expansion of the µG operation. The profitability depends on the relationship between the income from the economic mechanisms, considering RPT rates and benefits for system support, which ensure sustainability in the operation by µG, and the costs incurred through investment, operation and maintenance of the operation by µG, discussed in Section 2.2.4. Equation (8) determines the profitability of the investment, where Income is associated with equation (5), CAPEX is defined in equation (7) and OPEX is defined in [23]:

\mathrm{Profitability} = \frac{\sum_{i} \mathrm{Income}_i}{\sum_{i} \left( \mathrm{CAPEX}_i + \mathrm{OPEX}_i \right)} \qquad (8)
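The cost and profitability relations in eqs. (7)-(8) can be sketched as follows; the yearly cash flows are invented, and rate stands in for the CREG's WACC, whose actual value is not given in the text.

# Sketch: eqs. (7)-(8) with invented cash flows; rate stands in for the WACC.
def capex_npv(capital_costs, rate):
    """Eq. (7): discounted sum of the yearly capital costs at the given rate."""
    return sum(c / (1 + rate) ** i for i, c in enumerate(capital_costs, start=1))

def profitability(incomes, capexes, opexes):
    """Eq. (8): accumulated income over accumulated CAPEX plus OPEX."""
    return sum(incomes) / (sum(capexes) + sum(opexes))

capex = capex_npv([1000.0, 500.0, 0.0], rate=0.12)   # placeholder investment plan
ratio = profitability(incomes=[300.0, 450.0, 600.0],
                      capexes=[capex],
                      opexes=[80.0, 80.0, 80.0])
print(round(ratio, 2))   # a value above 1 signals cost recovery, as in Case 1 below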

3. Simulations and results

The model was constructed to analyze the economic mechanisms to promote DR in the island-mode operation of such a µG in Colombia. It implements the RPT and additionally proposes the integration of an availability payment for the provision of support services to the EPS. The model evaluation is defined by the stages shown in Table 2. To analyze the diffusion rate of the operation by µG in the CQR area, which has SHPs as its DR, we must assess the growth of the µG operation in the area. The value allocated for support services is set equal to the value of the single supplementary service available in Colombia, the AGC service, since the isolated µG operation is considered to help in providing these support services, enabling the distribution system to be active and to support the needs of transmission systems and large centralized generators.

Table 2. Scenarios used to assess the model.

Case 1 - Income: RPT + User tariff + Island Operation. Description: includes user payment plus payment by the RPT and support for isolated operation capability.
Case 2 - Income: RPT + User tariff + (Island Operation + Distribution Support). Description: Case 1 with an additional availability payment for distribution system support.
Case 3 - Income: RPT + User tariff + (Island Operation + Distribution Support + Transmission Support). Description: Case 2 with an additional availability payment for transmission system support.

Source: The authors

3.1. Case 1

In Case 1, the simulation is performed taking into account the fee paid by users, defined by CREG Resolution 091 of 2007 [33], the RPT taken from Table 1 for SHPs [31], and a payment for the support service of island operation capability. Fig. 2 shows that, over the 20 years simulated, the diffusion rate only reaches 100% in the 16th year when the µG receives an availability payment for island operation. From the 10th year, profitability, under a benefit-cost analysis, is greater than 1, which matters to investors because the costs incurred are covered and the return on investment favors the spread of µG operation and ensures long-term sustainability.
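The qualitative behavior reported for Case 1 can be mimicked with a toy stock-and-flow integration in the spirit of the SD model: a diffusion stock grows through a logistic-style adoption flow that accelerates once profitability exceeds 1. The growth law and all parameters below are invented for illustration; this is not the calibrated CQR model.

# Toy stock-and-flow sketch of µG diffusion, echoing Fig. 2; parameters are invented.
def simulate_diffusion(years=20, dt=1.0, k=0.35):
    diffusion = 5.0                          # stock: % of potential µG capacity operating
    for step in range(int(years / dt)):
        t = step * dt
        profit = 0.5 + 0.25 * t              # crude proxy: profitability grows with experience
        gate = 1.0 if profit > 1.0 else 0.4  # investment flow accelerates once profit > 1
        flow = k * gate * diffusion * (1.0 - diffusion / 100.0)  # logistic-style adoption
        diffusion = min(100.0, diffusion + dt * flow)
        print(f"year {step + 1:2d}: diffusion {diffusion:5.1f} %")

simulate_diffusion()

With these toy numbers the stock saturates in the second half of the horizon, reproducing the late-saturation shape of Fig. 2 rather than its exact values.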




Figure 2. Diffusion rate - µG operation, Case 1 (diffusion rate in % and profitability vs. years 0-20). Source: The authors

3.2. Case 2 and Case 3

In Case 2, it is proposed to implement, in addition to the users' tariff [33] and the RPT [31], an availability payment for the provision of support to the distribution network, defined in Section 2.2.1, together with the payment for island operation. In the last case, Case 3, it is proposed to implement, in addition to the users' tariff [33] and the RPT [31], a further availability payment for providing support services to the transmission network, as described in Section 2.2.2, while maintaining the availability payment for the distribution system and the isolated operation capability.

The diffusion behavior shown in Fig. 3 presents an improvement over Case 1 because, within the 20 years analyzed in this model, diffusion reaches 100% in the 13th year, which implies a favorable investment. The payment for the support service provided to the system increases from year 11, because compensation starts when 33% of the projected DR is operating as a µG, so at the beginning of the operation the diffusion behaves much as in Case 1. Adding the payment for availability to support the transmission system, combined with the support to the distribution network, makes the diffusion of µG operation in the Colombian system feasible. However, technical studies should be conducted to avoid saturation of the system and to ensure that the DR is integrated so as to provide the support services properly and adequately. Additionally, demand should be considered as an active agent that participates in the market and offers great advantages for the operation of the EPS.

Figure 3. Diffusion rate - µG operation, Case 2 and Case 3. Source: The authors

3.3. Profitability

Case 2 and Case 3 show similar diffusion behavior; however, their profitability, shown in Fig. 4, reveals what occurs when the two types of incentives are applied. Fig. 4 shows that profitability increases significantly from the moment support services for the distribution and transmission network start being paid, making this type of operation competitive and ensuring a return on investment. These mechanisms are intended to be sustainable in the long term, because the payment helps the EPS to stay within operating limits, increasing the security of supply and maintaining the reliability and quality indices. However, technical analysis should be performed to ensure the above conditions.

Figure 4. Profitability - µG operation, Case 2 and Case 3. Source: The authors

4. Conclusions

This article proposes using the µG as a form of operation that brings benefits to the EPS, with the ability to provide support services, through an economic evaluation using mechanisms that allow penetration and growth of the operation by µG, including the RPT, which is known worldwide, and an availability payment for providing support services to the distribution and transmission system.

Implementation of economic mechanisms such as the RPT has benefits for the investor, taking into account that it ensures long-term cash flow, decreasing the investment risk. However, one must consider the conditions of the region, such as its geographical and climatic characteristics, as the payment to cover the costs of the RPT is a guarantee for the investor and reduces investment risk.

The successful implementation of the support services proposed in this paper rests on a strong policy that covers all the specific characteristics of the µG, and on a regulatory framework that eliminates the barriers associated with the implementation of the µG, DR and renewable energy.

To provide ancillary or support services using µG is a regulatory challenge for Colombia, since procedural aspects would have to be modified and investments made in modern measurement and control equipment. Furthermore, the issue should be regulated so as to remunerate demand participation and convert it into a stable demand in the long term, as this would allow deferral of investments in the expansion of generation and of the transmission and distribution networks of the EPS. The active participation of demand allows better use of generation resources through voluntary shedding of demand. Furthermore, demand in a µG equipped with measurement and control equipment with telemetry can provide specific support services, such as tertiary frequency control and voltage control; in this way, it would increase reliability and optimize the utilization of the resources of the µG.

The operation by µG in Colombia does not have a regulatory policy that encourages it, so investors will continue to build SHPs with conventional control systems that do not allow island-mode operation, thereby missing the opportunity to have a resilient EPS and higher levels of security of electricity supply. For the Colombian EPS to increase resilience, stability and security of supply, it is necessary to promote, implement and regulate the µG form of operation and thus modernize the operation of the EPS and achieve multiple benefits (technical, economic, environmental and social) for all those involved in the electricity supply chain.

References

[1] Sioshansi, F.P., Smart grid: Integrating renewable, distributed & efficient energy. Waltham: Academic Press, 2011.
[2] Mejia-Giraldo, D.A., Lopez-Lezama, J.M. and Gallego-Pareja, L.A., Energy generation expansion planning model considering emissions constraints, DYNA, Universidad Nacional de Colombia, 77 (163), pp. 75-84, 2010.
[3] Sanseverino, E.R., Di Silvestre, M.L., Ippolito, M.G., De Paola, A. and Lo Re, G., An execution, monitoring and replanning approach for optimal energy management in microgrids, Energy, 36 (5), pp. 3429-3436, 2011. DOI: 10.1016/j.energy.2011.03.047
[4] IEEE, Guide for design, operation and integration of distributed resource island systems with electric power systems, IEEE Standard 1547.4TM, Jul. 2011.
[5] Soto, D., Edrington, C., Balathandayuthapani, S. and Ryster, S., Voltage balancing of islanded microgrids using a time-domain technique, Electric Power Systems Research, 84 (1), pp. 214-223, 2012. DOI: 10.1016/j.epsr.2011.11.022
[6] Peças-Lopes, J.A., Hatziargyriou, N., Mutale, J., Djapic, P. and Jenkins, N., Integrating distributed generation into electric power systems: A review of drivers, challenges and opportunities, Electric Power Systems Research, 77 (9), pp. 1189-1203, 2007. DOI: 10.1016/j.epsr.2006.08.016
[7] Lorente-de la Rubia, J., Estudio sobre el estado actual de las "Smart Grids", Tesis, Departamento de Ingeniería Eléctrica, Universidad Carlos III de Madrid, Madrid, España, 2012.
[8] Muñoz-Maldonado, Y.A., Optimización de recursos energéticos en zonas aisladas mediante estrategias de suministro y consumo, PhD Tesis, Departamento de Ingeniería Eléctrica, Universidad Politécnica de Valencia, España, 2012.
[9] Frías, P., Gómez, T. and Rivier, J., Regulation of distribution system operators with high penetration of distributed generation, Proceedings of Power Tech 2007, IEEE Lausanne, pp. 579-584, 2007.
[10] Sánchez-de Tembleque, L.J., Sostenibilidad de las energías renovables, Fundación Ciudadanía y Valores, 2009.
[11] Thiam, D.R., An energy pricing scheme for the diffusion of decentralized renewable technology investment in developing countries, Energy Policy, 39 (7), pp. 4284-4297, 2011. DOI: 10.1016/j.enpol.2011.04.046
[12] Barker, P.P. and de Mello, R.W., Determining the impact of distributed generation on power systems: Part 1 - Radial distribution systems, IEEE Power Engineering Society Summer Meeting, 3, pp. 1645-1656, 2000.
[13] Díaz, A., Identificación y evaluación de un conjunto de medidas para incentivar la penetración de energía renovable en la generación de electricidad de Colombia, MSc Tesis, Departamento de Ingeniería Eléctrica, Universidad de los Andes, Bogotá, Colombia, 2007.
[14] Sterman, J., Business dynamics: Systems thinking and modeling for a complex world, McGraw-Hill, 2000.
[15] Dyner, I., Dinámica de sistemas y simulación continua en el proceso de planificación, 1993.
[16] Borshchev, A. and Filippov, A., From system dynamics and discrete event to practical agent based modeling: Reasons, techniques, tools, Proceedings of the 22nd International Conference of the System Dynamics Society, Oxford, England, 2004.
[17] Lemos-Cano, S. and Botero-Botero, S., Hydro-thermal generation portfolio optimization at the Colombian power market, DYNA, Universidad Nacional de Colombia, 79 (175), pp. 62-71, 2012.
[18] Morris, G.Y., Abbey, C., Wong, S. and Joos, G., Evaluation of the costs and benefits of microgrids with consideration of services beyond energy supply, Proceedings of the Power and Energy Society General Meeting, 2012 IEEE, pp. 1-9, 2012. DOI: 10.1109/pesgm.2012.6345380
[19] Li, F., Qiao, W., Sun, H., Wan, H., Wang, J., Xia, Y., Xu, Z. and Zhang, P., Smart transmission grid: Vision and framework, IEEE Transactions on Smart Grid, 1 (2), pp. 168-177, 2010.
[20] Kundur, P., Paserba, J., Ajjarapu, V., Andersson, G., Bose, A., Cañizares, C., Hatziargyriou, N., Hill, D., Stankovic, A., Taylor, C.W., Van Cutsem, T. and Vittal, V., Definitions and classification of power system stability, IEEE/CIGRE joint task force on stability terms and definitions, IEEE Transactions on Power Systems, 19 (3), pp. 1387-1401, 2004. DOI: 10.1109/TPWRS.2004.825981
[21] Forrester, J., Principles of systems. Norwalk, CT: Productivity Press, 1961.
[22] Rueda, V., Velásquez-Henao, J.D. and Franco-Cardona, C.J., Recent advances in load forecasting using nonlinear models, DYNA, Universidad Nacional de Colombia, 78 (167), pp. 36-43, 2011.
[23] Central Hidroeléctrica de Caldas - CHEC S.A. ESP., Estudios de viabilidad de las plantas menores para las pequeñas centrales hidroeléctricas de propiedad de la CHEC, Contrato No. 177.08, 3er Informe de actividades, Septiembre, 2009.
[24] CREG, Comisión de Regulación de Energía y Gas, Resolución 93, 2012. Reglamento para el reporte de eventos y el procedimiento para el cálculo de energía no suministrada, y se precisan otras disposiciones relacionadas con la calidad del servicio en el sistema de transmisión regional. [Online]. Available at: www.creg.gov.co/eléctrica/info.html
[25] Madureira, A.G. and Peças-Lopes, J.A., Ancillary services market framework for voltage control in distribution networks with microgrids, Electric Power Systems Research, 86, pp. 1-7, 2012. DOI: 10.1016/j.epsr.2011.12.016
[26] CREG, Comisión de Regulación de Energía y Gas, Resolución 25, 1995. Código de Redes. [Online]. Available at: www.creg.gov.co/eléctrica/info.html
[27] Hatziargyriou, N.D., Anastasiadis, A.G., Vasiljevska, J. and Tsikalakis, A.G., Quantification of economic, environmental and operational benefits of microgrids, Proceedings of Power Tech, IEEE Bucharest, pp. 1-8, 2009.
[28] More Microgrids, STREP project funded by the EC under 6FP, SES6-019864, Advanced architectures and control concepts for More Microgrids, Report on the technical, social, economic, and environmental benefits provided by microgrids on power system operation, 2009.
[29] Rickerson, W., Hanley, C., Laurent, C. and Greacen, C., Implementing a global fund for feed-in tariffs in developing countries: A case study of Tanzania, Renewable Energy, 49, pp. 29-32, 2013. DOI: 10.1016/j.renene.2012.01.072
[30] Moner-Girona, M., A new tailored scheme for the support of renewable energies in developing countries, Energy Policy, 37 (5), pp. 2037-2041, 2009. DOI: 10.1016/j.enpol.2008.11.024
[31] Solano-Peralta, M., Moner-Girona, M., Van Sark, W.G.J.H.M. and Vallvè, X.R., Tropicalisation of Feed-in Tariffs: A custom-made support scheme for hybrid PV/diesel systems in isolated regions, Renewable and Sustainable Energy Reviews, 13 (9), pp. 2279-2294, 2009. DOI: 10.1016/j.rser.2009.06.022
[32] Moner-Girona, M., A new scheme for the promotion of renewable energies in developing countries, JRC European Commission, 2008.
[33] CREG, Comisión de Regulación de Energía y Gas, Resolución 91, 2007. Metodologías generales para remunerar las actividades de generación, distribución y comercialización de energía eléctrica, y las fórmulas tarifarias generales para establecer el costo unitario de prestación del servicio público de energía eléctrica en zonas no interconectadas. [Online]. Available at: www.creg.gov.co/eléctrica/info.html
[34] Taillant, P., L'analyse évolutionniste des innovations technologiques: l'exemple des énergies solaire photovoltaïque et éolienne, PhD Thesis, Université de Montpellier I, France, 2005.
[35] Papineau, M., An economic perspective on experience curves and dynamic economies in renewable energy technologies, Energy Policy, 34, pp. 422-432, 2006. DOI: 10.1016/j.enpol.2004.06.008
[36] Jamasb, T., Technical change theory and learning curves: Patterns of progress in energy technologies, The Energy Journal, 28 (3), pp. 45-65, 2007. DOI: 10.5547/ISSN0195-6574-EJ-Vol28-No3-4


A. Arango-Manrique is an Electrical Engineer and MSc in Industrial Automation, a member of the E3P Research Group at the Universidad Nacional de Colombia, Manizales Campus, and a doctoral student in Engineering at the same university. S.X. Carvajal-Quintero is a PhD in Electrical Engineering, a member of the E3P Research Group, and a full professor at the Universidad Nacional de Colombia, Manizales Campus. Fields of interest: smart grids, ancillary services, system dynamics and electricity markets. C. Younes-Velosa is a PhD in Electrical Engineering and an associate professor with the Department of Electrical and Electronic Engineering, Universidad Nacional de Colombia, Manizales Campus.



Loads characterization using the instantaneous power tensor theory

Odair Augusto Trujillo-Orozco a, Yeison Alberto Garcés-Gómez b, Armando Jaime Ustariz-Farfán c & Eduardo Antonio Cano-Plata d

Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia - Sede Manizales, Manizales, Colombia. a oatrujilloo@unal.edu.co, b yagarsesg@unal.edu.co, c ajustarizf@unal.edu.co, d eacanopl@unal.edu.co

Received: April 29th of 2014. Received in revised form: February 13th of 2015. Accepted: June 30th of 2015.

Abstract
This paper presents a novel methodology to characterize loads using the instantaneous power tensor theory in three-phase, three-wire or four-wire systems. The tensor theory is based on the dyadic product between the voltage and current instantaneous vectors; this definition allows us to represent the power quality phenomena produced by load operation through the deformation of a cube and the trajectory of one of its three-dimensional vectors. This new way of characterization could help researchers to construct better models of three-phase loads, or to achieve better monitoring and machine diagnosis. Results are obtained by implementing simulations in MATLAB® and are validated with experimental measurements.
Keywords: Power quality, vector trajectory, paths, current-voltage characteristic, characterizing loads, harmonics, Lissajous, diagnosis.

Caracterización de cargas usando la teoría instantánea del tensor de potencia Resumen Este artículo presenta una novedosa metodología para caracterizar cargas usando la teoría instantánea del tensor de potencia en sistemas trifásicos de tres y cuatro hilos. La teoría tensorial está basada en el producto diádico entre los vectores instantáneos de corriente y tensión; esta definición permite representar los fenómenos de calidad de la potencia que produce la operación de la carga, a través de la deformación de un cubo y la trayectoria de uno de sus vectores tridimensionales. Esta nueva forma de caracterización podría ayudar a investigadores a construir mejores modelos de cargas trifásicas, o a realizar un mejor monitoreo y diagnóstico de máquinas. Los resultados son alcanzados implementando simulaciones en MATLAB® y son validados con mediciones experimentales. Palabras clave: Calidad de la potencia, trayectoria del vector, caminos, característica corriente-tensión, caracterización de cargas, armónicos, Lissajous, diagnóstico.

1. Introduction

Load characterization has been a useful tool for several applications, such as the dimensioning of facilities, load modeling, power conservation and energy efficiency [1], machine diagnosis [2-4] and even real-time monitoring [5]. Modeling is the most important part of simulation analysis, and a good characterization of loads is a useful input for network operation and planning [6]. Some methods use data analysis [7] or Lissajous patterns that involve the voltage and current time functions [8,9], but most of these methods apply only to single-phase loads, and those that are suitable for three-phase systems do not involve voltage and current at the same time [5]. This paper proposes a new way of characterizing loads, using three-dimensional trajectories (3-D paths) of the three-phase power instead of single-phase voltage or current diagrams, based on the instantaneous power tensor theory.

The paper is arranged as follows. First, a background of other methods is presented in such a way that comparisons can be made between them. Second, the proposed method is introduced as a methodology consisting of five steps. Third, simulation results using synthetic signals are presented. Fourth, experimental results with measurements of two real loads are presented. Finally, conclusions and future work are outlined.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 19-25. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48565



Figure 1: I-V characteristic for single-phase loads: (a) resistive, (b) inductive, (c) reactive non-linear load. Source: The authors.

2. Background of Other Methods

2.1. Current - Voltage Lissajous Patterns

Traditionally, for single-phase loads, the current-voltage characteristic or Lissajous pattern has been the most popular load characterization method [1,8,9]; even in three-phase systems it is still used, graphing the I-V characteristic for each phase. Lissajous patterns describe the "complex harmonic motion" of a system with the parametric equations given by (1), where the ratio ω1/ω2 and the value of α determine the shape of the pattern.

$x = A\sin(\omega_1 t), \qquad y = B\sin(\omega_2 t + \alpha)$   (1)

In Figure 1, three typical single-phase I-V characteristic curves can be seen, corresponding to (a) a resistive load, (b) an inductive load, and (c) a reactive non-linear load. It is worth noticing that for the pure resistive load a straight line, i.e. a linear I-V relationship, is obtained. This is not a suitable approach for characterizing three-phase loads, because it is not possible to see the whole three-phase effect caused by the load operation; it is also difficult to determine which harmonic the load is imposing on the network. The main feature of this method is the clearly visible relationship between the voltage and current signals.

2.2. Alpha - Beta Patterns

Due to the need of characterizing three-phase loads on a single diagram, for a better understanding and analysis, some authors have proposed the α-β frame [4,5]. This approach uses the Clarke transform for the α-β components, and their paths or Concordia patterns, to draw the three-phase voltage or current vector. The Clarke transform, given by (2), takes the measurements of a three-phase system and leads them to a two-axis coordinate frame of reference (the α-β frame), allowing us to obtain two-dimensional patterns useful for load characterization or machine diagnosis.

$\begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} = \sqrt{\frac{2}{3}} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}$   (2)

Figure 2: abc currents, αβ currents and their Concordia patterns for (a) an ideal load, (b) a load with the 5th harmonic in its current. Source: The authors.

Figure 2 shows the abc currents, the αβ currents and the generated patterns of (a) a three-phase load without unbalance, harmonics or reactive power, and (b) a load that causes three-phase currents with the 5th harmonic. The pattern from (a) can be taken as a reference or ideal condition.
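To make the use of eq. (2) concrete, the following minimal MATLAB sketch (our illustration, not part of the original paper; the 0.2 p.u. 5th-harmonic amplitude is an arbitrary assumption) builds synthetic abc currents like those of Figure 2(b) and draws their Concordia pattern:

% Synthetic three-phase currents: 1 p.u. fundamental plus an assumed
% 0.2 p.u. 5th harmonic, over one fundamental cycle.
f = 60; w = 2*pi*f; t = 0:1/(128*f):1/f;
ia = sin(w*t)          + 0.2*sin(5*w*t);
ib = sin(w*t - 2*pi/3) + 0.2*sin(5*(w*t - 2*pi/3));
ic = sin(w*t + 2*pi/3) + 0.2*sin(5*(w*t + 2*pi/3));

% Clarke transform of eq. (2): [i_alpha; i_beta] = T * [ia; ib; ic]
T = sqrt(2/3)*[1 -1/2 -1/2; 0 sqrt(3)/2 -sqrt(3)/2];
iab = T*[ia; ib; ic];

plot(iab(1,:), iab(2,:)); axis equal;    % Concordia pattern
xlabel('i_\alpha'); ylabel('i_\beta');

For the undistorted balanced load of Figure 2(a) the same script traces a circle; the added harmonic distorts the circle into a lobed pattern.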

2.3. Characteristic Trajectory of the Instantaneous Power Tensor

A more recent approach, based on tensor theory, has been published in [10]. It uses the dyadic product of the voltage and current instantaneous vectors to obtain the instantaneous power tensor, and it describes all the power quality phenomena just by analyzing the different characteristic values of the resulting matrix. The following equation defines the power tensor.




$\wp_{ij} = v_i \circ i_j$   (3)

Then, for a three-phase system, the power tensor can be written as:

$\wp_{ij} = \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix} \circ \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} = \begin{bmatrix} v_a i_a & v_a i_b & v_a i_c \\ v_b i_a & v_b i_b & v_b i_c \\ v_c i_a & v_c i_b & v_c i_c \end{bmatrix}$   (4)

In order to draw the patterns used in load characterization, it is necessary to obtain a spatial vector derived from the power tensor; for that reason it is desirable to reduce the order of the tensor. This spatial vector allows us to describe the characteristic trajectory of the power tensor in R3 space. The reduction of the power tensor order is described in [10], where it is shown that it is possible to find a vector x_j (a first-order tensor) that satisfies the following equation:

$\wp_{ij}\, x_j = \lambda\, x_i$   (5)

where λ and x_j are the eigenvalues and eigenvectors of the power tensor, respectively. The matrix representation of the instantaneous power tensor allows us to calculate its eigenvalues and eigenvectors through the following equation system:

$\left( \wp_{ij} - \lambda\, \delta_{ij} \right) x_j = 0$   (6)

where δ_ij is the Kronecker delta tensor. The roots of the characteristic equation given by (7) are the eigenvalues stated in (8):

$\left| \wp_{ij} - \lambda\, \delta_{ij} \right| = 0$   (7)

$\lambda^{(1)} = v_1 i_1 + v_2 i_2 + v_3 i_3; \qquad \lambda^{(2)} = \lambda^{(3)} = 0$   (8)

There is a unique eigenvector x_j for each λ in (6). As an example, a possible combination of eigenvectors associated with each eigenvalue is:

$x^{i1} = \begin{bmatrix} v_1/v_3 \\ v_2/v_3 \\ 1 \end{bmatrix}; \quad x^{i2} = \begin{bmatrix} -i_2/i_1 \\ 1 \\ 0 \end{bmatrix}; \quad x^{i3} = \begin{bmatrix} -i_3/i_1 \\ 0 \\ 1 \end{bmatrix}; \qquad v_3,\, i_1 \neq 0$   (9)

With these results it is possible to obtain the trajectories that describe the internal products between the power tensor and its eigenvectors. Considering that λ2 = λ3 = 0, the only possible trajectory is:

$y_i = \wp_{ij}\, x_j^{(1)} = \lambda_1\, x_i^{(1)}$   (10)

Figure 3: Trajectory for (a) a three-phase ideal load, (b) a three-phase load with the 5th harmonic in its current. Source: The authors.

Figure 3(a) shows the trajectory generated by the vector stated in (10), corresponding to the operation of an ideal load, i.e. the matrix ℘ij is formed by instantaneous voltage and current vectors with positive sequence, fundamental component and perfectly in phase. Figure 3(b) shows another trajectory, for a three-phase load with the 5th harmonic in its current; here the number of bumps is equal to n+1, where n is the harmonic order. Notice that in this approach the drawn trajectory describes the behavior of the instantaneous power, i.e. it takes the voltage and current signals into account at the same time. Although this method considers the relationship between the three-phase voltage and current vectors, it cannot draw trajectories generated by reactive power, the 3rd harmonic or its multiples. Thus, it is necessary to use another concept of the instantaneous power tensor that enables us to draw the corresponding trajectories caused by the mentioned phenomena. This alternative methodology is described in the next section.
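Before moving on, the decomposition in eqs. (5)-(10) is easy to verify numerically. The MATLAB sketch below (our addition; the sample values of v and c are arbitrary) builds the tensor of eq. (4), checks that the non-zero eigenvalue equals the dot product of voltage and current as stated in (8), and evaluates the trajectory point of eq. (10):

v = [1.00; -0.52; -0.48];       % instantaneous voltages (arbitrary)
c = [0.90; -0.40; -0.50];       % instantaneous currents (arbitrary)

P = v*c.';                      % power tensor, eq. (4)

[X, D] = eig(P);                % eigen-decomposition, eqs. (6)-(7)
[~, k] = max(abs(diag(D)));
lam1 = D(k,k);                  % non-zero eigenvalue, eq. (8)
disp([lam1, v.'*c])             % both entries coincide
x1 = X(:,k);                    % associated eigenvector

y = P*x1;                       % trajectory point, eq. (10): y = lam1*x1

Evaluating y at every sample of one cycle and plotting the result in R3 traces the characteristic trajectory of Figure 3.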

3. Proposed methodology for loads characterization

The following methodology uses the instantaneous power tensor theory proposed in [10] for characterizing loads, but using the definition of the power distortion tensor instead of the characteristic trajectory. The process begins when the voltage and current vectors are stored in a computer or in a DSP-based system. If a DSP-based system is used, the procedure can run in real time; otherwise, computer software such as MATLAB®, or any other tool that supports matrix operations and the plotting of measurement results, is needed.

Assuming instantaneous values, the methodology is described as follows (a sketch of Steps 1-3 is given after this list):
 Step 1: Obtain the power tensor of the real system. This is done by applying the definition in (4) to the voltage and current vectors, obtaining the tensor ℘ij.
 Step 2: Obtain the ideal power tensor. The ideal power tensor is formed by instantaneous voltage and current vectors at fundamental frequency, positive sequence and perfectly in phase.
 Step 3: Obtain the distortion tensor and the distortion cube. The distortion tensor represents exclusively all the power quality phenomena in the power trade of the system; the distortion cube is its spatial representation in R3.
 Step 4: Visualization process. From the distortion cube, choose the three-dimensional path-generator vector and trace the path with each spatial coordinate.
 Step 5: Identify the power issues. Using the different views of the path plotted for one cycle of the signals in steady state, look at the characteristics of the closed path (i.e. bumps, loops, departure from the αβ plane) and then conclude about the load operation.
The next subsections describe the foundations necessary to implement this methodology.
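As announced above, a minimal MATLAB sketch of Steps 1-3 follows (our illustration; the synthetic signals and the 0.2 p.u. 5th-harmonic amplitude are assumptions, and in practice the ideal vectors would be extracted with the method of [11]):

% One cycle of synthetic signals (requires MATLAB R2016b+ for implicit
% expansion of the 1xN time row against the 3x1 phase column).
f = 60; w = 2*pi*f; t = 0:1/(128*f):1/f; N = numel(t);
ph   = [0; -2*pi/3; 2*pi/3];
v_id = sin(w*t + ph);               % ideal voltages: fundamental, pos. seq.
i_id = v_id;                        % ideal currents: in phase (resistive)
v_m  = v_id;                        % measured voltages, assumed undistorted
i_m  = i_id + 0.2*sin(5*(w*t + ph));   % measured currents, 5th harmonic

D = zeros(3, 3, N);                 % distortion tensor, one 3x3 per sample
for n = 1:N
    P      = v_m(:,n) * i_m(:,n).';    % Step 1: real power tensor, eq. (4)
    Pideal = v_id(:,n)*i_id(:,n).';    % Step 2: ideal power tensor
    D(:,:,n) = P - Pideal;             % Step 3: distortion tensor
end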

3.1. The ideal power tensor

It represents the ideal power transference corresponding to the conditions of a circuit with a sinusoidal source at fundamental frequency, without unbalances or reactive power, feeding a resistive load. The equation is given by:

$\wp_{ij}^{ideal} = v_i^{ideal} \circ i_j^{ideal}$   (11)

The voltage and current vectors can be gathered by extracting the fundamental positive-sequence components from the voltage and current signals, using any preferred method; here, the method depicted in [11] was used.

3.2. The instantaneous power distortion tensor

The instantaneous power distortion tensor ∂ij describes exclusively the deviation of the power quality with respect to the ideal conditions of the system, and it is given by:

$\partial_{ij} = \wp_{ij} - \wp_{ij}^{ideal}$   (12)

For a load drawing sinusoidal, balanced voltage and current signals perfectly in phase, all the elements of ∂ij equal zero.

3.3. Spatial representation of ∂ij

In this approach, it is possible to construct a unitary cube having three director vectors given by the reference axes x̂, ŷ, ẑ, named e1, e2, e3, respectively. The column vectors d1, d2, d3 exert deforming forces on each external cube face, depending on their components, i.e. depending on the deviations of the power trade in the system. This cube is named the deformation cube, and its formulation is sketched in Figure 4. For these deformations to match reality they must be infinitesimal quantities; thus, in order to see them, it is necessary to scale them by a real positive number greater than one.

Figure 4: Geometrical representation of the deformation tensor forces in R3 space. Source: The authors.

3.4. Visualization process

The final goal is to visualize the paths generated by the distortion cube, identify their shape and, with this, characterize the load. Accordingly, it is necessary to write software applying vector algebra. Because the deformations are infinitesimal quantities, it is possible to shift the column vectors of the tensor ∂ij from their original positions to the corners of the unitary cube, as depicted in Figure 5. Then, having the vectors X_p, Y_p, Z_p, if the values of the distortion tensor are different from zero, an instantaneous deforming cube can be plotted, with origin at (-1, -1, 0); otherwise, the distortion cube will be the same unitary cube used as reference.
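Continuing the sketch given after the step list, Steps 4-5 can be approximated as follows. Note that this is only our reading of the visualization process: the paper selects the generator vector V_p from the distortion cube of Figure 6, whereas here, as a stand-in, one corner of the unitary cube is displaced by the sum of the scaled distortion columns, and the magnification factor is arbitrary:

k = 50;                            % arbitrary magnification (section 3.3)
path = zeros(3, N);
for n = 1:N
    % Step 4: displace a cube corner by the scaled distortion columns
    path(:,n) = [1; 1; 1] + k*(D(:,1,n) + D(:,2,n) + D(:,3,n));
end
plot3(path(1,:), path(2,:), path(3,:)); grid on; axis equal;
% Step 5: inspect the closed path (bumps, loops, departure from the
% alpha-beta plane) from several views to characterize the load.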



Figure 5: General vector representation of the deformation power tensor. Source: The authors.

3.5. Vector path generator

In order to obtain the three-dimensional paths, it is necessary to choose a vector of the distortion cube that allows drawing the paths corresponding to the deviations imposed by the load operation. Figure 6 shows the chosen vector V_p and, for simplicity, the resultant path corresponding to a load drawing three-phase current with the 5th harmonic. In Figure 6 the unitary cube is drawn with dashed lines and the deformed cube with solid lines; the generator vector comes from the latter. Under ideal load operation this vector does not follow a trajectory, because it is static due to the zero values in ∂ij. The trajectory drawn by the vector V_p can be gathered with either a single load or a combination of different loads (linear and non-linear).

Figure 6: Reference cube and the distortion cube, showing the generator vector and the path of a load that draws the 5th harmonic in its current. Source: The authors.

4. Simulation Results

In Figure 7(a), a linear, balanced, three-phase capacitive load has been characterized; because the load is capacitive, the generated path is an ellipse oriented to the left. In Figure 7(b), a linear, unbalanced, three-phase inductive load has been characterized; given that the load is inductive, the ellipse is oriented to the right. Notice that this method is able to show zero-sequence components, meaning unbalance or, in other cases, the 3rd harmonic (and its multiples). For this reason, and for convenience but not out of necessity, a semi-transparent αβ plane has been used, since the zero-sequence components cause the path to leave the αβ plane.

Figure 7: Paths for (a) a linear capacitive load; (b) a linear inductive and unbalanced load. Source: The authors.

Figure 8(a) shows the path corresponding to a three-phase load that causes the 5th and 7th harmonics in its current, with the 5th harmonic predominating, plus unbalance (zero sequence). Figure 8(b) shows a three-phase load with the 5th and 7th harmonics in its current, with the 7th harmonic predominating, and also a lagging displacement factor; here the zero-sequence components cause the path to leave the plane. It can also be seen that the predominant harmonic order establishes the number of bumps of the path, while the nature of the reactive power establishes the orientation of the path.




Figure 8: Four typical loads with a combination of different power quality issues. Source: The authors.

Moreover, it is possible to characterize loads with a complex combination of power issues, which is done by looking at the whole 3-D view of the generated trajectory. Additional inferences can be made from the 2-D views and by comparing the paths with known ones.

Figure 9: Characterization of a rolling mill. Source: The authors.

5. Experimental Results

Two real industrial loads with several power quality issues have been characterized. Measurements were made with the AEMC Power Pad-3945 and processed with MATLAB®, using the proposed methodology.

Figure 9 shows the characterization of a rolling mill; measurements were taken at the terminal block. This is a three-phase load with several harmonics, with the 7th harmonic predominating (the frequency spectrum of the phase-A current is shown, which allows comparisons). Again, the predominant harmonic order establishes the number of bumps: in the bottom right corner of Figure 9 a zoom of the X-Y view was performed, and seven bumps can be counted. The soft parts of the trace are due to the path crossing the plane, which indicates the presence of a 3rd harmonic or unbalance.

Figure 10 shows the characterization of a three-phase adjustable speed drive connected to a hoist. This load draws current with the 5th harmonic predominating, and shows evidence of zero sequence, since the path leaves the αβ plane. In the bottom right corner of Figure 10 a zoom of the X-Y view was performed; it can be seen that the 11th harmonic merely takes some symmetry away from the whole trajectory, but the five bumps corresponding to the predominant harmonic order can still be counted. It is important to clarify that plotting the voltage and current signals or the frequency spectrum is not needed: only the path is needed to characterize the load.

Figure 10: Characterization of a hoist adjustable speed drive. Source: The authors.

6. Conclusions



The proposed methodology is capable of depicting many phenomena due to the operation of three-phase loads. Odd harmonics are depicted precisely, with the number of bumps following the harmonic order. It is also the only one of the reviewed methods able to depict zero-sequence evidence, and it can give information about harmonic content when a load imposes several harmonics, establishing the dominant harmonic order with no need for the Fourier transform. With this methodology it is possible to characterize reactive, non-linear and unbalanced loads. Loads with a more complex combination of power issues can also be characterized, even for a system with different loads having different characteristics on each phase or having voltage distortion.

Although the proposed methodology needs the extraction of the fundamental components of the voltage and current signals to construct the ideal tensor, this does not mean that it loses its instantaneous character, because it can be assumed that the fundamental components do not change frequency or phase angle during the load operation, i.e. the ideal tensor remains approximately constant.

In this paper only current distortion effects were taken into account, since very low voltage distortion at the load side is the more common situation. However, with the increase of low-impedance non-linear loads, the voltage distortion may increase considerably; therefore, taking voltage distortion into account is necessary future work. Another line of future work is to use the capabilities of the proposed methodology for experiments and benchmarking in the areas of machine diagnosis and electric system monitoring.

Bibliography

[1] Yi, D., Liang, D., Bin, L., Harley, R.G. and Habetler, T.G., A review of identification and monitoring methods for electric loads in commercial and residential buildings, Energy Conversion Congress and Exposition (ECCE), 2010 IEEE, pp. 4527-4533, 12-16 Sept. 2010.
[2] Milanez, D.L. and Emmanuel, D.L., The instantaneous-space phasor: A powerful diagnosis tool, IEEE Transactions on Instrumentation and Measurement, 52 (1), pp. 143-148, 2003. DOI: 10.1109/TIM.2003.809069
[3] Zidani, F.M., Benbouzid, E.H., Diallo, D. and Nait-Said, M.S., Induction motor stator faults diagnosis by a current Concordia pattern-based fuzzy decision system, IEEE Transactions on Energy Conversion, 18 (4), pp. 469-475, 2003. DOI: 10.1109/TEC.2003.815832
[4] Verucchi, C.J. and Acosta, G.G., Técnicas de detección y diagnóstico de fallos en máquinas eléctricas de inducción, IEEE Latin America Transactions, 5 (1), March 2007.
[5] Gilreath, P., Peterson, M. and Singh, B.N., A novel technique for identification and condition monitoring of nonlinear loads in power systems, Power Electronics, Drives and Energy Systems (PEDES '06), International Conference, pp. 1-7, 12-15 Dec. 2006.
[6] Chao-Shun, Ch., Tsung-Hsien, W., Chung-Chieh, L. and Yenn-Minn, T., The application of load models of electric appliances to distribution system analysis, IEEE Transactions on Power Systems, 10 (3), pp. 1376-1382, Aug. 1995.
[7] Figueiredo, V., Rodrigues, F., Vale, Z. and Gouveia, J.B., An electric energy consumer characterization framework based on data mining techniques, IEEE Transactions on Power Systems, 20 (2), pp. 596-602, May 2005.
[8] Cano-Plata, E.A., Aplicaciones de la transformada ondita y la teoría de la potencia instantánea a la detección y clasificación de problemas de calidad de la potencia, PhD Tesis, Universidad de Buenos Aires, Buenos Aires, Argentina, 2006.
[9] Jagiela, K., Rak, J., Gala, M. and Kepinski, M., Identification of electric power parameters of AC arc furnace low voltage system, IEEE Conference Proceeding, 26-29 Sept. 2010, pp. 1-7, ISBN 978-1-4244-7244-4. DOI: 10.1109/ichqp.2010.5625439
[10] Ustariz-Farfán, A.J., Formulación de una teoría tensorial de la potencia eléctrica: aplicaciones al estudio de la calidad de la energía, PhD Tesis, Universidad Nacional de Colombia, Sede Manizales, Colombia, 2011.
[11] Ustariz-Farfán, A.J., Cano-Plata, E.A. and Tacca, H.E., New deviation factor of power quality using tensor analysis and wavelet packet transform, International Conference on Power Systems Transients (IPST2011), Delft, Netherlands, June 14-17, 2011.

O.A. Trujillo-Orozco, was born in Fresno, Tolima, Colombia, in November 1979. He received his BSc. degree in Electrical Engineering in 2013 from the Universidad Nacional de Colombia, Manizales campus, where he is currently working towards an MSc. degree in Engineering - Electrical Engineering.

Y.A. Garcés-Gómez, was born in Manzanares, Caldas, Colombia, in 1983. He received his BSc. degree in Electronic Engineering in 2009 from the Universidad Nacional de Colombia, Manizales campus. Between 2009 and 2011 he held a Colciencias scholarship for postgraduate studies in engineering - industrial automation at the Universidad Nacional de Colombia, where he is currently working towards a PhD degree in engineering.

A.J. Ustariz-Farfán, was born in Urumita, Colombia, in 1973. He received a BSc. degree in Electrical Engineering in 1997 and an MSc. in Electric Power in 2000, both from the Universidad Industrial de Santander, Colombia, and a PhD degree in Electrical Engineering from the Universidad Nacional de Colombia in 2011. He is a researcher and associate professor with the Electrical, Electronic and Computer Engineering Department, Universidad Nacional de Colombia, Manizales campus.

E.A. Cano-Plata, was born in Neiva, Colombia, in 1967. He received his BSc. and Sp. Engineering degrees in 1990 and 1994 from the Universidad Nacional de Colombia, Manizales, both in Electrical Engineering. Between 1996 and 1998 he held a DAAD scholarship for postgraduate studies in Electrical Engineering at the Universidad Nacional de San Juan, Argentina, and he received his Dr. degree in Engineering in 2006 from the Universidad de Buenos Aires, Argentina. Since 1994 he has been a titular professor at the Universidad Nacional de Colombia, Manizales, Colombia.



Analysis of voltage sag compensation in distribution systems using a multilevel DSTATCOM in ATP/EMTP

Herbert Enrique Rojas-Cubides a, Audrey Soley Cruz-Bernal b & Harvey David Rojas-Cubides c

a Electrical Engineering Department, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia. herojasc@udistrital.edu.co
b Center of Electricity, Electronic and Telecommunications, Servicio Nacional de Aprendizaje SENA, Colombia. ascruz57@misena.edu.co
c Center of Electricity, Electronic and Telecommunications, Servicio Nacional de Aprendizaje SENA, Colombia. davidrc@misena.edu.co

Received: April 29th of 2014. Received in revised form: January 27th of 2015. Accepted: June 30th of 2015.

Abstract
Voltage sags are the most common power quality disturbances in electrical facilities. They may cause malfunctions in sensitive equipment and process interruptions. The distribution static compensator (DSTATCOM) is a device that can compensate voltage sags by injecting reactive power into distribution systems. This paper shows the influence of a twelve-pulse DSTATCOM on the characteristics of voltage sags in the modified IEEE-13 distribution system. The analysis is performed by means of a random generation of disturbances, using a MATLAB routine, to identify the critical buses of the test system. Further, the DSTATCOM model, built with the elements available in the ATP/EMTP software, is described. Simulations show that when the DSTATCOM is connected directly to an affected bus, complete mitigation of the voltage sag can be obtained. Finally, the relation between the reactive power injected by the DSTATCOM, the type of voltage sag and the location of the affected bus is considered.
Keywords: voltage sags; DSTATCOM; voltage compensation; ATP/EMTP; power quality

Análisis de la compensación de hundimientos de tensión en sistemas de distribución usando un DSTATCOM multinivel en ATP/EMTP Resumen Los hundimientos de tensión (sags) son perturbaciones de calidad de potencia comunes en los sistemas eléctricos. Estos pueden causar daños en equipos sensibles y la interrupción de procesos. El compensador estático de distribución (DSTATCOM) es un dispositivo que puede compensar sags inyectando potencia reactiva al sistema. Este artículo muestra la influencia que tiene la conexión de un DSTATCOM de 12-pulsos sobre los sags que se presentan en el sistema de distribución IEEE-13 modificado. El análisis se realiza mediante la generación aleatoria de perturbaciones usando una rutina de MATLAB para identificar los nodos críticos del sistema de prueba. Además, se describe el modelo del DSTATCOM usando los elementos disponibles en el software ATP/EMTP. Las simulaciones muestran que cuando el DSTATCOM se conecta al nodo afectado es posible mitigar el sag completamente. Finalmente, se considera la relación entre la potencia reactiva inyectada por el DSTATCOM, el tipo de sag y la ubicación del nodo afectado. Palabras clave: hundimientos de tensión (sags); DSTATCOM; compensación de tensión; ATP/EMTP; calidad de potencia

1. Introduction

Voltage magnitude is one of the major factors that influence power quality conditions in distribution systems. Voltage sags are among the most frequent and one of the main power quality (PQ) problems in power systems [1]. They are defined as a decrease in RMS voltage at power frequency, to between 0.1 and 0.9 p.u. of the nominal value, with durations from 0.5 to 30 cycles [2]. Usually this disturbance is characterized by the retained voltage (depth), its duration and the phase jump. Furthermore, its presence increases the current in remote locations of an electrical system, produces malfunctions or interruptions of sensitive equipment, leads to complete interruptions of industrial processes and affects the proper operation of electrical systems [3].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 26-36. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48566



The most severe voltage sags are caused by faults and short-circuits, generally associated with bad weather conditions (i.e. lightning strokes, storms, wind, etc.), transformer energizing, motor starting, overloads and other load variations. Although voltage sags are less harmful than interruptions, they are more frequent; for this reason their effects can be as important as those produced by an interruption. Currently, several works have been developed to prevent or reduce the effect of voltage sags in distribution systems. Most of these works focus on installing mitigation devices and use conventional methods such as capacitor banks, the introduction of new parallel feeders (distributed generation, DG) and uninterruptible power supplies (UPS). However, due to the high costs of these alternatives and their uncontrollable reactive power compensation, PQ problems are not solved completely.

The distribution static compensator (DSTATCOM) is a shunt-connected compensation equipment capable of generating and absorbing reactive power. This equipment can be used to mitigate voltage sags and for other PQ solutions such as power factor correction, voltage stabilization, flicker suppression and harmonic control [4,5]. In addition, the DSTATCOM has the capability to sustain reactive current at low voltage, requires reduced land use and can be developed as a voltage and frequency support by replacing the capacitors with batteries as energy storage [6].

In this paper, the analysis of voltage sag compensation on the modified IEEE-13 (IEEE-13M) bus test feeder using a twelve-pulse DSTATCOM is presented. To understand the DSTATCOM operation and the IEEE-13M system response, modeling and digital simulations are made with the Alternative Transients Program (ATP/EMTP). Further, the relation between the reactive power injected by the DSTATCOM, the sag features and the location of the affected buses is considered.

The paper is structured as follows: the ATP/EMTP model of the IEEE-13M bus test feeder and its main features are detailed in section 2. Section 3 describes the method used to identify the critical buses that most affect the voltage profiles of the IEEE-13M system. The case studies (under voltage sag conditions), obtained by a scheme of random generation of disturbances, are presented in section 4. The configuration and operation of the DSTATCOM are briefly explained in section 5. Section 6 shows the model parameters of the DSTATCOM in ATP/EMTP. Simulation results and the analysis of the DSTATCOM connection and its influence on the voltage profiles of the test system are presented in section 7. Finally, in section 8, some conclusions of this work are presented.

2. Description and modeling of test system

In order to analyze the DSTATCOM impact on voltage sags in distribution systems, this work performed simulations based on the IEEE-13 bus test feeder. This system consists of 13 buses interconnected by means of 10 lines (overhead and underground, with a variety of phasing), one generation unit, one voltage regulator unit consisting of three single-phase units connected in wye, one main ∆-Y 115/4.16 kV transformer (substation), one in-line transformer to 4.16/0.480 kV, two shunt capacitor banks, and unbalanced spot and distributed loads [7]. On the other hand, in steady state the IEEE-13 bus system presents three types of disturbances: (a) voltage imbalances, (b) load unbalance, and (c) reactive power flows. Fig. 1 shows the scheme of the IEEE-13 system.

Figure 1. Configuration of IEEE-13 bus test feeder. Source: the authors

Because the IEEE-13 bus test system is a distribution network with a radial configuration, any fault produced near the main transformer (bus 650) significantly affects all voltage profiles of the system. This behavior makes it easy to identify the critical buses that, under fault conditions, most disturb the voltage profiles of the nearby buses. To avoid this condition, the IEEE-13 system is modified by connecting another substation-regulator unit at bus 680 (where no loads are connected) with the same characteristics as the original one. Fig. 2 shows the configuration of the IEEE-13 modified test system (IEEE-13M). The use of a new generation unit guarantees that the critical buses are distributed at different points of the system, reducing the importance of buses 650 and 632 in the study of the effects produced by voltage sags. Although several tools have been used to simulate voltage sags, this paper is based on the development of models in ATP/EMTP and its ability to analyze systems in the time domain; the capability of this software to analyze voltage sags is shown in [8].

Figure 2. Configuration of IEEE-13M distribution system. Source: the authors

Table 1. Fault types
Fault Type                    Involved Phases   Fault Code (F)
Single line-to-ground (LG)    A                 1
                              B                 2
                              C                 3
Line-to-line (LL)             AB                4
                              BC                5
                              AC                6
Double line-to-ground (2LG)   AB                7
                              BC                8
                              AC                9
Three line (3L-3LG)           ABC               10
                              ABC to ground     11
Source: the authors

3. IEEE-13M critical buses identification

The critical bus of a system is a load connection point where the occurrence of faults will cause the reduction of voltage profiles in all buses of the system and produce deeper voltage sags. To analyze how much the IEEE-13M system is affected by the occurrence of a fault condition of type F at a specific bus k, the following local function is used:

$f_{F,k} = \sum_{i} \sum_{p} \left( V_{i,p}^{ref} - V_{i,p}^{F,k} \right)^2$   (1)

where F is the shunt fault type according to the information given in Table 1, k is the bus where the fault is produced, i is each bus of the system, p is the phase, V_{i,p}^{ref} is the reference (pre-fault) voltage in p.u. and V_{i,p}^{F,k} is the voltage in p.u. under the fault condition. The use of a squared difference in (1) prevents negative values in the local function when the fault voltage exceeds the reference. Eq. (1) shows that the local function is greater when a specific fault at a specific bus produces deeper voltage sags. For each bus of the IEEE-13M system, simulations for all fault types connecting an impedance-to-ground equal to zero are performed. In total, 102 cases are simulated, considering that the IEEE-13M system is unbalanced and some buses only have one or two phases.

In this paper, an overall function is proposed to assess the complete effect that each bus has on the other buses of the IEEE-13M system. This function takes into account all the local functions obtained by fault type for each bus of interest:

$OF_k = \sum_{F=1}^{11} f_{F,k}$   (2)

where f_{F,k} is the local function for each fault condition F at bus k. Table 2(a) shows an example of the local functions for bus 632 by fault type and its overall function (112.017). In addition, Table 2(b) shows a summary of all the overall functions of the IEEE-13M buses, organized in descending order.

Table 2. Critical buses identification: (a) local functions and overall function for bus 632, (b) overall functions for IEEE-13M buses
(a) Bus 632                        (b)
Fault Code (F)   Local function    Affected Bus   OF
1                5.207             671 - 692      121.174
2                6.624             632            112.017
3                7.966             680            101.604
4                12.378            675            86.287
5                14.144            633            79.702
6                12.874            650            74.157
7                4.073             684            25.123
8                4.370             634            24.508
9                4.177             645            22.038
10               19.751            646            16.335
11               20.389            611            3.742
OF               112.017           652            2.283
Source: the authors

Since the overall function is a parameter that totalizes the effect of the local functions by fault type, a critical bus can be considered as the one with the largest overall function. In the case of the IEEE-13M test system, the most critical is bus 671, with an overall function of 121.174; on the other hand, the least critical is bus 652, whose overall function is 2.283. Observing the IEEE-13M topology shown in Fig. 2, it is possible to note that bus 671 is the most critical because it is a central bus located in a zone with many lines and spot-load connections. To analyze the response of the test system in the presence of voltage sags using the DSTATCOM, in this paper the five buses with the largest overall functions are selected as the most critical: 671, 632, 680, 675 and 633. However, depending on the number of buses in the system, or for different case studies, other selection criteria can be used.
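A compact MATLAB sketch of eqs. (1)-(2) follows (ours, for illustration; the voltage arrays are random stand-ins for the per-phase p.u. profiles recorded from the ATP/EMTP runs, with NaN marking non-existent phases):

nBus   = 13;
Vref   = ones(nBus, 3);                  % pre-fault p.u. voltages
Vfault = 0.75 + 0.25*rand(nBus, 3, 11);  % one page per fault code F

f = zeros(1, 11);                        % local functions, eq. (1)
for F = 1:11
    d2   = (Vref - Vfault(:,:,F)).^2;    % squared deviations, bus x phase
    f(F) = sum(d2(:), 'omitnan');        % NaN-safe for missing phases
end
OF = sum(f);                             % overall function, eq. (2)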

4. Case studies

In the literature, several alternatives to analyze and assess the number of voltage sags and the impact produced by faults in power systems have been proposed.



The most widely used are the "method of fault positions" and the "method of critical distances"; for more details see Refs. [3,9-12]. In this paper, a method of stochastic generation of disturbances, based on the distribution system model and statistical data of faults, is applied [13]. Although this method cannot determine the exact number of voltage sags that may occur in the IEEE-13M system, it enables the voltage profile of the test system produced by voltage sags to be determined. The method selects the location of the disturbance (among the critical buses identified in section 3), the fault type and the fault resistance, based on the generation of random numbers with a probability density. In addition, the method assumes that sags are originated only by faults, and that they are rectangular.

The test system has been simulated in 200 different voltage sag scenarios. The characteristics of the faults have been randomly generated with a Matlab® function using the following parameters (a sketch of the sampling rules is given after Table 3):
 Location of fault: a fault may occur at any critical bus (B671, B632, B680, B675 and B633) or at 25%, 50% or 75% of any line connecting the critical buses (L632-633, L632-645, L632-671, L671-675, L671-680, L671-684 and LRG60-632). The density function for buses and lines is 60% and 40%, respectively.
 Probability of fault types: LG: 63.4%; LL and 2LG: 22.1%; 3L and 3LG: 14.5%.
 Fault resistance: eight different values between 0.04 Ω and 0.6 Ω are established; the fault resistance is selected by a uniform density function.
 Initial time and duration of the fault: for all simulations the start time is 50 ms and the duration of the fault is 10 cycles.

The simulation procedure can be summarized as follows: every time the Matlab® function is run, several quantities are randomly generated (location of fault, type of fault and fault resistance). These parameters are modified in the ATP/EMTP simulation, and the voltage profiles at all buses of the system are recorded. To improve the simulation speed, no protection devices have been included and the effect of transformer saturation is neglected. For each simulation, the fault configuration results are used to estimate a new local function; Eq. (1) is then used to analyze the difference between the voltage profiles before and after the voltage sag condition. Table 3 shows three of the worst cases (those with the largest local functions) presenting voltage sag conditions in the IEEE-13M system; these are the case studies in which the DSTATCOM for voltage sag mitigation will be evaluated. Taking into account that the presence of voltage sags in the system has a random nature and that the most critical conditions are produced by three-line faults, these case studies are valid examples of the IEEE-13M system response.

Table 3. Case studies for DSTATCOM implementation
Case   Bus    Fault Type   Resistance Value [Ω]   Local Function
1      N633   3LG - 11     0.04                   11.913
2      N675   3L - 10      0.04                   11.556
3      N632   3LG - 11     0.10                   10.857
Source: the authors
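For illustration, the sampling rules listed above can be written as the following MATLAB fragment (ours; the variable names are hypothetical, and only the probabilities and value sets come from the paper):

buses = {'B671','B632','B680','B675','B633'};
lines = {'L632-633','L632-645','L632-671','L671-675', ...
         'L671-680','L671-684','LRG60-632'};
pType = [0.634 0.221 0.145];         % LG / (LL and 2LG) / (3L and 3LG)
Rvals = linspace(0.04, 0.6, 8);      % eight admissible fault resistances

if rand < 0.60                       % 60 % of faults at a critical bus
    loc = buses{randi(numel(buses))};
else                                 % 40 % along a line, at 25/50/75 %
    loc = sprintf('%s @ %d%%', lines{randi(numel(lines))}, 25*randi(3));
end
typeGrp = find(rand < cumsum(pType), 1);   % fault-type group
Rf      = Rvals(randi(numel(Rvals)));      % uniform over the eight values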

5. Configuration and operation of DSTATCOM

5.1. Basic structure of DSTATCOM

The distribution static compensator (DSTATCOM) is a shunt-connected reactive compensation device that can be used to improve the PQ in power systems. This device is connected near or directly to the load buses of distribution systems; its installation can completely mitigate voltage sags and may reduce the number of sensitive equipment failures [6]. The basic structure of a DSTATCOM is shown in Fig. 3. It consists of a DC energy storage device (DC source or DC capacitor), a three-phase inverter module (based on IGBTs, thyristors, etc.), a control stage, an AC filter and a step-up coupling transformer [14].

Figure 3. Basic structure of DSTATCOM. Source: the authors

The main unit of the DSTATCOM is the voltage source converter (VSC), which converts the DC voltage across the storage device into a set of three-phase AC output voltages. These voltages are in phase with, and coupled to, the distribution system through the reactance of the coupling transformer [5]. The VSC is used to completely replace the voltage or to inject the difference between the rated voltage and the voltage during the sag condition. Usually, the DSTATCOM configuration consists of a conventional six-pulse inverter arrangement; more sophisticated configurations use multi-pulse or multi-level arrangements. In this paper a twelve-pulse inverter configuration is used: this arrangement has two six-pulse inverters connected through two transformers with their primaries connected in series.

On the other hand, the control stage corrects the voltage drop at the affected bus by adjusting the magnitude and phase of the DSTATCOM output voltage during the voltage sag condition. Finally, the AC filter provides the signal conditioning, and the coupling transformer allows the transfer of energy between the network and the power converter device.




In high-voltage systems, the leakage inductances of the power transformers can function as coupling reactances and can also filter the harmonic current components produced by the power inverter module.

5.2. Operation of DSTATCOM

The DSTATCOM is a solid-state device with the ability to control the voltage magnitude and the phase angle. For this reason it can be treated as a voltage-controlled source, but it can also be seen as a controlled current source. The block diagram of the DSTATCOM and its connection scheme is shown in Fig. 4. The controller regulates the reactive current that flows between the compensator and the distribution system: the phase angle δ between the output voltage of the DSTATCOM (V_out) and the power system voltage (V_s) is dynamically adjusted so that the DSTATCOM absorbs or generates the desired reactive power at a specific connection point. The active power (P) and reactive power (Q) that flow through the reactance X of the coupling transformer can be estimated using the following equations:

$P = \frac{|V_{out}|\,|V_s|}{X}\,\sin\delta$   (3)

$Q = \frac{|V_{out}|\,|V_s|\,\cos\delta - |V_s|^2}{X}$   (4)

Figure 4. Block diagram of DSTATCOM. Source: the authors

The expressions presented in (3) and (4) show that the basic operation of the DSTATCOM varies depending on the phase angle δ between V_out and V_s and on the voltage magnitudes. The operation modes of this device are as follows:
 If |V_out| = |V_s|, the reactive power exchange is zero and the DSTATCOM does not generate or absorb reactive power.
 When |V_out| > |V_s|, the current flows from the DSTATCOM to the power system. In this condition the system sees the compensator as a capacitance, and the DSTATCOM generates reactive power.
 When |V_out| < |V_s|, the current flows from the power system to the DSTATCOM. In this condition the system sees the compensator as an inductance connected to its terminals, and the DSTATCOM absorbs reactive power.
 When the phase angle δ = 0, the active power exchange is zero. This can be achieved with the control stage.

6. DSTATCOM modeling on ATP/EMTP

In this paper, the DSTATCOM model is a three-phase, twelve-pulse inverter connected to the distribution system through a coupling transformer. Fig. 5 shows the ATP/EMTP model of the compensator. In this section, the elements that conform each stage of the compensator are described.

Figure 5. Scheme of 12-pulse DSTATCOM in ATPdraw. Source: the authors

6.1. DC energy storage device

The DC voltage storage device is modeled as a DC source connected in parallel with a capacitor C_DC. The importance of the capacitor sizing is that this element stores the energy needed to mitigate the voltage sag, which the DSTATCOM uses to inject reactive power. To determine the size of this capacitor, it is necessary to determine the fault current in the system, defined as the difference between the current before and after the sag condition [15]. Thus, the following equation is used for a three-phase system [16]:

$C_{DC} = \frac{3\, V_{peak}\, \Delta I_{load}\, T}{V_{DCmax}^2 - V_{DC}^2}$   (5)

where V_DC is the voltage across C_DC per phase, V_peak is the peak voltage per phase, ΔI_load is the difference between the load current before and after the fault condition, T is the period of one cycle of voltage and current, and V_DCmax is the upper limit of the energy-storage voltage in C_DC per phase. The value of ΔI_load can be determined by measuring the load current before and during the voltage sag [16], and it is obtained from the ATP/EMTP simulation for each case study. The value of V_DCmax is the upper limit of the V_DC voltage, which in this case can be two or three times V_DC.



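Eq. (5) can be checked against the two case studies reported in section 7; the short MATLAB verification below is ours, with all values taken from the paper (the reactive power line uses eq. (6), introduced later in section 7):

Vdcmax = 15000;  T = 16.67e-3;            % common to both case studies
Vpeak  = [20900 13971];                   % peak voltage per phase [V]
dI     = [15.27 104.8];                   % load-current difference [A]
Vdc    = [12000 11000];                   % DC storage voltage [V]

Cdc = 3*Vpeak.*dI*T ./ (Vdcmax^2 - Vdc.^2);   % eq. (5)
disp(Cdc*1e6)                             % ~[197  704] microfarads

w = 377;  Vll = 4160;                     % rad/s, line-to-line volts
Q = w*Cdc*Vll^2;                          % eq. (6)
disp(Q/1e3)                               % ~[1285 4593] kVAR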

6.2. Converter/Inverter design

The basic configuration of the DSTATCOM is the twelve-pulse inverter arrangement shown in Fig. 5. This configuration uses two six-pulse inverters connected in parallel, sharing the same DC source. The arrangement has two transformers with their primaries connected in series: the first transformer is in Y-Y connection and the second one is in Y-∆ connection, delayed 30° with respect to the first [17]. The twelve-pulse inverter removes the 5th and 7th harmonics, keeping only the harmonic orders 12k ± 1 for k = 1, 2, …

For each six-pulse inverter, six bidirectional semiconductors are used, which can be IGBTs or GTOs. However, for the DSTATCOM model presented in this paper, ideal type-13 switches controlled by TACS are used as valves, with a series resistance representing the inverter losses connected after each one. It is important to emphasize that the inverter model includes a snubber circuit, with diodes connected in antiparallel to each type-13 switch; this circuit is used to reduce the dv/dt and di/dt caused by the switch commutations. Fig. 6 illustrates the ATP/EMTP configuration of one branch of a six-pulse inverter: the series resistance represents the inverter losses, and a further resistance and capacitor complete the snubber circuit.

Figure 6. Configuration and values of a six-pulse inverter branch. Source: the authors

6.3. Controller scheme

The control stage used in the simulations is divided into two sections. The first part controls the voltage of the DSTATCOM by varying the DC voltage of the capacitor that stores the energy. The second part controls the phase shift of the DSTATCOM output voltages; this control is achieved by varying the switching angle of each type-13 switch using a TACS-Source-Pulse-23, as shown in Fig. 6. In this paper, the DSTATCOM controller is configured in a discrete form: for each case, the voltage magnitude of V_DC and the commutation angles of the electronic devices must be set manually. Besides, to obtain a reactive power flow from the DSTATCOM to the IEEE-13M system, the compensator voltage should be greater than the system voltage. Finally, to reduce the active power exchange, the output voltages of the compensator lag the system voltages by a small angle.

6.4. Configuration of coupling transformer

This device is a Y-∆ transformer that allows the energy exchange between the AC system and the DSTATCOM. In applications that include electronic devices, the coupling transformer is used to adapt the circuit impedances, change the values of the inverter output voltages and connect to the next stage.

6.5. Harmonic filter

Because the DSTATCOM output voltages are signals composed of rectangular pulses, an LC filter is implemented to reduce the harmonic distortion. The values of the inductance and the capacitance of the filter are 10 mH and 253 µF, respectively. Fig. 7 shows the frequency response of the LC filter, with a cut-off frequency of 100 Hz. In addition, the input signal (the DSTATCOM output voltages) and the output signal of the LC filter are illustrated in Fig. 8.

Figure 7. LC filter response. Top: magnitude in dB; bottom: phase in degrees. Source: the authors

Figure 8. Filter input and output signals: (a) input voltage phase A, (b) input voltage phase B, (c) input voltage phase C, (d) output three-phase voltage. Source: the authors
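As a quick consistency check (ours), the 100 Hz cut-off quoted for Fig. 7 follows directly from the stated element values:

L  = 10e-3;  C = 253e-6;          % filter inductance and capacitance
fc = 1/(2*pi*sqrt(L*C));          % = 100.06 Hz, matching Fig. 7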




The THDv of the output three-phase voltages is 0.283%; because a twelve-pulse DSTATCOM is used in this paper, the THDv is small.

Table 4. Voltage profiles of IEEE-13 and IEEE-13M systems without DSTATCOM (voltages in p.u.; "-" denotes a phase that does not exist at that bus)
        IEEE-13 Report        IEEE-13 ATP           IEEE-13M ATP
Bus     A     B     C         A     B     C         A     B     C
650     1.00  1.00  1.00      0.97  0.97  0.96      0.97  0.97  0.97
RG60    1.06  1.05  1.07      1.05  1.06  1.05      1.05  1.05  1.05
632     1.02  1.04  1.02      1.02  1.04  1.00      1.03  1.03  1.02
633     1.02  1.04  1.02      1.01  1.04  1.00      1.02  1.03  1.02
634     0.99  1.02  0.99      0.99  1.02  0.98      1.02  1.03  1.02
645     -     1.03  1.01      -     1.03  1.00      -     1.02  1.02
646     -     1.03  1.01      -     1.03  1.00      -     1.02  1.02
671     0.99  1.05  0.98      0.99  1.05  0.97      1.01  1.02  1.00
680     0.99  1.05  0.98      0.99  1.05  0.97      1.01  1.02  1.01
684     0.99  -     0.98      0.99  -     0.97      1.01  -     1.00
611     -     -     0.97      -     -     0.97      -     -     1.00
652     0.98  -     -         0.99  -     -         1.00  -     -
692     0.99  1.05  0.98      0.99  1.05  0.97      1.01  1.05  1.00
675     0.98  1.06  0.98      0.99  1.05  0.97      1.00  1.02  1.00
Source: the authors

7. Simulation results

7.1. Reference case: simulations without DSTATCOM

Initially, the IEEE-13 and IEEE-13M test systems have been verified without installing the DSTATCOM. Table 4 shows the voltage profiles of the IEEE-13 system presented in [7] and the results provided by the ATP/EMTP simulations for the IEEE-13 and IEEE-13M systems. This table illustrates that all buses of the IEEE-13 system have voltages between 0.96 and 1.06 p.u., while the buses of the IEEE-13M system have voltages between 0.97 and 1.05 p.u. Note that, even with the additional generation unit, the maximum absolute error obtained from the simulations is 3.5%. This error supports the validity of the distribution system modeling.

7.2. Case studies using DSTATCOM under voltage sag conditions

In order to evaluate the performance of the DSTATCOM and the response of the IEEE-13M system under voltage sag conditions, the following procedure is developed for each case study defined in section 4:
 Step 1: Connect the DSTATCOM at the bus in which the voltage sag occurs (the bus with the deepest voltage sag).
 Step 2: Adjust the angles of the DSTATCOM output voltages with respect to the distribution system voltages, to reduce the active power exchange.
 Step 3: Change the capacitor voltage V_DC in order to find a solution in which the local function described in (1) is the smallest possible and the voltage of the IEEE-13M buses satisfies the condition V ≥ 0.9 p.u.
 Step 4: Determine the value of C_DC from the V_DC value and the output voltages of the DSTATCOM.
 Step 5: Calculate the reactive power injected by the DSTATCOM that mitigates the voltage sag condition in the IEEE-13M system.

7.2.1. Case study 1: voltage sag condition at bus 633

In this case, a three line-to-ground fault (type 11) at bus 633 with a fault resistance of 0.04 Ω is analyzed. This scenario has a local function of 11.913 and produces an average voltage sag per phase of 0.104 p.u. To reduce the active power exchange, the output voltages of the system lag the DSTATCOM voltages by a small angle, as shown in Table 5.

Table 5. DSTATCOM angles adjustment for case 1
                         Angles (degrees)
                         Phase A   Phase B   Phase C
Sag condition (system)   -71.5     -168.6    48.2
DSTATCOM                 -70       -170      50
Source: the authors

After the angle adjustment, V_DC is varied from 2 kV to 14 kV to obtain a reduction in the local function; this variation is illustrated in Fig. 9. In this case, the lowest local function obtained is 0.008, applying a DC voltage of 12 kV. The use of the compensator improves the voltage of bus 633 (the affected bus), reaching an average value of 1.018 p.u. per phase. In addition, all voltages of the IEEE-13M system are also improved, achieving a minimum value of 0.936 p.u. at bus 650-phase B and a maximum value of 1.068 p.u. at bus 632-phase A. Fig. 10 shows the impact of the DSTATCOM connection on the voltage profiles (average per phase) of the IEEE-13M system obtained with ATP/EMTP. The RMS voltage per phase at bus 633, before and after the connection of the DSTATCOM, is shown in Table 6. Note that the compensator connection mitigates the voltage sag completely in all phases of the affected bus.


Figure 9. Local function vs. DC voltage for case 1. Source: the authors

Table 7. DSTATCOM angles adjustment for case 2
                          Angles (degrees)
                        Phase A   Phase B   Phase C
Sag condition (system)   -66.6    -174.7     53.5
DSTATCOM                 -70      -170       50
Source: the authors

Figure 10. Voltage profiles of IEEE-13M system for case 1 (average voltage [p.u.] vs. IEEE-13M buses: before voltage sag, sag condition without DSTATCOM, and sag condition with DSTATCOM). Source: the authors

Table 6. Voltage comparison at bus 633 for case 1 [p.u.]
                    Phase A   Phase B   Phase C
Without DSTATCOM     0.107     0.105     0.099
With DSTATCOM        1.025     1.017     1.014
Source: the authors

Figure 11. Local function vs. DC voltage for case 2. Source: the authors

Figure 12. Voltage profiles of IEEE-13M system for case 2 (average voltage [p.u.] vs. IEEE-13M buses: before voltage sag, sag condition without DSTATCOM, and sag condition with DSTATCOM). Source: the authors

For this condition, the reported design values are 15,000, 20.9, Δ = 15.27, Δt = 16.67 ms and VDC = 12 kV. From these values, the CDC size obtained using Eq. 5 is 197 µF. Finally, the reactive power of the DSTATCOM is calculated as follows:

$Q_{DSTATCOM} = \omega \, C_{DC} \, V_{LL}^{2}$   (6)

where ω = 377 rad/s and VLL is the nominal line-to-line voltage of the system at the point of connection. For the IEEE-13M system, VLL = 4.16 kV. Using Eq. 6, the rated reactive power injected by the DSTATCOM is 1285 kVAR.
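As a quick numerical check of Eq. (6) with the case 1 values:

$Q = 377\ \mathrm{rad/s} \times 197\ \mu\mathrm{F} \times (4160\ \mathrm{V})^{2} \approx 1.285\times10^{6}\ \mathrm{VAR} = 1285\ \mathrm{kVAR}$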

7.2.2. Case study 2: voltage sag condition at bus 675

An average voltage sag of 0.123 p.u. due to a three-line fault (type 10) at bus 675 with a fault resistance of 0.04 Ω is presented. The angles of the DSTATCOM output voltages are adjusted with respect to the power system angles, as shown in Table 7. In this case, the DC voltage is varied from 2 kV to 13 kV and the local function is reduced from 11.556 to 0.038, applying 11 kV. The behavior of the local function with respect to VDC is shown in Fig. 11.





Table 8. Voltage comparison at bus 675 for case 2 [p.u.]
                    Phase A   Phase B   Phase C
Without DSTATCOM     0.124     0.127     0.118
With DSTATCOM        0.953     0.952     0.948
Source: the authors

Table 9. DSTATCOM angles adjustment for case 3
                          Angles (degrees)
                        Phase A   Phase B   Phase C
Sag condition (system)   -54.2    -174.2     65.4
DSTATCOM                 -60       180       60
Source: the authors

Figure 13. Local function vs. DC voltage for case 3. Source: the authors


The effect of the DSTATCOM connection on the voltage profiles (average per phase) of the IEEE-13M system is illustrated in Fig. 12. For this condition, the minimum voltage of the IEEE-13M system is 0.921 p.u. at bus 650-phase B and the maximum voltage is 1.019 p.u. at bus 680-phase B. Table 8 presents the RMS voltage per phase at bus 675 during the voltage sag condition with and without the DSTATCOM. It can be observed that the compensator connection improves the voltage of bus 675, obtaining an average increase of 0.828 p.u. with respect to the voltage sag. From the ATP/EMTP simulation, the compensator features are 15,000, 13,971, ΔV = 104.8, Δt = 16.67 ms and VDC = 11 kV. Using Eq. 5, the calculated capacitance value is CDC = 704.06 µF. Finally, from Eq. 6, the injected reactive power of the DSTATCOM is 4593 kVAR.

Figure 14. Voltage profiles of IEEE-13M system for case 3 (average voltage [p.u.] vs. IEEE-13M buses: before voltage sag, sag condition without DSTATCOM, and sag condition with DSTATCOM). Source: the authors

7.2.3. Case study 3: voltage sag condition at bus 632


The voltage sag occurring at bus 632 is due to a three line-to-ground fault (type 11) with a resistance of 0.10 Ω. The local function in this scenario is 10.857, and the fault produces an average voltage sag of 0.309 p.u. Table 9 shows the adjustment of the DSTATCOM phase angles with respect to the angles of the IEEE-13M system during the voltage sag condition. This adjustment reduces the active power flow between the compensator and the IEEE-13M system. From the simulations, the local function is reduced from 10.857 to 0.028, applying a DC voltage of 5 kV. Fig. 13 shows the variation of the local function with respect to VDC. The response of the IEEE-13M voltage profiles before and after the DSTATCOM installation is shown in Fig. 14. In this case, the compensator improves the voltage of the affected bus (bus 632), which reaches an average value of 1.067 p.u. With these changes, all voltages of the IEEE-13M system are also improved, achieving a minimum value of 0.945 p.u. at bus 650-phase B and a maximum value of 1.074 p.u. at bus 632-phase B. The RMS voltages of the bus affected by the sag condition using the DSTATCOM are presented in Table 10. Note that the voltage sag is mitigated by about 0.759 p.u. with respect to the disturbance value (sag condition) when the compensation device is connected.

Table 10. Voltage comparison at bus 632 for case 3 [p.u.]
                    Phase A   Phase B   Phase C
Without DSTATCOM     0.312     0.318     0.296
With DSTATCOM        1.068     1.074     1.060
Source: the authors

In the DSTATCOM, the capacitor is used to inject reactive power into the system during the voltage sag condition. For this condition, the reported design values are 15, 6.27, ΔV = 47, Δt = 16.67 ms and VDC = 5 kV. From these values, the CDC value is 73.71 µF and the injected reactive power of the DSTATCOM is 480.9 kVAR.

8. Results comparison

A summary with the location of each bus under sag condition, the capacitor CDC and the reactive power injected by the DSTATCOM is shown in Fig. 15. Simulations show that the compensator injects the largest reactive power in case 2, with 4593 kVAR, while in cases 1 and 3 the compensator injects 1285 kVAR and 481 kVAR, respectively. From these results, it is possible to notice that the DSTATCOM injects more reactive power in those cases where the bus affected by the voltage sag is farthest from the generation zones.
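A minimal MATLAB check of Eq. (6) across the three cases (the vectorized form below is illustrative, not the authors' code):

% Reactive power ratings of the three cases from Eq. (6)
w   = 377;                           % angular frequency [rad/s]
Vll = 4160;                          % nominal line-to-line voltage [V]
Cdc = [197e-6 704.06e-6 73.71e-6];   % DC capacitors, cases 1-3 [F]
Q_kVAR = w * Cdc * Vll^2 / 1e3       % -> approx. [1285 4593 481]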



Figure 15. Results of case studies using DSTATCOM: case 1 (bus 633, VDC = 12 kV, CDC = 197 µF), case 2 (bus 675, VDC = 11 kV, CDC = 704.1 µF) and case 3 (bus 632, VDC = 5 kV, CDC = 73.7 µF). Source: the authors

9. Conclusions

In this paper, the influence of a 12-pulse DSTATCOM on the compensation of voltage sags caused by faults in the IEEE-13M distribution system has been analyzed. In order to evaluate the effect of the DSTATCOM installation, the relation between the size of the capacitor CDC, the reactive power injected by the compensator and the location of the affected buses has been taken into account. Further, a method to identify the critical buses, based on a random generation of faults, the system characteristics and statistical data of faults, was applied. It was observed that the presence of the DSTATCOM improved the voltage profiles not only at the affected bus but in all IEEE-13M system voltages. In fact, with this device the voltages of the buses during sag conditions are compensated close to their reference values (before the voltage sag condition) and the voltage sags are completely mitigated. The developed features and graphic advantages available in ATP/EMTP were used to conduct all aspects of the DSTATCOM implementation and the IEEE-13M system modeling, and to carry out extensive simulations. This work is still under development; the next step in the research should consider the modeling of an online control stage and the improvement of the converter stage to a 24- or 48-pulse multistage DSTATCOM.



References
[1] Dugan, R., McGranaghan, M., Santoso, S. and Beaty, W., Electrical power system quality, 3rd Ed. USA: McGraw-Hill Professional, 2012, 580 P.
[2] IEEE Standards, IEEE 1159-2009: Recommended practice for monitoring electric power quality, 2009, 81 P.
[3] Bollen, M., Understanding power quality problems. Voltage sags and interruptions, Second Ed. New York: Wiley-IEEE Press, 1999, pp. 1-672. DOI: 10.1109/9780470546840
[4] Reed, G., Takeda, M. and Iyoda, I., Improved power quality solutions using advanced solid-state switching and static compensation technologies, IEEE Power Eng. Soc. Winter Meet., 2, pp. 1132-1137, 1999.
[5] Nazarloo, A., Hosseini, S.H. and Babaci, E., Flexible D-STATCOM performance as a flexible distributed generation in mitigating faults, in 2nd IEEE Power Electronics, Drive Systems and Technology Conference (PEDSTC), 2011, pp. 568-573.
[6] Masdi, H., Mariun, N., Mahmud, S., Mohamed, A. and Yusuf, S., Design of a prototype D-STATCOM for voltage sag mitigation, in Power and Energy Conference, PECon 2004. Proceedings. National, 2004, pp. 61-66.
[7] IEEE Distribution System Subcommittee, IEEE 13 Node Test Feeder Report, 2001.
[8] Martinez-Velasco, J.A. and Martin-Arnedo, J., Voltage sag analysis using an electromagnetic transients program, IEEE Power Eng. Soc. Winter Meet. 2002, 2, pp. 1135-1140, 2002.
[9] Bollen, M., Fast assessment methods for voltage sags in distribution systems, IEEE Trans. Ind. Appl., 32 (6), pp. 1414-1423, 1996. DOI: 10.1109/28.556647
[10] Qader, M., Bollen, M. and Allan, R., Stochastic prediction of voltage sags in a large transmission system, IEEE Trans. Ind. Appl., 35 (1), pp. 152-162, 1999. DOI: 10.1109/28.740859
[11] Golkar, M.A. and Gahrooyi, Y., Stochastic assessment of voltage sags in distribution networks, in International Power Engineering Conference, IPEC 2007, 2007, pp. 1224-1229.
[12] Goswami, A.K., Gupta, C.P. and Singh, G.K., The method of fault position for assessment of voltage sags in distribution systems, in IEEE Region 10 and the Third International Conference on Industrial and Information Systems, ICIIS 2008, 2008, pp. 1-6.
[13] Bollen, M., Understanding power quality problems. Voltage sags and interruptions, 1st Ed. New York: Wiley-IEEE Press, 2013, 672 P.
[14] Taylor, G., Power quality hardware solutions for distribution systems: Custom Power, in IEE North Eastern Centre Power Section Symposium on the Reliability, Security and Power Quality of Distribution Systems, 1995, pp. 1-9. DOI: 10.1049/ic:19950459
[15] Masdi, H. and Mariun, N., Construction of a prototype D-Statcom for voltage sag mitigation, Eur. J. Sci. Res., 30 (1), pp. 112-127, 2009.
[16] Hsu, C. and Wu, H., A new single-phase active power filter with reduced energy-storage capacity, 143 (1), pp. 1-6, 1996.
[17] Scholar, I. and Engineering, E., Modeling and simulation of a distribution STATCOM (D-STATCOM) for power quality problems-voltage sag and swell based on sinusoidal pulse width modulation, in 2012 International Conference on Advances in Engineering, Science and Management (ICAESM), 2012, pp. 436-441.

H.E. Rojas-Cubides, is an Electrical Engineer with an MSc. in Electrical Engineering from the Universidad Nacional de Colombia and a PhD. candidate at the same institution. From 2003 to 2010, he worked for engineering companies within the power and telecommunications sector. Currently, he is a full professor in the Electrical Engineering Department, Faculty of Engineering, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia, and a member of the Electromagnetic Compatibility and Interference Group GCEM. His research interests include: simulation, modeling and analysis of transmission and distribution systems, application of signal processing techniques to electrical disturbances, adaptive algorithms and high voltage tests.

A.S. Cruz-Bernal, is an Electrical Engineer from the Universidad de La Salle, Colombia. She worked for electrical engineering companies from 2012 to 2013. She is currently an instructor at the Servicio Nacional de Aprendizaje SENA, Colombia. Her research interests include: modeling and simulation of power quality solutions, design of electrical facilities and grounding.



H.D. Rojas-Cubides, is a Licentiate in Electronics from the Universidad Pedagógica Nacional, Bogotá, Colombia. He is a Specialist in Automatics and Industrial Computing from the Universidad Autónoma de Colombia and a candidate for the MSc. in Industrial Automation at the Universidad Nacional de Colombia. Currently, he is a professor in the Electronic Engineering Department, Universidad Central, Bogotá, Colombia, and an instructor at the Servicio Nacional de Aprendizaje SENA. His research interests include: modeling and analysis of control systems, simulation of automation processes and design of power electronic devices.




Estimating the produced power by photovoltaic installations in shaded environments

Carlos Andrés Ramos-Paja a, Andrés Julián Saavedra-Montes b & Luz Adriana Trejos c

a Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. caramosp@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. ajsaaved@unal.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. latrejosg@unal.edu.co

Received: April 29th of 2014. Received in revised form: January 27th of 2015. Accepted: July 02nd of 2015

Abstract
A model to improve the estimation of the power and energy produced by photovoltaic installations under dynamic shading conditions is proposed in this paper. The model considers the shading profile and the temperature profile of each photovoltaic module to update the model parameters of each module. The impact of dynamic shades on the power production is analyzed, illustrating the large errors introduced by classical prediction approaches, which consider either no shading at all or an average shading. Moreover, a procedure to model the shade profiles in any environment is described and illustrated. Finally, experimental irradiance and environmental temperature profiles are used to evaluate the performance of the proposed model in contrast with the classical approaches. The results illustrate that the classical approaches overestimate the power produced by the photovoltaic array, in contrast with the power predicted by the proposed model.
Keywords: mismatching conditions; photovoltaic generator; power and energy prediction; shade modeling.

Estimación de la potencia producida por instalaciones fotovoltaicas en entornos sombreados Resumen En este artículo se propone un modelo para mejorar la estimación de la potencia y energía de instalaciones fotovoltaicas bajo condiciones de sombreado dinámicas. El modelo considera el perfil de sombreado en cada módulo fotovoltaico y el perfil de temperatura para actualizar los parámetros del modelo de cada módulo. El impacto de las sombras dinámicas en la producción de potencia es analizado, ilustrando grandes errores introducidos por las aproximaciones de predicción clásicas, las cuales o no consideran sombreado alguno o consideran un sombreado promedio. Más aún, se describe y se ilustra un procedimiento para modelar el perfil de sombras en cualquier ambiente. Finalmente, perfiles experimentales de radiación y temperatura ambiente son utilizados para evaluar el desempeño del modelo propuesto en contraste con las aproximaciones clásicas. Los resultados muestran que las aproximaciones clásicas sobreestiman la potencia producida por el arreglo fotovoltaico, en contraste con la potencia producida por el modelo propuesto. Palabras clave: condiciones irregulares; generador fotovoltaico; predicción de potencia y energía; modelado de sombra.

1. Introduction

Nowadays, photovoltaic (PV) electricity generation is a cost-competitive, clean and widely used option to produce electrical energy. PV power plants are suitable power sources for isolated and grid-connected consumers. The main advantage of those systems is the reduced maintenance required, which contrasts with the high requirements of diesel generators and fuel cells. Therefore, PV generators are an alternative for distributed generation systems and for their integration in power grids [1,2]. An application of PV systems is building-integrated photovoltaics covering roof surfaces with optimal solar exposure [3,4], making use of the space available on rooftops and parking lots. Such urban PV systems have generated new challenges in the sizing of the installation, the cost analysis and the planning of the power produced [5]. To address such problems, the designers require to accurately predict the power

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 37-43. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48567



that will be generated by the system. Such a prediction allows the number of modules required to fulfill the consumer load profiles to be calculated, it allows the return-of-investment time in which the installation cost will be recovered to be estimated, and it also provides the information required to schedule the appropriate instant to inject the power into the grid. Urban environments introduce multiple sources of shading over the PV modules that are difficult to overcome; e.g., Fig. 1 shows the shading generated by a building over a set of PV modules at different times of the day. Such shades generate a mismatching effect over the PV arrays that strongly degrades the power produced [6] in a non-linear way, which makes it a challenge to estimate the generator power profile accurately. This topic was addressed in [3], where models of PV arrays considering different temperatures and irradiance levels are proposed. The objective of such a paper is to develop power peak prediction schemes to identify the most efficient configuration under partial shading conditions. The authors propose an electrical model where the surrounding temperature and the solar irradiance are included in the parameter values; however, such average methods could over- or under-estimate the power produced by the PV modules, and therefore give false information for investing in, planning or operating the system. In addition, the shades over the modules change dynamically during the day, increasing the complexity of the power prediction. Such an aspect has been addressed by neglecting the shading effect or by averaging the shading profile [3,7]. In [7], five algebraic methods are used to predict the behavior of PV generators under natural sunlight. The results show that the performance may be described with sufficient accuracy using two methods, but only taking into

account incident global irradiance and module temperature. If the PV modules are not electrically characterized in Standard Test Conditions (STC), poor results may be achieved regardless of the selected prediction method. In [3], although different shading for each module is considered, the dynamic shading produced during a regular day is neglected. In any case, those methods do not take into account the different shading effect on each module and/or the dynamic change of the shades. Therefore, despite the fact that the energy prediction is improved, errors are still generated since the wide variation of the shades is not taken into account. In this paper, a novel modeling approach able to predict the PV power profile in the presence of dynamic shading conditions is proposed. Such a solution reduces the prediction errors in comparison with classical approaches by considering the effective irradiance profile for each module without any averaging process. This paper is organized as follows: the mismatching effect is discussed in Section 2, to identify the aspects that must be modeled in order to improve the power prediction under shading. Then, an array model able to calculate the PV power from the static values of the effective irradiances in each module is described in Section 3. In Section 4, the improved prediction model accounting for the dynamic changes of the shades that affect each module is introduced. Such a section also evaluates the model performance in comparison with the classical approach using an experimental irradiance profile. Finally, conclusions close the paper.

2. The mismatching effect and the bypass diodes

The mismatching effect is caused by differences between the operating conditions of the PV modules that compose an array. Mismatching conditions are also generated by differences between module parameters, dust, heat sources, etc., but the largest sources of mismatching conditions are the shades generated by objects near the PV installation. In addition, since the relative sun position changes, the shape of the shade changes dynamically, producing variable mismatching conditions in the PV array. Because the sun translation is predictable, the shade profiles are also predictable, which in the case of thin objects could be linear or circular. In the case of large objects, the shade moves across the array in linear or diagonal directions. Therefore, the shading profile at the beginning of the day will be different from the one affecting the array at another time of the day. Fig. 2 shows the structure of a typical PV array in series-parallel configuration [6]: the array is formed by parallel-connected strings, which in turn are made of the series-connection of PV modules. The number N of modules in a series depends on the voltage required at the array terminals, while the number M of strings in parallel is defined to meet the load power requirements. In such a system, the short-circuit current of each module depends on its particular irradiance and parameters. Hence, if a shade is covering a module, partially

Figure 1. Shading generated by a building over a set of PV modules at different times of the day. Source: The Authors



Figure 2. PV array in series-parallel structure. Source: The Authors

or totally, its current could be lower than the one imposed on the string by the other, more irradiated modules. In such a case, the PV cells associated to that module experience negative voltages and a fast-increasing current, which may cause hot spots and the destruction of the cells [8]. To avoid such negative effects, bypass diodes are connected in antiparallel with the module; in this way, the excess current flows through the bypass diode. Fig. 3 shows the possible cases that must be analyzed in a PV string taking into account the bypass diode operation. In Fig. 3(a), two modules are series-connected under uniform operating conditions; in this case the bypass diodes remain inactive. In Fig. 3(b), the same modules are now under irregular conditions since one of the modules is partially shaded; however, the current of the string is lower than the short-circuit current of the shaded module, so the bypass diode associated to it remains inactive. Finally, in Fig. 3(c), the same modules are again under irregular conditions, but in this case the bypass diode associated to the shaded module becomes active since the current of the string is higher than the short-circuit current of the shaded module [10,11]. When a bypass diode becomes active, it limits the negative voltage of the cells by imposing a value around 0.4 V to 0.7 V, which reduces the power losses and protects the cells in the module [3,12,13]. A common approach is to consider that the module associated to an active bypass diode becomes short-circuited. Without loss of generality, this approach is used in this work to improve the calculation speed. In strings with one or more active bypass diodes, the current vs. voltage (I-V) and power vs. voltage (P-V) curves exhibit multiple Maximum Power Points (MPP), as depicted in Fig. 4. In such an example, the strings are formed by identical PV modules but with different shading conditions; therefore, at the same daytime both strings produce different power curves. In light of such a condition, it is evident that considering uniform conditions on all the modules introduces errors in the power estimation of shaded arrays;

Figure 3. Bypass diodes activation. Source: The Authors

Figure 4. Shade effect on a PV array formed by two strings. Source: [Authors]

such an approach is reported in [7]. Therefore, the following section presents the array model adopted in this paper, which considers different operating conditions for each module, to simulate shaded arrays.

3. PV power under shading conditions

In the following, a mathematical model to calculate the PV power under shading conditions, which was introduced in [9], is described.



3.1. The PV module model

The electrical behavior of a PV module is described by a current source modeling the photo-induced current and a diode modeling the P-N junction of the module. Fig. 3a shows the module equivalent circuit, where the following equations describe its electrical characteristics [9]:

$i_{pv} = i_{sc} - i_{pn}, \quad i_{pn} = A \exp(B \, v_{pv})$   (1)

$T_{pv} = T_{amb} + \frac{NOCT - 20}{800} \, S_{pv}$   (2)

$i_{sc} = i_{STC} \, \frac{S_{pv}}{S_{STC}} \left( 1 + \alpha_i (T_{pv} - T_{STC}) \right)$   (3)

$B = \frac{B_{STC}}{1 + \alpha_v (T_{pv} - T_{STC})}$   (4)

$B_{STC} = \frac{\ln\left(1 - i_{mpp}/i_{STC}\right)}{v_{mpp} - v_{ocSTC}}$   (5)

$A = \frac{i_{STC}}{\exp(B_{STC} \, v_{ocSTC})}$   (6)

In such expressions, ipv and vpv are the current and voltage of the PV module, respectively. The parameters A, B, and isc are usually calculated from the datasheet information for a given irradiance (Spv) and temperature (Tpv), which in turn depends on the ambient temperature (Tamb) and the Nominal Operating Cell Temperature (NOCT), expressed in °C and commonly approximated as 48 °C [14]. iSTC and vocSTC are the short-circuit current and open-circuit voltage in Standard Test Conditions (STC), respectively. TSTC and SSTC are the module temperature and irradiance in STC, respectively, while BSTC corresponds to B evaluated in STC. impp and vmpp are the current and voltage of the PV module at the MPP for the evaluated irradiance and temperature conditions; and αi and αv are the current and voltage temperature coefficients.
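A minimal MATLAB sketch of Eqs. (1)-(6) for one operating condition, assuming hypothetical datasheet values (all numbers below are illustrative, not taken from the paper):

% Module model of Eqs. (1)-(6) for a single irradiance/temperature condition
iSTC = 8.5; vocSTC = 37.0;         % STC short-circuit current [A], open-circuit voltage [V]
impp = 7.9; vmpp = 29.5;           % MPP current [A] and voltage [V] at STC
ai = 5e-4; av = -3.5e-3;           % current/voltage temperature coefficients [1/K]
TSTC = 25; SSTC = 1000; NOCT = 48; % STC references and NOCT [degC]
Spv = 800; Tamb = 20;              % operating irradiance [W/m^2] and ambient temp. [degC]
Tpv  = Tamb + (NOCT - 20)/800*Spv;              % Eq. (2): cell temperature
isc  = iSTC*(Spv/SSTC)*(1 + ai*(Tpv - TSTC));   % Eq. (3): short-circuit current
BSTC = log(1 - impp/iSTC)/(vmpp - vocSTC);      % Eq. (5): exponential coefficient at STC
B    = BSTC/(1 + av*(Tpv - TSTC));              % Eq. (4): temperature-corrected coefficient
A    = iSTC/exp(BSTC*vocSTC);                   % Eq. (6): diode scale constant
vpv  = linspace(0, vocSTC, 200);                % module voltage sweep [V]
ipv  = max(isc - A*exp(B*vpv), 0);              % Eq. (1), clipped at zero current
plot(vpv, vpv.*ipv); xlabel('v_{pv} [V]'); ylabel('p_{pv} [W]');

Note that (5) and (6) force the curve through the datasheet MPP and open-circuit points by construction.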

3.2. The bypass diodes activation

The analysis of the bypass diodes activation is performed string-by-string. Such a procedure does not introduce errors since the activation of a bypass diode depends exclusively on the currents of the associated PV module and string. It is important to detect the string voltages at which the bypass diodes become active, named inflection voltages: voltages lower than the inflection voltages force the associated modules to operate in short-circuit condition; therefore, they do not contribute to either the string voltage or power. Since strings are formed with PV modules connected in series, the physical position of a module in a string has no impact on the string current. Therefore, without loss of generality, the calculation of the inflection points considers the modules placed in descending order of isc, hence isc,j ≥ isc,k with j < k. From such a condition, the bypass diode k (associated to module k) becomes active for (7), where an inflection point occurs. In (7), the current of both modules is the same and equal to the current of module k with null voltage.

$i_j = i_k, \quad v_k = 0$   (7)

From (1) and (7), the inflection voltage is given by the voltage of the module j (vo,j,k) in (8).

$A_j \exp(B_j \, v_{o,j,k}) = i_{sc,j} - i_{sc,k} + A_k$   (8)

But, if the string has more than two modules in series, vo,j,k represents the contribution of the module j to the minimum string voltage that turns off the bypass diode of module k (vo,k). Hence, vo,k is calculated as the sum of the inflection point voltages of the modules with isc greater than isc,k (modules from 1 to k-1). In general, the contribution of the module m to the inflection voltage associated to the module k (with m < k) is obtained from the solution of vo,m,k in (9), and the value of vo,k is calculated from (10).

$i_{sc,m} - A_m \exp(B_m \, v_{o,m,k}) = i_{sc,k} - A_k$   (9)

$v_{o,k} = \sum_{m=1}^{k-1} v_{o,m,k}$   (10)

From (7) and (8) it is evident that, at maximum, there exist N-1 inflection points, with N being the number of modules in the string. Moreover, since all the modules are considered in descending order of isc, if the bypass diode k becomes active, all the bypass diodes for k+1…N are also active; hence, the associated modules are not producing voltage and power.

3.3. The PV power calculation

The string current ist imposed by an array voltage vst is calculated from (1) using any module voltage. Such module voltages are obtained by knowing that the string current is the same in all the modules, which defines the following non-linear system:

$i_{st} = i_1 = i_2 = \ldots = i_k = \ldots = i_{N_{am}}$   (11)

$\sum_{k=1}^{N_{am}} v_{pv,k} = v_{st}$   (12)

The non-linear system in (11)-(12) has Nam + 1 non-linear equations, where Nam stands for the number of active modules, i.e., modules with inactive bypass diodes. Moreover, such a system can be solved by means of classical approaches like the Newton-Raphson method, or by means of modern approaches like the fsolve() function of Matlab. However, in both cases the search domain of the vpv,k solutions is constrained by the inflection points: the string current is limited by the currents of the inflection points that surround the string voltage. Therefore, if the operation voltage of the string is placed between the inflection points k and k+1, i.e. vo,k < vst < vo,k+1, the string current ist is also placed within io,k < ist < io,k+1, with io,k and io,k+1 being the inflection point currents. Such a characteristic speeds up the string current calculation since the zone where the solution occurs is known.
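A minimal MATLAB sketch of how (11)-(12) can be handed to fsolve(), for a two-module string with hypothetical parameters and ignoring bypass-diode activation for brevity:

% Solve Eqs. (11)-(12) for a two-module string at an imposed string voltage
isc = [8.5 6.0]; A = [1.8e-5 1.8e-5]; B = [0.353 0.353];  % per-module parameters (assumed)
vst = 60;                                  % imposed string voltage [V]
% Unknowns z = [v1; v2; ist]: equal module currents, Eq. (11), and voltage sum, Eq. (12)
F = @(z) [ isc(1) - A(1)*exp(B(1)*z(1)) - z(3);
           isc(2) - A(2)*exp(B(2)*z(2)) - z(3);
           z(1) + z(2) - vst ];
z = fsolve(F, [30; 30; 5]);                % initial guess inside the expected zone
ist = z(3);                                % string current [A]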



Finally, since the PV array is formed by several strings in parallel, the array current ia is calculated by adding all the string currents. Then, the array power is obtained from the array current and voltage for the given irradiance condition. It must be pointed out that this model allows the array power to be calculated for different irradiance conditions in each module. But a proper model to introduce profiles that affect individual modules, producing dynamic shading conditions, is required. Such a topic is addressed in the following section.

4. Improved prediction model

Figure 5. Prediction structures to estimate PV power profile and energy. Source: The Authors

The classical prediction model used to estimate the power and energy profiles in a PV installation is depicted at the top of Fig. 5. In such a structure, the modules are assumed to be uniformly irradiated; hence, expressions based on (1), but scaling the voltage and current in agreement with N and M, are used to calculate the array power for an irradiance condition. Such a model is fed by an irradiance forecast to evaluate the PV installation in a specific place. Then, the MPP power Pmpp for each irradiance condition is registered to provide a power profile, which eventually is integrated to predict the energy production. For PV arrays without mismatching, the classical model is accurate enough since the Pmpp is correctly predicted. The gray trace at the bottom of Fig. 6 corresponds to the power curve of a four-module string predicted with the classical model. When shading across the modules occurs, the classical model addresses such a condition by reducing the modules' uniform irradiance to an average irradiance value in the array. Such an approach is illustrated by the dotted trace in Fig. 6, which represents the new power curve and the Pmpp predicted for an average irradiance condition generated by a shading profile. Finally, the black trace in Fig. 6 presents a more precise power curve obtained using the model described in Section 3, which takes into account the irradiance condition in each module. Such a trace puts in evidence the large errors introduced in the power estimation by the classical model in both the non-shading and average-shading approaches. Hence, such an error is integrated to produce a large error in the predicted energy production. In light of the previous analyses, and taking into account that the sun translation changes the shading shape in a particular place along the day, the improved prediction model illustrated at the bottom of Fig. 5 is proposed. Such a novel structure considers dynamic shading conditions for each module, which affect the effective irradiance in each module. Then, the environmental irradiance value provided by the forecast is modified in agreement with the shading profile to generate the irradiance vector [S1, …, Si, …, SN] that feeds the model described in Section 3. A forecast of temperature was also considered to recalculate

Figure 6. Electrical characteristics predicted under shading conditions. Source: The Authors

the PV module parameters for each irradiance condition, which also improves the accuracy of the prediction model. In this way, the MPP power for each irradiance condition forms the power profile used to predict the energy production.

4.1. Shading model

To provide an improved prediction of the power profile, the proposed model includes a first layer to model the dynamic change of the shades. To generate such shading models, data from the place to evaluate is required. Two options are considered: PV arrays already installed, or suitable places for new PV installations. In both cases, the effective irradiance available at the position of each module must be registered along the day. Such a procedure can be done at any moment while the irradiance forecast for the place is available. In the case of an array already installed, each PV module must be short-circuited to register the



dynamic profile of the short-circuit current using an ammeter (A). The shorter the time intervals used to register the current, the higher the resolution of the shading model. In the case of suitable places for new PV installations, one or several modules could be used to register the short-circuit currents generated in the positions where the modules will be installed. Then, the effective irradiances for the modules are calculated from (3) using both the measurements and the STC short-circuit currents, and using both the forecast and STC temperatures. Fig. 7 illustrates the measurement architecture: the short-circuited modules are used to register the effective irradiance profiles Stest,i(t) along the day. Then, using such profiles and the forecast irradiance profile S(t), the shading profile Shi(t) for each module is calculated as in (13).

$Sh_i(t) = \frac{S_{test,i}(t)}{S(t)}$   (13)

Figure 8. Shading profiles of the affected PV modules. Source: The Authors

To illustrate the shading profiles, Fig. 8 shows a simulation considering a PV array composed of two parallel strings with three modules each, where the shading begins at 8:24 in the bottom-left corner of the array. The shade flows towards the top-right corner of the array, covering the six modules at 16:00. The simulation shows that the bottom-left module is completely shaded at 12:00, while the top-right module is not shaded almost all the time. Fig. 8 also illustrates the non-linearity of the shading profiles, as well as the difference among the patterns in each module. These shading profiles were taken on a winter day in southern Italy, where the angle of the sun allows the projection of the shadows.
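A small numerical sketch of Eq. (13), and of how a stored shading profile would scale a new forecast (sample values are invented for illustration):

% Shading profile of one module from registered data, Eq. (13)
S     = [820 900 950 930];       % forecast irradiance on the survey day [W/m^2]
Stest = [400 880 945 490];       % registered effective irradiance at the module position
Sh    = Stest ./ S;              % Eq. (13): per-sample shading profile
Snew  = [700 850 980 900];       % irradiance forecast for the day to be predicted
Si    = Sh .* Snew;              % effective irradiance feeding the Section 3 model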

4.2. Model performance evaluation

To illustrate the performance of the proposed prediction model, an experimental irradiance profile, taken in the south of Italy on a summer day, was used to estimate the power production of a PV array using both the classical and the improved approaches. The improved prediction model considers the shading profiles given in Fig. 8, while the classical model was considered with an average shading among such profiles. The predicted power profiles obtained in Fig. 9 put in evidence the overestimation made by the classical approach with respect to the proposed solution. In such an example, the classical model overestimates by 23% the energy produced by the array under the forecast irradiance and shading profiles. Therefore, the proposed solution allows a PV installation to be accurately designed, and provides precise information to plan the energy delivery to the consumer or to the grid.

5. Conclusions

An improved model to estimate the power and energy production of photovoltaic installations under dynamic shading conditions was proposed. The approach is based on the modeling of the dynamic profile exhibited by the shades affecting the irradiance reaching the PV modules. Such a solution significantly improves the power prediction accuracy in contrast with the classical approaches reported in the literature. In the example used to validate the model, the classical approach over-estimates the energy production by 23%. Therefore, the proposed model supports the installation designers and grid planning engineers by giving an accurate estimation of the power production: the former profit from an accurate return-of-investment calculation and well-sized equipment design, which permits an evaluation of the economic and technical viability of an installation. The latter profit from the accurate power estimation to avoid unexpected power drops that trigger undesired and costly contingency plans.

Figure 7. Shade measurement architecture. Source: The Authors




C.A. Ramos-Paja, was born in Cali, Colombia, in 1978. He received the Engineering degree in electronics from the Universidad del Valle, Cali, Colombia, in 2003, the MSc. degree in Automatic Control from the same university in 2005, and the PhD. degree in Power Electronics from the Universitat Rovira i Virgili, Tarragona, Spain, in 2009. Since 2011, he has been an associate professor at the Universidad Nacional de Colombia, Sede Medellín, Colombia. His main research interests are in the design and control of renewable energy systems, switching converters, distributed power systems, and power quality solutions.

Figure 9. Predicted array powers for winter irradiance and temperature forecast. Source: The Authors


A.J. Saavedra-Montes, was born in Palmira, Colombia, in 1974. He received the bachelor, MSc. and PhD. degrees in Electrical Engineering from the Universidad del Valle, Cali, Colombia, in 1998, 2002, and 2011, respectively. Since 2003, he has been a professor at the Universidad Nacional de Colombia, Sede Medellín, Colombia. His main research interests are power systems, electrical machines, renewable energy systems, micro grid applications, power quality solutions, and system identification.

Acknowledgements This paper was supported by the GAUNAL group of the Universidad Nacional de Colombia under the projects RECONF-OP-21386 (Colombian Departamento Administrativo de Ciencia, Tecnología e Innovación COLCIENCIAS - call 617 of 2013) and MICRO-RED18687. This work was also supported by COLCIENCIAS under the doctoral scholarships 095-2005 and 34065242.

A. Trejos, received the BSc. degree in Electrical Engineering in 2006 and the MSc. degree in 2010 from the Universidad Tecnologica de Pereira, Colombia. She is currently a PhD student at the Universidad Nacional de Colombia, Sede Medellín, Colombia, where her research interests include the modeling of photovoltaic systems under mismatching conditions.

References
[1] Brinkman, G., Denholm, P., Drury, E., Margolis, R. and Mowers, M., Toward a solar-powered grid. IEEE Power & Energy Magazine, 9, pp. 24-32, 2011.
[2] Perpétuo, T., Seleme Jr., S. and Rocha, S., Efficiency optimization in stand-alone photovoltaic pumping system. Renewable Energy, 41, pp. 220-226, 2012.
[3] Moballegh, S. and Jiang, J., Modeling, prediction, and experimental validations of power peaks of PV arrays under partial shading conditions. IEEE Transactions on Sustainable Energy, 5 (1), pp. 293-300, 2014.
[4] Yoon, J., Song, J. and Lee, S., Practical application of building integrated photovoltaic (BIPV) system using transparent amorphous silicon thin-film PV module. Solar Energy, 85, pp. 723-733, 2011.
[5] Katiraei, F. and Agüero, J.R., Solar PV integration challenges. IEEE Power & Energy Magazine, 9, pp. 24-32, 2011.
[6] Orozco-Gutierrez, M.L., Ramirez-Scarpetta, J.M., Spagnuolo, G. and Ramos-Paja, C.A., A technique for mismatched PV array simulation. Renewable Energy, 55, pp. 417-427, 2013.
[7] Fuentes, M., Nofuentes, G., Aguilera, J., Talavera, D.L. and Castro, M., Application and validation of algebraic methods to predict the behaviour of crystalline silicon PV modules in Mediterranean climates. Solar Energy, 81, pp. 1396-1408, 2007.
[8] Silvestre, S., Boronat, A. and Chouder, A., Study of bypass diodes configuration on PV modules. Applied Energy, 86, pp. 1632-1640, 2009.
[9] Bastidas-Rodriguez, J.D., Ramos-Paja, C.A. and Saavedra-Montes, A.J., Reconfiguration analysis of photovoltaic arrays based on parameters estimation. Simulation Modelling Practice and Theory, 35, pp. 50-68, 2013.
[10] Bypass diodes. [Online]. [date of reference: March 17th of 2014]. Available at: http://pveducation.org/pvcdrom/modules/bypass-diodes
[11] Vemuru, S., Singh, P. and Niamat, M., Modeling impact of bypass diodes on photovoltaic cell performance under partial shading. In: Electro/Information Technology (EIT), 2012 IEEE International Conference on, pp. 1-5, 2012.
[12] Calleja-Cabré, J., Estudio de la afectación de las sombras en un panel fotovoltaico. Tesis, Universidad Rovira i Virgili, España, 2012.
[13] Petrone, G. and Ramos-Paja, C.A., Modeling of photovoltaic fields in mismatched conditions for energy yield evaluations. Electric Power Systems Research, 81, pp. 1003-1013, 2011.
[14] Duffie, J.A. and Beckman, W.A., Solar Engineering of Thermal Processes, 3rd ed., John Wiley & Sons, New Jersey, 2006.



Effects on lifetime of low voltage conductors due to stationary power quality disturbances

Ivan Camilo Duran-Tovar a, Fabio Andrés Pavas-Martínez b & Oscar German Duarte-Velasco c

a Facultad de Ingeniería, Universidad Nacional de Colombia, Bogotá, Colombia. icdurant@unal.edu.co
b Facultad de Ingeniería, Universidad Nacional de Colombia, Bogotá, Colombia. fapavasm@unal.edu.co
c Facultad de Ingeniería, Universidad Nacional de Colombia, Bogotá, Colombia. ogduartev@unal.edu.co

Received: April 29th, 2014. Received in revised form: February 18th, 2015. Accepted: June 16th, 2015.

Abstract
This paper presents a methodology to estimate the effects on heating and lifetime of Low Voltage (LV) conductors due to the presence of stationary power quality disturbances. Conductor overheating and accelerated aging of the cable insulation can be caused by temporary increases in the RMS values of the voltages and currents due to stationary disturbances. Waveform distortion, unbalance and phase displacements can be considered among the stationary disturbances. For disturbances of short duration there are no significant reductions in the insulation lifetime, but disturbances acting for long time periods will cause cumulative and detrimental effects. Currently valid models for insulation aging are employed; the expected power quality disturbance levels are extracted from power quality databases. A discussion about the effects on insulation lifetime is presented.
Keywords: power quality, insulation lifetime, harmonics, unbalance, phase displacement, Arrhenius equation, aging factor.

Efectos en conductores de baja tensión debido a perturbaciones estacionarias de calidad de potencia Resumen Este artículo presenta una metodología para la estimación de los efectos en el calentamiento y la esperanza de vida en conductores de baja tensión (BT) debido a la presencia de perturbaciones estacionarias de calidad de potencia. El sobrecalentamiento del conductor y el prematuro envejecimiento en el aislamiento del cable puede ser causado por los incrementos temporales en los valores RMS de las tensiones y las corrientes ocasionados por perturbaciones estacionarias. Entre las perturbaciones están las distorsiones de forma de onda, desbalances y los desplazamientos de fase. Para incrementos de corta duración, no se ven reducciones significativas en el tiempo de vida del aislamiento, pero para largos periodos de tiempo se puede producir efectos acumulativos y degenerativos. Se emplean modelos válidos actualmente para envejecimiento del aislamiento; los niveles esperados de las perturbaciones de calidad de potencia son extraídos de bases de datos de calidad de potencia. Una discusión sobre los efectos en el tiempo de vida del aislamiento es presentada. Palabras clave: Calidad de potencia, tiempo de vida del aislamiento, Armónicos, Ecuación de Arrhenius, Factor de envejecimiento acelerado.

1. Introduction

The increased use of nonlinear loads in Colombian household, commercial and industrial facilities has led to the presence of stationary disturbances capable of deteriorating the power quality in the distribution system. These disturbances cause overheating in insulating dielectric materials, increasing

their aging rate. This overheating is the main cause of loss of life and premature failure in LV cables. Previous studies have revealed that the increase of harmonic distortion is one of the major problems in power quality. It was also found that certain frequency values are more commonly harmful for power equipment and the electric network. These results have led to the study of the impact not only of waveform distortions (harmonics), but also

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 44-51. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48568



of stationary disturbances and their contributions to the overheating and accelerated aging of conductor insulation. Currently, most household appliances cause waveform distortion, phase displacement and unbalance due to their naturally unbalanced and unpredictable conformation. Among the most common non-linear loads with a significant impact on power quality are compact fluorescent lamps, LED-based lamps, computers, TV sets, communication devices and chargers, among others. This paper presents a method for the estimation of the temperature rise of LV conductors and the corresponding lifetime reduction due to stationary disturbances (waveform distortion, unbalance and phase displacement). Methodologies for estimating the aging and loss of life are extracted from currently available methods.

2. Aging in low voltage conductors

Aging in LV conductors depends on different factors, of which the most important is the temperature rise. This is the main cause of accelerated degradation in dielectric materials. Several phenomena can cause an increase in the temperature inside the cable; hence, it is important to investigate these factors and the manner in which they cause premature aging.

2.1. Aging model for low voltage conductors due to temperature rise

Several studies on aging are based on the results of the Arrhenius thermal reaction theory or Arrhenius equation [1]. These studies propose an equation which relates the temperature of the chemical reaction of the dielectric element with its chemical characteristics. The result is an equation to calculate the degradation rate or speed, which determines when the useful life of the dielectric material comes to an end (1). This equation calculates the rate of aging of the insulation material in LV conductors.

$K_0 = A \, e^{-E_a/(k\,\theta_R)}, \qquad F_A = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{\theta_{H,R}+273} - \frac{1}{\theta_H+273}\right)\right]$   (1)

where K0 is the reaction rate constant, A is a constant that depends in part on the chemical concentrations in the reaction, Ea is the activation energy of the degradation process, θR is the absolute reaction temperature in Kelvin, k is the Boltzmann constant, FA is the Aging Factor, θH,R is the maximum allowable temperature in Celsius and θH is the conductor temperature in Celsius.

2.2. Overheating in low voltage conductors under balanced and distorted currents

Equation (1) normally relates the aging of the dielectric material with the overheating, but not the relationship of the overheating with the electrical stresses. References [3]-[7] show that the presence of harmonic currents causes overheating. This overheating degrades the dielectric material used for insulation, reducing its useful life. The same references propose a set of equations where the temperature rise in the phase and neutral conductors is caused by the power losses (I²R) due to the presence of harmonic components. The frequency components in the current change the value of the AC resistance, due to the skin effect and the proximity effect. The skin effect changes the current distribution in the conductor due to the shielding of the inner portion of the conductor by the outer layer [8]. The current is concentrated in the outer layer, increasing the effective resistance of the conductor [8]. The skin effect is frequency-dependent and increases with the conductor diameter. The proximity effect is due to the conductor magnetic field, which distorts the current in the adjacent conductors. For round conductors this effect is smaller than the skin effect [8]. This effect also increases the effective resistance of the conductor. References [3]-[7] and [10] propose the equations to determine the conductor AC resistance (2) with respect to its own DC resistance value (3), the skin effect (4), (5) and the proximity effect (6).

$R_{AC} = R_{DC}\,(1 + Y_S + Y_P)$   (2)

$R_{DC} = R_{20}\left(1 + \alpha_{20}(\theta - 20)\right)$   (3)

$Y_S = 10^{-3}\left(-0.2x^5 + 6.616x^4 - 83.345x^3 + 500x^2 - 1061.9x + 769.63\right), \quad \text{if } 2 \le x \le 10$   (4)

$x = 0.015281\sqrt{\frac{f\,\mu}{R_{DC}}}$   (5)

$Y_P = Y_S \left(\frac{d_c}{s}\right)^2 \left[0.312\left(\frac{d_c}{s}\right)^2 + \frac{1.18}{Y_S + 0.27}\right]$   (6)

where RDC is the DC resistance at the maximum operating temperature in ohms per distance, YS is the skin effect and YP is the proximity effect, R20 is the DC resistance at 20 °C in ohms per kilometer, α20 is the temperature coefficient of resistance for the conductor material at 20 °C per Kelvin, θ is the maximum operating temperature in Celsius, x is the skin effect parameter, dc is the diameter of the conductor in millimeters, s is the distance between conductor axes in mm, f is the harmonic frequency component in hertz and μ is the magnetic permeability (one for non-magnetic material). Along with the AC resistance, a set of equations to determine the power losses due to harmonic currents (7)-(10) can be developed. These losses must consider the current and the AC resistance of phase and neutral conductors. These values are different for each frequency component.
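A brief MATLAB sketch of Eqs. (2)-(6) using the Table 1 parameters; note that the polynomial in Eq. (4), whose signs are reconstructed here, is stated for 2 ≤ x ≤ 10, which for this conductor corresponds to high harmonic orders:

% AC resistance at harmonic order h from Eqs. (2)-(6), Table 1 data
R20 = 0.0694; a20 = 3.93e-3;   % DC resistance at 20 degC [ohm/km], temp. coefficient [1/K]
theta = 90; mu = 0.9997;       % max operating temperature [degC], rel. permeability
dc = 23.5; s = 26.8;           % conductor diameter and axial spacing [mm]
h = 25; f = h*60;              % harmonic order and frequency [Hz]
RDC = R20*(1 + a20*(theta - 20));                          % Eq. (3)
x   = 0.015281*sqrt(f*mu/RDC);                             % Eq. (5), ~2 for h = 25
YS  = 1e-3*(-0.2*x^5 + 6.616*x^4 - 83.345*x^3 ...
            + 500*x^2 - 1061.9*x + 769.63);                % Eq. (4)
YP  = YS*(dc/s)^2*(0.312*(dc/s)^2 + 1.18/(YS + 0.27));     % Eq. (6)
RAC = RDC*(1 + YS + YP)                                    % Eq. (2) [ohm/km]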

 Pcable   Pa   Ph , ph   Ph , nt ,

(7)

2  Pa  3 I (1) R AC 1 ,

(8)

hmax

 Ph , ph  3  I (2h ) R AC h  ,

(9)

h2

 Ph , nt 

hmax

h 3n

 3 I ( h )  R AC  h  , 2

(10)

where ΔPcable is the power loss in non-sinusoidal conditions in watts, ΔPa is the power loss in sinusoidal conditions for the phase and neutral conductors in watts, ΔPh,ph is the harmonic power loss for the phase conductors in watts, ΔPh,nt is the harmonic power loss for the neutral conductor in watts, I(1) is the phase current at the fundamental frequency in amperes, RAC(1) is the phase AC resistance at the fundamental frequency in ohms, I(h) is the phase current for the harmonic order h in amperes, RAC(h) is the phase and neutral AC resistance for the harmonic order h in ohms and hmax is the maximum harmonic order. Note that (7) considers a three-phase balanced system; therefore, the power loss in any phase is the same, and there are no power losses in the neutral conductor at the fundamental frequency under balanced conditions. Also, the harmonic current in the neutral conductor is three times the harmonic current in the phase, and only appears at the triplen frequencies (I3, I9, I15, I21… I3n). The cables' temperature rise can be calculated based on the maximum operating temperature and the ratio between the non-sinusoidal losses and the rated sinusoidal losses calculated in (7) [11].
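A compact sketch of the balanced-case losses of Eqs. (7)-(10) for a toy spectrum (current and resistance values are assumed for illustration):

% Balanced-case cable losses, Eqs. (7)-(10), toy spectrum at h = 1, 3, 5
RAC = [0.089 0.091 0.094];     % AC resistance at h = 1, 3, 5 [ohm/km] (assumed)
I   = [400   40    25  ];      % phase current at h = 1, 3, 5 [A] (assumed)
dPa    = 3*I(1)^2*RAC(1);                       % Eq. (8): fundamental, three phases [W/km]
dPh_ph = 3*(I(2)^2*RAC(2) + I(3)^2*RAC(3));     % Eq. (9): phase harmonic losses
dPh_nt = (3*I(2))^2*RAC(2);                     % Eq. (10): neutral losses (triplen h = 3 only)
dPcable = dPa + dPh_ph + dPh_nt                 % Eq. (7): total non-sinusoidal losses [W/km]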

 H   H ,R

 Pcable , Prated

I 2  I a2  I x2 ,

(12)

2 2 I x2  I au2  I Qd  I Qu  I D2 ,

(13)

2 2 I 2  I a2  I au2  I Qd  I Qu  I D2 ,

(14)

where Ia is the fundamental active current (the same as I(1)), Iau is the active unbalanced current, IQd is the displaced current (this component is quite similar to the reactive current), IQu is the reactive unbalanced current and ID is the distorted current (the harmonic current Ih in most cases). Some field results in [12] show that the phase currents are not equal at the fundamental frequency; therefore, it is necessary to make some changes in (8). In the same manner, the harmonic currents do not have the same magnitude in all phases, so (9) and (10) require changes too. These changes are required to consider the current of each phase separately, and because the neutral harmonic current is not three times the phase harmonic current.

$\Delta P_a = \sum_{p=1}^{4} I_{(1,p)}^2 \, R_{AC(1,p)}$   (15)

$\Delta P_{h,ph} = \sum_{p=1}^{3} \sum_{h=2}^{h_{max}} I_{(h,p)}^2 \, R_{AC(h,p)}$   (16)

$\Delta P_{h,nt} = \sum_{h=n}^{h_{max}} I_{nt(h)}^2 \, R_{AC(h,nt)}$   (17)

where I(1,p) is the phase and neutral current at the fundamental frequency in amperes, RAC(1,p) is the phase and neutral AC resistance at the fundamental frequency in ohms, I(h,p) is the phase distorted current for the harmonic order h in amperes, Int(h) is the neutral distorted current for the harmonic order h in amperes, RAC(h,p) is the phase AC resistance for the harmonic order h in ohms and RAC(h,nt) is the neutral AC resistance for the harmonic order h in ohms. Adding the other stationary disturbances to (7) and using (15)-(17), the final equation for the power losses due to stationary disturbances in three-phase systems is shown in (18),

$\Delta P_{cable} = \sum_{p=1}^{4} I_{(1,p)}^2 R_{AC(1,p)} + \sum_{p=1}^{4}\sum_{h=1}^{h_{max}} I_{au(h,p)}^2 R_{AC(h,p)} + \sum_{p=1}^{4}\sum_{h=1}^{h_{max}} I_{Qd(h,p)}^2 R_{AC(h,p)} + \sum_{p=1}^{4}\sum_{h=1}^{h_{max}} I_{Qu(h,p)}^2 R_{AC(h,p)} + \sum_{p=1}^{3}\sum_{h=2}^{h_{max}} I_{(h,p)}^2 R_{AC(h,p)} + \sum_{h=n}^{h_{max}} I_{nt(h)}^2 R_{AC(h,nt)}$   (18)

where Iau(h,p) is the phase and neutral active unbalanced current for the harmonic order h in amperes, IQd(h,p) is the phase and neutral reactive displaced current for the harmonic order h in amperes, and IQu(h,p) is the phase and neutral reactive unbalanced current for the harmonic order h in amperes.




4. Simulation and results

In order to compare the effects of the stationary disturbances (harmonics, unbalance and phase displacements), six cases are simulated:
• Base case: active current with reactive displaced current (Ia² + IQd²). Although reactive currents are undesirable, they are tolerated; the base case represents a common condition.
• Case 1: base case with active unbalanced current (Ia² + IQd²) + Iau².
• Case 2: base case with reactive unbalanced current (Ia² + IQd²) + IQu².
• Case 3: base case with both unbalanced currents (Ia² + IQd²) + (Iau² + IQu²).
• Case 4: base case with distorted current (Ia² + IQd²) + ID².
• Case 5: base case with unbalanced and distorted currents (Ia² + IQd²) + (Iau² + IQu² + ID²).
Equations (1) to (18) are implemented in MATLAB to illustrate the aging model; Fig. 1 represents the implementation flowchart. For the base case, the current magnitudes are constant. For cases 1 to 5, the magnitude of the unbalanced and distorted currents is varied from 0% to 200% of the current magnitude. This percentage value will be called the Distorted Load Factor (DLF). The conductor used is four 500 MCM copper cables with PVC insulation (http://www.centelsa.com). Fig. 2 shows an example of the cable configuration, and the physical and chemical parameters of the cable are shown in Table 1. Tables 2 and 3 show the current decomposition in amperes and in p.u. with respect to the collective current. Collective currents are calculated as the square root of the sum of the squared RMS values, as explained in [12] and [15]. Given that the current decomposition is orthogonal, the collective values can be calculated in the described manner. The spectrum of the active current is presented in Fig. 3, the active unbalanced current in Fig. 4, the reactive unbalanced current in Fig. 5, the reactive displaced current in Fig. 6 and the distorted current (harmonic) in Fig. 7.
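A minimal sketch of the aging chain of Eqs. (11) and (1) as it could be implemented (this is not the authors' code; the 2.47% loss increase matches the case 5 result discussed below):

% Expected-life estimate from the loss ratio, Eqs. (11) and (1)
Ea = 1.407;            % activation energy [eV] (Table 1)
k  = 8.62e-5;          % Boltzmann constant [eV/K] (Table 1)
thetaHR = 90;          % maximum allowable conductor temperature [degC]
Prated  = 16.372;      % rated sinusoidal losses [kW/km] (Table 1)
Pcable  = Prated*1.0247;                              % +2.47% losses, as in case 5
thetaH  = thetaHR*(Pcable/Prated);                    % Eq. (11): about 92.2 degC
FA = exp((Ea/k)*(1/(thetaHR+273) - 1/(thetaH+273)));  % Eq. (1): aging factor, about 1.31
life_pct = 100/FA                                     % -> about 76%, matching case 5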

Figure 2. Conductors' distribution. Source: The authors

Figure 1. Implementation flowchart. Source: The authors

Table 1. Characteristics of the four 500 MCM copper cables with PVC insulation
Parameter | Value
Dielectric activation energy (Ea) | 1.407 eV
Boltzmann constant (k) | 8.62×10^-5 eV/K
DC resistance at 20°C | 0.0694 Ω/km
Maximum conductor temperature | 90°C
Fundamental frequency | 60 Hz
Temperature coefficient (α20) | 3.93×10^-3 /K
Relative magnetic permeability (µ) | 0.9997
Phase conductor diameter with PVC jacket | 23.5 mm
Neutral conductor diameter with PVC jacket | 23.5 mm
Mean axial spacing between conductors | 26.8 mm
Conductor nominal current at 90°C | 430 A
Conductor nominal power losses for I = 430 A | 16.372 kW/km
Source: The authors


Table 2. Current decomposition and collective values
Quantity | Ph. R | Ph. S | Ph. T | Neutral | Coll. | Power [kVA]
Voltage [V] | 120.52 | 121.12 | 120.91 | 0.46 | 209.33 | -
Ia [A] | 389.12 | 391.07 | 390.39 | 13.27 | 389.12 | 141.49
Iau [A] | 36.89 | 19.59 | 17.29 | 13.91 | 47.30 | 9.90
IQd [A] | 180.02 | 180.92 | 180.61 | 0.69 | 180.02 | 65.45
IQu [A] | 17.74 | 3.76 | 21.37 | 51.21 | 58.38 | 12.22
ID [A] | 31.61 | 33.81 | 35.15 | 67.72 | 89.24 | 18.68
Coll. [A] | 445.97 | 446.90 | 446.63 | 87.04 | 489.00 | 162.90
Source: The authors

Table 3. Current decomposition. All values are in pu with respect to the average phase active current, 390.196 A
Quantity | Ph. R | Ph. S | Ph. T | Neutral
Ia [pu] | 0.9972 | 1.0022 | 1.0005 | 0.0340
Iau [pu] | 0.0945 | 0.0502 | 0.0443 | 0.0356
IQd [pu] | 0.4614 | 0.4637 | 0.4629 | 0.0018
IQu [pu] | 0.0455 | 0.0096 | 0.0548 | 0.1312
ID [pu] | 0.0810 | 0.0866 | 0.0901 | 0.1736
Total [pu] | 1.1068 | 1.1089 | 1.1083 | 0.2231
Source: The authors

Figure 5. Reactive unbalanced current spectrum IQu. Source: The authors

Figure 6. Reactive displaced current spectrum IQd. Source: The authors

Figure 3. Active current spectrum Ia. Source: The authors

Figure 7. Distorted current spectrum ID. Source: The authors

From the previous data, the power dissipated by the conductors, its effect on temperature and expected life can be calculated. The analysis will be made taking into account:

Figure 4. Active unbalanced current spectrum Iau. Source: The authors




- Unbalanced and distorted current magnitudes at 100% (dashed lines in Figs. 8, 9 and 10).
- Cable expected life between 90% and 100%: white zones in Tables 4, 5 and 6, considered favorable conditions.
- Cable expected life between 80% and 90%: dark gray zones in Tables 4, 5 and 6 (tolerable condition).
- Cable expected life below 80%: light gray zones in Tables 4, 5 and 6 (unfavorable condition).

Due to the linear relation between power losses and temperature rise (11), Tables 4 and 5 and Figs. 8 and 9 show the same behavior. Comparing cases 1, 2 and 4 at a DLF of 100%, ID produces a larger rise in power losses and temperature (1.45% for both) than Iau (0.40%) and IQu (0.62%). In addition, the contribution of Iau + IQu (case 3) is not enough to reach the distorted current

contribution. Moreover, when all disturbances are present, the increase in power losses and temperature amounts to 2.47%. This clearly shows that the distorted current is the least desirable disturbance. Regarding expected life, in Cases 1 and 2 the conductors lose less than 10% of their life expectancy. This can be considered favorable with respect to Case 4, where ID reduces the expected life to 85.17%, a tolerable but not favorable condition (dark gray zone). In Case 3, the unbalanced currents reduce the life of the cable insulation to 89.27% (tolerable condition). In Case 5, with all disturbances present, the expected life is reduced to 76.09%; this clearly represents an unfavorable and intolerable condition, as the life reduction is approximately one quarter. Reviewing the expected-life areas for the cable: in Case 1, the active unbalanced current allows an increase of up to 150% without leaving the favorable condition. In Case 2, the reactive unbalanced current remains in a favorable condition up to 100%. At this point, the increase in power losses and temperature rise reaches 0.6%, a value that Iau only

Table 4. Percentage of power-loss rise with respect to the base case (49.115 kW/km)
DLF (%) | Base case (%) | Case 1 rise (%) | Case 2 rise (%) | Case 3 rise (%) | Case 4 rise (%) | Case 5 rise (%)
0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
25 | 0.00 | 0.03 | 0.04 | 0.06 | 0.09 | 0.15
50 | 0.00 | 0.10 | 0.15 | 0.26 | 0.36 | 0.62
75 | 0.00 | 0.23 | 0.35 | 0.57 | 0.81 | 1.39
100 | 0.00 | 0.40 | 0.62 | 1.02 | 1.45 | 2.47
125 | 0.00 | 0.63 | 0.97 | 1.60 | 2.26 | 3.86
150 | 0.00 | 0.91 | 1.39 | 2.30 | 3.25 | 5.55
175 | 0.00 | 1.24 | 1.89 | 3.13 | 4.43 | 7.56
200 | 0.00 | 1.61 | 2.47 | 4.09 | 5.79 | 9.87
Source: The authors

Table 5. Percentage of temperature rise with respect to the base case (90°C)
DLF (%) | Base case (%) | Case 1 rise (%) | Case 2 rise (%) | Case 3 rise (%) | Case 4 rise (%) | Case 5 rise (%)
0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
25 | 0.00 | 0.03 | 0.04 | 0.06 | 0.09 | 0.15
50 | 0.00 | 0.10 | 0.15 | 0.26 | 0.36 | 0.62
75 | 0.00 | 0.23 | 0.35 | 0.57 | 0.81 | 1.39
100 | 0.00 | 0.40 | 0.62 | 1.02 | 1.45 | 2.47
125 | 0.00 | 0.63 | 0.97 | 1.60 | 2.26 | 3.86
150 | 0.00 | 0.91 | 1.39 | 2.30 | 3.25 | 5.55
175 | 0.00 | 1.24 | 1.89 | 3.13 | 4.43 | 7.56
200 | 0.00 | 1.61 | 2.47 | 4.09 | 5.79 | 9.87
Source: The authors

Figure 8. Per-unit cable power losses. Source: The authors

Table 6. Percentage of expected life values for all cases
DLF (%) | Base case (%) | Case 1 (%) | Case 2 (%) | Case 3 (%) | Case 4 (%) | Case 5 (%)
0 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
25 | 100.00 | 99.72 | 99.57 | 99.29 | 99.00 | 98.29
50 | 100.00 | 98.88 | 98.29 | 97.19 | 96.05 | 93.37
75 | 100.00 | 97.50 | 96.20 | 93.80 | 91.35 | 85.72
100 | 100.00 | 95.61 | 93.35 | 89.27 | 85.17 | 76.09
125 | 100.00 | 93.23 | 89.82 | 83.76 | 77.85 | 65.34
150 | 100.00 | 90.40 | 85.69 | 77.52 | 69.79 | 54.32
175 | 100.00 | 87.17 | 81.06 | 70.76 | 61.38 | 43.76
200 | 100.00 | 83.60 | 76.05 | 63.71 | 52.98 | 34.18
Source: The authors

Figure 9. Per-unit cable temperature rise. Source: The authors




It has been shown that unbalance is not negligible regarding its effect on the cables' useful life. Its effects are comparable to the well-known effects of overloading. Nevertheless, unbalance increases the apparent power but does not increase the active or reactive power, so its presence can go unnoticed. If unbalance is not taken care of, not only will the system capacity be decreased, but the insulation will also be stressed by an additional unperceived disturbance. In this paper, the considered cases included a full duty-load behavior and different disturbance sizes, by means of the so-called Distorted Load Factor. It is necessary to investigate this topic further, paying particular attention to the effects under different operating conditions where the current fluctuates but the disturbances remain present. There is an interesting aspect of the results presented above. The benefits of compensating waveform distortion are well known (reduction of rms currents, reduction of apparent power, reduction of losses, reduction of resonance risk, etc.), but the benefits of unbalance compensation have not been widely considered. The compensation of unbalance can be achieved by economical solutions, which in general are cheaper than harmonic compensation. The reduction of unbalance can provide interesting possibilities for improving the usage of an existing electric facility and increasing its life expectancy. The effects of disturbances on conductors' life expectancy can be considered as a design criterion for present and future electric facilities in order to avoid possible dangers.

Figure 10. Cable expected life. Source: The authors

reaches when its magnitude increases to 125%. When the magnitude of IQu exceeds 100%, the expected life changes to a tolerable condition, and if it reaches 200%, it becomes intolerable. In Case 3, which groups the unbalanced currents, the increases of power losses and temperature rise with respect to the individual cases are low (variations of 0.03% and 0.3% for a DLF of 75%); at this point, the expected life is favorable. The difference increases to 0.6% when the DLF equals 100% and 125% (tolerable condition). Beyond 125%, Case 3 reaches an intolerable condition and the differences from Cases 1 and 2 reach values of 1.4%. It can be observed that Case 3 is not too critical. Case 4 is particularly critical because, as previously explained, it has the highest power losses among the individual disturbances, as shown in Fig. 8 for DLF ≥ 100%. When DLF ≤ 75%, the increase of losses is comparable to that found in Case 3 and the expected life is favorable. Only when DLF = 100% is the expected life tolerable, but the difference from the other cases increases significantly. After DLF = 125%, the expected life becomes intolerable. Finally, for DLF = 200%, life expectancy is reduced by almost half. This confirms that the distorted current is the least desirable disturbance. Finally, for Case 5, which groups all disturbances, the power losses and the temperature increase drastically with respect to the other cases. It shows a favorable condition only for DLF ≤ 50%, a tolerable condition for DLF = 75%, and an intolerable condition for DLF ≥ 100%. It can be concluded from these results that when the power system has all types of disturbances, the insulation degrades rapidly, reducing the expected life to 35%.

6. Conclusions

This article presents a methodology for estimating the lifetime reduction of low-voltage cables caused by the temperature rise due to stationary disturbances, extending existing methods that consider only current waveform distortion. The applied model enables us to estimate the power losses, temperature rise and expected life due to stationary disturbances at a single operating point, but it cannot show the behavior over time. In order to estimate the behavior over a long period, it is necessary to take many samples at short time intervals (e.g. every hour) in order to calculate a cumulative aging factor for large time intervals (e.g. days). It was observed that temperature increases of about 5°C are capable of reducing the conductors' life expectancy to 50%. The effects of waveform distortion are more severe than the effects of the reactive components and unbalance together, depending on the size of each disturbance; nevertheless, the effects of unbalance cannot be disregarded. The employed orthogonal current decomposition provides a simple and useful way to discriminate the effects of each disturbance on power losses, temperature rise and loss of life.
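The hour-by-hour bookkeeping suggested above can be sketched as follows. The Arrhenius acceleration form (built from the constants of Table 1, after the aging model of (1)) and the hourly temperature profile are illustrative assumptions, not the paper's exact model:

import numpy as np

K_B = 8.62e-5            # Boltzmann constant [eV/K], Table 1
E_A = 1.407              # dielectric activation energy [eV], Table 1
T_RATED = 90 + 273.15    # maximum operating temperature [K]

def aging_factor(theta_c):
    """Assumed Arrhenius acceleration of insulation aging at theta_c [deg C]
    relative to the rated temperature."""
    t = theta_c + 273.15
    return np.exp((E_A / K_B) * (1.0 / T_RATED - 1.0 / t))

# Hypothetical hourly conductor temperatures over one day [deg C]
hourly_theta = 88 + 4 * np.sin(np.linspace(0, 2 * np.pi, 24))

# Cumulative aging for the day, in "equivalent rated hours": each hour at
# temperature theta consumes aging_factor(theta) hours of rated life.
cumulative = np.sum(aging_factor(hourly_theta))
print(f"equivalent rated hours consumed in 24 h: {cumulative:.1f}")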

5. Discussion

The results show that stationary disturbances such as unbalance and phase displacement are capable of causing additional deterioration of dielectric materials. The effects of waveform distortion on cables' ampacity and life expectancy have already been investigated; this paper has gone further by considering what unbalance can cause.

References

[1] Laidler, K., The development of the Arrhenius equation, Journal of Chemical Education, 61 (6), pp. 494-498, 1984. DOI: 10.1021/ed061p494
[2] Duran, I.C. and Duarte, O.G., A survey of methods of estimating lifetime and aging of assets in substations, 9th IET International Conference on Advances in Power System Control, Operation and Management (APSCOM 2012), pp. 1-6, November 2012.
[3] IEC 60287-1-1, Electric cables - Calculation of the current rating - Part 1-1: Current rating equations (100% load factor) and calculation of losses - General. International Electrotechnical Commission, Electric Cables Committee, 2001.
[4] Desmet, J., Putman, D., D'hulster, F. and Belmans, R., Thermal analysis of the influence of nonlinear, unbalanced and asymmetric loads on current conducting capacity of LV-cables, IEEE Power Tech Conference Proceedings, 4, pp. 1-8, 2003.
[5] Sahin, Y.G. and Aras, F., Investigation of harmonic effects on underground power cables, International Conference on Power Engineering, Energy and Electrical Drives (POWERENG 2007), pp. 589-594, 2007.
[6] Desmet, J., Vanalme, G., Belmans, R. and Van Dommelen, D., Simulation of losses in LV cables due to nonlinear loads, IEEE Power Electronics Specialists Conference (PESC 2008), pp. 785-790, 2008.
[7] Demoulias, C., Labridis, D.P., Dokopoulos, P.S. and Gouramanis, K., Ampacity of low-voltage power cables under nonsinusoidal currents, IEEE Transactions on Power Delivery, 22, pp. 584-594, 2007. DOI: 10.1109/TPWRD.2006.881445
[8] Wagner, V., Balda, J., Griffith, D., McEachern, A., Barnes, T., Hartmann, D., Phileggi, D., Emannuel, A., Horton, W., Reid, W., Ferraro, R. and Jewell, W., Effects of harmonics on equipment, IEEE Transactions on Power Delivery, 8, pp. 672-680, 1993.
[9] Johnson, H. and Graham, M., High Speed Signal Propagation: Advanced Black Magic, Prentice Hall, 2003.
[10] Tofoli, F., Sanhueza, S. and de Oliveira, A., On the study of losses in cables and transformers in nonsinusoidal conditions, IEEE Transactions on Power Delivery, 21, pp. 971-978, 2006.
[11] Patil, K. and Gandhare, W., Effects of harmonics in distribution systems on temperature rise and life of XLPE power cables, International Conference on Power and Energy Systems (ICPS 2011), pp. 1-6, 2011.
[12] Pavas, A., Study of responsibilities assignment methods in power quality, PhD Thesis, Universidad Nacional de Colombia, Bogotá, Colombia, 2012.
[13] Mazzanti, G., Passarelli, G., Russo, A. and Verde, P., The effects of voltage waveform factors on cable life estimation using measured distorted voltages, IEEE Power Engineering Society General Meeting, 8 P, 2006.
[14] Stone, G. and Lawless, J., The application of Weibull statistics to insulation aging tests, IEEE Transactions on Electrical Insulation, EI-14, pp. 233-239, 1979.
[15] Pavas, A., Torres, H. and Staudt, V., Method of disturbances interaction: Novel approach to assess responsibilities for steady state power quality disturbances among customers, 14th International Conference on Harmonics and Quality of Power (ICHQP 2010), Bergamo, Italy, pp. 1-9, 2010.

I.C. Duran-Tovar, received his BSc. Eng. in Electrical Engineering in 2006 from the Universidad Nacional de Colombia, Bogotá, Colombia, and his MSc. degree in Electrical Engineering in 2010 from the Universidad de los Andes, Bogotá, Colombia. He is currently a PhD candidate in Electrical Engineering at the Universidad Nacional de Colombia. His research interests include: power quality, distribution systems and simulation.

F.A. Pavas-Martínez, received his BSc. Eng. in Electrical Engineering in 2002, his MSc. degree in Electrical Engineering in 2005, and his PhD degree in Electrical Engineering in 2013, all of them from the Universidad Nacional de Colombia, Bogotá, Colombia. Currently, he is a full professor in the Electrical and Electronic Department, Facultad de Ingeniería, Universidad Nacional de Colombia, Bogotá, Colombia.

O.G. Duarte-Velasco, received his BSc. Eng. in Electrical Engineering in 1991 and his MSc. degree in Industrial Automation in 1997, both from the Universidad Nacional de Colombia, Bogotá, Colombia. He received his PhD degree in Informatics in 2000 from the Universidad de Granada, Granada, España. Currently, he is a full professor in the Electrical and Electronic Department, Facultad de Ingeniería, Universidad Nacional de Colombia, Bogotá, Colombia.



Voltage regulation in a power inverter using a quasi-sliding control technique

Nicolás Toro-García a, Yeison Alberto Garcés-Gómez b & Fredy Edimer Hoyos-Velasco c

a,b Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, Manizales, Colombia. ntoroga@unal.edu.co, yagarsesg@unal.edu.co
c Escuela de Física, Universidad Nacional de Colombia, Medellín, Colombia. fehoyosve@unal.edu.co

Received: October 7th, 2014. Received in revised form: February 12th, 2015. Accepted: July 2nd, 2015.

Abstract
This paper shows the behavior of a three-phase power converter with resistive load using quasi-sliding and chaos control techniques for output voltage regulation. The controller is designed using the Zero Average Dynamic (ZAD) and Fixed Point Inducting Control (FPIC) techniques. The designs have been tested in a Rapid Control Prototyping (RCP) system based on Digital Signal Processing (DSP) for a dSPACE platform. Bifurcation diagrams show the robustness of the system. Chaos detection is performed by a signal processing method in the time domain, with applications to the detection of power quality phenomena. Results show that the phase voltage at the load has sinusoidal behavior when controlled with these techniques. When delay effects are considered, experimental and numerical results match in both the stable zones and the transition-to-chaos zones.
Keywords: Power measurement, Power quality, Power electronics, Complexity theory, Chaos, Power inverter.

Regulación de tensión en un inversor de potencia usando técnica de control cuasi deslizante

Resumen
Este documento presenta el desempeño de un inversor de potencia con carga resistiva usando una técnica de control cuasi deslizante y una técnica de control de caos para la regulación de la tensión de salida. El controlador se diseñó usando técnicas de Dinámica de Promedio Cero (ZAD) y Punto Fijo de Control de Inducción (FPIC). Los diseños han sido probados en un sistema de Prototipado Rápido de Control (RCP) basado en un Procesador Digital de Señales (DSP) para la plataforma dSPACE. Los diagramas de bifurcaciones muestran la solidez del sistema. La detección de caos se realiza por un método de procesamiento de señales en el dominio del tiempo, y tiene aplicaciones en detección de fenómenos de calidad de la potencia. Los resultados muestran que la tensión de fase de la carga tiene desempeño sinusoidal cuando se controla con las técnicas mencionadas. Cuando se consideran los efectos de retraso, los resultados simulados y experimentales coinciden tanto en las zonas estables como en las de transición al caos.
Palabras clave: Medida de potencia, Calidad de la Potencia, Electrónica de Potencia, Teoría Compleja, Caos, Inversor de Potencia.

1. Introduction

The study of variable structure systems uses bifurcation theory to determine the parameter conditions that generate stability changes, periodicity and chaotic dynamics in the system, and that allow safe and stable operation zones to be defined. Knowledge of these operation ranges lets us avoid the presence of undesired phenomena such as self-sustained oscillations, chaos, and

evolution to other operation regimes, among others. The control action required for three-phase loads is usually implemented by power electronic circuits based on switches. For this reason, the controlled commuted system with a three-phase (resistive) load becomes a variable structure system defined by non-smooth differential equations, for which a complete theoretical framework allowing its study does not yet exist [1], since its theoretical and numerical analysis represents an extremely difficult problem [3]. In this

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 52-59. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48569



This paper is organized as follows. Section 2 describes the proposed system. Section 3 describes the mathematical model of the system. Section 4 describes the control techniques. Section 5 presents the obtained results, and finally, section 6 presents the conclusion.

sense, non-smooth transitions occur when a cycle interacts with a boundary of discontinuity in the phase space in a non-generic way, causing period-adding or sudden chaos transitions [4]. One of the most relevant aspects of the bifurcation analysis of non-smooth systems is the absence of the period-doubling sequences observed in smooth systems [3]. Due to the characteristic behavior of non-smooth systems, in many cases it is not possible to apply analysis techniques for smooth systems without modifications or adequate adjustments [1]. Converters use power electronics for the efficient transformation and rational use of electricity, from the generation sources to its industrial and commercial use. It has been estimated that 90% of electrical energy is processed through power converters before its final use [5]. Power converters must provide a certain level of output voltage, either in regulation or tracking tasks, and they must be able to reject changes in the load and in the primary supply voltage. A complete and detailed analysis of the operation and configuration of different power converters can be found in [6,7]. One of the most desirable qualities in these devices is efficiency, using switching devices to generate the desired output with low power consumption. In general, the deterioration of power quality is due to non-stationary disturbances (voltage sags, voltage swells, impulses, among others) and also to stationary disturbances (harmonic distortion, unbalance and flicker) [8-11]. In addition, the chaotic dynamics in the system under study generate non-periodic solution currents that are reflected on the source side, affecting other sensitive loads connected to the point of common coupling. The controller designed in this work combines the Zero Average Dynamics (ZAD) and Fixed Point Inducting Controller (FPIC) strategies, which have been reported in [18]. The design corresponds to a three-phase low-power inverter (1500 W) with a three-phase resistive load, using a dSPACE platform for the control. Numerical and experimental bifurcations are obtained for the ZAD-FPIC controller [22-24] by changing the parameter values; the numerical and experimental bifurcation diagrams match. The development and application of the FPIC control technique are presented in [14,15,17,18]; this technique allows the stabilization of unstable orbits in a simple way.

2. Proposed system

Fig. 1 shows the block diagram of the system under study. The system is divided into two major subgroups: hardware and software. The hardware includes the electrical circuits and electronic devices; the software includes signal acquisition and the implementation of the control techniques, and is implemented on a dSPACE platform. The hardware is composed of a three-phase power converter with resistive load, rated at 1500 W, 600 V DC and 20 A DC. To measure vc (the capacitor voltages), a series resistance was used, and to measure iL (the inductor currents), HX10P/SP2 current sensors were used. The converter switches are driven by the PWM outputs of the controller card; these signals are coupled via fast optocouplers (6N137). The software is developed using the dSPACE DS1104 control and development card, where the ZAD and FPIC control techniques are implemented. The sampling rate for all variables is set to 4 kHz. The state variables vc and iL are stored at 12 bits; the duty cycle (d) is handled at 10 bits. The parameters of the buck converter (C, L, rS, rL) and of the ZAD-FPIC controller (KS, N, Fs, R) are entered into the control block by the user as constants; KS is the bifurcation parameter. For each sample, the controller calculates in real time the duty cycle and the equivalent PWM signal to control the gates.

3. Mathematical model

Fig. 2 shows a basic diagram of the system. A buck power converter is used to feed the resistive load. Eq. (1) is obtained for the system model.

Figure 1. Block diagram of the proposed system Source: The authors.

Figure 2. Electrical circuit for the 3-phase buck converter. Source: The authors.


(1)

with S_4 = 1 - S_1, S_5 = 1 - S_2, S_6 = 1 - S_3 and S_i \in \{0,1\} for i = 1 \ldots 6. This equation can be expressed in compact form as

\dot{x} = A x + B U   (2)

where the state variables are v_{ca} = x_1, i_{La} = x_2, v_{cb} = x_3, i_{Lb} = x_4, v_{cc} = x_5 and i_{Lc} = x_6. A is a block diagonal matrix, so that the system consists of three uncoupled subsystems that may be treated independently. Fig. 3 shows the equivalent circuit per phase.

Figure 3. Electrical circuit for the buck converter (equivalent per phase) Source: The authors.

3.1. Derivation of the discrete-time iterative map of the converter

The inherent piecewise switched operation of converters implies a multi-topological mode in which one particular circuit topology describes the system during a particular interval of time. The first step in the analysis of a multi-topological circuit is to write down the state equations that describe the individual switched circuits. For converters operating in continuous conduction mode, two switched circuits can be identified. When the switch is ON, the circuit is described by Eq. (3); when the switch is OFF, by Eq. (4):

\dot{x} = A x + B   (3)

\dot{x} = A x - B   (4)

The state variables are the capacitor voltage (v_c) and the inductor current (i_L); these equations can be expressed in compact form as \dot{x} = Ax + Bu, with x_1 = v_c and x_2 = i_L. E denotes the converter power supply; depending on the control pulse, the voltage +E or -E is injected into the system through a PWM signal. Considering continuous conduction mode (CCM) and centered PWM (Fig. 4), the control signal is defined as follows:

u = \begin{cases} 1 & kT \le t < kT + \frac{dT}{2} \\ -1 & kT + \frac{dT}{2} \le t < kT + T - \frac{dT}{2} \\ 1 & kT + T - \frac{dT}{2} \le t \le kT + T \end{cases}   (5)

Figure 4. Centered PWM Source: The authors.

The solution of system (3) for kT \le t \le kT + dT/2 is given by:

x(t) = e^{A(t-kT)} x(kT) - A^{-1}\left[I - e^{A(t-kT)}\right]B

The solution of system (4) for kT + dT/2 \le t \le kT + T - dT/2 is given by:

x(t) = e^{A(t-(kT+dT/2))} x(kT + dT/2) + A^{-1}\left[I - e^{A(t-(kT+dT/2))}\right]B

where x(kT + dT/2) = e^{A(dT/2)} x(kT) - A^{-1}\left[I - e^{A(dT/2)}\right]B.

The solution of system (3) for kT + T - dT/2 \le t \le kT + T is given by:

x(t) = e^{A(t-(kT+T-dT/2))} x(kT + T - dT/2) - A^{-1}\left[I - e^{A(t-(kT+T-dT/2))}\right]B

where x(kT + T - dT/2) = e^{A(T-dT)} x(kT + dT/2) + A^{-1}\left[I - e^{A(T-dT)}\right]B.

The general solution of the system over kT \le t \le kT + T is therefore:

x((k+1)T) = e^{AT} x(kT) + \left[2e^{A(dT/2)} - 2e^{A(T-dT/2)} + e^{AT} - I\right]A^{-1}B   (6)

where k represents the k-th iteration, T is the sampling period and d is the duty cycle. Eq. (6) is the discrete-time state equation for the buck converter. In much of the literature, the terms iterative map, iterative function and Poincaré map have been used synonymously with discrete-time state equation.

Figure 5. Surface used to compute the duty cycle. Source: The authors.
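A minimal numerical sketch of iterating the map (6) with scipy's matrix exponential follows. The load resistance R and the fixed duty cycle are assumptions (in closed loop, d comes from the ZAD strategy of the next section); the other parameter values follow Table 1 of this paper:

import numpy as np
from scipy.linalg import expm

# Per-phase buck converter parameters (Table 1; load R is an assumption)
R, C, L = 20.0, 368e-6, 1.6e-3
rs, rl, E = 4.0, 0.9, 40.0
T = 1 / 4000.0                        # sampling period, Fs = 4 kHz

A = np.array([[-1 / (R * C), 1 / C],
              [-1 / L, -(rs + rl) / L]])
B = np.array([0.0, E / L])
Ainv = np.linalg.inv(A)

def poincare_map(x, d):
    """Discrete-time map (6) for centered PWM, duty cycle d in [0, 1]."""
    M = (2 * expm(A * d * T / 2) - 2 * expm(A * (T - d * T / 2))
         + expm(A * T) - np.eye(2))
    return expm(A * T) @ x + M @ (Ainv @ B)

x = np.zeros(2)
for _ in range(200):                  # iterate with a fixed duty cycle (sketch)
    x = poincare_map(x, 0.6)
print("steady state [vc, iL]:", x)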

4. Control strategies

The control strategies presented in this section are developed for the per-phase equivalent circuit, so for the three-phase system the control must be applied to each phase independently, taking into account that the reference voltage is phase-shifted according to the corresponding circuit phase.

4.1. Zero Average Dynamics (ZAD)

As reported in [19-21], one possibility for computing the duty cycle is to define a surface and force its average over each iteration to be zero. The surface per phase is defined as the piecewise-linear function (Fig. 5) given by:

s_{pwl}(t) = \begin{cases} s_1 + (t - kT)\,\dot{s}_1 & kT \le t < t_1 \\ s_2 + \left(t - kT - \frac{d_k}{2}\right)\dot{s}_2 & t_1 \le t < t_2 \\ s_3 + \left(t - kT - T + \frac{d_k}{2}\right)\dot{s}_1 & t_2 \le t \le (k+1)T \end{cases}   (7)

where the variables are described in (8), with t_1 = kT + d_k/2, t_2 = kT + (T - d_k/2) and t_3 = (k+1)T:

s_1 = \left((x_1 - x_{1ref}) + k_s(\dot{x}_1 - \dot{x}_{1ref})\right)\big|_{x = x(kT)}
\dot{s}_1 = \left((\dot{x}_1 - \dot{x}_{1ref}) + k_s(\ddot{x}_1 - \ddot{x}_{1ref})\right)\big|_{x = x(kT),\,u = 1}
\dot{s}_2 = \left((\dot{x}_1 - \dot{x}_{1ref}) + k_s(\ddot{x}_1 - \ddot{x}_{1ref})\right)\big|_{x = x(kT),\,u = -1}
s_2 = s_1 + \frac{d_k}{2}\,\dot{s}_1, \qquad s_3 = s_2 + (T - d_k)\,\dot{s}_2   (8)

and k_s = K_s\sqrt{LC} is a positive constant.

The D_k satisfying the zero-average requirement is:

D_k = \frac{2\,s_1(x(kT)) + T\,\dot{s}_2(x(kT))}{\dot{s}_2(x(kT)) - \dot{s}_1(x(kT))}   (9)

From (3), (4) and (8) we obtain:

s_1(kT) = (1 + a k_s)\,x_1(kT) + b k_s\,x_2(kT) - x_{1ref} - k_s\,\dot{x}_{1ref}
\dot{s}_1(kT) = (a + a^2 k_s + b c k_s)\,x_1(kT) + (b + a b k_s + b d k_s)\,x_2(kT) + b k_s\frac{E}{L} - \dot{x}_{1ref} - k_s\,\ddot{x}_{1ref}
\dot{s}_2(kT) = (a + a^2 k_s + b c k_s)\,x_1(kT) + (b + a b k_s + b d k_s)\,x_2(kT) - b k_s\frac{E}{L} - \dot{x}_{1ref} - k_s\,\ddot{x}_{1ref}   (10)

with a = -\frac{1}{RC}, b = \frac{1}{C}, c = -\frac{1}{L}, d = -\frac{r_s + r_L}{L}.

The duty cycle is then given by (11):

d_k = \begin{cases} 1 & D_k \ge T \\ D_k / T & 0 < D_k < T \\ 0 & D_k \le 0 \end{cases}   (11)
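The per-sample computation of (9)-(11) can be sketched as follows, using the coefficient expressions of (10). Parameter values follow Table 1, while the load R and the sampled state are assumptions:

import numpy as np

R, C, L = 20.0, 368e-6, 1.6e-3        # load R assumed; C, L from Table 1
rs, rl, E = 4.0, 0.9, 40.0
T, Ks = 1 / 4000.0, 5.0
ks = Ks * np.sqrt(L * C)               # positive constant below eq. (8)

a, b = -1 / (R * C), 1 / C             # coefficients of eq. (10)
c, d = -1 / L, -(rs + rl) / L

def zad_duty(x1, x2, x1ref, dx1ref, ddx1ref):
    """Duty cycle from the zero-average condition, eqs. (9)-(11)."""
    s1 = (1 + a * ks) * x1 + b * ks * x2 - x1ref - ks * dx1ref
    common = ((a + a**2 * ks + b * c * ks) * x1
              + (b + a * b * ks + b * d * ks) * x2 - dx1ref - ks * ddx1ref)
    sdot1 = common + b * ks * E / L    # surface slope with u = +1
    sdot2 = common - b * ks * E / L    # surface slope with u = -1
    Dk = (2 * s1 + T * sdot2) / (sdot2 - sdot1)
    return min(max(Dk / T, 0.0), 1.0)  # saturation, eq. (11)

# 40 Hz sinusoidal reference and its derivatives (as in the experiments)
w, t0 = 2 * np.pi * 40, 0.001
duty = zad_duty(x1=10.0, x2=2.0,
                x1ref=32 * np.sin(w * t0),
                dx1ref=32 * w * np.cos(w * t0),
                ddx1ref=-32 * w**2 * np.sin(w * t0))
print(f"duty cycle = {duty:.3f}")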




We have measured experimentally that there is one period of delay in the control action. In this case, the control action is computed from the data acquired at the previous sampling time, and the duty cycle becomes:

d_k = \frac{2\,s_1(x((k-1)T)) + T\,\dot{s}_2(x((k-1)T))}{\dot{s}_2(x((k-1)T)) - \dot{s}_1(x((k-1)T))}   (12)

To apply this technique we need to measure the states at the beginning of each sampling time. To do this, we synchronize the measured signals with the start of the PWM; this synchronization is performed using a trigger signal obtained from the PWM, which commands the ADC to read vC and iL. On the other hand, we need to know the values of the parameters L, C, rS and rL; in this case, we assume these parameters are constant and measurable. The load R may be unknown, in which case it must be estimated. Taking into account the FPIC and ZAD strategies [22,23], the new duty cycle is calculated as follows:

d_k^{FPIC} = \frac{d_k(k) + N\,d^*}{N + 1}   (13)

Figure 6. Periodic solution for the three-phase converter with resistive load. Source: The authors.

where d_k(k) is calculated as in (12) and d^* is the duty cycle in steady state. From (9) we have:

d^* = \frac{D_k\big|_{x_1(kT) = x_{1ref}}}{T}, \qquad D_k\big|_{x_1(kT) = x_{1ref}} = \frac{T\left[\left(1 + \frac{r_s + r_L}{R}\right)x_{1ref} + \left(\frac{L}{R} + (r_s + r_L)C\right)\dot{x}_{1ref} + LC\,\ddot{x}_{1ref}\right]}{2E}   (14)
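A sketch of the FPIC correction (13), with d* evaluated from the reconstructed (14); the delayed duty cycle value and the load R are placeholders:

# FPIC correction, eq. (13): blend the ZAD duty cycle d_k computed from the
# delayed sample (eq. (12)) with the steady-state duty cycle d*. Parameter
# values follow Table 1; the load R is an assumption.
rs, rl, L, C, E = 4.0, 0.9, 1.6e-3, 368e-6, 40.0

def d_star(x1ref, dx1ref, ddx1ref, R=20.0):
    """Steady-state duty cycle per the reconstructed eq. (14)."""
    num = ((1 + (rs + rl) / R) * x1ref
           + (L / R + (rs + rl) * C) * dx1ref
           + L * C * ddx1ref)
    return num / (2 * E)

def fpic_duty(dk_delayed, d_st, N=7.0):
    """Eq. (13): FPIC stabilization with control parameter N (Table 1)."""
    return (dk_delayed + N * d_st) / (N + 1.0)

dk = fpic_duty(dk_delayed=0.55, d_st=d_star(32.0, 0.0, 0.0))
print(f"FPIC duty cycle = {dk:.3f}")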

Figure 7. Chaotic solution for the three-phase converter with resistive load. Source: The authors.

5. Numerical and experimental results

Figs. 6 and 7 show the experimental behavior of the three-phase power converter using the same parameters and controller but different initial conditions. The reference voltages for phases a, b, c (upper signals), the phase voltages (va, vb, vc) and the phase currents (ia, ib) (middle signals) are shown in these figures. In Fig. 6 the system exhibits a periodic solution and in Fig. 7 a chaotic one; this shows the coexistence of attractors or solutions in the system. When the solution is periodic, the controlled voltage follows the voltage reference through the control action, unlike the chaotic solution, where the output voltage is lower than the reference and behaves irregularly; in this regime the phase currents have a higher peak. Sometimes the system toggles between two solutions while running; this happens when the solution is near the border of two regions of attraction. Results obtained in simulation and experiments for periodic solutions are shown in Fig. 8. The simulation was executed taking into account a 3T (0.75 ms) delay in the duty cycle application; under this condition, simulation and experimental results match. The controlled voltage vC follows the reference voltage for all phases with a maximum error of 2 V. Fig. 9 shows the bifurcation diagrams for the output error and duty cycle of the controlled system with KS and N as bifurcation parameters, obtained via model simulation using Simulink of MATLAB. For KS = 1.5 with N = 2, and for N = 1.5 with KS = 3, the system presents a qualitative behavior change. Below KS = 1.5 and N = 1.5 the system is in chaotic regime

In this section, numerical and experimental results are shown using KS and N as bifurcation parameters; in addition, the system behavior under frequency and voltage amplitude variations is illustrated graphically. The parameter values used in simulations and experiments are listed in Table 1; initially, the reference voltage has a peak of 32 V and a frequency of 40 Hz. For the Simulink simulation, the fixed step size (fundamental sample time) in the configuration parameters was set to 1/(4Fs).

Table 1. Parameter values used in simulations and experiments
Parameter | Value
rS: Internal resistance of the source | 4 Ω
E: Input voltage | 40 V
L: Inductance | 1.6 mH
rL: Internal resistance of the inductor | 0.9 Ω
C: Capacitance | 368 µF
N: FPIC control parameter | 7
Fc: Switching frequency | 4 kHz
Fs: Sampling frequency | 4 kHz
1T_p: Delay time | 0.25 ms
KS: Control parameter | 5
Source: The authors.




Figure 11. vc voltage, ia current and duty cycle: experimental bifurcation diagrams with f as bifurcation parameter for the three-phase converter with resistive load. Source: The authors.

Figure 8. Experimental and simulation results for the three-phase converter with resistive load. Source: The authors.

Fig. 11 shows the bifurcation diagrams for the ia current, va voltage and duty cycle of the controlled system with f (the frequency of the voltage reference) as bifurcation parameter, obtained experimentally. At f = 27 Hz the system presents a qualitative behavior change: above f = 27 Hz the system is in chaotic regime; below it, in stable regime. To construct these diagrams, the experiment was run for 2 seconds for each value of f and samples were acquired every 50 milliseconds. The phenomenon shown in Fig. 11 is caused by saturation of the inductor core in the LC filter; since core saturation and magnetic hysteresis were not simulated, these phenomena are not present in the simulations. Fig. 12(a) shows the bifurcation diagram of the duty cycle and Fig. 12(b) the bifurcation diagram of va for the Poincaré map (6), both numerical. These bifurcation diagrams were constructed by taking the last 30 samples, one for every period of the phase-a reference voltage, from the simulation of model (6) in MATLAB, while the system ran for 35 periods. The simulation was executed taking into account a single period (1T) of delay in the duty cycle application. In these figures various operating regimes can be appreciated: periodic windows, chaos and periodic solutions. Fig. 13 shows the bifurcation diagram when a 3T delay is considered; the bifurcation diagrams are quite different for different delays. The above analysis shows the effects of the delay time in the control signal applied to the power converter. Many complex phenomena arise, as shown in Figs. 13(e) and 13(f); among these are non-smooth bifurcations, period-doubling bifurcations, chaos and 1-periodic orbits.

Figure 9. Error in voltage, with Ks and N as bifurcation parameters, and error in duty cycle for phase a, resulting from the model simulation using Simulink of MATLAB for the three-phase converter with resistive load. Source: The authors.

and above them it is in stable regime. To construct these diagrams, the simulation was run for 3 periods of the reference voltage and the last 15 samples of the output error were taken, with initial conditions equal to zero and a 3T delay in the duty cycle application. In the following, experimental results are shown for the buck power converter behavior when the voltage reference and voltage level vary. Fig. 10 shows the bifurcation diagrams for the output error and duty cycle of the controlled system with KS and N as bifurcation parameters, obtained experimentally. At KS = 3 the system presents a qualitative behavior change: below KS = 3 the system is in chaotic regime and above it in stable regime. To construct these diagrams, the experiment was run for 2 seconds for each value of KS and samples were acquired every 50 milliseconds.

Figure 10. va voltage, ia current and duty cycle: experimental bifurcation diagrams with KS as bifurcation parameter for the three-phase converter with resistive load. Source: The authors.

Figure 12. Bifurcation diagrams with Ks as bifurcation parameter, taking one sample per period of the phase-a reference voltage during 30 periods, considering a T delay. Source: The authors.




For this system, simulations and experiments were performed. The stability of the closed-loop system was analyzed using bifurcation diagrams; stability and transitions to chaos were observed. It was demonstrated experimentally that delay effects are highly important in the ZAD strategy. Simulations and experiments match when delay effects are included, and the ZAD control improves the quality of the waveforms. In the chaos zone, power quality deterioration through non-periodic current and voltage waveforms was observed; when the control is applied, the non-periodicity is eliminated.

Acknowledgments

This work was supported in part by the Universidad Nacional de Colombia, Manizales Branch, the Universidad Autónoma de Manizales and the Instituto Tecnológico de Antioquia. The authors would like to thank DIMA (Dirección de Investigaciones de la Sede Manizales) for their support through the projects "HERMES-19210 (Prototipo de controlador unificado de calidad de la energía con funciones de generación distribuida)" and "HERMES-19190 (Diseño y construcción de controladores de cargas y generadores para el laboratorio de energías renovables)".

Figure 13. Bifurcation diagrams with Ks as bifurcation parameter, taking one sample per period of the phase-a reference voltage over 30 periods, considering a 3T delay, for the Poincaré map (6). Source: The authors.

References

[1] Di Bernardo, M. et al., Bifurcations in nonsmooth dynamical systems, May 2006.
[2] Di Bernardo, M. et al., Piecewise-Smooth Dynamical Systems: Theory and Applications, Springer, New York, 2007.
[3] Zhusubaliyev, Z.T. and Mosekilde, E., Bifurcations and Chaos in Piecewise-Smooth Dynamical Systems, World Scientific Series on Nonlinear Science A, vol. 44, London, 2003.
[4] Kowalczyk, P. et al., Two-parameter discontinuity-induced bifurcations of cycles: Classification and open problems, International Journal of Bifurcation and Chaos, 16 (3), pp. 601-629, 2006. DOI: 10.1142/S0218127406015015
[5] Banerjee, S. and Verghese, G.C., Nonlinear Phenomena in Power Electronics, IEEE Press, Piscataway, 2001. DOI: 10.1109/9780470545393
[6] Mohan, N., Undeland, T. and Robbins, W., Power Electronics: Converters, Applications and Design, J. Wiley, 1995.
[7] Mohan, N., First Course on Power Electronics and Drives, MNPERE, USA, 2003.
[8] Lin, T. and Domijan, A., On power quality indices and real time measurement, IEEE Trans. Power Del., 20 (4), pp. 2552-2562, 2005. DOI: 10.1109/TPWRD.2005.852333
[9] Ferrero, A., Measuring electric power quality: Problems and perspectives, Measurement, 41 (2), pp. 121-129, 2008. DOI: 10.1016/j.measurement.2006.03.004
[10] Salmerón, P., Herrera, R.S., Pérez, A. and Prieto, J., New distortion and unbalance indices based on power quality analyzer measurements, IEEE Trans. Power Del., 24 (2), 2009.
[11] Shin, Y.J., Powers, E.J., Grady, M. and Arapostathis, A., Power quality indices for transient disturbances, IEEE Trans. Power Del., 21 (1), 2006.
[12] Fossas, E., Griñó, R. and Biel, D., Quasi-sliding control based on pulse width modulation, zero averaged dynamics and the L2 norm, in Advances in Variable Structure Systems: Analysis, Integration and Applications (6th International Workshop on Variable Structure Systems, VSS'2000), pp. 335-344, 2000.

Figure 14. Results of the Poincaré map (6) simulation, taking one sample in each of the last 10 periods of the phase-a reference voltage and ia, considering a 3T delay, while varying the frequency of the reference voltage. Source: The authors.

In order to analyze the effect of varying the frequency and voltage level of the reference voltage, a simulation was carried out. Results obtained by varying the frequency of the reference voltage are shown in Fig. 14. Figs. 14(a), 14(b) and 14(c) show the behavior of the duty cycle, peak voltage and peak current in C and L, respectively, when the reference frequency is varied in the Poincaré map simulation. The amplitude of the output voltage does not drop significantly with the variation of frequency, but the current peak grows linearly with frequency, showing a predominant capacitive effect. In this case the delay does not have a significant impact.

6. Conclusion

The ZAD-FPIC control strategy was designed and applied to a three-phase buck converter with resistive load.


[13] Angulo, F., Olivar, G. and Taborda, J.A., Continuation of periodic orbits in a ZAD-strategy controlled buck converter, Chaos, Solitons and Fractals, 38, pp. 348-363, 2008. DOI: 10.1016/j.chaos.2007.04.023
[14] Angulo, F., Fossas, E. and Olivar, G., Transition from periodicity to chaos in a PWM-controlled buck converter with ZAD strategy, International Journal of Bifurcation and Chaos, 15, pp. 3245-3264, 2005. DOI: 10.1142/S0218127405014015
[15] Angulo, F., Análisis de la dinámica de convertidores electrónicos de potencia usando PWM basado en promediado cero de la dinámica del error (ZAD), PhD Thesis, Universidad Politécnica de Cataluña, Cataluña, España, 2004.
[16] Taborda, J., Análisis de bifurcaciones en sistemas de segundo orden usando PWM y promediado cero de la dinámica del error, MSc Thesis, Universidad Nacional de Colombia - Sede Manizales, Colombia, 2006.
[17] Angulo, F., Burgos, J.E. and Olivar, G., Chaos stabilization with TDAS and FPIC in a buck converter controlled by lateral PWM and ZAD, in Proceedings of the Mediterranean Conference on Control and Automation, July 2007.
[18] Angulo, F., Olivar, G., Taborda, J. and Hoyos, F., Nonsmooth dynamics and FPIC chaos control in a DC-DC ZAD-strategy power converter, EUROMECH Nonlinear Dynamics Conference, Saint Petersburg, Russia, July 2008.
[19] Angulo, F., Olivar, G. and Taborda, J., Continuation of periodic orbits in a ZAD-strategy controlled buck converter, Chaos, Solitons and Fractals, 38, pp. 348-363, 2008. DOI: 10.1016/j.chaos.2007.04.023
[20] Angulo, F., Análisis de la dinámica de convertidores electrónicos de potencia usando PWM basado en promediado cero de la dinámica del error (ZAD), PhD Thesis, Universidad Politécnica de Cataluña, Cataluña, España, 2004.
[21] Taborda, J., Análisis de bifurcaciones en sistemas de segundo orden usando PWM y promediado cero de la dinámica del error, MSc Thesis, Universidad Nacional de Colombia - Sede Manizales, Colombia, May 2006.
[22] Hoyos, F.E., Toro, N. and Angulo, F., Rapid control prototyping of a permanent magnet DC motor using non-linear sliding control ZAD and FPIC, 3rd IEEE Latin American Symposium on Circuits and Systems (LASCAS 2012), Playa del Carmen, Mexico, February 29 - March 2, 2012.
[23] Hoyos, F.E., Rincón, A., Taborda, J.A., Toro, N. and Angulo, F., Adaptive quasi-sliding mode control for permanent magnet DC motor, Mathematical Problems in Engineering, Article ID 693685, 12 P, 2013. DOI: 10.1155/2013/693685
[24] Hoyos, F.E., Burbano, D., Angulo, F., Olivar, G., Taborda, J. and Toro, N., Effects of quantization, delay and internal resistances in digitally ZAD-controlled buck converter, International Journal of Bifurcation and Chaos, 22 (10), 9 P, 2012. DOI: 10.1142/S0218127412502458

Y.A. Garces-Gomez, was born in Manzanares, Caldas, Colombia, in 1983. He received his BSc. degree in Electronic Engineering in 2009 from the Universidad Nacional de Colombia, Manizales. Between 2009 and 2011 he held a Colciencias scholarship for postgraduate studies in engineering - industrial automation at the Universidad Nacional de Colombia. He is currently working toward a PhD degree in Engineering at the Universidad Nacional de Colombia. His research interests include power definitions under nonsinusoidal conditions, power quality analysis, and power electronic applications.

N. Toro-García, received the BSc. degree in Electrical Engineering and the PhD. degree in Automatics from Universidad Nacional de Colombia, Manizales, in 1983 and 2012 respectively; the MSc. degree in Production Automatic Systems from Universidad Tecnológica de Pereira, Colombia, in 2000. He is currently an associate professor in the Department of Electrical Engineering, Electronics and Computer Science, at the Universidad Nacional de Colombia, sede de Manizales. His research interests include nonlinear dynamics of nonsmooth systems, and applications for switched inverters.


F.E. Hoyos-Velasco, received the BSc. degree in Electrical Engineering, the MSc. degree in automatics, and the PhD. degree in automatics from the Universidad Nacional de Colombia, Manizales, Colombia, in 2006, 2009, and 2012, respectively. He is currently professor in the School of Physics, Universidad Nacional de Colombia Sede Medellín. His research interests include nonlinear control, nonlinear dynamics of nonsmooth systems, and applications to dc–dc converters. He is a member of the Scientific and Industrial Instrumentation Research Group, at the Universidad Nacional de Colombia.



Distributed generation placement in radial distribution networks using a bat-inspired algorithm

John Edwin Candelo-Becerra a & Helman Enrique Hernández-Riaño b

a Departamento de Ingeniería Eléctrica y Automática, Facultad de Minas, Universidad Nacional de Colombia - Sede Medellín, Medellín, Colombia, jecandelob@unal.edu.co
b Departamento de Ingeniería Industrial, Universidad de Córdoba, Montería, Colombia and Universidad del Norte, Barranquilla, Colombia, hhernandez@correo.unicordoba.edu.co

Received: October 7th, 2014. Received in revised form: February 13th, 2015. Accepted: July 2nd, 2015.

Abstract
Distributed generation (DG) is an important issue for distribution networks due to the improvement in power losses, but the location and size of the generators can be a difficult task for exact techniques. Metaheuristic techniques have become a better option for determining good solutions, and this paper presents the application of a bat-inspired algorithm (BA) to the problem of location and sizing of distributed generation in radial distribution systems. A comparison between particle swarm optimization (PSO) and BA was made on the 33-node and 69-node test feeders, using as scenarios the change in active and reactive power and the number of generators. PSO and BA found good results for small numbers and capacities of generators, but BA obtained better results for difficult problems and converged faster in all scenarios. The maximum active power injections that reduce power losses in the distribution networks were found for the five scenarios.
Keywords: bat-inspired algorithm; distributed generation; particle swarm optimization; distribution system.

Ubicación de generación distribuida en redes de distribución radiales usando un algoritmo inspirado en murciélagos Resumen La generación distribuida (DG) es un tema importante para las redes de distribución debido a la reducción de las pérdidas de energía, pero la ubicación y el tamaño de generadores puede ser una tarea difícil para las técnicas de solución exactas. Las técnicas metaheurísticas se han convertido en una mejor opción para determinar soluciones válidas y en este trabajo se presenta la aplicación de un algoritmo inspirado en murciélagos (BA) a un problema de ubicación y dimensionamiento de generación distribuida en sistemas de distribución radial. Una comparación entre la técnica de optimización por enjambre de partículas (PSO) y BA fue hecha en los sistemas de prueba de 33 nodos y 69 nodos, utilizando como escenarios el cambio en la potencia activa y reactiva, y el número de generadores. PSO y BA encontraron buenos resultados para un número pequeño y pocas capacidades de generación, pero BA obtuvo mejores resultados para problemas difíciles y converge más rápido para todos los escenarios. Las máximas inyecciones de potencia activa para reducir las pérdidas de energía en las redes de distribución fueron encontradas para los cinco escenarios. Palabras clave: algoritmo inspirado en murciélagos; generación distribuida; optimización con cúmulo de partículas; sistemas de distribución.

1. Introduction Power losses of distribution networks have always been an important issue due to the energy efficiency and the costs that the problem represents for electricity companies. This problem has usually been solved using feeder restructuring [1], DG

placement [1-10], capacitor placement [1,11] and network reconfiguration [12, 13]. Location and size of distributed generation in distribution networks has difficulties due to the combination of the number and capacities of generators, and the selection of the best nodes. Several algorithms have been used to solve the

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 60-67. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48573



problem such as particle swarm optimization [14-16], ant colony [17], optimal power flow [18], analytical methods [19,20], evolutionary algorithm [21-24], and simulated annealing [25-27], among others [14,28,29]. PSO have been widely used to solve this problem [30], but some convergence problems have been identified [31]. BA has been proposed as a good technique to optimize several functions [32], but more research is needed to determine the performance for power system problems. In this paper, BA and PSO were used to obtain the solutions to a problem of location and size of generators. A comparison of the two algorithms was conducted changing capacities and the number of generators in two distribution networks.

2. Power losses

The total active power losses in distribution networks can be calculated as the sum of the active power losses of each branch, as shown in (1).

P_{Loss} = \sum_{b=1}^{n_b} I_b^2 R_b   (1)

where PLoss is the total active power losses, b is the branch number, nb is the number of branches, Ib is the series current that circulates through branch b, and Rb is the resistance of branch b. The total reactive power losses can be calculated as the sum of the reactive power losses of each branch, as shown in (2).

Q_{Loss} = \sum_{b=1}^{n_b} I_b^2 X_b   (2)

where QLoss is the total reactive power losses and Xb is the reactance of branch b. As the demand increases, the current increases and the power losses are higher, due to the long distance from the generators to the demand. Centralized generation does not allow power losses to be reduced, voltage levels at critical nodes to be improved, or network congestion to be relieved. Therefore, distributed generation is a better option to supply power to the load.

3. Distributed generation model

The location and size of DG were modeled in this research as a vector representing the active power, the reactive power, and the node at each iteration k, as shown in (3).

DG_d^k = \left[P_{DGd,i}^k,\; Q_{DGd,i}^k,\; n_d^k\right]   (3)

where d is the DG number (d = 1, 2, …, ndg) and ndg is the maximum number of DG; DG_d^k is the generator d located at node i with an active and a reactive power at iteration k; P_{DGd,i}^k and Q_{DGd,i}^k are the active and reactive power of generator d located at node i at iteration k; and n_d^k is the node of the DG at iteration k. P_{DGd,i}^k changes between the minimum and the maximum active power of generator d, Q_{DGd,i}^k changes between the minimum and the maximum reactive power, and n_d^k changes position among all possible PQ nodes, searching for the best location during the iterations. The new active power injection can be calculated as the sum of the real power and the change of active power at each iteration, as shown in (4).

P_{DGd,i}^k = P_{DGd,i}^{k-1} + \Delta P_{DGd,i}^k   (4)

where P_{DGd,i}^k is the new active power at iteration k, P_{DGd,i}^{k-1} is the last value of the active power at iteration k-1, and ΔP_{DGd,i}^k is the change in active power at iteration k. The new reactive power generation can be modeled as the sum of the last calculated reactive power and the new power change at the current iteration k, as shown in (5).

Q_{DGd,i}^k = Q_{DGd,i}^{k-1} + \Delta Q_{DGd,i}^k   (5)

where Q_{DGd,i}^k is the new reactive power at iteration k, Q_{DGd,i}^{k-1} is the last value of the reactive power at iteration k-1, and ΔQ_{DGd,i}^k is the new change in reactive power at iteration k. The active and reactive power supplied by the generators located at each node i were limited by the minimum and maximum values, as shown in (6) and (7).

P_{DGd}^{min} \le P_{DGd,i}^k \le P_{DGd}^{max}   (6)

Q_{DGd}^{min} \le Q_{DGd,i}^k \le Q_{DGd}^{max}   (7)

where P_{DGd}^{max} and Q_{DGd}^{max} are the maximum active and reactive power of generator d, respectively, and P_{DGd}^{min} and Q_{DGd}^{min} are the minimum active and reactive power of generator d, respectively. In this research, all generators were considered to supply active power up to the maximum value with a power factor of 0.98. During the search for an active power, the reactive power was adjusted to maintain the operation of each generator at the same power factor.

4. Location and size of distributed generation

This optimization problem was formulated as the minimization of the active power losses with respect to the change in power of all new DG at nodes i, as shown in (8).



\min\; OF = \sum_{b=1}^{n_b} P_{Loss,b}\left(DG_d\right), \quad d = 1, 2, \ldots, ndg   (8)

where OF is the objective function, b is the branch number, nb is the number of branches, and P_{Loss,b} is the active power losses of branch b. DG_d is the vector of generators located at different nodes in the distribution network, as defined in (3); d is the generator number and ndg the total number of generators. This objective function is subject to the following constraints:

\sum P_{Gi} - P_L - P_{Loss} = 0   (active power balance)
\sum Q_{Gi} - Q_L - Q_{Loss} = 0   (reactive power balance)
P_{Gi}^{min} \le P_{Gi} \le P_{Gi}^{max}   (active power limits)
Q_{Gi}^{min} \le Q_{Gi} \le Q_{Gi}^{max}   (reactive power limits)
V_i^{min} \le V_i \le V_i^{max}   (voltage at node i)
I_{ij} \le I_{ij}^{max}   (line current from i to j)
I_{ji} \le I_{ji}^{max}   (line current from j to i)

where i is the node and n is the number of nodes; PGi and QGi are the active and reactive power generation at node i, respectively; PL and QL are the active and reactive power of all loads, respectively; PLoss and QLoss are the active and reactive power losses, respectively; P_{Gi}^{min} and P_{Gi}^{max} are the minimum and maximum active power of the generator located at node i, respectively; Q_{Gi}^{min} and Q_{Gi}^{max} are the minimum and maximum reactive power generation, respectively; Vi is the voltage at node i; Iij is the current supplied from node i to node j, and Iji is the current supplied from node j to node i; V_i^{min} and V_i^{max} are the minimum and maximum voltage at node i, respectively; and I_{ij}^{max} and I_{ji}^{max} are the maximum line currents from buses i to j and j to i, respectively.
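One common way to evaluate this constrained objective inside a metaheuristic is a penalty formulation, sketched below with hypothetical branch and voltage data; the paper itself only states the constraints, so the penalty handling is our assumption:

import numpy as np

# Hypothetical network data: branch resistances [ohm] and rms currents [A]
# from a load flow for one candidate DG placement (not the paper's data).
R_branch = np.array([0.09, 0.05, 0.11, 0.07])
I_branch = np.array([210.0, 150.0, 90.0, 60.0])
V_nodes  = np.array([1.00, 0.97, 0.95, 0.93])     # p.u. node voltages

def objective(I_b, R_b, V, v_min=0.9, v_max=1.1, penalty=1e6):
    """Eq. (8): total active power losses, with the voltage limits enforced
    through a simple penalty term (an assumed constraint-handling choice)."""
    p_loss = np.sum(I_b**2 * R_b)                  # eq. (1), in W
    violations = np.sum(np.clip(v_min - V, 0, None)
                        + np.clip(V - v_max, 0, None))
    return p_loss + penalty * violations

print(f"OF = {objective(I_branch, R_branch, V_nodes):.1f} W")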

5. Optimization techniques

A vector X_{DG}^k containing the positions of the DG was considered to locate the generation at PQ buses. This vector contains all the DG installed in the distribution network, with their active and reactive power and their nodes, as shown in (12).

X_{DG}^k = \left[DG_1^k, DG_2^k, \ldots, DG_{ndg}^k\right]   (12)

where X_{DG}^k is the position of the particle containing the power and the nodes of all DG, V_{DG}^k is the vector of velocities for each generator at iteration k, and X_{DG}^{k-1} is the vector of positions of all DG at the previous iteration k-1.

5.1. Particle swarm optimization

PSO is a metaheuristic based on the behavior of flocks of birds or schools of fish. This iterative method moves all particles toward the best local and the best global solutions. The algorithm presented in this paper was based on [30], with the following steps (the update equations are sketched after this list):
1. Generate the initial population.
2. Generate the initial velocities of each particle, using (9).
3. Evaluate the objective function for each particle and find the fitness F(X_DG).
4. Select the particle with the best location according to the best fitness Fbest.
5. While (t < maximum iteration):
a. Generate new positions by adjusting the velocities of each particle according to (10) and (11).
b. Evaluate the objective function for the new particles and find the new fitness Fnew.
c. If Fnew(X_DG) < F(X_DG), update the solution. End if.
d. If Fnew(X_DG) < Fbest, update the best solution. End if.
End while.

Initial velocities are generated between the minimum and maximum values, as shown in (9).

v_p^{min} \le v_p \le v_p^{max}   (9)

where vp is the velocity of particle p and v_p^{max} is the maximum velocity of each particle p. The velocity of each particle is updated at each iteration k, as shown in (10).

v_p^k = w\,v_p^{k-1} + a\,r_1\left(\tilde{x}_p - x_p^{k-1}\right) + b\,r_2\left(x^{best} - x_p^{k-1}\right)   (10)

where v_p^k is the velocity updated at iteration k; w, a and b are the parameters of PSO; p is the number of the particle (p = 1, 2, 3, …, np) and np is the number of particles; v_p^{k-1} is the previous velocity at iteration k-1; r_1 and r_2 are random numbers generated at each iteration to weight the movement of each particle toward the local best or the global best, respectively; x_p^{k-1} is the previous position of particle p at iteration k-1, \tilde{x}_p is the local best position of particle p, and x^{best} is the position of the best particle. The new position of the particle, x_p^k, is updated by adding the new velocity to the previous position x_p^{k-1}, as shown in (11).

x_p^k = x_p^{k-1} + v_p^k   (11)
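A minimal sketch of the update equations (9)-(11) on normalized positions; the parameter values w, a, b and the problem dimensions are assumptions:

import numpy as np

rng = np.random.default_rng(7)
n_particles, dim = 20, 6             # e.g. 2 DG units x (P, Q, node)
w, a, b = 0.7, 1.5, 1.5              # PSO parameters (assumed values)
v_max = 0.1

x = rng.uniform(0, 1, (n_particles, dim))      # normalized positions
v = rng.uniform(-v_max, v_max, x.shape)        # eq. (9): bounded velocities
p_best = x.copy()                               # local best positions
g_best = x[0].copy()                            # global best position

def pso_step(x, v, p_best, g_best):
    """One iteration of eqs. (10)-(11)."""
    r1 = rng.uniform(size=x.shape)
    r2 = rng.uniform(size=x.shape)
    v_new = w * v + a * r1 * (p_best - x) + b * r2 * (g_best - x)
    v_new = np.clip(v_new, -v_max, v_max)       # keep velocities bounded
    return x + v_new, v_new                     # eq. (11)

x, v = pso_step(x, v, p_best, g_best)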


5.2. Bat-inspired algorithm

BA is a metaheuristic based on the behavior of bats during the search for their prey. The algorithm was presented by [32], and the steps used in this paper were defined as follows:

1. Generate the initial population.
2. Initialize the velocities of the bats.
3. Define the pulse rate $r_p$, the frequency $f_p$ and the loudness $A_p$.
4. Evaluate the objective function for each bat and find the fitness $F(x_p)$.
5. Rank the bats and select the best.
6. While (t < maximum iteration):
   a. Generate new positions, adjusting the velocities with the frequency according to (13).
   b. If rand > $r_p$, generate a solution close to the best. End if
   c. Generate new bats by flying randomly.
   d. Evaluate the objective function for the new bats and find the new fitness $F_{new}$.
   e. If $F_{new}(x_p) < F(x_p)$ and rand < $A_p$, update the solution. End if
   f. Increase $r_p$ and reduce $A_p$.
   g. Rank the bats and select the best.
   End while

The frequency $f_p$, used to update each bat during the iterations, is calculated using (13) [32]:

$f_p = f_{min} + (f_{max} - f_{min}) \, \beta_p \qquad (13)$

where p is the number of the bat, $f_p$ is the frequency of bat p, $f_{max}$ is the maximum frequency, $f_{min}$ is the minimum frequency, and $\beta_p$ is a random number used to generate different frequencies. The velocity of each bat is updated using the previous velocity at iteration k−1 and the difference between the previous position $x_p^{k-1}$ and the best position $x_{best}$, multiplied by the frequency $f_p$, as shown in (14):

$v_p^{k} = v_p^{k-1} + (x_p^{k-1} - x_{best}) \, f_p \qquad (14)$

The new position of each bat is calculated using (11), with $x_p^{k}$ as the new position of the bat, $v_p^{k}$ the new velocity and $x_p^{k-1}$ the previous position. The vector of positions of the DG for this technique was modeled using (12).
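The following Python sketch illustrates the bat updates in (13)-(14) in the same spirit, following Yang's formulation in [32]. The objective, the loudness and pulse-rate schedules, and all numeric ranges are illustrative assumptions rather than the paper's implementation:

```python
# Minimal bat-algorithm sketch for the updates in (13)-(14), with a
# placeholder objective; rates and ranges are illustrative.
import math
import random

F_MIN, F_MAX = 0.0, 2.0        # frequency range of Eq. (13)
NB, ITERS, ND = 20, 50, 3
P_MAX = 1000.0

def losses(x):
    # Stand-in for the power-flow-based objective of Eq. (8).
    return sum((p - 0.6 * P_MAX) ** 2 for p in x)

bats = [[random.uniform(0, P_MAX) for _ in range(ND)] for _ in range(NB)]
vel = [[0.0] * ND for _ in range(NB)]
loud = [0.9] * NB              # loudness Ap
pulse = [0.1] * NB             # pulse rate rp
best = min(bats, key=losses)

for k in range(ITERS):
    for p in range(NB):
        beta = random.random()
        f_p = F_MIN + (F_MAX - F_MIN) * beta            # Eq. (13)
        cand = []
        for d in range(ND):
            vel[p][d] += (bats[p][d] - best[d]) * f_p   # Eq. (14)
            cand.append(max(0.0, min(P_MAX, bats[p][d] + vel[p][d])))  # Eq. (11)
        if random.random() > pulse[p]:
            # Local random walk around the current best solution.
            cand = [max(0.0, min(P_MAX, b + 0.01 * P_MAX * random.gauss(0, 1)))
                    for b in best]
        if losses(cand) < losses(bats[p]) and random.random() < loud[p]:
            bats[p] = cand
            loud[p] *= 0.9                                  # reduce loudness Ap
            pulse[p] = 0.1 * (1 - math.exp(-0.9 * (k + 1))) # increase pulse rate rp
    best = min(bats + [best], key=losses)

print("best losses:", round(losses(best), 2))
```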

6. Test systems and simulations

6.1. Test system cases

Most loads are supplied through radial distribution networks, and this test focused on determining the location and size of distributed generation in such networks. Therefore, the 33-node and the 69-node test feeders were selected to test the algorithms. General information about the test feeders can be found in Table 1.

Table 1. Information of the radial distribution networks.
Specifications    33-node    69-node
Buses                 33         69
Lines                 32         68
Feeder                 1          1
Transformers           0          0
Loads                 32         49
Source: The Authors.

Figure 1. Diagram of the 33-node test feeder. Source: [33]

Figure 2. Diagram of the 69-node test feeder. Source: [34]

Fig. 1 shows the diagram of the 33-node test feeder. This distribution network has 32 possible nodes to locate DG. The total load is 3715 kW and 2300 kVAr, distributed in 32 nodes. The centralized generation supplies 3926 kW and 2443 kVAr, and the power losses are 210 kW and 143 kVAr. The voltage limits were defined as $V^{min}$ = 0.9 p.u. and $V^{max}$ = 1.1 p.u.

Fig. 2 shows the diagram of the 69-node test feeder. This radial distribution network has 68 possible nodes to locate DG. The total load is 4014 kW and 2845 kVAr, distributed in 49 nodes. The centralized generation supplies 4265 kW and 2957 kVAr, and the power losses are 265 kW and 112 kVAr. The voltage limits considered for this distribution network were also defined as $V^{min}$ = 0.9 p.u. and $V^{max}$ = 1.1 p.u.

6.2. Scenarios for the simulation

Five scenarios of active and reactive power capacities of DG were tested to determine the best algorithm. Table 2 shows the five scenarios, the maximum active and reactive power per generator, and the total active and reactive power generation.


Table 2. Scenarios defined for the test.
Scenario    PDGd (kW)    QDGd (kVAr)    PDGT (kW)    QDGT (kVAr)
1               500          164           2500           820
2               750          246           3750          1230
3              1000          328           5000          1640
4              1500          493           7500          2465
5              2000          657          10000          3285
Source: The Authors.

where PDGd and QDGd are the active and reactive power of generator d, respectively. PDGT and QDGT are the total active and reactive power of all generators to be installed in the distribution network, respectively.

6.3. General parameters of the algorithms

PSO and BA were compared in this research to determine the best location and size of DG in distribution networks. Comparisons of the minimum objective functions were carried out considering the following parameters:

• 200 individuals
• 100 iterations
• The same initial population
• The same programming structure
• Comparable velocities for updating positions

The test consisted in giving the algorithms similar characteristics in order to obtain the efficiency of each technique. Furthermore, the test was based on determining the best solutions according to the number of generators and their capacities.

7. Results and Discussion

7.1. Location and size of DG

Tables 3 and 4 show the results of DG location and size for the five scenarios and the seven generators installed in the 33-node and 69-node test feeders, respectively. PT is the total power supplied by the new generators located in the network. PLoss is the total power losses of the distribution network obtained for each scenario.

Table 3. Location and size of DG in the IEEE 33-node test feeder.
Gen. Scen. | PSO: PT (kW) / Nodes / PLoss (kW) | BA: PT (kW) / Nodes / PLoss (kW)
0  0 | 0 / 0 / 210.99 | 0 / 0 / 210.99
1  1 | 500 / 15 / 145.89 | 500 / 15 / 145.89
1  2 | 750 / 14 / 127.85 | 750 / 14 / 127.85
1  3 | 1000 / 12 / 116.72 | 1000 / 12 / 116.72
1  4 | 1500 / 30 / 101.90 | 1500 / 30 / 101.90
1  5 | 2000 / 27 / 96.62 | 2000 / 27 / 96.62
2  1 | 1000 / 15-32 / 97.51 | 1000 / 15-32 / 97.51
2  2 | 1500 / 14-31 / 71.47 | 1500 / 14-31 / 71.47
2  3 | 2000 / 12-30 / 60.60 | 2000 / 12-30 / 60.60
2  4 | 2210 / 13-30 / 58.07 | 2206 / 13-30 / 58.07
2  5 | 2130 / 14-29 / 59.48 | 2199 / 13-30 / 58.13
3  1 | 1500 / 15-30-32 / 71.45 | 1500 / 15-30-32 / 71.45
3  2 | 2250 / 14-25-31 / 52.52 | 2250 / 14-25-31 / 52.52
3  3 | 3000 / 13-24-30 / 44.02 | 3000 / 12-24-30 / 43.59
3  4 | 3486 / 14-24-30 / 43.11 | 3178 / 13-24-30 / 42.10
3  5 | 2613 / 6-14-32 / 49.68 | 3182 / 13-24-30 / 41.91
4  1 | 2000 / 8-15-30-31 / 54.41 | 2000 / 8-15-30-32 / 54.32
4  2 | 3000 / 7-14-24-31 / 38.98 | 2995 / 7-14-25-31 / 38.36
4  3 | 3516 / 6-12-24-32 / 40.51 | 3497 / 7-14-24-30 / 37.14
4  4 | 3268 / 7-14-25-33 / 42.59 | 3363 / 6-13-24-30 / 38.96
4  5 | 2998 / 7-13-25-33 / 41.25 | 3536 / 6-14-24-31 / 37.39
5  1 | 2500 / 9-14-25-29-32 / 41.77 | 2500 / 8-15-25-29-32 / 41.38
5  2 | 3091 / 7-11-18-25-32 / 37.15 | 3250 / 8-14-25-27-31 / 36.99
5  3 | 3547 / 2-6-13-31-25 / 38.66 | 3365 / 6-8-15-31-25 / 37.12
5  4 | 3080 / 2-8-16-24-32 / 41.27 | 3494 / 7-11-16-24-30 / 36.76
5  5 | 3589 / 2-6-14-25-31 / 38.51 | 3431 / 7-14-24-28-32 / 36.34
6  1 | 2953 / 6-8-16-25-30-33 / 37.13 | 3000 / 6-8-16-25-30-32 / 36.75
6  2 | 3340 / 3-7-14-25-30-33 / 37.02 | 4107 / 2-3-7-14-25-31 / 36.55
6  3 | 3457 / 7-11-13-15-24-30 / 37.85 | 4083 / 2-6-10-15-24-30 / 36.28
6  4 | 3640 / 7-13-17-20-24-30 / 38.75 | 3967 / 6-12-16-21-24-31 / 36.42
6  5 | 4118 / 2-13-24-26-29-33 / 38.02 | 3947 / 6-8-16-24-25-32 / 37.72
7  1 | 3070 / 3-7-11-15-25-29-32 / 37.27 | 3500 / 6-8-14-24-25-29-33 / 34.05
7  2 | 3080 / 6-8-13-17-25-30-33 / 34.75 | 3519 / 7-11-15-21-25-30-32 / 33.36
7  3 | 4141 / 2-10-13-18-24-27-30 / 36.57 | 3694 / 7-12-15-24-25-27-31 / 35.21
7  4 | 3443 / 5-7-13-15-21-25-31 / 37.10 | 3733 / 11-16-20-25-27-30-33 / 35.35
7  5 | 3316 / 8-12-14-24-25-26-32 / 35.68 | 3605 / 8-11-15-21-25-26-31 / 34.23
Source: The Authors.



The algorithms found different solutions around the maximum power injection allowed at each node, according to the power loss reduction achieved. Some scenarios supply more real power, but their power losses are higher.

From the results shown in the two tables, PSO obtained good results for the location of a small number of generators with small capacities. When the number of generators and the capacities increased, convergence was affected and the solutions were not as good as expected. BA obtained results similar to PSO for a small number of generators and capacities, but as the problem grew the solutions obtained with BA improved. BA achieved a larger reduction in power losses, and its results were consistent with the number, capacity and location of the generators. Both algorithms found compensations according to the maximum power injection of each node. For the scenarios with the largest generation capacity, the algorithms found solutions in which not all of the available capacity is needed, since power losses increase when generators inject more power.

Table 4. Location and size of DG in the IEEE 69-node test feeder.
Gen. Scen. | PSO: PT (kW) / Nodes / PLoss (kW) | BA: PT (kW) / Nodes / PLoss (kW)
0  0 | 0.00 / 0 / 265.00 | 0.00 / 0 / 265.00
1  1 | 500 / 64 / 170.48 | 500 / 64 / 170.48
1  2 | 750 / 61 / 141.07 | 750 / 61 / 141.07
1  3 | 1000 / 61 / 117.04 | 1000 / 61 / 117.04
1  4 | 1500 / 61 / 85.31 | 1500 / 61 / 85.31
1  5 | 2000 / 61 / 73.28 | 2000 / 61 / 73.28
2  1 | 1000 / 61-64 / 116.53 | 1000 / 61-64 / 116.53
2  2 | 1500 / 61-62 / 85.33 | 1500 / 61-62 / 85.33
2  3 | 2000 / 61-62 / 73.48 | 2000 / 61-62 / 73.48
2  4 | 2351 / 21-61 / 49.38 | 2351 / 21-61 / 49.38
2  5 | 2786 / 21-61 / 42.71 | 2742 / 22-61 / 42.61
3  1 | 1500 / 25-61-64 / 82.90 | 1500 / 24-61-64 / 82.62
3  2 | 2250 / 22-62-63 / 50.77 | 2250 / 21-61-62 / 49.88
3  3 | 3000 / 17-60-62 / 45.97 | 2745 / 21-61-62 / 42.82
3  4 | 3017 / 17-61-62 / 44.68 | 2550 / 21-61-62 / 43.32
3  5 | 2800 / 2-21-61 / 42.72 | 3640 / 19-49-61 / 41.13
4  1 | 2000 / 27-61-62-63 / 57.05 | 2000 / 24-61-62-64 / 54.90
4  2 | 3000 / 22-55-62-64 / 45.20 | 3000 / 9-22-61-62 / 44.19
4  3 | 3220 / 24-61-62-69 / 41.93 | 3040 / 11-21-61-62 / 39.72
4  4 | 2630 / 24-60-63-69 / 45.01 | 3109 / 16-61-62-69 / 41.85
4  5 | 3145 / 14-23-61-69 / 40.66 | 3293 / 11-21-29-61 / 39.57
5  1 | 2500 / 23-57-62-63-69 / 54.49 | 2500 / 12-21-61-62-64 / 45.95
5  2 | 3000 / 22-54-62-64-69 / 45.27 | 3750 / 10-22-28-61-62 / 44.28
5  3 | 4059 / 17-37-50-61-62 / 41.55 | 3197 / 8-24-61-62-69 / 40.97
5  4 | 3385 / 20-46-53-61-62 / 41.78 | 5362 / 7-24-49-61-64 / 41.38
5  5 | 3218 / 18-21-39-61-63 / 42.75 | 3748 / 13-19-49-61-69 / 39.36
6  1 | 2414 / 15-26-56-61-62-64 / 47.19 | 3000 / 12-24-58-61-63-64 / 40.73
6  2 | 2630 / 17-26-60-61-62-69 / 44.38 | 4347 / 8-21-36-58-61-63 / 42.73
6  3 | 5435 / 4-18-50-53-61-62 / 42.08 | 3614 / 9-22-31-61-62-67 / 39.93
6  4 | 3930 / 19-49-58-61-63-69 / 40.88 | 4006 / 9-20-28-40-61-65 / 40.74
6  5 | 3760 / 17-40-50-61-63-67 / 43.19 | 4580 / 10-20-38-49-61-64 / 37.75
7  1 | 3452 / 12-21-29-57-60-63-65 / 44.27 | 3500 / 12-20-50-58-61-62-63 / 39.59
7  2 | 5155 / 9-21-37-50-61-62-69 / 43.77 | 4598 / 2-11-22-28-58-61-62 / 39.97
7  3 | 4525 / 11-23-37-44-54-61-62 / 42.95 | 3612 / 22-27-50-55-61-64-69 / 39.17
7  4 | 4896 / 6-23-36-49-62-64-69 / 40.43 | 4196 / 10-21-29-49-60-61-64 / 39.84
7  5 | 5961 / 4-15-21-48-50-61-62 / 41.66 | 4336 / 2-9-23-49-61-62-69 / 37.92
Source: The Authors.

7.2. Convergence of the algorithms

Fig. 3 shows the convergence of the algorithms for the scenario with maximum active power and five generators. The test was conducted using 200 individuals and 100 iterations. The average of all solutions was used to evaluate the behavior of the individuals and the evolution of all values. PSO reduced the average of the solutions slowly, and the number of evaluations made to solve the problem was not enough to find better solutions. BA converged faster than PSO and found better solutions for the same number of iterations. During the first ten iterations, this technique already reached good solutions, and the search then centered on reducing the objective to its minimum value.
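As a hedged sketch of how a convergence comparison such as Fig. 3 can be produced, the snippet below tracks the population average and the best objective value per iteration; step is a stand-in for one PSO or BA update, and the numeric ranges are illustrative:

```python
# Track average and best objective per iteration for a population of 200
# individuals over 100 iterations, as in the convergence test of Fig. 3.
import random

def step(population):
    # Placeholder update: nudge each individual toward the current best.
    best = min(population)
    return [x + 0.3 * (best - x) + random.gauss(0, 0.5) for x in population]

pop = [random.uniform(40, 130) for _ in range(200)]   # losses in kW (illustrative)
history = []
for it in range(100):
    pop = step(pop)
    history.append((it + 1, sum(pop) / len(pop), min(pop)))

for it, avg, best in history[:3]:
    print(f"iter {it}: average = {avg:.1f} kW, best = {best:.1f} kW")
```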



Figure 3. Convergence of PSO and BA (power losses vs. iteration). Source: The Authors.

7.3. Power losses reduction

Figs. 4 and 5 show the results of BA for reducing power losses with DG in the 33-node and 69-node test feeders, respectively. Fig. 4 shows that the active power losses were reduced according to the number, capacities and locations of the generators. The maximum active power injection that reduces power losses was found for all five scenarios in the 33-node test feeder. Fig. 5 shows similar results for power loss reduction in the 69-node test feeder, but there the maximum power injection was found for a smaller number of generators and capacities.

Figure 4. Power losses reduction with DG in the 33-node test feeder. Source: The Authors.

Figure 5. Power losses reduction with DG in the 69-node test feeder. Source: The Authors.

8. Conclusions

PSO and BA were tested to find the location and size of DG in two radial distribution networks. PSO found good results with a small number of generators and capacities, but for more difficult problems the search was not successful. BA showed good results for minimizing power losses in the five scenarios, obtaining consistent results when changing capacities, locations, and the number of generators. BA converged faster than PSO and found better solutions for all scenarios. Power losses were reduced according to the number of generators and capacities, but the maximum reduction found depends on the number of generators and their capacities.

Acknowledgments

This research was supported in part by the energy strategic area of the Universidad del Norte. The authors would like to thank the Power Systems Research Group of the Universidad del Norte for the valuable information provided for this research.

References

[1] Ramesh, L., Chowdhury, S.P., Chowdhury, S., Natarajan, A.A. and Gaunt, C.T., Minimization of power loss in distribution networks by different techniques. In Proc. World Academy of Science, Engineering and Technology, 3, pp. 521-527, 2009.
[2] Gopiya-Naik, S., Khatod, D.K. and Sharma, M.P., Optimal allocation of distributed generation in distribution system for loss reduction. In Proc. IACSIT Coimbatore Conferences, 28, pp. 42-46, 2012.
[3] Singh, A.K. and Parida, S.K., Selection of load buses for DG placement based on loss reduction and voltage improvement sensitivity. In Proc. International Conference on Power Engineering, Energy and Electrical Drives, pp. 1-6, 2011. DOI: 10.1109/powereng.2011.6036559
[4] Atwa, Y.M., El-Saadany, E.F., Salama, M.M.A. and Seethapathy, R., Optimal renewable resources mix for distribution system energy loss minimization, IEEE Transactions on Power Systems, 25 (1), pp. 360-370, 2010. DOI: 10.1109/TPWRS.2009.2030276
[5] Hung, D.Q., Mithulananthan, N. and Bansal, R.C., Analytical strategies for renewable distributed generation integration considering energy loss minimization, Applied Energy, 105, pp. 75-85, 2013. DOI: 10.1016/j.apenergy.2012.12.023



[6] Shaaban, M.F. and El-Saadany, E.F., Optimal allocation of renewable DG for reliability improvement and losses reduction. In Proc. IEEE Power and Energy Society General Meeting, pp. 1-8, 2012. DOI: 10.1109/pesgm.2012.6345054
[7] Albadi, M.H., Al-Hinai, A.S., Al-Abri, N.N., Al-Busafi, Y.H. and Al-Sadairi, R.S., Optimal allocation of renewable-based DG resources in rural areas using genetic algorithms. In Proc. APPEEC Power and Energy Engineering Conference, pp. 1-4, 2012. DOI: 10.1109/appeec.2012.6307161
[8] Riyami, M.A., Khalasi, F.A., Hinai, A.A., Shuraiqi, M.A. and Bouzguenda, M., Power losses reduction using solar photovoltaic generation in the rural grid of Hij-Oman. In Proc. IEEE International Energy Conference, pp. 553-557, 2010. DOI: 10.1109/energycon.2010.5771743
[9] Chen, X. and Gao, W., Effects of distributed generation on power loss, loadability and stability. In Proc. IEEE SoutheastCon, pp. 468-473, 2008.
[10] Junjie, M.A., Yulong, W. and Yang, L., Size and location of distributed generation in distribution system based on immune algorithm, Systems Engineering Procedia, 4, pp. 124-132, 2012. DOI: 10.1016/j.sepro.2011.11.057
[11] Zhang, D., Fu, Z. and Zhang, L., Joint optimization for power loss reduction in distribution system, IEEE Transactions on Power Systems, 23 (1), pp. 161-169, 2008. DOI: 10.1109/TPWRS.2007.913300
[12] Wu, Y.-K., Lee, C.-Y., Liu, L.-C. and Tsai, S.-H., Study of reconfiguration for the distribution system with distributed generators, IEEE Transactions on Power Delivery, 25 (3), pp. 1678-1685, 2010. DOI: 10.1109/TPWRD.2010.2046339
[13] Rao, R.S., Ravindra, K., Satish, K. and Narasimham, S.V.L., Power loss minimization in distribution system using network reconfiguration in the presence of distributed generation, IEEE Transactions on Power Systems, 28 (1), pp. 317-325, 2013. DOI: 10.1109/TPWRS.2012.2197227
[14] Hajizadeh, A. and Hajizadeh, E., PSO-based planning of distribution systems with distributed generations. In Proc. World Academy of Science, Engineering and Technology, 21, pp. 598-603, 2008.
[15] Lalitha, M.P., Reddy, V.C.V. and Usha, V., Optimal DG placement for minimum real power loss in radial distribution system using PSO, Journal of Theoretical and Applied Information Technology, 13 (2), pp. 107-116, 2010.
[16] Kansal, S., Sai, B.B.R., Tyagi, B. and Kumar, V., Optimal placement of distributed generation in distribution networks, International Journal of Engineering, Science and Technology, 3 (3), pp. 47-55, 2011. DOI: 10.4314/ijest.v3i3.68421
[17] Wang, L. and Singh, C., Reliability-constrained optimum placement of reclosers and distributed generators in distribution networks using an ant colony system algorithm, IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, 38 (6), pp. 757-764, 2008. DOI: 10.1109/TSMCC.2008.2001573
[18] Gautam, D. and Mithulananthan, N., Optimal DG placement in deregulated electricity market, Electric Power Systems Research, 77 (12), pp. 1627-1636, 2007. DOI: 10.1016/j.epsr.2006.11.014
[19] Hung, D.Q., Mithulananthan, N. and Bansal, R.C., Analytical expressions for DG allocation in primary distribution networks, IEEE Transactions on Energy Conversion, 25 (3), pp. 814-820, 2010. DOI: 10.1109/TEC.2010.2044414
[20] Wang, C. and Nehrir, M.H., Analytical approaches for optimal placement of distributed generation sources in power systems, IEEE Transactions on Power Systems, 19 (4), pp. 2068-2076, 2004. DOI: 10.1109/TPWRS.2004.836189
[21] Teng, J.-H., Luor, T.-S. and Liu, Y.-H., Strategic distributed generator placements for service reliability improvements. In Proc. IEEE Power Engineering Society Summer Meeting, 2, pp. 719-724, 2002. DOI: 10.1109/PESS.2002.1043399
[22] Borges, C.L.T. and Falcao, D.M., Optimal distributed generation allocation for reliability, losses, and voltage improvement, International Journal of Electrical Power and Energy Systems, 28 (6), pp. 413-420, 2006. DOI: 10.1016/j.ijepes.2006.02.003

[23] Shaaban, M.F., Atwa, Y.M. and El-Saadany, E.F., DG allocation for benefit maximization in distribution networks, IEEE Transactions on Power Systems, 28 (2), pp. 639-649, 2012. DOI: 10.1109/TPWRS.2012.2213309
[24] Celli, G., Ghiani, E., Mocci, S. and Pilo, F., A multiobjective evolutionary algorithm for the sizing and siting of distributed generation, IEEE Transactions on Power Systems, 20 (2), pp. 750-757, 2005. DOI: 10.1109/TPWRS.2012.2213309
[25] Sutthibun, T. and Bhasaputra, P., Multi-objective optimal distributed generation placement using simulated annealing. In Proc. ECTI-CON International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, pp. 810-813, 2010.
[26] Aly, A.I., Hegazy, Y.G. and Alsharkawy, M.A., A simulated annealing algorithm for multi-objective distributed generation planning. In Proc. IEEE Power and Energy Society General Meeting, pp. 1-7, 2010. DOI: 10.1109/PES.2010.5589950
[27] Ghadimi, N. and Ghadimi, R., Optimal allocation of distributed generation and capacitor banks in order to loss reduction in reconfigured system, Research Journal of Applied Sciences, Engineering and Technology, 4 (9), pp. 1099-1104, 2012.
[28] Nara, K., Hayashi, Y., Ikeda, K. and Ashizawa, T., Application of tabu search to optimal placement of distributed generators. In Proc. IEEE Power Engineering Society Winter Meeting, 2, pp. 918-923, 2001. DOI: 10.1109/pesw.2001.916995
[29] Mokhtari-Fard, M., Noroozian, R. and Molaei, S., Determining the optimal placement and capacity of DG in intelligent distribution networks under uncertainty demands by COA. In Proc. ICSG 2nd Iranian Conference on Smart Grids, pp. 1-8, 2012.
[30] Tan, W.-S., Hassan, M., Majid, M. and Rahman, H., Optimal distributed renewable generation planning: A review of different approaches, Renewable and Sustainable Energy Reviews, 18, pp. 626-645, 2013. DOI: 10.1016/j.rser.2012.10.039
[31] Van den Bergh, F. and Engelbrecht, A.P., A new locally convergent particle swarm optimizer. In Proc. IEEE Conference on Systems, Man, and Cybernetics, pp. 96-101, 2002. DOI: 10.1109/ICSMC.2002.1176018
[32] Yang, X.-S., A new metaheuristic bat-inspired algorithm. In Proc. NISCO Nature Inspired Cooperative Strategies for Optimization, Studies in Computational Intelligence, 284, pp. 65-74, 2010. DOI: 10.1007/978-3-642-12538-6_6
[33] University of Washington. Power systems test case archive. [Online] [date of reference: February 10th of 2014]. Available at: http://www.ee.washington.edu/research/pstca/
[34] Phonrattanasak, P. and Leeprechanon, N., Optimal location of fast charging station on residential distribution grid, International Journal of Innovation, Management and Technology, 3 (6), pp. 675-681, 2012.

J.E. Candelo-Becerra, received his BSc. degree in Electrical Engineering in 2002 and his PhD in Engineering with emphasis in Electrical Engineering in 2009, both from Universidad del Valle, Cali, Colombia. His employment experience includes the Empresa de Energía del Pacífico EPSA, Universidad del Norte, and Universidad Nacional de Colombia. He is now an assistant professor at the Universidad Nacional de Colombia, Medellín, Colombia. His research interests include: planning, operation and control of power systems; artificial intelligence; and smart grids.

H.E. Hernández-Riaño, received his BSc. degree in Industrial Engineering in 1999 from Universidad Distrital Francisco José de Caldas, Bogotá, Colombia, and his MSc. degree in Management of Organizations in 2006 from Universidad EAN, Bogotá, Colombia. He is now an assistant professor in the Department of Industrial Engineering at the Universidad de Córdoba, Montería, Colombia, and is currently studying for a PhD in Industrial Engineering at the Universidad del Norte, Barranquilla, Colombia. His research interests include: planning, operation and control of power systems; operations management; supply chain management; total quality management; continual improvement processes; artificial intelligence; and optimization.



Representation of electric power systems by complex networks with applications to risk vulnerability assessment

Gabriel Jaime Correa-Henao a & José María Yusta-Loyo b

a Facultad de Ingenierías, Fundación Universitaria Luis Amigó, Medellín, Colombia. gabriel.correahe@amigo.edu.co
b Departamento de Ingeniería Eléctrica, Universidad de Zaragoza, Zaragoza, España. jmyusta@unizar.es

Received: March 27th, 2014. Received in revised form: May 14th, 2015. Accepted: July 02nd, 2015.

Abstract
The occurrence of high-impact events (e.g., blackouts with vast geographic coverage) in electric critical infrastructure systems usually requires analyzing the root causes of cascade failures through structural vulnerability studies, with well-defined methodologies that may guide decision-making for the implementation of prevention actions and for the recovery of power system operation (e.g., N-1 and N-t contingency studies). This technical contribution provides alternative techniques based upon complex networks and graph theory which, in the last few years, have been proposed as useful methodologies for the analysis of the physical behavior of electric power systems. Vulnerability assessment is achieved by testing their performance under random risk and deliberate attack threat scenarios. The results shown in this proposal lead to conclusions on the use of complex networks for contingency analysis, by means of the study of those events that result in cascade failures and consumer disconnections.
Keywords: critical infrastructure protection; vulnerability analysis; cascade failures; homeland security; risk analysis

Representación de los sistemas eléctricos de potencia mediante redes complejas con aplicación a la evaluación de vulnerabilidad de riesgos

Resumen
La ocurrencia de eventos de alto impacto (e.g., apagón con alcance geográfico) en sistemas eléctricos usualmente se diagnostica a través de técnicas de análisis estructural de vulnerabilidad, constituidas por metodologías definidas que permiten guiar la toma de decisiones en acciones de prevención y recuperación de la normalidad en la red (e.g., contingencias N-1 y N-t). En esta contribución técnica se presenta una metodología alternativa frente a las herramientas clásicas de análisis de contingencias (teoría de grafos), que últimamente se ha validado como método útil en el análisis físico de sistemas de potencia. Se realiza una valoración de la vulnerabilidad en redes de prueba IEEE, mediante cuantificación de su comportamiento ante escenarios de riesgos de tipo aleatorio o de ataques deliberados. Estos resultados permiten concluir la viabilidad de redes complejas para análisis de contingencias, mediante el estudio de eventos desencadenantes de fallos en cascada y desconexión de consumidores.
Palabras clave: protección de infraestructura crítica; análisis de vulnerabilidad; fallos en cascada; seguridad nacional; análisis de riesgos.

1. Introduction

Critical infrastructure is described by many governments as the whole set of assets that are essential for the functioning of a society and its economy. In recent years, the European Commission (EC), the United States (US) Department of Homeland Security, and others have been concerned about the security of their countries' infrastructure. The Council of the European Union adopted Directive 114/08/EC in 2008 [1],

giving rise to the European Program for Critical Infrastructure Protection (EPCIP). In 2009, the US National Infrastructure Protection Plan (NIPP) was launched and later updated in 2013 as an effort to guide decision-making under threat scenarios [2]. Such protection plans can be framed as a risk management plan involving six steps: establishing safety goals, identification of resources, risk assessment, prioritization of actions, implementation of protection programs, and measuring of their effectiveness [3,4].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 68-77. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48574



Power systems have always been regarded as one of the most important critical infrastructures in relation to the social, economic and military issues of a country. In order to analyze the electric system's vulnerability to threats, some new concepts have arisen in an attempt to describe grid performance [5]. The resilience concept suggests that a system can adapt to reach a new stable position after suffering a disturbance or contingency in one or more of its elements. Additionally, robustness implies that the system will operate its undamaged infrastructure despite being exposed to perturbations [5]. Therefore, a robust and resilient network is equivalent to a low-vulnerability network.

In order to evaluate the stated vulnerability, it is important to point out the importance of the threat quantification required by the NIPP's steps of identification and assessment. This makes it possible to determine the high-impact incidents that may lead to cascade failure events in critical infrastructures, in this case electric power systems. Such studies, usually referred to as structural vulnerability analysis [6], provide important data about the performance of the power grid when exposed to perturbations and disruptions, and they require suitable methodologies for the precise detection of anomalies and perturbations in power systems [6]. Among these techniques, N-1 and N-t contingency studies [7-9] are the most used criteria in the power industry.

On the other hand, the first definition of scale-free networks compared infrastructure systems to complex networks [10-13]. Ever since then, graph theory has provided a new perspective for the study of power systems. Furthermore, the concepts of resilience and robustness in scale-free networks have been applied to both power grids and computer networks [12]. They have proved to be a good approach to understanding the grid's dynamic behaviors that generally lead to cascade-effect failures. As a result, when applied to power systems, structural vulnerability analysis focuses on the performance of complex networks when they are exposed to disruptions, either random (tolerance against errors or faults) or deliberate (tolerance against attacks) [12,14,15].

The way nodes are removed from a scale-free network depends on graph statistical measures. Some studies suggest node removal according to their degree of connection [7,14,16-18]. Other studies suggest node removal based on their betweenness [19-21]. Besides considering random node removals or degree-based node attacks, some authors also propose recalculating the degree distribution at each iteration, after every node disruption [7,19].

In this technical contribution, the authors show the usefulness of scale-free graph measures as an accurate tool to assess the vulnerability of power transmission grids. This is undertaken by comparing operational indexes from traditional AC power flow calculations against scale-free graph indexes, assessing vulnerability under both deliberate attack and random error tolerance. This validates a method faster than AC power flow, namely graph theory modeling, which provides acceptable results for understanding the complex nature of electric critical infrastructure and its response to threats that may disrupt the normal operation of the power grid.

The paper has the following arrangement: Section 2 introduces scale-free graphs and their equivalence for electric power systems, and Section 3 describes appropriate indexes to quantify vulnerability in power grid disruptions. Section 4 proposes an algorithm for vulnerability assessment under random error and deliberate attack risk scenarios, on the basis of illustrative examples based upon IEEE test power networks, using N-1 contingency analysis and an N-t dynamic simulation model for cascade failure events. Section 5 shows the results of the proposed model on selected IEEE test networks (14, 30 and 300 buses). Discussion and conclusions on the practical applicability of scale-free graph modeling under risk scenarios are also provided at the end of the paper.

The purpose of this technical contribution focuses on the comparison of the relative vulnerability between networks. This is very useful to guide decision-making concerning the effectiveness and impact of expansion plans, e.g., providing greater robustness to the electric network (improvement of the meshing and a higher degree of bus connectivity) and its responses in both random risk scenarios and intentional attack threats.

2. Topology representation in power networks

The fields of application of graph theory, also known as complex network theory [22], are characterized by the fact that they make it easy to perform an abstract representation of a system as a network topology with statistical measures. This makes it possible to evaluate the effects of changes in topology on the robustness of the system when subjected to different types of attacks and failures.

Electric power networks resemble scale-free graphs [10], which enable the representation of most of the assets that conform the power grid. Such a representation may be simplified as a complex network where substations are specified as nodes and electric lines are sketched as links [7,8,12,15,18-20]. In those cases, it is easy to calculate cluster measures (triangles) in order to determine the graph vulnerability [12].

The representation proposed herein looks for a topology where both transformers and electric towers are also taken into account as assets susceptible to removal due to attacks or errors in the power grid. Figure 1 shows the proposed topological representation of a 14-bus electric network, compared to the traditional representation (which only considers buses and links). Thus, the resulting IEEE 14-bus network is constituted by a graph of 50 nodes and 56 links. This way, it is possible to provide a more realistic sketch of the power system as a scale-free graph, where the sets of towers that hold power lines, as well as the transformers, are considered as nodes of the graph [26]. Such a representation is very useful when assessing random-error-related risks, since a random disruption will most likely affect a node of low connectivity degree. In statistical terms, the nodes with less connectivity are the most likely to be disrupted [12,18,23,24].
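To make the proposed representation concrete, the following toy Python sketch builds such a graph for a small feeder fragment, treating buses, lines, transformers, generators and loads alike as nodes; the element names are illustrative and do not reproduce the IEEE 14-bus data:

```python
# Toy construction of the proposed topology for a feeder fragment, where
# buses, lines, transformers, generators and loads are all graph nodes.
edges = [
    ("G1", "Bus 01"),                                  # generator, single link
    ("Bus 01", "Linea 1"), ("Linea 1", "Bus 02"),      # a line is itself a node
    ("Bus 02", "Trafo 1"), ("Trafo 1", "Bus 03"),      # transformer as a node
    ("Bus 03", "C1"),                                  # load with nodal degree 1
]

adj = {}
for i, j in edges:
    adj.setdefault(i, set()).add(j)
    adj.setdefault(j, set()).add(i)

# Nodal degree of each asset: generators and loads end up with degree 1.
for node, neigh in sorted(adj.items()):
    print(node, "degree", len(neigh))
```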



Figure 1. Representation of a power electric grid as a scale-free graph (IEEE 14-bus): a) traditional electric model; b) traditional topologic model; c) proposed topologic model. Source: [15]

3. Definitions on indexes for topological representation of the power network

Representation of power systems as scale-free networks has been well documented [7,23,24]. As previously exposed in Section 2, the topological sketch of an electric power grid consists of a set of assets in which transmission lines and cables represent graph edges, whereas substations, transformers, generators, loads and electric towers represent graph nodes [12,15]. Mathematically, a graph corresponds to an adjacency matrix composed of a pair of sets G = (N, E), where N(G) is the set of nodes and E(G) is the set of edges. An edge corresponds to a connection between a pair of nodes of the form (i, j) such that i, j ∈ E. The link (i, j) is denoted as ij. An edge connecting two nodes implies $G_{ij} = 1$ (corresponding to the location of a pair of nodes) and $G_{ij} = 0$ otherwise [5,7,20]. Studying the properties of a graph leads to the analysis of the properties of the adjacency matrix [25]. The nodal degree ($k_i$) is the number of converging edges ($E_i$) at a particular node ($N_i$):

$k_i = |N_i| \qquad (1)$

where:

$N_i = \{ j \in N \mid \{i, j\} \in E \} \qquad (2)$

In order to illustrate this definition, consider the generators and loads that are connected through a single link to the power grid, meaning that their nodal degree is k = 1. The definitions in (1) and (2) constitute the basis to determine statistical measures of the scale-free graph (i.e., its robustness and its connectivity). It is important to point out that the scale-free graph representation implies that few nodes are highly connected. This means that such nodes have a larger number of edges compared to other nodes, even though the degree of connection throughout the scale-free graph is quite low. Such graphs are closer to reality, since the network grows preferentially around the nodes of greater connectivity [10,16], as happens in real infrastructures. A more detailed explanation of this characterization may be consulted in [3,7,12,15].

3.1. Graph's geodesic distance (d̄)

Based on the analysis of the scale-free graph adjacency matrix G = (N, E), it is possible to draw inferences on the evolution of the network when it is exposed to successive node removals (disintegration of the network). These statistical measures lead to the construction of indexes that reveal an equivalence between power load shedding and graph disintegration [7,12,15]. The graph geodesic distance $d_{ij}$ describes how compact a network is. The shortest geodesic distance between two nodes $d_{ij}$ is calculated by counting the minimum number of nodes in the path between a pair of nodes i and j [25]. The graph average distance d̄ is determined as a function of the network's geodesic distances $d_{ij}$ and the total number of nodes N [19], as shown in Eq. (3):

$\bar{d} = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij} \qquad (3)$

The calculation of the geodesic distances in (3) may be performed through the Dijkstra, Bellman-Ford, Floyd-Warshall and Johnson algorithms [25], which are well documented in the literature. The following definitions relate to the analysis of the adjacency matrix G = (N, E) of a scale-free graph, as the graph evolves when exposed to disintegration due to node disruptions (cascade failure evolution).
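As a small illustration, the sketch below computes the geodesic distances and the average distance of Eq. (3) with breadth-first search, which is equivalent to Bellman-Ford for unit edge weights; the ring graph is a toy example, not one of the IEEE networks:

```python
# BFS computation of geodesic distances d_ij and the average distance of
# Eq. (3) for an unweighted graph given as an adjacency dict.
from collections import deque

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_distance(adj):
    nodes = list(adj)
    n = len(nodes)
    total = sum(d for s in nodes for t, d in bfs_distances(adj, s).items() if t != s)
    return total / (n * (n - 1))            # Eq. (3)

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}   # toy 6-node ring
print("average geodesic distance:", average_distance(ring))
```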





3.2. Network connectivity (K) The connectivity of network K is determined in any graph representing the power grid by counting the amount of nodes that are connected to a scale-free graph [22], as shown in Eq. (4):

K 1

N LC N

(4)

NLC: amount of nodes connected on the remaining scalefree graph, after a node disruption under contingency events. N: Base-case: total amount of nodes in the original scalefree graph

3.4. Power grid load (PGL) Even though structural vulnerability analysis can be achieved by calculating evolution of indexes in Eq. (4) and (6), it is not clear that this evaluation method may be reliable, since electric parameters of the power grid are not involved in these calculations. Therefore, traditional power flow parameters need to be considered in order to compare the effectiveness of graph theory indexes. Power flow indexes documented in literature are mainly used to determine the impact of N1 contingencies in the power grid: Maximum Load Conditions [9], Comprehensive Information System, Power System Loss [31], and Index of Severity [8]. A good measure of functionality for the power grid network would be the consumer loads that remain connected to the electrical service after a disruption event. An intuitive power flow index to understand evolution of cascade failure events corresponds to Power Grid Load (PGL), which is also useful to quantify the load shedding condition in a power grid, as proposed in [9,28,31].

3.3. Geodesic strength (gs) From Eq. (3) the Average Efficiency (e) is formulated as [21]:

e

1 1  N   N  1 i  j d ij

(5)

Index eij is usually calculated as the inverse of geodesic distances, and it allows the quantification about how efficiently flows can be exchanged within a network. If there were no connection between two nodes: dij   , eij = 0 [15, 17,24]. From Eq. (5), we define the geodesic strength gs as a parameter that measures the functionality of a network when exposed to node disruptions. Index in Eq. (6) standardizes geodesic efficiency as formulated by [17,26,27] and enables vulnerability assessments into a network due to effects of iterative cascade failure events [10].

 1  LC   i  j  d ij gs   1  BC   i  j  d ij

       

PGL 

 P   Q   LC 2 Di

LC 2 Di

 P   Q   i

BC 2 Di

(7)

BC 2 Di

i

PDiLC: active power load that remains electrically connected, after a node disruption under contingency events. QDiLC: reactive power load that remains electrically connected, after a node disruption under contingency events. PDiBC: Base-case: Total active power load in testing network. QDiBC: Base-case: Total reactive power load in testing network. PGL in Eq. (7) is calculated as a percentage of the load that keeps connected to the remaining electric grid at each node removal iteration, in order to avoid cascade outage. Its range varies between zero and one. The higher PGL index value, the lower impact on non-supplied energy. PGL in Eq. (7) is computed by means of Standard Power Flow (SPF) routine [30] (corresponding to nonlinear equations that are solved iteratively using Newton’s Method [8,9]).

(6)

dij LC: shortest path between a pair of nodes of the scalefree graph, after a node disruption under contingency events. dij BC: Base-case: shortest path between a pair of nodes in the original scale-free graph Indexgs in (6) varies between one and zero. The lower strength index valuegs, the greater impact on the graph. It describes flow bottlenecks into the network as some geodesic paths are disrupted. This is equivalent to a power grid fragmentation due to isolation of power loads into the system. Consequently, it is possible to substitute onerous computational techniques (e.g. power flow routines) with more efficient procedures (e.g. graph theory statistics) in order to perform structural vulnerability analysis, depending on the incidents that trigger cascade failure events [10]. The convenience of merging power flow models and scale-free graph statistics has been previously demonstrated in [12,15,16] through the calculation of the responses in different IEEE networks. This is performed by contrasting
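A compact sketch of how the topological indexes just defined can be evaluated during a disintegration run is given below. It recomputes the geodesic strength gs of Eq. (6) and the connectivity K of Eq. (4) while removing the most connected node at each step (the deliberate attack strategy discussed later); the graph is a toy example and the code is illustrative, not the authors' Matlab/PSAT implementation:

```python
# Degree-based attack on a toy graph, tracking gs (Eq. 6) and K (Eq. 4).
from collections import deque

def efficiency_sum(adj):
    # Sum of 1/d_ij over all connected ordered pairs (inverse distances).
    total = 0.0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for t, d in dist.items() if t != s)
    return total

def remove_node(adj, node):
    adj.pop(node)
    for neigh in adj.values():
        neigh.discard(node)

adj = {i: {(i - 1) % 8, (i + 1) % 8, (i + 4) % 8} for i in range(8)}  # toy graph
n_base = len(adj)
e_base = efficiency_sum(adj)

for step in range(3):
    target = max(adj, key=lambda u: len(adj[u]))   # most connected node
    remove_node(adj, target)
    gs = efficiency_sum(adj) / e_base              # Eq. (6)
    K = len([u for u in adj if adj[u]]) / n_base   # Eq. (4), connected-node ratio
    print(f"removed {target}: gs = {gs:.2f}, K = {K:.2f}")
```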

3.5. Severity index (S.I) Even though Severity Index (S.I.) is well documented for N1 contingency analysis [8], the index might be useful for measuring in cascade failure events, as quoted in technical studies [9, 28]. The Severity Index is referred to as a common 71


Correa-Henao & Yusta-Loyo / DYNA 82 (192), pp. 68-77. August, 2015.

The proposed N-t cascade failure dynamic model takes into account two different scenarios in which multiple samplings are performed for random error phenomena, unlike deliberate attacks that run only one sample [12]. Since error distribution is highly asymmetric in N-t analysis, we propose taking the suggestion of the Central Limit Theorem, implying the normal distribution of data for a sufficiently large number of independent random values. Such approximation is good enough when more than 30 samples are collected [25]. The described algorithm has been implemented in Matlab®. Its programming takes into account power flow algorithms provided by PSAT (Power System Analysis Toolbox) [30]. Furthermore, the script incorporates features of MatlabBGL graph theory toolbox [31]. Geodesic distances dij in Eq. (3) are calculated according to Bellman-Ford shortest-paths algorithm [25].

method of contingency analysis based on power flow methodologies in order to establish the load level of both lines and transformers after a certain incident event. Severity Index is defined as follows [8]:

S .I 

1 N

 PDiLC    PDiBC i 1  N

   

(8)

PDiLC: active power load that remains electrically connected, after a node disruption under contingency events. N: Base-case: total number of nodes representing substations, lines, and transformers in the scale-free graph. PDiBC: Base-case: Total active power load in testing network 4. Algorithm design for vulnerability assessment on the power grid under risk scenarios

4.2. Algorithm implementation and processing time

This section explains the procedure for computational analysis, considering a first approach through N-1 contingency analysis and extending it to random errors and deliberate attacks tolerance with an N-t dynamic cascade failure simulation model. Even though the disruption strategies are explained in detail in Section 5, it is worth mentioning that Deliberate Attacks node removal means that the most connected nodes are removed at each contingency event.

Realistic scenarios have been applied in order to prove the usefulness of graph theory models, especially for N-t contingency analysis. They correspond to IEEE Testing Networks of 5, 14, 24, 30, 57, 118 and 300 buses, whose iterative processes are shown in Table 1. The data can be accessed through flat text files [15]. Table 1 shows the iterations required to perform the proposed dynamic cascade failure model, for N-t contingency analysis. In N-t contingency analysis, the structure of the network has to rearrange constantly, since it is exposed to successive node interdictions. This fact may turn out in divergences on the power flow results when executing a Standard Power Flow (SPF) routine [8]. In cases where the SPF routine does not converge, a convenient PSAT feature provides a Continuation Power Flow (CPF) routine [31], an efficient method for solving ill-conditioned cases. The algorithm has been implemented in Matlab ®. The script completes its execution until it may not be possible to disrupt any other node from the network, either because all nodes are isolated, or because there are no more electric circuits to perform power flows routines. The designed algorithm is able to calculate graph theory indexes previously defined in Eq. (4), (6) and electric power flow indexes previously exposed in Eq. (7), (8). Table 1 also shows some relevant statistics that concern the simulation of deliberate attacks and random errors on IEEE testing networks. Note that the number of iterations per sample is greater in random disruptions than node degree-based attacks.

4.1. N-1 contingency analysis model and n-t dynamic failure model simulation Cascade failure events resembles a graph disintegration, taking first into account the estimation of the parameters from a power grid that operates under steady state conditions (base case) [12]. Such cascade failure events are determined by the interdiction of network’s nodes according to certain removal strategies. A node disruption implies the elimination of all edges connected to it and therefore, its corresponding connected links also disappear. A first approach to determine the most critical assets on the power grid consists in N-1 contingency analysis. The results provide information about the nodes that require the most important attention in terms of their protection, due to the effects exerted on them throughout the network when they are interdicted from the system [9]. The described technique can be extended to a dynamic simulation model, which is equivalent to successive contingency N-1 and N-t iterations over a constantly changing topology structure. Since power flows can only be performed based upon the existence of the reference (slack) bus generator, removal of nodes is handled around the reference slack generator bus (this means that the slack generator must always be present in the network and cannot be removed). The algorithm has been designed to measure parameters only with components that remain connected to the network as it disintegrates. Generator outages are considered in random failure routines, since generators are treated as nodes in the scale-free graph that may be subject to disruptions.

5. Simulation results according to interdiction strategies The results of the simulation are shown in Figure 2 for N1 contingency analysis, whereas Figure 3 shows the results for the N-t dynamic model simulation for random error node removal strategy, and Fig. 4 for deliberate attack node removal strategy.

72


Correa-Henao & Yusta-Loyo / DYNA 82 (192), pp. 68-77. August, 2015.

strength indexes. N-1 contingency analysis provides a more realistic scenario when calculating PGL index of Eq. (7), since it relates to the power grid operating parameters. The most critical nodes may be identified as those whose removal leads to the lowest impact on either connectivity or geodesic strength. In the particular case of the IEEE 30 bus network, made up by 98 nodes, the isolation of its generators in either node 5 or node 42 implies the decoupling of system loads, configuring a blackout event that affects almost 30% of the power grid. Furthermore, disruption of node 2 may cause a significant overload in lines and transformers as implied by the quantification of S.I. (43% greater than the base case). In the IEEE 300 bus test network, even though there are no nodes that lead to a total breakdown, in a few cases (nodes 54, 166 and 876) the PGL index may decrease down to a range of 30% to 50% on the system connected load in the power grid.

Table 1. Summary of iterative processes for the N-t dynamic simulation model on IEEE test networks.
IEEE Network   Scale-Free Graph   Disruption    Samples   Iterations     Execution
(Buses)        (N° Nodes)         Strategy                per Sample     Time (sec)
5              16                 Random        35          9               125”
                                  Deliberate     1          6                10”
14             50                 Random        35         33             2.105”
                                  Deliberate     1         10                57”
24             90                 Random        35         62             4.810”
                                  Deliberate     1         18               109”
30             98                 Random        35         67             5.423”
                                  Deliberate     1         26               122”
57             186                Random        35        120            34.236”
                                  Deliberate     1         42               245”
118            449                Random        35        293            68.430”
                                  Deliberate     1        107               729”
300            978                Random        35        635           189.256”
                                  Deliberate     1        214             1.258”
Source: The Authors

5.2. Dynamic N-t model for random error tolerance In order to keep the illustrations clear in both figures, the results of only three bus networks have been plotted, corresponding to IEEE testing networks of 5, 14, 30 and 300 buses (considered a good representation of the methodology). In Figure 2 the x-axis refers to the node name failed on an N-1 contingency (unfortunately it is not possible to display all of those names). Scales for all indexes are indicated in perunit values. The “most critical” nodes may be identified in the equivalent scale-free graph of the power grid, by just determining the higher values of the I.S parameter, or the lowest values of the PGL parameter, related to the node disruption.

A network may become a target in different risk scenarios either by deliberate attacks or random errors. This fact can be studied assuming that from a connected scale-free graph of N nodes, there might be a fraction f of nodes that may be removed. This section compares vulnerability results using both graph theory indexes and classic power flow indicators, in several realistic scenarios. In this section, nodes interdiction strategies are related to random perturbations, which cause the failure of some other nodes (natural disasters, equipment faults, procedure failures). Thus, the first mechanism to be studied is the removal of randomly selected nodes. The numerical simulations in Figure 3 indicate that scale-free networks display a topological robustness against random node failures (since low degree nodes are far more abundant than high degree ones). From Figure 3, random error risk scenarios cause a total collapse of the network service (blackout) after the removal of 20% of the nodes. The PGL index evolution shows that in all cases, the IEEE bus test network completely collapses with the removal of about 20% of the nodes. The comparison between results of graph connectivity index K and PGL in Figure 3 show that nodal connectivity is not proportionally related to the grid’s electrical condition. This means that the PGL electrical index evolves at a different bias rate than the impact on connectivity graph index K. The use of the average geodesic strength gs index, shown in Figure 3, does really facilitate contrasting of results between classic electrical parameters and topological indicators, since the results of the geodesic strength gs match with the forecasts obtained through electric index PGL from Eq. (7). A graphic comparison between PGL andgs for events of random error disruptions (Figure 3) shows that the 300 bus test-network is the most vulnerable, followed by 57 and 24 bus test-networks. Severity Index S.I. from Eq. (8) has also been sketched in a secondary axis, in order to understand the evolution on loading of both transformers and transmission lines, as node

5.1. N-1 contingency analysis The study of N-1 contingency analysis refers to those events that occur when a network element is removed or taken out of service due to unforeseen circumstances. For every grid’s disruption, power flows are redistributed throughout the network and voltage bars change. As a result, there may be overloaded lines and transformers [7]. Figure 2 shows the results for both power grid load index (Eq. 7) and severity index (Eq. 8) compared to geodesic strength and connectivity index (Eq. 4 and Eq. 6) for N-1 contingency analysis in IEEE test networks of 5, 14, 30 and 300 buses. The analysis is performed through the successive execution of the Standard Power Flow Newton-Raphson algorithm [7,30,31]. As explained in Eq. (7) and Eq. (8) the contingency results are compared to the base case, i.e., the network operating under normal conditions. Hence, N-1 contingency analysis allows the identification of the most vulnerable nodes in the power network, which is the first step for decision-making in critical infrastructure protection. Graph theory indexes gs in Eq. (6) and K in Eq. (4) show that in all cases the greatest impact on the network occurs through the removal or isolation of nodes with a higher degree of connectivity, especially buses. However, networks with less nodes (such as generators, capacitors and loads) have a minimal impact on both connectivity and geodesic

73


Correa-Henao & Yusta-Loyo / DYNA 82 (192), pp. 68-77. August, 2015.

N‐1 Contingency Analysis ‐ IEEE 14 bus test grid Performance Indexes (gs, K, PGL, SI)

Performance Indexes (gs, K, PGL, SI)

N‐1 Contingency Analysis ‐ IEEE 5 bus test grid 1,0 0,8 0,6 0,4

`gs ‐ IEEE 05 K ‐ IEEE 05 S.I. ‐ IEEE_05 PGL ‐ IEEE_05

0,2 0,0 0

2

4

6

8 10 12 Disrupted Node Number

14

16

1,0 0,8 0,6 0,4

`gs ‐ IEEE 14 K ‐ IEEE 14 S.I. ‐ IEEE_14 PGL ‐ IEEE_14

0,2 0,0

18

0

20 30 Disrupted Node Number

40

50

N‐1 Contingency Analysis ‐ IEEE 300bus test grid 1,0

1,0 Performance Indexes (gs, K, PGL, SI)

Performance Indexes (gs, K, PGL, SI)

N‐1 Contingency Analysis ‐ IEEE 30 bus test grid

10

0,8

0,8

`gs ‐ IEEE 30 K ‐ IEEE 30 S.I. ‐ IEEE_30 PGL ‐ IEEE_30

0,6

0,4

0,6

`gs ‐ IEEE 300 K ‐ IEEE 300 S.I. ‐ IEEE_300 PGL ‐ IEEE_300

0,4

0,2

0,2

0,0

0,0 0

10

20

30

40 50 60 Disrupted Node Number

70

80

90

100

0

200

400 600 Disrupted Node Number

800

1000

Figure 2. N-1 Contingency Analysis in IEEE Test Networks Source: The Authors

Fig. 4. The slight recovery of the I.S parameter when about 2% of the nodes are removed in both IEEE 5 and IEEE 14 bus test network is explained by the fact that there is an electrically connected circuit around the slack generator, which allows the circulation of power flows. Taking into account the results shown, the networks are completely isolated when removing 2% and 25% of nodes respectively at deliberate attacks (Fig. 4) and random error (Fig. 3) removal strategies. Results for deliberate attack disruptions (Fig. 4) show the existence of a better correlation between geodesic strength index gs (5) and PGL parameter (7), than network connectivity index K (4). When the geodesic strength gs has a value close to zero, this implies a greater disintegration of the network, and hence flows between generators and loads need to step over more paths.

removing takes place during cascade failure events. It can be noticed that these elements of the network discharge their loads as long as the PGL index evolves in similar trends. One interesting observation relates to the remarkable bias on the gs index when compared to the S.I parameter. 5.3. Dynamic N-t model for deliberate attack tolerance Another realistic risk scenario is given by deliberate attacks, consisting of those risks caused by antagonist attackers on the power grid, i.e. emulating an intentional attack on the network [16]. This second removal strategy, in which the most highly connected nodes are removed at each contingency event, is the most damaging scenario to the integrity of the system [32]. In the case of an intentional attack, when the nodes with the highest number of edges are targeted, the network breaks down faster than in the case of random node removal. The PGL index evolution in Fig. 3 shows that, under deliberate attacks, the removal of only 1% of the nodes in the plotted bus-test networks causes a blackout (100% of loads are isolated from the power grid). Furthermore, I.S. evolution shows that the loads are quickly discharged from transformers and transmission lines. This fact demonstrates the reason why scale-free networks may be fragile to intentional attacks, since the removal of the nodes with higher connectivity has a dramatic disruptive effect on the network, and this can be observed in

6. Discussion It would be worth determining the dependence among electric parameters (PGL, S.I.), relating to graph theory indexes (K, gs), so it is possible to estimate the benefit of using the mechanisms of graph theory, instead of power transfer capacity between generators and loads, in order to perform vulnerability analysis. A practical measure to establish such dependence is the Pearson correlation coefficient , which is obtained via division of the covariance of two variables by the product of their standard deviations , as exposed in Eq. (9).

74


Correa-Henao & Yusta-Loyo / DYNA, pp. 1-2. April, 2015.

Figure 3. Results of graph theory indexes and power flow parameters for random errors in IEEE test networks, after averaging 35 samples in the N-t dynamic model. Nodes are removed randomly. Source: The Authors

0,3

0,4

0,2

0,2

0,1

0,0

0,0 0,1 0,15 f ‐ fraction of removed nodes

`gs ‐ IEEE 14 K ‐ IEEE 14 PGL ‐ IEEE_14 S.I. ‐ IEEE_14

0,8 0,6

0,6

0,4 0,2

0,0

0,0 0

0,2

0,4 0,2 0,0

0,0 0,05

0,1 0,15 f ‐ fraction of removed nodes

0,05

0,1 0,15 f ‐ fraction of removed nodes

0,2

Deliberate Attacks Tolerance ‐ IEEE 300‐bus test grid

0,1

0

0,6

0,2

0,2

`gs ‐ IEEE 30 K ‐ IEEE 30 PGL ‐ IEEE_30 S.I. ‐ IEEE_30

0,8

0,8

0,4

Deliberate Attacks Tolerance ‐ IEEE 30 bus test grid

1,0

1,0

0,2

1,4

1,0

`gs ‐ IEEE 300 K ‐ IEEE 300 PGL ‐ IEEE_300 S.I. ‐ IEEE_300

0,8 0,6

1,2 1,0 0,8 0,6

0,4

0,4 0,2

0,2

0,0

0,0 0

0,05 0,1 0,15 f ‐ fraction of removed nodes

0,2

Power Flow Severity Index (S.I)

0,05

Grid Performance Indexes (gs, K, PGL)

Grid Performance Indexes (gs, K, PGL)

0

1,2

1,0

Power Flow Severity Index (S.I)

PGL ‐ IEEE_05 S.I. ‐ IEEE_05

0,6

Grid Performance Indexes (gs, K, PGL)

0,4

Power Flow Severity Index (S.I)

0,5

`gs ‐ IEEE 05 K ‐ IEEE 05

0,8

Deliberate Attacks Tolerance ‐ IEEE 14 bus test grid

0,6

Power Flow Severity Index (S.I)

Grid Performance Indexes (gs, K, PGL)

Deliberate Attacks Tolerance ‐ IEEE 5 bus test grid 1,0

Figure 4. Results for graph theory indexes and power flow parameters for deliberate attacks in IEEE test networks in N-t dynamic model. Nodes are removed in decreasing degree order Source: The Authors © The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 68-77. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48574



$$\rho_1 = \frac{\operatorname{cov}(PGL, K)}{\sigma_{PGL}\,\sigma_{K}} \qquad
\rho_2 = \frac{\operatorname{cov}(PGL, gs)}{\sigma_{PGL}\,\sigma_{gs}} \qquad
\rho_3 = \frac{\operatorname{cov}(IS, K)}{\sigma_{IS}\,\sigma_{K}} \qquad
\rho_4 = \frac{\operatorname{cov}(IS, gs)}{\sigma_{IS}\,\sigma_{gs}} \tag{9}$$

where ρi denotes the correlation between an electric index (PGL, I.S.) and a graph theory index (K, gs).

Figure 5. Comparative results of Pearson coefficients (ρ) between electric parameters and graph theory indexes for the IEEE 5-, 14-, 24-, 30-, 57-, 118- and 300-bus test networks, under random and deliberate removal strategies. Source: The Authors

Fig. 5 reveals the results of the correlations defined in Eq. (9). Note that for the random error node removal strategy, the Pearson correlation ρ2 is close to +1, which implies a positive linear relationship between the PGL index and the geodesic strength gs. According to this comparison, the gs parameter would also be useful to determine the electric load PDi disconnected from the power grid during cascade failure events. On the other hand, the connectivity index K shows a lower correlation ρ1 with the electrical index PGL, which means it should not be considered a precise indicator to assess the vulnerability of electric networks. This correlation therefore confirms the comparison of the bias trends between index gs (6) and PGL (7) shown in Figs. 3 and 4. Under deliberate attacks, the correlation ρ2 for parameter gs is weaker than ρ1 for index K. Even so, the average geodesic strength gs is of interest for making comparisons between different power systems and determining which one is the most vulnerable. A final comment relates to the correlations between I.S. and the graph theory indexes (ρ3, ρ4), which are quite close to +1 for random disruption strategies in all test networks. This means that both graph parameters may be useful to infer the impact on the loadability of the power system. The results on the Pearson correlations allow an inference on the advantage of using graph-theory-based models, since they perform faster when applied to the IEEE test systems than traditional AC power flows. This issue is critical for recovering a power system from serious disturbances such as blackouts, transmission line outages, and terrorist attacks.

7. Conclusion

The proposal explained herein allows the comparison of numerical indexes from graph theory (K, gs) and power flow techniques (PGL, I.S.) in order to assess the vulnerability of any power grid. The usefulness of combining scale-free graph statistics and power flow model parameters has been validated, making it possible to substitute laborious computational tools (i.e. classic power flow techniques) with more efficient techniques (i.e. graph theory statistics) to perform structural vulnerability analysis of any electric network, depending on the events that trigger cascade failures (random risks or deliberate threats). The convenience of N-1 contingency analysis for identifying the most vulnerable nodes in the power system has been validated; this is the first step for decision-making in the protection of these assets. It has also been proven that scale-free graph indexes can be used to rank the vulnerability of one power grid topology against another, especially for N-t dynamic cascade failure events. This feature is a great advantage, since it is not necessary to run power flow routines to compare the vulnerability of different power systems. This result also demonstrates the computational efficiency of the proposed method.

References

[1] Council of the European Communities, On the identification and designation of European critical infrastructures and the assessment of the need to improve their protection, Council Directive 114 December/2008, Official Journal of the European Communities, Brussels, Belgium, 2008, 75 P.
[2] NIPP, National Infrastructure Protection Plan, Washington DC (USA), U.S. Department of Homeland Security, 2013, 175 P. http://www.dhs.gov/national-infrastructure-protection-plan
[3] Yusta-Loyo, J.M., Correa-Henao, G.J. and Lacal-Arántegui, R., Methodologies and applications for critical infrastructure protection: State-of-the-art, Energy Policy, 39, pp. 6100-6119, 2011. DOI: 10.1016/j.enpol.2011.07.010
[4] Correa-Henao, G.J. and Yusta-Loyo, J.M., Seguridad energética y protección de infraestructuras críticas, Lámpsakos, [Online]. 10, pp. 92-108, 2013. Available at: http://www.funlam.edu.co/revistas/index.php/lampsakos/article/view/1312
[5] Correa-Henao, G.J. and Yusta-Loyo, J.M., Structural vulnerability in transmission systems: Cases of Colombia and Spain, Energy Conversion and Management, 77, pp. 408-418, 2013. DOI: 10.1016/j.enconman.2013.10.011
[6] Gil-Montoya, F., Manzano-Agugliaro, F., Gómez-López, J. and Sánchez-Alguacil, P., Técnicas de investigación en calidad eléctrica: ventajas e inconvenientes, DYNA, 79 (173-I), pp. 66-74, 2012.
[7] Holmgren, Å.J., Using graph models to analyze the vulnerability of electric power networks, Risk Analysis, 26, pp. 955-969, 2006. DOI: 10.1111/j.1539-6924.2006.00791.x
[8] Gómez-Expósito, A., Análisis y operación de sistemas de energía eléctrica, Madrid, Spain, McGraw-Hill, 2002. ISBN: 94-4g1-3592X.
[9] Milano, F., Pricing system security in electricity market models with inclusion of voltage stability constraints, Dr. Thesis in Electrical Engineering, University of Genova, Italy, 218 P., 2003.
[10] Qiming, C. and McCalley, J.D., Identifying high risk N-k contingencies for online security assessment, IEEE Transactions on Power Systems, 20, pp. 823-834, 2005. DOI: 10.1109/TPWRS.2005.846065
[11] Barabási, A. and Albert, R., Emergence of scaling in random networks, Science, 286, pp. 509-512, 1999. DOI: 10.1126/science.286.5439.509
[12] Correa-Henao, G.J. and Yusta-Loyo, J.M., Grid vulnerability analysis based on scale-free graphs versus power flow models, Electric Power Systems Research, 101, pp. 71-79, 2013. DOI: 10.1016/j.epsr.2013.04.003
[13] Ángel-Restrepo, P.L. and Marín-Sepúlveda, L.F., Un método computacional para la obtención de rutas óptimas en sistemas viales, DYNA, 78 (167), pp. 112-121, 2011.
[14] Albert, R. and Barabási, L., Statistical mechanics of complex networks, Reviews of Modern Physics, 74, pp. 47-97, 2002. DOI: 10.1103/RevModPhys.74.47
[15] Correa-Henao, G.J. and Yusta-Loyo, J.M., Seguridad en infraestructuras de transporte de electricidad, [Online]. Editorial Académica Española, 304 P., 2014. ISBN 978-3-659-01249-5. Available at: http://amzn.to/1BeXB3U
[16] Correa-Henao, G.J., Yusta-Loyo, J.M. and Lacal-Arántegui, R., Using interconnected risk maps to assess the threats faced by electricity infrastructures, International Journal of Critical Infrastructure Protection, 6, pp. 197-216, 2014. DOI: 10.1016/j.ijcip.2013.10.002
[17] Motter, A. and Lai, Y., Cascade-based attacks on complex networks, Physical Review E, 66, 065102, 2002. DOI: 10.1103/PhysRevE.66.065102
[18] Jenelius, E., Graph models of infrastructures and the robustness of power grids, MSc. Thesis in Physics Engineering, Royal Institute of Technology (KTH), Stockholm, Sweden, 89 P., 2004.
[19] Chen, G., Dong, Z.Y., Hill, D.J., Zhang, G.H. and Hua, K.Q., Attack structural vulnerability of power grids: A hybrid approach based on complex networks, Physica A: Statistical Mechanics and its Applications, 389, pp. 595-603, 2010. DOI: 10.1016/j.physa.2009.09.039
[20] Johansson, J., Risk and vulnerability analysis of interdependent technical infrastructures, Dr. Thesis in Industrial Electrical Engineering, University of Lund, Sweden, 189 P., 2010.
[21] Wang, K., Zhang, B.H., Zhang, Z., Yin, X.G. and Wang, B., An electrical betweenness approach for vulnerability assessment of power grids considering the capacity of generators and load, Physica A: Statistical Mechanics and its Applications, 390, pp. 4692-4701, 2011. DOI: 10.1016/j.physa.2011.07.031
[22] Newman, M.E.J., The structure and function of complex networks, SIAM Review, 45, pp. 167-256, 2003. DOI: 10.1137/S003614450342480
[23] Solé, R., Rosas-Casals, M., Corominas-Murtra, B. and Valverde, S., Robustness of the European power grids under intentional attack, Physical Review E, 77 (2), 026102, 2008. DOI: 10.1103/PhysRevE.77.026102
[24] Murray, A., Matisziw, T. and Grubesic, T., Critical network infrastructure analysis: Interdiction and system flow, Journal of Geographical Systems, 9, pp. 103-117, 2007. DOI: 10.1007/s10109-006-0039-4
[25] Gross, J.L. and Yellen, J., Handbook of graph theory, 2nd ed., Chapman and Hall/CRC, [Online]. 800 P., 2005. ISBN 978-1584885054. Available at: http://amzn.to/1zOFKW4
[26] Crucitti, P., Latora, V., Marchiori, M. and Rapisarda, A., Error and attack tolerance of complex networks, Physica A: Statistical Mechanics and its Applications, 340, pp. 388-394, 2004. DOI: 10.1016/j.physa.2004.04.031
[27] Latora, V. and Marchiori, M., Efficient behavior of small-world networks, Physical Review Letters, 87 (19), 198701, 2001. DOI: 10.1103/PhysRevLett.87.198701
[28] Salmeron, J., Wood, K. and Baldick, R., Analysis of electric grid security under terrorist threat, IEEE Transactions on Power Systems, 19 (1), pp. 905-912, 2004. DOI: 10.1109/TPWRS.2004.825888
[29] Donde, V., López, V., Lesieutre, B., Pinar, A., Chao, Y. and Meza, J., Severe multiple contingency screening in electric power systems, IEEE Transactions on Power Systems, 23 (2), pp. 406-417, 2008. DOI: 10.1109/TPWRS.2008.919243
[30] Milano, F., An open source power system analysis toolbox, IEEE Transactions on Power Systems, 20 (3), pp. 1199-1206, 2005. DOI: 10.1109/TPWRS.2005.851911
[31] Gleich, D., MATLAB_BGL: Graph theory toolbox, [Online]. Purdue University, Palo Alto CA, USA, 2008. Available at: http://www.mathworks.com/matlabcentral/fileexchange/10922
[32] Holmgren, Å.J., Jenelius, E. and Westin, J., Evaluating strategies for defending electric power networks against antagonistic attacks, IEEE Transactions on Power Systems, 22 (1), pp. 76-84, 2007. DOI: 10.1109/TPWRS.2006.889080

G.J. Correa-Henao, received the Electrical Engineer degree in 2001 and the MSc. degree in Computer Science in 2004, both from the Universidad Nacional de Colombia, Medellín, Colombia, and the MSc. degree in Business Administration in 2007 from Universidad San Pablo, Madrid, Spain. He also received a PhD degree in Renewable Energy and Energy Efficiency from the Universidad de Zaragoza, Spain, in 2012. He is currently a lecturer and researcher at the Faculty of Engineering of the Fundación Universitaria Luis Amigó, in Medellín, Colombia, with research interests in distributed generation, power system security analysis and decision-making methodologies.

J.M. Yusta-Loyo, received a degree in Industrial Engineering in 1994 and a PhD in Electrical Engineering in 2000 from the Universidad de Zaragoza, Spain. He is currently a senior lecturer at the Department of Electrical Engineering of the Universidad de Zaragoza, Spain, where he was Vice-dean of the Faculty of Engineering from 2004 to 2007. His research interests include technical and economic issues in electric distribution systems, power systems security analysis, and the demand side of electricity markets.



Vulnerability assessment of power systems to intentional attacks using a specialized genetic algorithm

Laura Agudelo a, Jesús María López-Lezama b & Nicolás Muñoz-Galeano b

a Interconexión Eléctrica S.A. (ISA), Medellín, Colombia. laura.agudelo@gmail.com
b Departamento de Ingeniería Eléctrica, Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. jmaria.lopez@udea.edu.co; nicolas.munoz@udea.edu.co

Received: April 29th, 2014. Received in revised form: February 17th, 2015. Accepted: July 10th, 2015.

Abstract
A specialized genetic algorithm applied to the solution of the electric grid interdiction problem is presented in this paper. This problem consists of the interaction between a disruptive agent, who aims at maximizing the damage to the power system (measured as load shed), and the system operator, who implements corrective actions to minimize the system load shed. The problem, also known as "the terrorist threat problem", is formulated in a bi-level programming structure and solved by means of a genetic algorithm. The solution identifies the most vulnerable links of the network in terms of a terrorist attack, providing signals for future reinforcement of the network or stricter surveillance of critical elements. The proposed approach has been tested on three case studies: a didactic five-bus power system, a prototype of the Colombian power system and the IEEE Reliability Test System. Results show the robustness and applicability of the proposed approach.

Keywords: bilevel programming; power system vulnerability; genetic algorithms; intentional attacks.

Valoración de la vulnerabilidad de sistemas de potencia ante ataques intencionales usando un algoritmo genético especializado Resumen En este artículo se presenta un algoritmo genético especializado aplicado a la solución del problema de interdicción. Este problema consiste en la interacción de un agente disruptivo que pretende maximizar el daño al sistema de potencia (medido en deslastre de carga), y el operador del sistema que implementa acciones correctivas para minimizar el deslastre de carga. Este problema, también conocido como “el problema del terrorista,” es formulado en una estructura de programación binivel y solucionado mediante un algoritmo genético. La solución identifica los corredores más vulnerables de la red en términos de un ataque terrorista, suministrando señales para futuros refuerzos de la red o vigilancia más estricta de activos críticos. El enfoque propuesto ha sido probado en tres casos de estudio: un sistema didáctico de 5 barras, un prototipo del sistema colombiano y el sistema de pruebas de confiabilidad IEEE. Los resultados muestran la robustez y aplicabilidad de la metodología propuesta. Palabras clave: programación binivel; vulnerabilidad de sistemas de potencia; algoritmos genéticos; ataques intencionales.

1. Introduction

The constant growth of power demand, along with stronger environmental regulation that in most cases delays the building of new transmission lines, has forced most power systems in industrialized countries to operate near their static and dynamic limits. Under this scenario, electric power systems are more vulnerable to intentional attacks [1]. Some of the most important effects of such attacks include load shedding and higher operational costs associated with repairing towers and transmission lines [2]. The traditional approach to power system vulnerability is based on the study of credible contingencies (N-1 and N-2 criteria) [3,4]. However, such studies require exhaustive simulation and are therefore associated with a high computational burden; on the other hand, only naturally-


occurring outages are taken into account. Currently, it is well known that electric power systems are exposed not only to natural random phenomena, but also to malicious attacks. That is the case of the Colombian interconnected power system, which has undergone the effects of terrorist attacks for several decades. As a matter of fact, between 1999 and 2010 alone the National Interconnected System (NIS) faced as many as 200 terrorist attacks per year [2].

The electric grid interdiction problem, also known as the "terrorist threat problem", has been the focus of several studies in the last decade. This problem was first formulated in [5] as a max-min attacker-defender model. In such models the terrorist maximizes load shedding, while the system operator minimizes it. In [6] and [7] this problem is solved by using linearization and applying duality properties: first, the nonlinear expressions were recast as linear constraints; then, the inner optimization problem was replaced by its dual, turning the max-min bi-level optimization problem into a max-max bi-level optimization problem. Such a model is equivalent to a single-level maximization problem solvable via branch-and-cut methods. In [8] Arroyo and Galiana generalized the terrorist threat problem proposed by Salmeron, Wood and Baldick in [5]. The modeling approach proposed in [8] allows different objective functions to be defined for the terrorist and the system operator. It also permits the imposition of new constraints in the outer optimization problem that may depend on both inner and outer variables. As regards the terrorist's interests, different objectives can be considered, for example: i) minimizing the number of system components to be attacked in order to achieve a given load shed, or ii) maximizing the load shed for a given number of system components that can be attacked. In [9] the authors present a mixed-integer linear programming approach for the analysis of the electric grid interdiction problem. In this case the bi-level programming problem is first turned into a single-level mixed-integer nonlinear program by using the fundamental duality theorem of linear programming [10]; then, disjunctive constraints are used to eliminate the nonlinearities of the problem. The resulting problem can be solved using commercially available software. In [11] a worst-case interdiction analysis of the terrorist threat problem is performed through a generalization of Benders decomposition. In [12] the electric grid interdiction problem is formulated including line switching, which means that, after a terrorist attack, the system operator also has the option of switching some lines in order to reduce load shedding.

In this paper the authors propose a novel Genetic Algorithm (GA) approach to address the electric grid interdiction problem. Instead of turning the model into a single-level mixed-integer linear programming problem, the problem is solved in its original form as a mixed-integer nonlinear programming problem. Results of the proposed algorithm are validated against studies reported in the specialized literature, showing its capability to attain globally optimal or near-optimal solutions.

2. Mathematical Formulation

The mathematical formulation of the electric grid interdiction problem is provided in Eqs. (1) to (9) [8]. In this case, the objective of the disruptive agent consists of maximizing the total load shedding that can be attained given a fixed number of elements (lines or transformers) under attack.

$$\max_{VI}\ \sum_{n \in N} \Delta d_n \tag{1}$$

subject to:

$$VI_l \in \{0,1\} \quad \forall l \in L \tag{2}$$

$$\sum_{l \in L} \left(1 - VI_l\right) = M \tag{3}$$

where, for the given interdiction vector, the load shed $\Delta d_n$, flows $f_l$, angles $\theta_n$ and generations $g_j$ solve the lower-level problem:

$$\min_{g,\,\theta,\,f,\,\Delta d}\ \sum_{n \in N} \Delta d_n \tag{4}$$

subject to:

$$f_l = VI_l\,\frac{\theta_{s(l)} - \theta_{r(l)}}{x_l} \quad \forall l \in L \tag{5}$$

$$\sum_{j \in J_n} g_j - \sum_{l \in L} A_{nl}\, f_l = d_n - \Delta d_n \quad \forall n \in N \tag{6}$$

$$g_j^{\min} \le g_j \le g_j^{\max} \quad \forall j \in J \tag{7}$$

$$\theta_n^{\min} \le \theta_n \le \theta_n^{\max} \quad \forall n \in N \tag{8}$$

$$0 \le \Delta d_n \le d_n \quad \forall n \in N \tag{9}$$

Where:
VI: interdiction vector, with one entry VI_l per network element l
Δd_n: load shed at node n
f_l: power flow in line l
A_nl: incidence matrix entry of line l at node n
θ_n: phase angle at node n (s(l) and r(l) denote the sending and receiving nodes of line l)
g_j: power generation of generator j (J_n denotes the generators at node n)
d_n: power demand at node n
M: number of elements under attack
x_l: impedance of line l
g_j^min, g_j^max: minimum and maximum power generation of generator j
θ_n^min, θ_n^max: minimum and maximum phase angle at node n

The interdiction vector is a binary array in which every position indicates the state of an element: a position equal to one indicates an element in service; conversely, a position equal to zero indicates an element out of service (under attack). Eqs. (1) to (3) represent the upper-level optimization problem (the disruptive agent's problem). For a given number of system components that could be attacked (M), the disruptive agent must maximize




the load shedding of the system. Such a problem is in turn restricted by the reaction of the system operator (Eqs. (4) to (9)): for a given interdiction vector, the system operator must re-dispatch generation in order to minimize load shedding. Note that Eqs. (4) to (9) correspond to an optimal DC power flow; furthermore, for a given interdiction vector, the inner optimization problem is a linear programming problem.
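Since the inner problem is a linear program once the interdiction vector is fixed, it can be prototyped in a few lines. The sketch below is a minimal assumed implementation, not the authors' code: it solves the lower level of Eqs. (4) to (9) with scipy, omitting the phase-angle bounds of Eq. (8) for brevity, and uses the five-bus data of Tables 1 and 2 (Section 4) as an illustrative check against Table 3.

```python
# A minimal sketch (assumed) of the lower-level DC OPF of Eqs. (4)-(9):
# for a fixed interdiction vector, minimize total load shed.
import numpy as np
from scipy.optimize import linprog

def load_shedding(vi, lines, gens, demand):
    """Minimum total load shed [MW] for interdiction vector `vi` (1 = in service)."""
    n_bus, n_gen = len(demand), len(gens)
    n_var = n_bus + n_gen + n_bus          # [theta | g | shed]
    c = np.zeros(n_var)
    c[n_bus + n_gen:] = 1.0                # objective: sum of load shed, Eq. (4)
    A_eq = np.zeros((n_bus, n_var))
    b_eq = -np.asarray(demand, float)      # nodal balance: flows - g - shed = -d
    for (s, r, x), state in zip(lines, vi):
        b = state / x                      # susceptance; zero if line attacked, Eq. (5)
        for n, sign in ((s, 1.0), (r, -1.0)):
            A_eq[n, s] += sign * b         # f_l = b * (theta_s - theta_r)
            A_eq[n, r] -= sign * b
    for j, (bus, gmin, gmax) in enumerate(gens):
        A_eq[bus, n_bus + j] = -1.0
    for n in range(n_bus):
        A_eq[n, n_bus + n_gen + n] = -1.0
    bounds = ([(None, None)] * n_bus                       # angles left free here
              + [(gmin, gmax) for _, gmin, gmax in gens]   # Eq. (7)
              + [(0.0, d) for d in demand])                # Eq. (9)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun

# Illustrative check with the five-bus data of Tables 1 and 2 (buses 0..4):
lines = [(0,1,0.336), (0,2,0.126), (0,3,0.180), (1,2,0.215), (2,4,0.215), (3,4,0.130)]
gens = [(n, 0.0, 150.0) for n in range(5)]
demand = [50.0, 170.0, 90.0, 30.0, 300.0]
vi = [1, 1, 1, 1, 0, 0]                    # attack lines 3-5 and 4-5
print(load_shedding(vi, lines, gens, demand))   # expected: 150 MW (cf. Table 3)
```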

3. Specialized Genetic Algorithm

The strategy proposed in this paper to approach the electric grid interdiction problem consists of using a Genetic Algorithm (GA); however, any other metaheuristic technique could be applied. The methodology takes advantage of the fact that, for a given interdiction vector, the equivalent problem is a DC optimal power flow. A number of interdiction vectors are randomly generated as initial candidate solutions and are modified in each generation according to the GA's rules. The objective of the GA is to find the maximum load shedding when a fixed number of elements is under attack. The flowchart of the proposed algorithm is presented in Fig. 1.

Figure 1. Flowchart of the proposed GA. Source: Own drawing

3.1. Problem codification

As stated above, every candidate solution of the GA consists of a binary chain representing the interdiction vector. Fig. 2 illustrates an interdiction vector for a power system of 10 elements in which elements 3, 5 and 8 are under attack.

Figure 2. GA codification. Source: Own drawing

3.2. Initial population

The initial population is randomly generated and consists of a set of different interdiction vectors (every interdiction vector must have the same number of elements under attack). For each interdiction vector a DC optimal power flow is run in order to determine the load shedding, which is the fitness function of the GA. With this information, the individuals proceed to the next GA steps.

3.3. GA operators

The selection operator of the GA is performed by choosing the best solutions after applying a series of tournaments with a given number of elements. The number of candidates selected for the next step (recombination) is the same as the number of parents. Recombination is performed at a single, randomly selected point; every pair of individuals generates two new ones, so at this stage twice the initial population is created. Mutation makes a minimal change in some of the individuals: a mutation factor (with a low probability) is applied to every bit of the interdiction vector. If mutation or recombination leads to infeasible candidates, such candidates are penalized in the objective function. Finally, to keep the number of candidate solutions constant, only the best solutions are kept in each iteration.
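A compact sketch of the GA loop described above is shown below; it reuses the load_shedding() linear program from the earlier sketch as the fitness function. The population size, tournament size and mutation rate are illustrative, and, unlike the paper, which penalizes infeasible candidates, this sketch simply repairs them so that exactly M elements stay under attack.

```python
# A compact, assumed sketch of the specialized GA (not the authors' Matlab code).
import random

def repair(vi, m_attacked):
    # keep exactly m_attacked elements out of service (simplification: the
    # paper penalizes infeasible candidates instead of repairing them)
    zeros = [i for i, v in enumerate(vi) if v == 0]
    while len(zeros) > m_attacked:
        vi[zeros.pop(random.randrange(len(zeros)))] = 1
    while len(zeros) < m_attacked:
        i = random.choice([j for j, v in enumerate(vi) if v == 1])
        vi[i] = 0
        zeros.append(i)
    return vi

def ga_interdiction(fitness, n_elem, m_attacked, pop_size=100,
                    n_generations=10, tournament=3, mut_rate=0.05):
    pop = [repair([1] * n_elem, m_attacked) for _ in range(pop_size)]
    for _ in range(n_generations):
        # tournament selection of pop_size parents
        parents = [max(random.sample(pop, tournament), key=fitness)
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, n_elem)            # one-point crossover
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        for ch in children:
            for i in range(n_elem):                      # bit-flip mutation
                if random.random() < mut_rate:
                    ch[i] ^= 1
            repair(ch, m_attacked)
        # keep only the best pop_size candidates (maximize load shed)
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Hypothetical use with the five-bus example of the previous sketch:
# best = ga_interdiction(lambda v: load_shedding(v, lines, gens, demand),
#                        n_elem=6, m_attacked=2)
```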



4. Tests and Results

The proposed methodology was tested using three different power systems: a didactic 5-bus power system, the solution of which has already been provided in [8]; a prototype of the Colombian power system; and the IEEE Reliability Test System.

4.1. Case A. Five-bus power system

The first test is performed with the 5-bus power system illustrated in Fig. 3. This system has been used only for comparison purposes, to illustrate that the proposed GA is able to reach globally optimal solutions. The data for this system are provided in Tables 1 and 2.

Figure 3. Five-bus power system. Source: Adapted from Arroyo and Galiana, 2005 [8].

Table 1. Load and generation of the five-bus power system
Bus  Load (MW)  Generation (MW)
1    50     150
2    170    150
3    90     150
4    30     150
5    300    150
Source: Adapted from Arroyo and Galiana, 2005 [8].

Table 2. Line parameters of the five-bus power system
Line  Impedance (p.u.)
1-2   0.336
1-3   0.126
1-4   0.180
2-3   0.215
3-5   0.215
4-5   0.130
Source: Adapted from Arroyo and Galiana, 2005 [8].

Table 3. Critical combination of destroyed lines for the five-bus power system
Number of destroyed lines  Load shedding (MW)  Worst combination of destroyed lines
1    50    3-5 or 4-5
2    150   3-5, 4-5
3    150   1-2, 3-5, 4-5 / 1-3, 3-5, 4-5 / 1-4, 3-5, 4-5 / 2-3, 3-5, 4-5
4    170   1-2, 2-3, 3-5, 4-5
Source: Own calculation

It can be observed that for a single line under attack the maximum load shedding obtained is 50 MW; in this case the attacked line must be either line 3-5 or line 4-5, since any other single attacked line results in no load shedding. For two lines under attack there is only one strategy that obtains the maximum load shedding (attacking lines 3-5 and 4-5 simultaneously). For three lines under attack there are four possible interdiction vectors that render a maximum load shedding of 150 MW. Finally, if four lines could be taken down simultaneously, the maximum load shedding would be 170 MW.

4.2. Case B. Prototype of the Colombian power system

To show the applicability of the proposed approach to real power systems, a prototype of the Colombian power system was considered. For security reasons, the real Colombian system with actual data cannot be displayed. Fig. 4 illustrates the system under study. This system has 20 nodes and 40 lines and is distributed in 5 sub-areas. The total demand of the system is 9010 MW and the installed capacity is 12850 MW. Specific line, load and generation data for this system can be consulted in [13].

Figure 4. Prototype of the Colombian power system. Source: Adapted from López-Lezama et al., 2006 [13].

Three different scenarios were considered:
 Scenario A: all generation units available.
 Scenario B: limited generation in sub-area 5; the generation available at node 18 was reduced to 42% (1500 MW).
 Scenario C: limited generation in sub-area 3; the generation available at node 13 was reduced to 25% (500 MW).

Several tests were performed in each scenario, for different numbers of circuits under attack, to find the worst combination of destroyed lines. The GA was developed in Matlab and was run on a computer with a CORE i3 processor and 2 GB of RAM. The mutation rate was set to 5% and 100 parents were used in the initial population. The best results were found in only ten generations and around 300 seconds. Results are shown in Tables 4, 5 and 6. They show that Scenario C with 6 destroyed lines is the worst possible case in terms of load shedding: it causes 2637.58 MW of load shed, 29.27% of the total system demand. This attack would cause a bottling up of energy at bus 18 and load shedding at buses 4, 5, 8, 9, 10, 14, 19 and 20. Fig. 6 illustrates the lines and buses affected.


Table 4. Critical combination of destroyed lines for Scenario A
Number of destroyed lines  Load shedding (MW)  Worst combination of destroyed lines
1    242.30    9-11
2    695.90    15-16, 17-14
3    1194.89   9-11, 15-16, 17-14
4    1694.89   7-8, 9-11, 15-16, 17-14
5    2194.89   7-8, 9-11, 9-11, 15-16, 17-14
6    2428.20   7-8, 9-11, 9-11, 15-16, 17-14, 20-18
Source: Own calculation

Table 5. Critical combination of destroyed lines for Scenario B
Number of destroyed lines  Load shedding (MW)  Worst combination of destroyed lines
1    291.70    9-11
2    772.35    16-19, 20-23
3    1200.18   7-8, 9-11, 9-11
4    1892.40   7-8, 9-11, 15-16, 17-14
5    2392.40   7-8, 9-11, 9-11, 15-16, 17-14
6    2490.04   7-8, 9-11, 9-11, 15-16, 17-14, 20-18
Source: Own calculation

Table 6. Critical combination of destroyed lines for Scenario C
Number of destroyed lines  Load shedding (MW)  Worst combination of destroyed lines
1    264.90    20-18
2    718.17    15-16, 17-14
3    1218.17   9-11, 15-16, 17-14
4    1718.17   7-8, 9-11, 15-16, 17-14
5    2137.58   17-14, 19-18, 19-18, 20-18, 20-18
6    2637.58   9-11, 17-14, 19-18, 19-18, 20-18, 20-18
Source: Own calculation

Figure 5. Load shedding for different scenarios and elements under attack. Source: Own drawing

Figure 6. Worst-case combination of destroyed lines. Source: Own drawing

The tests also revealed that sub-area 4 is the most sensitive to load shedding, especially buses 14 and 15. This is due to the fact that in this area the load is higher than the installed generation; furthermore, such generation is mainly thermal, which is the most expensive.

4.3. Case C. IEEE Reliability Test System

Several tests were also performed with the 24-bus IEEE Reliability Test System [14]. This test system is composed of 24 buses, 38 lines, 32 generators and 17 loads. The load profile selected for this study corresponds to a winter weekday at 6:00 pm (2850 MW). For the sake of simplicity, and without loss of generality, lines in the same corridor are treated independently, which means that the failure of a line does not necessarily imply the unavailability of the remaining lines in the same corridor. Several tests were performed with the GA, considering an increasing number of destroyed lines. Table 7 shows the worst combination of destroyed lines and the corresponding load shedding for this system. It can be observed that the simultaneous attack of 6 lines results in the loss of 1017 MW, which represents 35.7% of the total demand, whereas attacking a single line does not result in any loss of load. In the IEEE Reliability Test System most of the generation is located in the upper part of the network, while most of the load is distributed among the nodes of the lower part of the system. As a consequence, the simultaneous attack of 5 or 6 lines focuses on dividing the system into two islands, separating generation from load. Fig. 7 depicts the simultaneous attack of 6 lines (marked with blackened circles). As a consequence of this attack the system is split into two islands: i) the upper area, with excess generation capacity and no load shedding, and ii) the lower area, with a generation deficit and all the loss of load.



Table 7. Critical combination of destroyed lines for the IEEE Reliability Test System
Number of destroyed lines  Load shedding (MW)  Worst combination of destroyed lines
2    194    11-14, 14-16
3    309    16-19, 20-23, 20-23
4    442    3-24, 9-12, 11-13, 14-16
5    842    11-13, 12-13, 12-23, 14-16, 15-24
6    1017   7-8, 11-13, 12-23, 12-13, 14-16, 15-24
Source: Own calculation

Figure 7. IEEE Reliability Test System with 6 lines under attack. Source: Own drawing

Table 8 shows the results obtained considering high, medium and low load scenarios. For the high load scenario the previously defined demand of 2850 MW was considered; for the medium load scenario a winter weekday at 9:00 pm with a total load of 2365 MW was considered; and for the low load scenario a summer weekday at 5:00 am with a total demand of 1653 MW was taken into account.

Table 8. Critical combination of destroyed lines considering different load scenarios
Number of destroyed lines  Load shedding (MW) High / Medium / Low  Worst combination of destroyed lines
2    194 / 161 / 114     11-14, 14-16
3    309 / 256 / 182     16-19, 20-23, 20-23
5    842 / 612 / 279     11-13, 12-13, 12-23, 14-16, 15-24
6    1017 / 783 / 449    7-8, 11-13, 12-23, 12-13, 14-16, 15-24
Source: Own calculation

As expected, the load shedding is significantly lower in the low load scenario when compared with the medium and high load scenarios. It was also found that the worst combination of destroyed lines is the same for 2, 3, 5 and 6 destroyed lines; however, with 4 lines under attack there is a different solution for each load scenario. The worst combinations of destroyed lines considering 4 circuits are presented in Table 9.

Table 9. Critical combination of destroyed lines considering four destroyed lines
Load scenario  Load shedding (MW)  Worst combination of destroyed lines
High     194   2-6, 6-10, 11-14, 16-14
Medium   256   15-24, 16-19, 20-23, 20-23
Low      531   3-24, 12-23, 13-23, 14-16
Source: Own calculation

5. Conclusions

A specialized genetic algorithm for the vulnerability assessment of power systems to intentional attacks was presented in this paper. The proposed approach is robust and efficient and can be applied to real power systems. The tests carried out with a prototype of the Colombian power system and the IEEE Reliability Test System allow the most vulnerable lines in terms of terrorist attacks to be identified. The main advantage of using a GA for solving the electric grid interdiction problem is the possibility of obtaining a set of high-quality solutions instead of a single one. This gives the system operator more information about the system's most vulnerable elements and provides signals for future reinforcements of the network or stricter surveillance of critical elements. With a GA, instead of classical mathematical programming, there is no need to bring duality theory or linearization schemes into play to transform the electric grid interdiction problem into a single-level optimization problem. This opens the possibility of approaching the interdiction problem with a more accurate modeling of the network, such as an AC optimal power flow, which will be the focus of further work.

Acknowledgments

The authors would like to thank the Sustainability Program 2014-2015 of the University of Antioquia for financial support.

References

[1] Leffler, L., The NERC program for the electricity sector critical infrastructure protection, Proceedings of the Power Engineering Society Winter Meeting, pp. 95-97, 2001. DOI: 10.1109/pesw.2001.916871
[2] Corredor, P. and Ruiz, M., Against all odds, IEEE Power & Energy Magazine, 9 (2), pp. 59-66, 2011.
[3] Arragada, G., Evaluación de confiabilidad en sistemas eléctricos de distribución, MSc. Thesis, Department of Electrical Engineering, Pontificia Universidad Católica de Chile, Santiago de Chile, 1994.
[4] Kinney, R., Crucitti, P., Albert, R. and Latora, V., Modeling cascading failures in the North American power grid, The European Physical Journal B, 46 (1), pp. 101-107, 2005. DOI: 10.1140/epjb/e2005-00237-9
[5] Salmeron, J., Wood, K. and Baldick, R., Analysis of electric grid security under terrorist threat, IEEE Transactions on Power Systems, 19 (2), pp. 905-912, 2004. DOI: 10.1109/TPWRS.2004.825888
[6] Álvarez, R.E., Interdicting electrical power grids, MSc. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2004.
[7] Salmerón, J., Wood, K. and Baldick, R., Optimizing electric grid design under asymmetric threat (II), Tech. Rep., Naval Postgraduate School, Monterey, CA, USA, 2004.
[8] Arroyo, J. and Galiana, F., On the solution of the bilevel programming formulation of the terrorist threat problem, IEEE Transactions on Power Systems, 20 (2), pp. 789-797, 2005. DOI: 10.1109/TPWRS.2005.846198
[9] Motto, L., Arroyo, J. and Galiana, F., A mixed-integer LP procedure for the analysis of electric grid security under disruptive threat, IEEE Transactions on Power Systems, 20 (3), pp. 1357-1365, 2005. DOI: 10.1109/TPWRS.2005.851942
[10] Bertsimas, D. and Tsitsiklis, J., Introduction to Linear Optimization, Belmont, MA: Athena Scientific, 1997.
[11] Salmeron, J., Wood, K. and Baldick, R., Worst-case interdiction analysis of large-scale electric power grids, IEEE Transactions on Power Systems, 24 (1), pp. 96-104, 2009. DOI: 10.1109/TPWRS.2008.2004825
[12] Arroyo, M. and Fernandez, J.F., A genetic algorithm approach for the analysis of electric grid interdiction with line switching, Proceedings of the 15th International Conference on Intelligent System Applications to Power Systems (ISAP), Curitiba, Brazil, Nov. 2009. DOI: 10.1109/isap.2009.5352849
[13] López-Lezama, J.M., Murillo-Sánchez, C.E., Zuluaga, L.J. and Gutierrez-Gomez, J.F., A contingency-based security-constraint optimal power flow model for revealing the marginal cost of a blackout risk equalizing policy in the Colombian energy market, Proceedings of the IEEE PES Transmission and Distribution Conference and Exposition, Caracas, Venezuela, 2006.
[14] Grigg, C., Wong, P., Albrecht, P., Allan, R., Bhavaraju, M., Billinton, R., Chen, Q., Fong, C., Haddat, S., Kuruganty, S., Li, W., Mukerji, R., Patton, D., Rau, N., Reppen, D., Schneider, A., Shahidehpour, M. and Singh, C., The IEEE Reliability Test System-1996, IEEE Transactions on Power Systems, 14 (3), pp. 1010-1020, 1999. DOI: 10.1109/59.780914

L. Agudelo, studied electrical engineering and is currently studying for a Masters in Engineering at the Universidad de Antioquia, Medellín, Colombia. She works in the operations department at the Interconexión Eléctrica S.A. utility (ISA) in Medellín, Colombia. Her interests include power systems optimization and real-time operation.

J.M. López-Lezama, studied electrical engineering at the Universidad Nacional de Colombia, Medellín, Colombia, where he also obtained a MSc. degree. He obtained a PhD. degree from UNESP in São Paulo, Brazil. He currently works at the Department of Electrical Engineering of the Universidad de Antioquia in Medellín, Colombia. His interests include power systems optimization and distributed generation.

N. Muñoz-Galeano, studied electrical engineering at the Universidad de Antioquia in Medellín, Colombia. He obtained a PhD degree from the Universidad Politécnica de Valencia, Spain. He currently works as an Assistant Professor at the Department of Electrical Engineering of the Universidad de Antioquia in Medellín, Colombia. His interests include power system electronics and electrical machines.


Spinning reserve analysis in a microgrid

Luis Ernesto Luna-Ramírez a, Horacio Torres-Sánchez b & Fabio Andrés Pavas-Martínez c

a Programa de Investigación Adquisición y Análisis de Señales PAAS, Universidad Nacional de Colombia, Bogotá, Colombia. lelunar@unal.edu.co
b Programa de Investigación Adquisición y Análisis de Señales PAAS, Universidad Nacional de Colombia, Bogotá, Colombia. htorress@unal.edu.co
c Programa de Investigación Adquisición y Análisis de Señales PAAS, Universidad Nacional de Colombia, Bogotá, Colombia. fapavasm@unal.edu.co

Received: April 29th, 2014. Received in revised form: February 19th, 2015. Accepted: June 16th, 2015.

Abstract
This paper proposes a methodology to model and analyze the security scheme required by a microgrid that considers the participation of renewable energy sources. This security scheme is represented by an up and down spinning reserve, which allows the system frequency to be driven to a steady state after the occurrence of events associated not only with forecast errors in the electricity demand (as traditional schemes do), but also with forecast errors in the power availability of the intermittent energy sources. The proposed methodology was implemented on a real microgrid that includes an interconnected photovoltaic generator. From this, it was concluded that the security scheme designed for the microgrid efficiently ensured the relation between generation and demand at each study hour.

Keywords: microgrid; security scheme; spinning reserve; photovoltaic generation; forecasting techniques.

Análisis de reserva rodante en una microred Resumen Este artículo propone una metodología para modelar y analizar el esquema de seguridad requerido por una microred que considera la participación de fuentes renovables de energía. Este esquema de seguridad está representado por una reserva rodante hacia arriba y hacia abajo, la cual permite llevar la frecuencia del sistema a un estado estable después de la ocurrencia de eventos asociados no sólo a errores en el pronóstico de la demanda (tal como lo hacen los esquemas tradicionales), sino también a errores en el pronóstico de la disponibilidad de potencia de las fuentes intermitentes de energía. La metodología propuesta se implementó en una microred real que considera la interconexión de un generador fotovoltaico. A partir de ello, se concluyó que el esquema de seguridad diseñado para la microred garantizó eficientemente la relación entre generación y demanda, para cada hora de análisis. Palabras clave: microred; esquema de seguridad; reserva rodante; generación fotovoltaica; técnicas de pronóstico.

1. Introduction

In recent years, the electricity sector has experienced significant changes due to the introduction of a business-oriented market structure. Such a structure aims to achieve higher efficiency in the electricity supply, i.e. prices that reflect efficient costs and ensure the fulfillment of quality, reliability and security criteria. The advances achieved in the electricity sector, along with the high levels of demand growth and the concerns over the management of non-sustainable energy resources, have led to the implementation of new generation technology alternatives. These new technologies consider a rational and efficient use of energy, in such a manner that they have a minimal impact on the environment and contribute to ensuring energy availability for future generations. In addition, the development of these technologies (usually of small scale) becomes important again with the installation of power plants near the consumption centers, this time with the power system as backup. This is an alternative of high penetration in the electricity sector and is commonly known as Distributed Generation (DG) [1]. The integration of DG within electrical systems and the incentive to optimize energy resources have gained strength in the last years. Accordingly, some countries have adjusted their regulatory policies in order to encourage DG participation in electrical systems [2]. Certainly, these regulatory policies have allowed an increase in the DG participation in the electricity basket. The increase of DG penetration, the presence of multiple


nearby sources, the implementation of demand response programs and the continued evolution of distribution networks into smart grids have led to the concept of microgrids [3]. A microgrid is an active distribution network that considers the coordinated operation and control of DG sources together with storage devices and controllable loads [4]. According to IEEE 1547.4 [5], microgrids are expected to satisfy the security requirements that allow them to operate in a secure way under any unexpected power requirement. In most cases, this security scheme is represented by a spinning reserve. This reserve represents one of the most important economic dispatch constraints in a microgrid, because it allows the system frequency to be driven to a steady state after the occurrence of an event [6-8]. Microgrids consider the participation of Renewable Energy Sources (RESs), such as Photovoltaic Generators (PVGs) and Wind Generators (WGs). The power output of the RESs behaves intermittently due to the variable nature of their primary resources (solar radiation and temperature for PVGs, and wind velocity for WGs), and may even cause security problems in the microgrid. Therefore, it is necessary to design a security scheme for the microgrid that can respond not only to the statistical forecast errors in the electricity demand (as traditional schemes do), but also to the statistical forecast errors in the power availability of the RESs [9-11]. The usual practice is to minimize these statistical errors by means of appropriate forecasting techniques, in order to reduce the implementation costs of a microgrid security scheme [12]. There are different forecasting techniques to analyze time series. The technique based on structural models is one of the most interesting tools to forecast the hourly behavior of the electricity demand and the primary resources of the intermittent energy sources. It represents a widely accepted way to model this kind of high-density, seasonal and highly stochastic time series [13]. This paper proposes a novel security scheme that considers the intermittency of the RESs. In section 2, a methodology is proposed to quantify and analyze the up and down spinning reserve level required by a microgrid to satisfy the security requirements demanded by this type of grid. In section 3, the microgrid in which the proposed methodology is applied is described. In section 4, the implementation of the methodology in the microgrid is presented and the results of this implementation are discussed. Finally, the conclusions are shown in section 5.

2. Proposed methodology

The proposed methodology allows the security scheme required by a microgrid that considers the participation of intermittent energy sources to be modeled. The security scheme is represented by an up and down spinning reserve located in the generators of the microgrid with controllable power output. The methodology consists of performing a four-stage analysis at each study hour of the next day. In the first stage, a generation analysis is performed, evaluating the uncertainty in the forecast power availability of the RESs. A demand analysis is carried out in the second stage, which evaluates the uncertainty in the forecast electricity demand. The third stage consists of a net demand analysis, which quantifies and evaluates the up and down spinning reserve required by the microgrid. This reserve ensures the relation between generation and demand after the occurrence of events associated with forecast errors in both the electricity demand and the power availability of the RESs. In the fourth stage, a validation analysis is performed to verify the proper sizing of the up and down spinning reserve. The demand and the power availability of the intermittent energy sources are forecast using the statistical software OxMetrics 6.3, module STAMP (Structural Time Series Analyser, Modeller and Predictor), and the time series technique based on structural models. The following steps implement the methodology.

2.1. Generation analysis

1. Acquiring and managing the hourly historical information of the primary resources (solar radiation and temperature for PVGs, and wind velocity for WGs) in the area where the RES is located.
2. Forecasting the behavior of the primary resources, at each study hour of the next day.
3. Identifying the up and down standard deviation (statistical error) in the forecast primary resources, at each study hour of the next day.
4. Developing the mathematical model of the RES. This model has the primary resources as input variables and the AC power availability as output variable.
5. Acquiring the technical parameters of the RES interconnected to the microgrid.
6. Forecasting the behavior of the power availability of the RES, at each study hour of the next day (pFgt). For this, the forecast values of the primary resources are entered into the mathematical model of the RES interconnected to the microgrid.
7. Calculating the up and down standard deviation in the forecast power availability of the RES, at each study hour of the next day (σpUgt, σpDgt). For this, the up and down standard deviation bounds of the forecast primary resources are entered into the mathematical model of the RES interconnected to the microgrid.
8. If there is another RES interconnected to the microgrid, going back to step 1; otherwise, continuing.

The forecast power availability of each RES is assumed to follow a normal distribution with expectation pFgt and standard deviations σpUgt, σpDgt. Due to this condition, the forecast power availability of all the RESs grouped together also follows a normal distribution, with the following parameters:


$$p_t^F = p_{1t}^F + p_{2t}^F + \dots + p_{Gt}^F \tag{1}$$

$$\sigma p_t^U = \sqrt{\left(\sigma p_{1t}^U\right)^2 + \left(\sigma p_{2t}^U\right)^2 + \dots + \left(\sigma p_{Gt}^U\right)^2} \tag{2}$$

$$\sigma p_t^D = \sqrt{\left(\sigma p_{1t}^D\right)^2 + \left(\sigma p_{2t}^D\right)^2 + \dots + \left(\sigma p_{Gt}^D\right)^2} \tag{3}$$

Where:
G: total number of RESs interconnected to the microgrid.
p_t^F: forecast power availability [W] of the RESs grouped, at the t-th hour of the next day.
σp_t^U: up standard deviation [W] in the forecast power availability of the RESs grouped, at the t-th hour of the next day.
σp_t^D: down standard deviation [W] in the forecast power availability of the RESs grouped, at the t-th hour of the next day.

2.2. Demand analysis

9. Acquiring and managing the hourly historical information of the electricity demand in the microgrid.
10. Forecasting the behavior of the demand, at each study hour of the next day (dFt).
11. Identifying the up and down standard deviation in the forecast demand, at each study hour of the next day (σdUt, σdDt).

The forecast demand is assumed to follow a normal distribution with expectation dFt and standard deviations σdUt, σdDt. The normality assumption of the forecast demand is a common practice, as can be seen in the literature [14]. It is justified by the wide diversity of the electricity demand across geographical areas and consumer classes, combined with an invocation of the central limit theorem [15].
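The forecasting step can be illustrated with any structural time series implementation. The sketch below is an assumption for illustration (the authors used OxMetrics/STAMP): it fits a local linear trend plus daily seasonal model to a synthetic hourly demand series and extracts the one-day-ahead forecast dFt together with its standard error, used here symmetrically for both σdUt and σdDt.

```python
# An assumed sketch of steps 9-11 with a comparable structural time series
# model; `demand_history` is a synthetic placeholder for the real hourly data.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                     # 60 days of hourly observations
demand_history = (4000 + 800 * np.sin(2 * np.pi * hours / 24)
                  + rng.normal(0, 120, hours.size))

model = UnobservedComponents(demand_history, level="local linear trend",
                             seasonal=24)      # daily seasonality
fit = model.fit(disp=False)
forecast = fit.get_forecast(steps=24)          # the 24 study hours of the next day
d_f = forecast.predicted_mean                  # dF_t
sigma_d = forecast.se_mean                     # symmetric stand-in for sigma_dU/sigma_dD
print(d_f[:3], sigma_d[:3])
```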

2.3. Net demand analysis

12. Determining the forecast net demand, at each study hour of the next day. The net demand represents the difference between the forecast demand and the forecast power availability of the RESs, since the RESs are characterized as negative loads:

$$nd_t^F = d_t^F - p_t^F \tag{4}$$

Where:
nd_t^F: forecast net demand [W], at the t-th hour of the next day.
d_t^F: forecast demand [W], at the t-th hour of the next day.

13. Determining the up and down spinning reserve required by the microgrid, at each study hour of the next day. This reserve counterbalances the uncertainty in both the electricity demand and the power availability of the RESs. The up and down spinning reserve represent the up and down standard deviation in the forecast net demand, respectively:

$$SR_t^U = \sigma nd_t^U = \sqrt{\left(\sigma d_t^U\right)^2 + \left(\sigma p_t^D\right)^2} \tag{5}$$

$$SR_t^D = \sigma nd_t^D = \sqrt{\left(\sigma d_t^D\right)^2 + \left(\sigma p_t^U\right)^2} \tag{6}$$

Where:
σd_t^U, σd_t^D: up and down standard deviation [W] in the forecast electricity demand, at the t-th hour of the next day.
σnd_t^U, σnd_t^D: up and down standard deviation [W] in the forecast net demand, at the t-th hour of the next day.
SR_t^U, SR_t^D: up and down spinning reserve [W], at the t-th hour of the next day.

Since both the forecast electricity demand and the forecast power availability of each RES follow normal distributions, the forecast net demand also follows a normal distribution, with expectation nd_t^F and standard deviations SR_t^U, SR_t^D.

2.4. Validation analysis

14. Verifying the proper sizing of the up and down spinning reserve, at each study hour of the next day. The following equations present the mathematical procedure to validate the up and down spinning reserve:

$$nd_t^R = d_t^R - p_t^R \tag{7}$$

$$\varepsilon_t = nd_t^R - nd_t^F \tag{8}$$

$$\text{If } \varepsilon_t > 0, \text{ then } \varepsilon_t \text{ should be} \le SR_t^U \tag{9}$$

$$\text{If } \varepsilon_t < 0, \text{ then } \left|\varepsilon_t\right| \text{ should be} \le SR_t^D \tag{10}$$

Where:
p_t^R: real power availability [W] of the RESs, at the t-th hour of the next day.
d_t^R: real electricity demand [W], at the t-th hour of the next day.
nd_t^R: real net demand [W], at the t-th hour of the next day.
ε_t: forecast error in the net demand [W], at the t-th hour of the next day.

The forecast error in the net demand follows a normal distribution with expectation zero and standard deviations described by the up spinning reserve (SR_t^U) and the down spinning reserve (SR_t^D). About 68% of the forecast errors should lie between the up and down spinning reserves, because these two reserves are calculated considering only one standard deviation. A different number of standard deviations could be used in the spinning reserve calculation, depending on the risk aversion of the system operator.
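Eqs. (1) to (10) reduce to a few arithmetic operations per study hour. The following minimal sketch (with illustrative values, not the microgrid's data) aggregates the RES deviations, sizes the up and down spinning reserve, and validates it against realized values.

```python
# A minimal sketch of Eqs. (1)-(10) for a single study hour; all numbers are
# illustrative placeholders.
import math

def aggregate_res(p_f, sig_up, sig_dn):
    """Eqs. (1)-(3): grouped RES forecast and up/down deviations."""
    return (sum(p_f),
            math.hypot(*sig_up),            # square root of the sum of squares
            math.hypot(*sig_dn))

def spinning_reserve(d_f, sd_up, sd_dn, p_f, sp_up, sp_dn):
    """Eqs. (4)-(6): net demand and up/down spinning reserve."""
    nd_f = d_f - p_f
    sr_up = math.hypot(sd_up, sp_dn)        # demand excess and/or RES deficit
    sr_dn = math.hypot(sd_dn, sp_up)        # demand deficit and/or RES excess
    return nd_f, sr_up, sr_dn

def validate(nd_f, sr_up, sr_dn, d_r, p_r):
    """Eqs. (7)-(10): does the realized error fit within the reserve?"""
    eps = (d_r - p_r) - nd_f
    return eps <= sr_up if eps > 0 else -eps <= sr_dn

# Illustrative hour with two RESs (values in W):
p_f, sp_up, sp_dn = aggregate_res([2000.0, 800.0], [300.0, 120.0], [250.0, 100.0])
nd_f, sr_up, sr_dn = spinning_reserve(5000.0, 200.0, 180.0, p_f, sp_up, sp_dn)
print(nd_f, sr_up, sr_dn, validate(nd_f, sr_up, sr_dn, 5150.0, 2600.0))
```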

Figure 1. Microgrid located at the Universidad Nacional de Colombia's campus. Source: Adapted from [16]

energy meter and a switching system. By means of these elements the energy resources can be managed under the scheme of microgrids, which promotes the efficient use of the energy, improves the power quality, the reliability, the security, and increases the system flexibility as well. The microgrid incorporates a PVG with a power capacity of 3640 Wp, which maintains the concept of BIPVS (Building Integrated Photovoltaic Systems). The PVG uses an electronic power inverter to connect it to the AC electric network. This PVG is coupled to two phases of the microgrid due to its two-phase output condition, which is standard for this type of small-scale generation. The microgrid considers the strategic installation of meters, which allow the dynamic behavior of bi-directional power flow caused by the RES introduction in this grid to be registered. The measurement scheme is presented in Fig. 1. The microgrid also has an automation system composed by a monitoring and a control system. This automation system allows voltage, power and frequency to be registered and to perform control strategies, which will permit in the future the assignment of energy resources based on technical and economic considerations.

2.4. Methodology considerations

- The up spinning reserve allows the system frequency to be driven to a steady state after the occurrence of events associated with an excess in the real demand with respect to the forecast demand, and/or a deficit in the real power availability of the RESs with respect to the forecast power availability.
- The down spinning reserve allows the system frequency to be driven to a steady state after the occurrence of events associated with a deficit in the real demand with respect to the forecast demand, and/or an excess in the real power availability of the RESs with respect to the forecast power availability.
- The electricity demand is directly proportional to the use of spinning reserve, i.e., an increase in the real demand with respect to the forecast demand implies that the generators with spinning reserve capacity should increase their generation (up spinning reserve application).
- The RESs are represented as negative loads. Therefore, the power availability of the RESs is inversely proportional to the use of spinning reserve, i.e., an increase in the real power availability of a RES with respect to the forecast power availability implies that the generators with spinning reserve capacity should decrease their generation (down spinning reserve application).

This methodology permits the security scheme required by a microgrid to be modeled, quantified, analyzed and verified. The security scheme efficiently ensures the frequency stability of the microgrid after the occurrence of events associated with forecast errors in both the electricity demand and the power availability of the RESs. The proposed methodology was applied to a real microgrid located at the Universidad Nacional de Colombia's campus (latitude 04°38' N, longitude 74°05' W, elevation 2556 m.a.s.l.). This microgrid is described in the following section.

3. Description of the microgrid

The microgrid [16] is a low voltage system (208/120 V) connected to the existing electricity distribution network. This grid includes, among other elements, a RES, AC loads, computer applications, a real-time database system, bi-directional energy meters and a switching system. By means of these elements, the energy resources can be managed under the microgrid scheme, which promotes the efficient use of energy; improves power quality, reliability and security; and increases system flexibility. The microgrid incorporates a PVG with a power capacity of 3640 Wp, which follows the BIPVS (Building Integrated Photovoltaic Systems) concept. The PVG uses an electronic power inverter for its connection to the AC electric network, and it is coupled to two phases of the microgrid due to its two-phase output, which is standard for this type of small-scale generation. Meters are strategically installed in the microgrid, allowing the dynamic behavior of the bi-directional power flow caused by the introduction of the RES to be registered; the measurement scheme is presented in Fig. 1. The microgrid also has an automation system composed of a monitoring and a control system, which allows voltage, power and frequency to be registered and control strategies to be performed; in the future, this will permit the assignment of energy resources based on technical and economic considerations.

4. Implementation of the proposed methodology

To implement the proposed methodology, the electricity demand and the power availability of the PVG were first analyzed and forecast. From this information, the spinning reserve required by the microgrid was determined and validated.

4.1. Generation analysis

First, it was necessary to acquire and analyze the available historical data of solar radiation and temperature in the area where the PVG is located. For that, a database of weather information recorded every hour during the year 2012 was obtained. This database only considers 12 of the 24 hours of the day (from 6:00 to 17:59), because only during this period is there significant solar radiation. From this database, it was possible to model the behavior of the solar radiation and temperature, to forecast their availability and to identify the up and down standard deviations in their forecast, at each study hour of the next day.


Table 1. Parameters of the PVG interconnected to the microgrid.
Parameter | Value
Module | KC 130
Generator power | 3640 Wp
Modules in series | 14
Branches in parallel | 2
Total modules | 28
Generator open circuit voltage | 306.6 V
Generator maximum power voltage | 264.4 V
Generator short circuit current | 16.04 A
Inverter (Xantrex) | 3100 W
Source: Adapted from [17]

Figure 2. Forecast solar radiation in the area where the PVG is located. Source: The authors

Figure 3. Forecast temperature in the area where the PVG is located. Source: The authors

Figure 4. Forecast power availability of the PVG interconnected to the microgrid. Source: The authors

This was carried out using the time series technique based on structural models [13]. The results of the solar radiation analysis are presented in Fig. 2, which shows that the forecast solar radiation for the next day starts at 108.56 W/m2 at hour 6, increases to 471.31 W/m2 at hour 11 and finally decreases to 111.31 W/m2 at hour 17. Fig. 2 also shows that the up and down standard deviations in the forecast solar radiation increase their amplitude as the forecast horizon becomes larger: they start at 107.05 W/m2 at hour 6 and increase to 281.40 W/m2 at hour 17. The results of the temperature analysis are presented in Fig. 3: the forecast temperature for the next day starts at 7.35 °C at hour 6, increases to 18.63 °C at hour 11 and finally decreases to 10.81 °C at hour 17. The up and down standard deviations in the forecast temperature also increase with the forecast horizon, from 2.70 °C at hour 6 to 7.08 °C at hour 17. Next, the electric behavior of a PVG was modeled using MATLAB®; for that purpose, the mathematical model proposed in [17] for the PVG was employed. The technical parameters of the PVG interconnected to the microgrid were then identified; these parameters are described in Table 1.
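As an illustration of this step only (not the exact model of [17], which should be consulted for the full formulation), a minimal sketch of how an hourly irradiance and temperature forecast can be mapped to PVG output; the temperature coefficient, NOCT and inverter efficiency below are illustrative assumptions, not the parameters of Table 1:

```python
import numpy as np

def pv_power(irradiance, temp_air, p_stc=3640.0, g_stc=1000.0,
             gamma=-0.004, noct=45.0, inv_eff=0.95):
    """Rough PV power estimate [W] from irradiance [W/m2] and air
    temperature [C]. p_stc: array power at standard test conditions;
    gamma: power/temperature coefficient [1/C]; noct: nominal operating
    cell temperature [C]. All coefficients here are assumptions."""
    g = np.clip(np.asarray(irradiance, dtype=float), 0.0, None)
    t_cell = np.asarray(temp_air, dtype=float) + (noct - 20.0) * g / 800.0
    p_dc = p_stc * (g / g_stc) * (1.0 + gamma * (t_cell - 25.0))
    return np.clip(p_dc, 0.0, None) * inv_eff

# Forecast values quoted in the text for hours 6, 11 and 17
g_fc = np.array([108.56, 471.31, 111.31])   # W/m2
t_fc = np.array([7.35, 18.63, 10.81])       # C
print(pv_power(g_fc, t_fc))                  # estimated PVG output [W]
```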

Afterwards, it was possible to forecast the power availability of the PVG interconnected to the microgrid and to identify the up and down standard deviations in its forecast, at each study hour of the next day. This was realized using the mathematical model of a PVG, its technical parameters and the hourly forecast of its primary resources (solar radiation and temperature). The results of the generation analysis are presented in Fig. 4. The forecast power availability of the PVG (p_t^F), represented by the solid line, starts at 199.71 W at hour 6, increases to 1197.80 W at hour 11 and finally decreases to 205.40 W at hour 17. The dotted lines show the up and down standard deviation bounds in the forecast power availability. The up standard deviation (σ_t^{pU}) increases its amplitude as the forecast horizon becomes larger, i.e., the forecast uncertainty grows with the forecast period: it starts at 272.01 W at hour 6 and increases to 761.60 W at hour 17. The down standard deviation (σ_t^{pD}) also increases with the forecast horizon, but starts to decrease from hour 15: it starts at 199.71 W at hour 6, increases to 659.89 W at hour 14 and finally decreases to 205.40 W at hour 17.



Figure 5. Forecast electricity demand in the microgrid. Source: The authors

Figure 6. Forecast net demand in the microgrid. Source: The authors

The decreasing behavior of σ_t^{pD} during the last forecast hours is due to two aspects: the down standard deviation bound in the forecast solar radiation takes negative values during the last forecast hours, and the mathematical model of the PVG interconnected to the microgrid does not support negative values of either solar radiation or temperature. These two aspects led the down standard deviation bound in the forecast power availability of the PVG to be set to zero at those hours where the down standard deviation bound in the forecast solar radiation is lower than zero. It is important to note that the standard deviation in the forecast power availability of the PVG is asymmetric, i.e., the up standard deviation differs from the down standard deviation. This asymmetry is due to the two aspects described above and to the fact that the efficiency of the PVG inverter varies exponentially with the DC power input.

4.2. Demand analysis

Initially, it was necessary to acquire and analyze the available historical data of the electricity demand in the microgrid. For that, a database of demand information recorded every hour during the last quarter of 2012 was obtained. This database only considers 12 of the 24 hours of the day (from 6:00 to 17:59), because only during this period is there significant electricity consumption in the microgrid. From this database, it was possible to model the behavior of the demand, to forecast its hourly values and to identify the up and down standard deviations in its forecast, at each study hour of the next day. This was realized using the time series technique based on structural models [13]. The results of the demand analysis are presented in Fig. 5. The forecast demand (d_t^F) for the next day starts at 403.28 W at hour 6, increases to 858.23 W at hour 11 and finally decreases to 656.33 W at hour 17. The up and down standard deviations in the forecast demand (σ_t^{dU} and σ_t^{dD}) increase their amplitude as the forecast horizon becomes larger: they start at 98.52 W at hour 6 and increase to 267.13 W at hour 17. It is important to note that this standard deviation is symmetric, i.e., the up standard deviation is equal to the down standard deviation.
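The paper performed this forecasting with the STAMP module of OxMetrics 6.3. Purely as an illustration of the same idea, a comparable unobserved-components (structural) model can be sketched with Python's statsmodels; the synthetic series and the chosen components below are assumptions of this sketch, not the authors' specification:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the hourly demand history [W]: one observation
# per study hour (6:00-17:59), flattened over several weeks.
rng = np.random.default_rng(0)
demand_w = 600 + 200 * np.sin(np.linspace(0, 40 * np.pi, 480)) \
           + rng.normal(0, 50, 480)

# Structural model: local linear trend plus a 12-hour seasonal component.
model = sm.tsa.UnobservedComponents(demand_w,
                                    level='local linear trend',
                                    seasonal=12)
res = model.fit(disp=False)

fc = res.get_forecast(steps=12)           # the 12 study hours of next day
mean = fc.predicted_mean                  # d_t^F
sd = fc.se_mean                           # one (symmetric) std deviation
print(np.c_[mean - sd, mean, mean + sd])  # down bound, forecast, up bound
```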

4.3. Net demand analysis

From the generation and demand analyses, it was possible to determine the forecast net demand and to identify the up and down standard deviations in its forecast, at each study hour of the next day, using eqs. (4)-(6). These standard deviations represent the up and down spinning reserve required by the microgrid, respectively. Since these two reserves are calculated considering only one standard deviation, 68 % of the net demand realizations should lie between the up and down spinning reserve bounds. The results of the net demand analysis are presented in Fig. 6. The forecast net demand (nd_t^F) for the next day starts at 203.57 W at hour 6 and then decreases; at hour 8 the microgrid goes from consuming energy from the distribution system to delivering surplus energy to it. This surplus increases to 339.57 W at hour 11, because at this time the maximum forecast solar radiation and temperature, and therefore the maximum forecast power availability of the PVG, are reached. The surplus then decreases, and at hour 14 the microgrid returns to consuming energy from the distribution system; this consumption increases to 450.93 W at hour 17. Fig. 6 also shows the up and down spinning reserve bounds. The up spinning reserve required by the microgrid (SR_t^U) increases its amplitude as the forecast horizon becomes larger, but starts to decrease from hour 15: it starts at 222.69 W at hour 6, increases to 700.15 W at hour 14 and finally decreases to 336.97 W at hour 17. The down spinning reserve (SR_t^D) increases with the forecast horizon, from 289.30 W at hour 6 to 807.09 W at hour 17. The SR_t^U and SR_t^D allow the uncertainty in both the electricity demand and the power availability of the PVG interconnected to the microgrid to be countered. This reserve should be located in future generators of the microgrid with controllable power output.
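The hour-6 values quoted above satisfy eq. (4) exactly, which provides a quick sanity check:

```python
d_f6, p_f6 = 403.28, 199.71        # forecast demand and PVG power at hour 6 [W]
nd_f6 = d_f6 - p_f6                # eq. (4)
assert round(nd_f6, 2) == 203.57   # net demand reported for Fig. 6
print(nd_f6)
```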




Figure 7. Forecast power availability vs. real power availability of the PVG. Source: The authors

Figure 8. Forecast demand vs. real demand. Source: The authors

Figure 9. Forecast net demand vs. real net demand. Source: The authors

Table 2. Forecast error in the net demand [W].
Hour | ε_t | SR_t^U | SR_t^D
6 | 125.46 | 222.69 | 289.30
7 | -165.77 | 350.01 | 388.06
8 | -324.10 | 441.28 | 469.29
9 | -467.96 | 513.90 | 537.23
10 | -595.78 | 573.75 | 595.84
11 | -366.96 | 621.96 | 646.44
12 | 241.80 | 663.24 | 692.12
13 | -174.48 | 687.63 | 730.09
14 | 339.03 | 700.15 | 763.92
15 | -67.86 | 684.16 | 792.25
16 | 201.02 | 500.28 | 807.02
17 | 87.04 | 336.97 | 807.09
Source: The authors

4.4. Validation analysis

In order to validate the proper sizing of the up and down spinning reserve, it was necessary to acquire both the real electricity demand in the microgrid (d_t^R) and the real power availability of the PVG interconnected to the microgrid (p_t^R), at each study hour. From this information, a comparative analysis between p_t^R and the forecast power availability of the PVG (p_t^F) was developed; the results are presented in Fig. 7. Fig. 7 shows that the p_t^R behavior is located within the standard deviation bounds of the forecast power availability during much of the forecast horizon; however, p_t^R exceeds the up standard deviation bound at hours 9 and 10, by 19.93 W and 101.01 W, respectively. These results verify that the time series technique represents the primary resources of the PVG (solar radiation and temperature) in a proper way, although forecasting these variables with high precision is quite a demanding task due to their highly stochastic behavior. Likewise, a comparative analysis between d_t^R and the forecast demand (d_t^F) was developed; the results are presented in Fig. 8, which shows that the d_t^R behavior is located within the standard deviation bounds of the forecast demand during the whole forecast horizon. Additionally, the real net demand (nd_t^R) was determined using eq. (7), and a comparative analysis between nd_t^R and the forecast net demand (nd_t^F) was developed; the results are presented in Fig. 9.

Fig. 9 shows that the nd_t^R behavior is located within the spinning reserve bounds during the whole forecast horizon. The comparative analysis presented in Fig. 9 served to determine the forecast error in the net demand, using eq. (8); the results of the validation analysis are presented in Table 2. Table 2 shows that ε_t is lower than SR_t^U for positive values of ε_t, in accordance with eq. (9); it also shows that |ε_t| is lower than SR_t^D for negative values of ε_t, in accordance with eq. (10). It is worth noting that p_t^R increases significantly with respect to p_t^F at hours 9 and 10; however, d_t^R also increases with respect to d_t^F at these hours, so ε_t does not exceed SR_t^D there. The behavior of ε_t at each study hour can be clearly observed in Fig. 10.
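As a quick numerical check of eqs. (9)-(10), the following short script verifies every hour of Table 2 (values transcribed from the table):

```python
eps = [125.46, -165.77, -324.1, -467.96, -595.78, -366.96,
       241.8, -174.48, 339.03, -67.86, 201.02, 87.04]
sr_up = [222.69, 350.01, 441.28, 513.9, 573.75, 621.96,
         663.24, 687.63, 700.15, 684.16, 500.28, 336.97]
sr_dn = [289.3, 388.06, 469.29, 537.23, 595.84, 646.44,
         692.12, 730.09, 763.92, 792.25, 807.02, 807.09]

for hour, (e, up, dn) in enumerate(zip(eps, sr_up, sr_dn), start=6):
    # Eq. (9): positive errors must not exceed SR_t^U;
    # eq. (10): the magnitude of negative errors must not exceed SR_t^D.
    ok = e <= up if e > 0 else -e <= dn
    print(hour, e, "within bounds" if ok else "VIOLATION")
```

Note the tight margin at hour 10 (595.78 W against a down reserve of 595.84 W), which is the case discussed above.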

Figure 10. Behavior of the forecast error in the net demand. Source: The authors



Fig. 10 shows that ε_t is located within the security bounds represented by SR_t^U and SR_t^D during the whole forecast horizon. The security scheme designed thus ensured the frequency stability of the microgrid at each study hour, after the occurrence of events associated with forecast errors in both the electricity demand and the power availability of the PVG.

5. Conclusions

The main contribution of this paper is the development of a methodology capable of modeling the security scheme required by a microgrid with intermittent energy sources. The security scheme is represented by an up and down spinning reserve located in the generators of the microgrid with controllable power output. The methodology consists of a four-stage analysis at each study hour of the next day: a generation analysis, which evaluates the uncertainty in the forecast power availability of the RESs; a demand analysis, which evaluates the uncertainty in the forecast electricity demand; a net demand analysis, which quantifies and evaluates the up and down spinning reserve required by the microgrid to ensure frequency stability; and a validation analysis, which verifies the proper sizing of the up and down spinning reserve.

To implement the proposed methodology, a forecasting technique based on structural models was employed, by means of the STAMP module of the statistical software OxMetrics 6.3. The methodology was applied to a real microgrid located at the Universidad Nacional de Colombia's campus that incorporates a PVG. From this, it was concluded that the time series technique represents the electricity demand and the primary resources of the PVG (solar radiation and temperature) in a proper way. The security scheme designed for the microgrid efficiently ensured the balance between generation and demand at each study hour, after the occurrence of events associated with forecast errors in both the electricity demand and the power availability of the PVG. The interconnection of RESs significantly changes the security conditions of a microgrid; this represents a challenge for the near future that needs to be assumed and evaluated with appropriate methodologies to ensure microgrid stability.

Acknowledgments

The authors would like to thank the SILICE group and the Universidad Nacional de Colombia for their support and suggestions. The authors would also like to thank the statistical consulting group for their invaluable assistance in the time series analysis.

References

[1] Luna, L.E. and Parra, E.E., Methodology for assessing the feasibility of interconnecting distributed generation, Proceedings of IEEE PES PowerTech, 2011.
[2] Luna, L.E. and Parra, E.E., Methodology for assessing the impacts of distributed generation interconnection. Revista Ingeniería e Investigación, 31 (2), pp. 36-44, 2011.
[3] Hatziargyriou, N., Asano, H., Iravani, R. and Marnay, C., Microgrids. IEEE Power & Energy Magazine, 5 (4), pp. 78-94, 2007. DOI: 10.1109/MPAE.2007.376583
[4] IEEE, Smart Grid Research. IEEE Vision for smart grid controls: 2030 and beyond. Institute of Electrical and Electronics Engineers, ISBN: 978-0-7381-8458-6, 2013.
[5] IEEE Standard 1547.4, IEEE guide for design, operation, and integration of distributed resource island systems with electric power systems. IEEE Standards Coordinating Committee 21 on Fuel Cells, Photovoltaics, Dispersed Generation, and Energy Storage, 2011.
[6] Bouffard, F., Galiana, F.D. and Conejo, A.J., Market-clearing with stochastic security - Part I: Formulation. IEEE Transactions on Power Systems, 20 (4), pp. 1818-1826, 2005. DOI: 10.1109/TPWRS.2005.857016
[7] Bouffard, F., Galiana, F.D. and Conejo, A.J., Market-clearing with stochastic security - Part II: Case studies. IEEE Transactions on Power Systems, 20 (4), pp. 1827-1835, 2005. DOI: 10.1109/TPWRS.2005.857015
[8] Ortega-Vazquez, M. and Kirschen, D., Optimizing the spinning reserve requirements using a cost/benefit analysis. IEEE Transactions on Power Systems, 22 (1), pp. 24-33, 2007. DOI: 10.1109/TPWRS.2006.888951
[9] Bouffard, F. and Galiana, F.D., Stochastic security for operations planning with significant wind power generation. IEEE Transactions on Power Systems, 23 (2), pp. 306-316, 2008. DOI: 10.1109/TPWRS.2008.919318
[10] Ortega-Vazquez, M. and Kirschen, D., Assessing the impact of wind power generation on operating costs. IEEE Transactions on Smart Grid, 1 (3), pp. 295-301, 2010. DOI: 10.1109/TSG.2010.2081386
[11] Wang, M. and Gooi, H., Spinning reserve estimation in microgrids. IEEE Transactions on Power Systems, 26 (3), pp. 1164-1174, 2011. DOI: 10.1109/TPWRS.2010.2100414
[12] Morales, J.M., Conejo, A.J., Madsen, H., Pinson, P. and Zugno, M., Integrating renewables in electricity markets: Operational problems. New York: Springer, 2014. DOI: 10.1007/978-1-4614-9411-9
[13] Lutkepohl, H., New introduction to multiple time series analysis. New York: Springer, 2005. DOI: 10.1007/978-3-540-27752-1
[14] Gross, G. and Galiana, F.D., Short-term load forecasting. Proceedings of the IEEE, 75 (12), pp. 1558-1573, 1987. DOI: 10.1109/PROC.1987.13927
[15] Papoulis, A., Probability, random variables, and stochastic processes, 3rd ed. Boston, MA: McGraw-Hill, 1991.
[16] Hernández, J., Luna, L.E. and Blanco, A.M., Design and installation of a smart grid with distributed generation. A pilot case in the Colombian networks, Proceedings of the IEEE Photovoltaic Specialists Conference, 2012. DOI: 10.1109/pvsc.2012.6317677
[17] Hernández, J., Gordillo, G. and Vallejo, W., Predicting the behavior of a grid-connected photovoltaic system from measurements of solar radiation and ambient temperature. Applied Energy, 104, pp. 527-537, 2013. DOI: 10.1016/j.apenergy.2012.10.022

L.E. Luna-Ramírez, received the BSc. Eng. degree in Electrical Engineering from the Escuela Colombiana de Ingeniería “Julio Garavito”, Bogotá, Colombia, in 2007, and the MSc. degree in Electrical Engineering from the Universidad Nacional de Colombia, Bogotá, Colombia, in 2011. He is currently pursuing a PhD. degree at the Universidad Nacional de Colombia. He is a junior researcher in the PAAS-UN Group and was recently a visiting scholar at the National Renewable Energy Center (CENER), Sarriguren, Spain. His research interests include power systems economics, reliability and stochastic programming.




H. Torres-Sánchez, received the BSc. Eng. and MSc. degrees in Electrical Engineering from the Universidad Nacional de Colombia, Bogotá, Colombia, in 1976 and 1982, respectively, and carried out PhD studies at the Technical University of Darmstadt, Darmstadt, Germany (1978-1982). He is currently a professor at the Universidad Nacional de Colombia, Bogotá, Colombia, and the head of the PAAS-UN Group. His research interests include power quality, electromagnetic compatibility and probabilistic aspects of power quality.

F.A. Pavas-Martínez, received the BSc. Eng., MSc. and PhD. degrees in Electrical Engineering from the Universidad Nacional de Colombia, Bogotá, Colombia, in 2003, 2005 and 2013, respectively. He is currently a professor at the Universidad Nacional de Colombia and a researcher in the PAAS-UN Group. His research interests include power quality, electromagnetic compatibility and probabilistic aspects of power quality.



Methodology for relative location of voltage sag source using voltage measurements only

Jairo Blanco-Solano a, Johann F. Petit-Suárez b & Gabriel Ordóñez-Plata c

a Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Colombia. jairo.blanco@correo.uis.edu.co
b Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Colombia. jfpetit@uis.edu.co
c Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Colombia. gaby@uis.edu.co

Received: April 29th, 2014. Received in revised form: February 17th, 2015. Accepted: July 2nd, 2015.

Abstract
This paper presents a methodology for the relative location of voltage sag sources based on voltage measurements only. Network faults are the most frequent disturbances in electrical systems and, in turn, the cause of most voltage sags. Thus, the proposed algorithm considers only voltage sags caused by network faults, and a minimum of three voltage meters is required. Power quality can be evaluated automatically by characterizing and extracting information from voltage records. The positive sequence voltages are the descriptors used to determine the relative location of the voltage sag causes, and the methodology is applied to a simulation case where its effectiveness is verified. Matlab was used for validating this methodology, and voltage records of simulated voltage sags were generated in ATP-EMTP.
Keywords: Power quality monitoring; relative location; voltage sags; positive sequence voltage; network faults.

Metodología para la localización relativa de la fuente de hundimientos de tensión usando únicamente mediciones de tensión Resumen Este trabajo presenta una metodología para la localización relativa de las fuentes generadoras de hundimientos de tensión y la cual se basa únicamente en mediciones de tensión. Las fallas de red son las perturbaciones más frecuentes en los sistemas eléctricos y son las causantes del mayor número de hundimientos de tensión. Así, el algoritmo propuesto considera únicamente los hundimientos de tensión originados por fallas de red y requiere un mínimo de tres monitores de tensión para su aplicación. La calidad de la potencia puede evaluarse automáticamente a través de la caracterización y extracción de información de los registros de tensión. Las tensiones de secuencia positiva son usadas como los descriptores para determinar la localización relativa y la metodología es aplicada en un caso de estudio por simulación comprobando su efectividad. Para la validación de los resultados se usa Matlab, tomando como entradas los hundimientos de tensión simulados en ATP-EMTP. Palabras clave: monitorización de la calidad de la potencia; localización relativa; hundimientos de tensión; tensión de secuencia positiva; fallas de red.

1. Introduction Power quality has become an important research topic due to increasing electromagnetic disturbances. In particular, voltage sags are a power quality problem that affects both power suppliers and customers. The most severe voltage sags are caused by network faults in the power system, and they propagate through the electrical system affecting loads connected away from an event source [1]. Consequently, disturbance source identification (DSI) and fault location

(FL) have become important areas of investigation to improve power quality [2-4]. DSI is the identification of the event source (e.g., network faults, transformer saturation or induction motor starting), while FL is the localization of the event source within the electrical network. This paper focuses on electrical distribution systems. The proposed method assumes a minimum of three voltage measurements available in the electrical distribution system; these measurements are obtained from three power quality monitors installed at strategic points of the system.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 94-100. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48581


The method determines the relative location (downstream or upstream of a monitor) of a voltage sag source. Existing methodologies for the relative location of voltage sags are usually based on voltage and current measurements; however, in some cases no current measurements are available, or algorithms that depend only on voltage are required. The most relevant descriptors to determine the relative location of the voltage sag source are identified and listed in some works, and appropriate thresholds for discriminating the relative location of the voltage sags are selected. Unlike [1], where the proposed method is limited to measurements on the transformer's primary and secondary sides, the method proposed in this work estimates the relative location of voltage sags using voltage measurements from three or more power quality monitors located in the electrical distribution system. The validation of the proposed methodology is performed by simulating voltage sags in a test electrical distribution system [5], the IEEE 34 Node Test Feeder, which is modeled in ATP-EMTP to generate voltage sags with a previously established relative location. This set of voltage sags is used to validate the efficiency of the proposed algorithm.

2. Background

There are several methods for determining the relative location of voltage sag sources, and they primarily use voltage and current measurements. Some of these algorithms are presented below, with emphasis on the concepts most relevant to the proposed methodology.

2.1. Relative location of voltage sag sources for PQ diagnosis

In [4,5], methodologies for the relative location of voltage sag sources are presented. These methodologies are based on the previous identification of the voltage sag causes and their characteristics; among these causes are network faults, induction motor starting and transformer saturation. With this information, descriptors and decision rules are formulated to determine the relative location of a voltage sag source, as shown in Table 1. With respect to transformer saturation, there are descriptors for the identification and classification of voltage sags originated by transformer saturation [4,5]. An algorithm first classifies the causes of the voltage sags using the decision rules presented in Table 1 and then determines the relative location of the voltage sag source. Some limitations found are the incorrect classification of some classes of voltage sags, such as voltage sags caused by network faults with fault impedance, multistage events, network faults in isolated electrical systems, transformer energization with load, and network faults combined with harmonics and transformer saturation, among others.

Table 1. Descriptors and rule decisions for voltage sags' relative location.
Relative location | Network fault | Motor starting | Transformer saturation
Downstream | if … | if … | if …
Upstream | if … | if … | if …
Descriptors: the ratio of the current fundamental frequency component during and before the network fault, with its threshold Thr; the corresponding current ratio and threshold in transformer saturation; and the ratio of the steady-state active power before and after the induction motor starting event, with its threshold.
Source: Adapted from [2,3]

2.2. Relative location of voltage sag source based on the sequence current magnitude

The methodology proposed in [6] focuses on the use of the ratio of the positive sequence current (RPSC), calculated before and during the voltage sag. Descriptors and decision rules are shown in Table 2. This method provides optimum performance when applied to records of real and synthetic voltage sags, mainly those caused by network faults. The fundamental condition for its application is the existence of current measurements in the electrical system. The purpose of this paper, in contrast, is to provide solutions for electrical distribution systems in which only voltage power quality monitors are implemented, or in which only records of voltage electromagnetic events are collected. Currently, there are electrical distribution systems with a limited metering infrastructure, which generally focuses on voltage records only. The proposed methodology may also be an interesting tool for implementing fault localization methods under a partial monitoring program, in which the monitors measure voltage only and are strategically distributed in the electrical distribution system.

Table 2. Descriptors and rule decisions based on the sequence current magnitude.
Downstream if … | Upstream if …
|I¹_d|, |I¹_u|: positive sequence currents downstream and upstream, respectively; |I_ss|: current of the non-faulted steady state.
Source: Adapted from [6]

3. Proposed algorithm

Positive sequence currents are important for the analysis of asymmetric faults in power systems because they are present in all types of network faults [7]. Therefore, this electrical variable is relevant for a detailed analysis of the relative location of voltage sags in electrical distribution systems.



Similar to the work presented in [6], for the purposes of this work only the positive sequence voltage is chosen as the variable to determine the relative location of the voltage sag source. Fig. 1.a illustrates the one-line diagram of a radial distribution system. The network is parameterized by impedances between buses, with voltage power quality monitors (PQMs) recording at the buses, and a load. The analysis begins with the study of the behavior of the recorded voltages when a network fault occurs. Fig. 1.b shows the positive sequence network of the system when a network fault occurs upstream of the monitors. Analyzing the network fault types according to [6]: if a symmetrical fault occurs, only positive sequence voltages and currents are present; if an asymmetrical fault occurs, the analysis contemplates the interconnection of the positive and negative sequence networks (line-to-line faults) or of the positive, negative and zero sequence networks (double line-to-ground and single line-to-ground faults). In all cases, the positive sequence current is present in the network. Table 3 presents the proposed descriptors and decision rules. According to Fig. 1.b, if the network fault originates upstream of PQM1, the voltage V¹AB is expected to be reduced due to the decrease of the current flowing through the two monitors: most of the current provided by the voltage source is deviated into the fault current, so the positive sequence voltage drop in the AB branch decreases in a large proportion during the fault. Conversely, when the network fault occurs downstream of PQM1 and PQM2, the positive sequence voltage drop V¹AB is expected to increase, Fig. 1.c.

Figure 1. Radial distribution system: a) one-line diagram; b), c), d) one-line diagrams of positive sequence. Source: The authors.

Table 3. Descriptors and rule decisions based on voltage only.
Downstream if V¹AB,sag ≫ V¹AB,pre-sag; Upstream if V¹AB,sag ≪ V¹AB,pre-sag.
V¹AB,sag: magnitude of the positive sequence voltage between nodes A and B during the voltage sag; V¹AB,pre-sag: magnitude of the positive sequence voltage between nodes A and B before the voltage sag. The superscript 1 denotes positive sequence magnitudes.
Source: The authors

When a network fault occurs between the two monitors, as illustrated in Fig. 1.d, the positive sequence voltage V¹AX increases compared to the pre-fault positive sequence voltage, due to the positive sequence fault current (I¹AX) flowing through this branch. This generates an increase in the positive sequence voltage V¹AB during the network fault compared with the pre-fault voltage V¹AB,pre-fault, although the ratio is lower than in the case in which the network fault occurs downstream of PQM2. Using only the rules proposed in Table 3, the network fault would be estimated as located downstream of both PQM1 and PQM2, which is correct only for PQM1. As a solution to this imprecision, a third power quality monitor, PQM3, is used to estimate the correct relative location of the network fault. The descriptors and rules presented in Table 3 are the basic formulation for determining the relative location of voltage sags based on voltage measurements only. The proposed methodology requires the strategic location of power quality monitors registering voltage only; these monitors should perform a coordinated monitoring of the voltage events present in the network, so that the algorithm proposed in this work can be applied, Fig. 2. The accuracy level improves with the availability of a larger number of voltage power quality monitors in the distribution system. According to Fig. 2, a minimum of three voltage power quality monitors is required for the application of the proposed algorithm (N denotes the number of monitors installed). The proposed algorithm has several steps:

- Loading of the voltage sag records. These records are correlated, so that the same voltage sag is recorded by all monitors with its respective tags; the coverage zones are also defined.
- Segmentation of the voltage sags, in order to determine the segments of the signals corresponding to the fault and pre-fault states.
- Calculation of the RMS sequences of the recorded voltage sags.
- Calculation of the positive sequence voltages of the recorded signals (a minimal sketch follows this list).
- Feature extraction and evaluation of the decision rules of the algorithm.
- Finally, diagnosis of the relative location of the voltage sag source.
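The sketch below illustrates the positive sequence computation (Fortescue transform) together with the Table 3 rule; the example phasors and the ±20 % decision band are illustrative assumptions, not values from the paper:

```python
import numpy as np

A = np.exp(2j * np.pi / 3)  # Fortescue operator a = 1 at 120 degrees

def positive_sequence(va, vb, vc):
    """Positive sequence phasor V1 = (Va + a*Vb + a^2*Vc) / 3."""
    return (va + A * vb + A**2 * vc) / 3.0

def relative_location(v1_sag, v1_pre, band=0.2):
    """Table 3 rule on the voltage between two monitors; 'band' is an
    assumed tolerance around the pre-sag value, not a paper parameter."""
    ratio = abs(v1_sag) / abs(v1_pre)
    if ratio > 1.0 + band:
        return "downstream"
    if ratio < 1.0 - band:
        return "upstream"
    return "undetermined"

# Illustrative balanced pre-sag set and a sag that raises the branch voltage
v1_pre = positive_sequence(120.0, 120.0 * A**2, 120.0 * A)
v1_sag = positive_sequence(500.0, 480.0 * A**2, 510.0 * A)
print(relative_location(v1_sag, v1_pre))   # -> downstream
```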

4. Testing

The IEEE 34 Node Test Feeder [5] is implemented in ATP-EMTP, and voltage sags are obtained from the simulation of network faults (single line-to-ground, SLG; line-to-line, LLF; double line-to-ground, DLG; and three-line, TLF; with and without fault impedance).

4.1. Simulation

The one-line diagram of the test feeder is presented in Fig. 3. Five voltage power quality monitors are installed, and the synthetic voltage sags are simulated and recorded.




In the electrical system, some "coverage zones" are defined according to the strategic location of the power quality monitors. These zones determine the relative location of voltage events with respect to the power quality monitors, as shown in Fig. 3. For example, if network faults occur at the nodes located in the area bounded by the black solid line, these faults generate voltage sags that are located between PQM1 and PQM2. Table 4 shows a summary of the simulated network faults according to fault type.

Table 4. Simulated voltage sags in ATP-EMTP.
Cause | Description | Number of voltage sags
Network faults | SLG, zero fault impedance | 23
Network faults | SLG, non-zero fault impedance | 19
Network faults | LLF, zero fault impedance | 22
Network faults | LLF, non-zero fault impedance | 18
Network faults | DLG, zero fault impedance | 21
Network faults | DLG, non-zero fault impedance | 18
Network faults | TLF | 22
Total | | 143
Source: The authors

Figure 2. Proposed algorithm for relative location of voltage sags source. Source: The authors.

Figure 3. Distribution system for simulating voltage sags in ATP-EMTP. Source: The authors.

4.2. Results of the proposed algorithm

The algorithm is applied to the 143 voltage sags. Table 5 presents some results of the descriptors, to analyze the algorithm's performance: the values taken by the descriptors are presented and the decision rules are analyzed.



Table 5. Results of the descriptors for some voltage sags.
Descriptor | N808 (SLG) | N820 (SLG) | N828 (DLG) | N844 (TLF) | N836 (LLF)
V¹PQM1-2, pre-fault | 992.65 | 992.65 | 992.65 | 993.80 | 992.74
V¹PQM1-2, fault | 4016.5 | 2699.3 | 8316.7 | 7357.3 | 4118.7
V¹PQM2-3, pre-fault | 823.04 | 823.04 | 823.04 | 824.02 | 823.13
V¹PQM2-3, fault | 670.31 | 741.18 | 1350.8 | 7507.1 | 4146.4
V¹PQM3-4, pre-fault | 9.2754 | 9.2754 | 9.2754 | 9.274 | 9.57
V¹PQM3-4, fault | 7.6054 | 8.37 | 3.063 | 0.105 | 225.33
V¹PQM3-5, pre-fault | 19.52 | 19.52 | 19.52 | 19.72 | 19.69
V¹PQM3-5, fault | 14.81 | 17.04 | 7.424 | 153.03 | 10.60
V¹: magnitude of the positive sequence voltage between the indicated power quality monitors.
Source: The authors

Table 7. Confusion matrix with the results obtained from the test case.
| Area PQM1-PQM2 | Area PQM2-PQM3 | Area PQM3-PQM4 | Area PQM3-PQM5 | Total
True positives (TP) | 41 | 58 | 19 | 19 | 137
False negatives (FN) | 0 | 0 | 0 | 6 | 6
False positives (FP) | 0 | 6 | 0 | 0 | 6
True negatives (TN) | 102 | 79 | 124 | 118 | 423
True positive rate (TPR) | 1 | 1 | 1 | 0.76 | 0.958
False positive rate (FPR) | 0 | 0.07 | 0 | 0 | 0.0139
Source: The authors

Table 7 presents the overall performance of the algorithm. The confusion matrix validates the effectiveness of the algorithm in identifying the relative location of voltage sags caused by network faults. The results show that the algorithm is 95.8 % efficient in the relative location of network faults, with an error percentage of 1.39 % in the estimation of the same faults. For the cases in which the relative location estimation was not correct, the reason is that network faults occurring at the boundaries of two consecutive zones generate positive sequence voltages that are not high enough in the faulted feeder. Fig. 4 shows the positive sequence voltages in the coverage zones when network faults occurred at node 842; the six misclassified events all correspond to network faults at this node. These positive sequence voltages are calculated from the voltage between power quality meters. Fig. 4.a and Fig. 4.b show the expected behaviors, as the fault current flows through the coverage zone bounded by PQM2-PQM3 but does not flow across the zone bounded by PQM3-PQM4. Fig. 4.c exhibits an unexpected behavior, because the sequence voltage should increase during the voltage sag. Table 8 shows the values of the descriptors for all the network faults simulated at node 842.
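The reported rates follow directly from the confusion matrix; a quick check with the totals of Table 7:

```python
tp, fn, fp, tn = 137, 6, 6, 423      # 'Total' column of Table 7

tpr = tp / (tp + fn)                 # fraction of faults correctly located
fpr = fp / (fp + tn)                 # fraction of spurious locations
print(round(tpr, 3), round(fpr, 4))  # 0.958 and ~0.014, as in Table 7
```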

According to the results in Table 5 and the decision rules of the algorithm, the relative location results shown in Table 6 are obtained. These results correctly estimate the relative location of the network faults, as can be corroborated against the locations previously established from the diagram of Fig. 3 and the nodes at which the network faults were applied.

Table 6. Some results of the relative location algorithm.
Descriptor | N808 (SLG) | N820 (SLG) | N828 (DLG) | N844 (TLF) | N836 (LLF)
V¹PQM1-2, fault / pre-fault | 4.04 | 2.71 | 8.37 | 7.4031 | 4.14
V¹PQM2-3, fault / pre-fault | 0.81 | 0.90 | 1.64 | 9.1103 | 5.03
V¹PQM3-4, fault / pre-fault | 0.82 | 0.90 | 0.33 | 0.0114 | 23.54
V¹PQM3-5, fault / pre-fault | 0.75 | 0.87 | 0.38 | 7.7604 | 0.50
Downstream of | PQM1 | PQM1 | PQM1, PQM2 | PQM1, PQM2, PQM3 | PQM1, PQM2, PQM3
Located between | PQM1 and PQM2 | PQM1 and PQM2 | PQM2 and PQM3 | PQM3 and PQM5 | PQM3 and PQM4
Upstream of | PQM2, PQM3, PQM4, PQM5 | PQM2, PQM3, PQM4, PQM5 | PQM3, PQM4, PQM5 | PQM4 | PQM5
Source: The authors


Table 8. Results for network faults at node 842.
Type of network fault | V¹PQM1-2, fault/pre-fault | V¹PQM2-3, fault/pre-fault | V¹PQM3-4, fault/pre-fault | V¹PQM3-5, fault/pre-fault
SLG | 2.6819 | 3.0711 | 0.7321 | 0.8049
LLF | 4.2487 | 5.1658 | 0.493 | 0.8613
DLG | 4.7804 | 5.82 | 0.4824 | 0.9646
TLF | 7.4645 | 9.1881 | 0.002 | 1.3381
SLG Zf* | 2.55 | 3.0279 | 0.8107 | 0.8073
LLF Zf | 4.1401 | 5.0258 | 0.5086 | 0.8674
DLG Zf | 4.8587 | 5.9284 | 0.3917 | 0.9706
Zf*: with fault impedance (2 Ohms).
Source: The authors



According to Table 8, Fig. 2 and Fig. 3, the values in the last column of Table 8 (the V¹PQM3-5 fault/pre-fault ratios, shaded in the original table) should be greater than unity for a correct classification. Thus, the algorithm locates these faults in the zone preceding the faulted zone, estimating their relative location between PQM2 and PQM3. This is due to the low positive sequence overvoltage between nodes 834 and 842: since node 842 is a short distance from PQM3, the positive sequence voltage rise is not large enough to establish the relative location between PQM3 and PQM5. During the fault, the positive sequence voltage between PQM3 and PQM5 is lower than the respective pre-fault voltage, Fig. 4.c.

Figure 4. Positive sequence voltages between power quality meters: a) PQM2-PQM3 [kV]; b) PQM3-PQM4 [V]; c) PQM3-PQM5 [V]. Source: The authors.

5. Conclusions

This work proposes a methodology for the relative location of the voltage sag source using voltage measurements only. An important advantage is precisely that it is based on voltage measurements only, considering that, in many cases, current measurements are not available in electrical distribution systems.

The proposed algorithm is useful for the automatic location of electrical faults in distribution systems: an extensive database of voltage sag records can be analyzed automatically, and the information recorded by monitoring equipment can be exploited for more advanced power quality analyses. The decision algorithm is easy for a user to understand and analyze. Some disadvantages are that the methodology is not applicable to systems with a single voltage measurement device, and that its data processing requires considerable memory capacity for calculating the positive sequence voltages and RMS values. Erroneous results are obtained for electrical faults at nodes near the boundaries between two coverage zones; it was concluded that these erroneous results are mainly due to the distance of the network faults with respect to the meters, and not to the fault impedance or the network fault type. This new methodology for the relative location of voltage sags allows responsibility for the occurrence of disturbances in electrical systems to be assigned among transmission and distribution utilities. It is a starting point for electrical utilities, allowing the implementation of new methodologies to reduce these disturbances in the network, thus ensuring good power quality for the end user.

Acknowledgements

The authors thank the Universidad Industrial de Santander (UIS) for its support of this research through the project DIEF-VIE-5567: “Methodologies for the Characterization and Diagnosis of Voltage Sags in Electrical Distribution Systems”.

References

[1] Leborgne, R. and Karlsson, D., Voltage sag source location based on voltage measurements only. Electrical Power Quality and Utilization Journal, XIV (1), pp. 25-30, 2008.
[2] Kim, K., Park, J., Lee, J., Ahn, S. and Moon, S., A method to determine the relative location of voltage sag source for PQ diagnosis, Proceedings of the Eighth International Conference on Electrical Machines and Systems, pp. 2192-2197, 2005.
[3] Ahn, S.J., Won, D.J., Chung, D.Y. and Moon, S.U., Determination of the relative location of voltage sag source according to event cause, IEEE Power Engineering Society General Meeting, 1, pp. 620-625, 2004.
[4] Galijasevic, Z. and Abur, A., Fault location using voltage measurements. IEEE Transactions on Power Delivery, 17 (2), pp. 441-445, 2002.
[5] Radial Test Feeders - IEEE Distribution System Analysis Subcommittee. [Online]. Available at: http://ewh.ieee.org/soc/pes/dsacom/testfeeders.html. January 2010.
[6] Barrera, V. and Melendez, F., A new sag source relative location algorithm based on the sequence current magnitude. SICEL 2009 - V International Symposium on Power Quality, August 4-6, Bogota, Colombia, 2009.
[7] Grainger, J.J. and Stevenson, W.D., Power system analysis. McGraw-Hill.

J. Blanco-Solano, was born in Socorro, Santander, Colombia. He received his BSc. in Electrical Engineering in 2009 and his MSc. in Electrical Engineering in 2012, both from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia. Currently, he is pursuing a PhD. in Electrical Engineering at the Universidad Industrial de Santander, Bucaramanga, Colombia. His research interests include: power quality, electromagnetic transients, modeling and simulation, smart metering, smart grids and optimization.


J.F. Petit-Suárez, was born in Villanueva, La Guajira, Colombia. He received his BSc. and MSc. in Electrical Engineering from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia, in 1997 and 2000, respectively. He received his PhD degree in Electrical Engineering from the Universidad Carlos III de Madrid (UC3M), Spain, in 2007. Currently, he is a professor at the Universidad Industrial de Santander, Colombia, and his areas of interest include power quality, power electronics and signal processing algorithms for electrical power systems.

G. Ordóñez-Plata, received his BSc. in Electrical Engineering from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia, in 1985. He received his PhD in Industrial Engineering from the Universidad Pontificia Comillas, Madrid, Spain, in 1993. Currently, he is a professor in the School of Electrical Engineering at the Universidad Industrial de Santander, Colombia. His research interests include: signal processing, electrical measurements, power quality, technology management and competency-based education.



An epidemiological approach to assess risk factors and current distortion incidence on distribution networks

Miguel Fernando Romero a, Luis Eduardo Gallego b & Fabio Andrés Pavas c

a Grupo de investigación PAAS-UN, Universidad Nacional de Colombia, Bogotá, Colombia. mfromerol@unal.edu.co
b Grupo de investigación PAAS-UN, Universidad Nacional de Colombia, Bogotá, Colombia. lgallegov@unal.edu.co
c Grupo de investigación PAAS-UN, Universidad Nacional de Colombia, Bogotá, Colombia. fapavasm@unal.edu.co

Received: April 29th, 2014. Received in revised form: February 21st, 2015. Accepted: July 14th, 2015.

Abstract
A methodology based on epidemiological analysis to assess risk factors and the harmonic distortion incidence rate in a distribution network is proposed in this paper. The methodology analyzes the current harmonics emission risk at the PCC due to the connection of disturbing loads. These loads are modeled, and multiple load connection scenarios are simulated using Monte Carlo algorithms. From the simulation results, potential risk factors for critical harmonics indicators are identified, leading to a classification of the scenarios into groups exposed or unexposed to the risk factors. Finally, the incidence rate of harmonics is calculated for each load connection scenario, and the risk of critical harmonics scenarios due to the exposure to risk factors is estimated.
Keywords: Power Quality (PQ); harmonic distortion; risk factors; relative risk (RR).

Aproximación epidemiológica para evaluar factores de riesgo e incidencia de distorsión armónica en redes de distribución Resumen En este artículo se propone una metodología basada en el análisis epidemiológico para evaluar los factores de riesgo y la incidencia de distorsión armónica en una red de distribución. La metodología analiza el riesgo de emisión de corrientes armónicas en el PCC debido a la conexión de cargas perturbadoras. Dichas cargas se modelan y múltiples escenarios de conexión de cargas se simulan usando algoritmos de Monte Carlo. Posteriormente de las simulaciones se identifican los potenciales factores de riesgo para indicadores críticos de armónicos. Esto Permite clasificar los escenarios en grupos de expuestos y no expuestos a factores de riesgo. Finalmente, se calcula la tasa de incidencia de armónicos para cada escenario de conexión de carga y se estima el riesgo de escenarios críticos de armónicos debido a la exposición al factor de riesgo. Palabras clave: Calidad de potencia; distorsión armónica; factores de riesgo, riesgo relativo.

1. Introduction

Currently, distribution systems are gradually moving towards smart grids. The deployment of smart grids, including distributed energy resources, can reduce the cost of new power plants, facilitate carbon dioxide reduction, provide reactive power compensation and more flexible frequency regulation, and enhance system security [1]. However, some undesired effects, such as unbalance, voltage fluctuation and harmonics, arising from the performance of distributed energy technologies and nonlinear loads, could affect the power quality conditions in the grid.

Critical harmonics scenarios are of great concern to engineers because they can overheat building wiring and transformer units, and cause protection tripping as well as random end-user equipment failures [2]. Additionally, current harmonics conditions can become critical due to the connection of new electronic technologies such as Compact Fluorescent Lamps (CFL), Electric Vehicles (EV) and Distributed Generation (DG), among others. The analysis of harmonics is essential to determine the performance and design of equipment, as well as the optimal location of harmonic compensating devices.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 101-108. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48585



Measurement systems and methodologies for the assessment of power quality disturbances are being integrated into smart grids. In Colombia, the assessment of power quality has been integrated into the design of smart grids through the research project SILICE III (Smart Cities: Design of a Smart Microgrid). One of the aims of this project is to develop methodologies for evaluating potential emissions of harmonics due to the connection of new loads in a distribution system.

Several methodologies for evaluating harmonics have been proposed in the last decades. In [3], an assessment of harmonics at the point of common coupling using the principle of filtering was proposed, which determined the customer's dominant current harmonics. The use of indices for assessing continuous and discrete disturbances is proposed in [4], with the aim of identifying power quality problems in a real 50-customer system (20 kV/400 V). In [5], power quality indices throughout the system are analyzed; a measurement system consisting of a power quality disturbance monitoring system, a displaying platform and diagnostic methodologies for the recorded information was proposed. Finally, in [6], a monitoring system with 250 computers was implemented by a Brazilian company; the system records power quality disturbances and performs statistical analyses, estimating indicators and disturbance trends.

Over the last decade, the impact of CFLs on the harmonic levels of the network has been extensively studied. In [7], the impact of CFLs on power quality is evaluated in terms of total current and voltage harmonic distortion. In [8], the effect of compact lamps during connection to and disconnection from the distribution system is analyzed, and the harmonics results for compact lamps and incandescent bulbs are compared. In the last five years, the impact of loads such as electric vehicles on power quality has been of particular interest because of their expected mass adoption: [9] evaluates the potential impact on power quality due to the connection of electric vehicle chargers in existing distribution networks, and [10] analyzes the potential impacts of several penetration levels of electric vehicle chargers on a test distribution network.

Conventional studies of power quality in distribution systems consist of statistical assessments of disturbances at each bus-bar and the calculation of global indicators that characterize the power quality of the system. Moreover, with the introduction of new technologies such as CFLs, EVs and DG, the current power quality conditions may vary in the future. Accordingly, the power quality assessment faces the following problems: (i) the evaluation of the overall impact on power quality due to the massive penetration of new electronic loads; (ii) the prediction of future power quality problems in order to implement possible solutions; and (iii) the determination of the relationship between impact factors (topological location, connection of other users, types of loads, etc.) and power quality conditions. In order to find solutions for these problems, this paper proposes a methodology based on epidemiological theory to analyze the harmonic conditions in the system with features of predictability and risk analysis.

In contrast to the previously cited studies, this paper conducts the harmonic evaluation based on epidemiological theory. This novel approach allows the risk of future critical harmonics scenarios due to the connection of new electronic loads to be evaluated, and it determines their impact on the system's power quality conditions using epidemiological techniques. Specifically, the occurrence of a disturbance and its spread in the network are considered similar to the occurrence and spread of a human virus, spam, news, etc. Thus, epidemiological techniques can be a useful tool to assess power quality conditions, in particular for the estimation of risk indices that assess possible future harmonics conditions based on current ones. The paper is organized as follows: first, the power quality problem is formulated from the epidemiological perspective; a characterization and modeling of disturbing loads is then performed through EMTP-ATP and MATLAB® software; subsequently, several load connection scenarios are proposed and stochastic simulations are performed on a real system model; the relationship between risk factors and the incidence of harmonics in the simulated scenarios is estimated through risk indices; finally, the risk of critical harmonics scenarios is calculated.

2. Epidemiological assessment of Power Quality

Epidemiological analysis has been used in several fields, mostly related to biology, microbiology and the medical sciences [11-13]. However, there are several examples of its application in engineering, related to the characterization of the epidemic spread of viruses on networks: Internet, Wi-Fi, cellular or local networks [14-17]. Conventional epidemiological methodologies try to answer the following questions in order to assess the health of a population:

- Who is at risk of becoming sick or infected?
- What is the rate of recovered and deceased patients?
- What factors affect the rate of infected population?

Similar to an epidemiological problem, a power quality problem tries to solve the following questions:
• What are the power quality conditions of the customers? (Health conditions)
• What is the number of customers who are affected or will be affected by power quality disturbances above a threshold or reference? (Rate of infected population)
• Which factors influence the occurrence of disturbances in these customers? (Risk factors)

Figure 1. Analogy between epidemiological and power quality problems. Source: The authors.




In order to resolve these questions, epidemiological methodologies can be extended to power quality problems. The epidemiological assessment consists of a three-step process, as shown in Fig. 1. A detailed explanation of each stage is given in the following sections.

2.1. Description of population health

Epidemiology observes phenomena related to diseases causing death in certain populations. These epidemiological phenomena are measured in two different and complementary ways: the prevalence ratio and the incidence ratio. The prevalence ratio (PR) describes the number of existing cases of a disease at a particular point in time. The incidence ratio (IR) measures the number of new cases of a disease within a specific period of time. According to reference [18], PR and IR are calculated as follows:

$$PR_{mid\text{-}year} = \frac{\text{people with disease at mid-year}}{\text{mid-year population}} \quad (1)$$

$$IR_{year} = \frac{\text{new cases of disease in the year}}{\text{mid-year population}} \quad (2)$$

In equations (1) and (2), the mid-year period for the prevalence ratio and the one-year period for the incidence ratio are selected depending on the disease under study; a year is a common period of time in many epidemiological studies [18]. Unlike most epidemiological studies, in power quality the population remains constant and the cause-effect temporality is instantaneous. Therefore, the period of time to assess the risk of critical disturbances is not considered in a power quality context, and the prevalence ratio is equal to the incidence ratio. In order to assess critical power quality disturbances, the incidence ratio (IR) of equation (3) is proposed:

$$IR = \frac{\text{new users with critical disturbances}}{\text{total users}} \quad (3)$$

In this paper, users with critical disturbances are defined as users with TDD levels above a reference value. The reference is taken from IEEE 519, according to the maximum demand load current and the maximum short-circuit current at the PCC.

2.2. Risk/protective factors association

In this step, epidemiology analyses the relationship between critical levels of disturbances and risk/protective factors. Risk factors contribute to the occurrence of the disturbance: the greater the exposure of customers to risk factors, the higher the incidence of the disturbance. Conversely, protective factors contribute to reducing the occurrence of the disturbance: the greater the exposure of customers to protective factors, the lower the incidence of the disturbance. Risk/protective factors can be identified as follows:
1. Formulation of a possible factor that could affect the incidence of the disturbance.
2. Establishment of two groups of customers characterized by their exposure (exposed and non-exposed) to a possible risk/prevention factor.
3. Measurement of the incidence in the exposed and non-exposed groups.
4. Measurement of the association, based on values indicating the relationship between the presence of the factor and the occurrence of the disturbance.

The measurement of the association is the difference in disease frequency between the exposed and the non-exposed groups. This measurement is achieved through the following risk indicators:

Relative Risk (RR):

$$RR = \frac{IR}{IR_1} \quad (4)$$

Attributable Risk (AR):

$$AR = IR - IR_1 \quad (5)$$

Proportion of Exposed Attributable Risk (PEAR):

$$PEAR = \frac{IR - IR_1}{IR} \quad (6)$$

where IR is the incidence ratio in exposed users and IR1 is the incidence ratio in unexposed users.

The analysis of the above risk indicators shows the relationship between risk factors and the incidence ratio of the disturbance; the strength of the identified relationship depends on the values of those indicators. The particular case IR1 = 0 means that there are no other risk factors that could cause critical disturbances in the user; therefore, the occurrence of critical scenarios is attributed entirely to the risk factor under study. However, several risk factors (short-circuit level, disturbing neighboring loads, etc.) could affect power quality conditions in real scenarios, thus IR1 = 0 is unlikely. The interpretation of each indicator in equations (4), (5) and (6) in the context of power quality is described in more detail in the next sections.

2.3. Prevention of new cases

After identifying the risk factors that directly or indirectly affect the incidence of a disturbance, the epidemiological methodology assesses the corresponding actions to reduce the exposure to these factors.
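As a concrete illustration of equations (3)-(6), the following minimal Python sketch (ours, not the authors' code) computes the incidence ratio and the three association indices from raw counts of critical users; the example counts are those reported later in Table 5.

```python
def incidence_ratio(critical_users, total_users):
    """Incidence ratio (eq. 3): fraction of users whose TDD exceeds the reference."""
    return critical_users / total_users

def risk_indices(ir_exposed, ir_unexposed):
    """Association indices of eqs. (4)-(6): relative risk, attributable risk
    and proportion of exposed attributable risk."""
    rr = ir_exposed / ir_unexposed                     # eq. (4)
    ar = ir_exposed - ir_unexposed                     # eq. (5)
    pear = (ir_exposed - ir_unexposed) / ir_exposed    # eq. (6)
    return rr, ar, pear

# Counts taken from Table 5 of this paper: unexposed group 1* and exposure level 8.
ir1 = incidence_ratio(45, 169)    # 0.266
ir8 = incidence_ratio(117, 169)   # 0.692
print(risk_indices(ir8, ir1))     # RR = 2.600, AR = 0.426, PEAR = 0.615
```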



3. Definition of risk factors, environment, and population

With regard to power quality issues, in this paper the risk of reaching critical harmonic levels is analyzed at the PCC, as depicted in Fig. 2.

Figure 2. Risk factor on the distribution system. Source: The authors.

The circuit in Fig. 2 is a real 11.4 kV/208 V feeder at the Universidad Nacional de Colombia. In order to model the feeder precisely, the following parameters were considered:
• Network equivalent. A network equivalent represents the voltage variation characteristics of the feeder and the impact of power quality disturbances from the upstream grid. In order to model the distribution network equivalent, the short-circuit impedance of 0.4113 Ω was obtained from the network operator.
• Distribution lines. The model considers the cable type (overhead or underground), the conductor gauge and the physical characteristics of each installation.
• Transformers. A model representing all the parameters of the transformer was developed by using the available manufacturer data.
• Loads. Linear and non-linear loads are modeled.

The total demand distortion index (TDD) in equation (7) is used to assess the impact of the risk factor (L4) on the harmonic distortion at the PCC [19]:

$$TDD = \frac{\sqrt{\sum_{h=2}^{40} I_h^{2}}}{I_L} \times 100\% \quad (7)$$

where Ih is the harmonic current component in amperes, with the harmonic order indicated by the subscript h, and IL is the maximum demand load current in amperes (fundamental frequency component) at the PCC.

The current distortion limit for TDD is used as a reference to determine which harmonic distortion scenarios are considered critical. Table 1 shows the current distortion limits for general distribution systems according to IEEE 519 [20], where Isc is the maximum short-circuit current at the PCC and IL is the maximum demand load current (fundamental frequency component) at the PCC.

Table 1. Current distortion limits for general distribution systems (120 V through 69 kV). Individual harmonic order (odd harmonics), in percent of IL.
Isc/IL | h<11 | 11≤h<17 | 17≤h<23 | 23≤h<35 | 35≤h | TDD
<20 | 4.0 | 2.0 | 1.5 | 0.6 | 0.3 | 5.0
20-50 | 7.0 | 3.5 | 2.5 | 1.0 | 0.5 | 8.0
50-100 | 10.0 | 4.5 | 4.0 | 1.5 | 0.7 | 12.0
100-1000 | 12.0 | 5.5 | 5.0 | 2.0 | 1.0 | 15.0
>1000 | 15.0 | 7.5 | 6.0 | 2.5 | 1.4 | 20.0
Source: IEEE Std 519-2014.

In the case of the circuit in Fig. 2, the ratio between Isc and IL is 192.3. According to Table 1, the reference value to assess the TDD indices at the PCC is TDD = 15%. In this case, a critical harmonic level is defined as a value of the total demand distortion TDD, measured at the PCC, above the reference level; when a point of the system has waveform distortion levels above the reference, it is considered a critical point.

3.1. Risk factors hypothesis

Several factors could affect the incidence of harmonics at a point of the network. Some of these factors are:
• Short-circuit level. Although the short-circuit level is not a direct cause of harmonic disturbances at a network point, this parameter could favour the presence of harmonics due to other causes.
• Linear and non-linear loads of the customers. The connection of several types of loads can cause high harmonic levels at the PCC.
• Harmonic disturbances from other circuits. Loads connected to other nearby circuits could affect the harmonic level in the feeder under study.
• Capacitor banks used for voltage control and power factor improvement. The connection of capacitors can cause resonance conditions that can affect harmonic levels.

In this paper, the connection of non-linear loads at user L4 is analyzed as a risk factor for the incidence of harmonics at the PCC.
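To make the reference check concrete, here is a minimal Python sketch (not from the paper; the example spectrum is hypothetical) of the TDD computation in equation (7) and the limit lookup in Table 1:

```python
import math

# TDD limits from Table 1 (IEEE Std 519-2014), keyed by the upper bound of Isc/IL.
TDD_LIMITS = [(20, 5.0), (50, 8.0), (100, 12.0), (1000, 15.0), (float("inf"), 20.0)]

def tdd_percent(harmonic_currents_a, i_load_max_a):
    """Eq. (7): total demand distortion over harmonics h = 2..40, in percent."""
    return 100.0 * math.sqrt(sum(i**2 for i in harmonic_currents_a)) / i_load_max_a

def tdd_limit(isc_over_il):
    """Select the applicable TDD limit row of Table 1 for a given Isc/IL ratio."""
    for upper_bound, limit in TDD_LIMITS:
        if isc_over_il < upper_bound:
            return limit

print(tdd_percent([12.0, 6.0, 3.0], 100.0))  # hypothetical spectrum -> about 13.7%
print(tdd_limit(192.3))                      # feeder of Fig. 2 -> 15.0 (%)
```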

3.2. Environment and population

In order to analyze the relationship between the risk factor and the incidence of current harmonics at the PCC, the individuals belonging to a population are defined based on the circuit in Fig. 2. L1, L2 and L3 represent linear and non-linear loads that users can connect at nodes 1 and 2; L4 is a non-linear load and represents the risk factor of interest. Linear loads are modeled by RL equivalents with RMS currents between 0 A and 50 A. Non-linear loads are modeled as full-wave rectifiers with a capacitive filter and a variable resistance, with RMS currents between 0 A and 50 A. Every individual in the population is represented by a possible load connection scenario in the circuit. In order to generate the study population and the risk factor levels, several connection scenarios are proposed as follows:
1. Generating the study population. 169 different load scenarios of L1, L2 and L3 are generated through current variations of each load between 0 A and 50 A. An example of the load combinations, giving a view of the load scenarios, is shown in Table 2.



2. Generating the risk factor levels. 12 levels of the risk factor L4 are generated by means of current variations of the load between 0 A and 50 A. An example of the risk factor levels is shown in Table 3.
3. Stochastic simulation. Load scenarios are generated by combining the L1, L2, L3 and risk factor L4 vectors. Load combinations are included to observe the cumulative effect of the loads on the harmonic level. Finally, 2028 cases were obtained by simulating the 169 scenarios for each of the 12 risk factor levels. The harmonic levels at the PCC in all scenarios are obtained by simulations in the time domain. The results are classified and analyzed in the following section in order to determine the association between the risk factors and the level of harmonics.

Table 2. Population: load connection scenarios.
Scenario | IL1(A) | IL2(A) | IL3(A)
1 | 0.00 | 50.00 | 0.00
2 | 0.00 | 47.58 | 0.58
3 | 0.00 | 43.50 | 6.13
⁞ | ⁞ | ⁞ | ⁞
Source: The authors.

Table 3. Risk factor levels.
Risk factor level | IL4(A)
1 | 0.00
2 | 0.58
3 | 6.13
⁞ | ⁞
Source: The authors.

In the following sections, an epidemiological methodology is used to assess several scenarios of harmonic levels in the distribution network.
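The generation of the 2028 cases can be sketched as follows in Python. This is illustrative only: the 13 x 13 sampling grid and the `simulate_tdd` surrogate are our assumptions, standing in for the authors' load sampling (Table 2) and the ATP-EMTP time-domain runs.

```python
import numpy as np

def simulate_tdd(il1, il2, il3, il4):
    """Toy stand-in for one ATP-EMTP time-domain run; returns a TDD (%) value.
    The real methodology obtains this from circuit simulation, not a formula."""
    nonlinear = il3 + il4                  # crude proxy: non-linear current drives TDD
    return 12.0 + 0.4 * nonlinear - 0.05 * il1

# Step 1: 169 population scenarios for L1, L2, L3 (13 x 13 grid over 0-50 A;
# the paper's actual combinations follow Table 2).
population = [(il1, il2, 50.0 - il2)
              for il1 in np.linspace(0, 50, 13)
              for il2 in np.linspace(0, 50, 13)]

# Step 2: 12 exposure levels of the risk factor L4 (0-50 A); level 1* is IL4 = 0.
risk_levels = np.linspace(0, 50, 12)

# Step 3: 169 x 12 = 2028 cases; count critical individuals (TDD > 15%) per level.
critical_per_level = {
    round(level, 1): sum(simulate_tdd(il1, il2, il3, level) > 15.0
                         for il1, il2, il3 in population)
    for level in risk_levels
}
print(critical_per_level)
```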

4. Assessment of association levels due to harmonic disturbances in distribution networks

In this section, the epidemiological analysis explained above is applied to the previous simulations. First, an evaluation of the power quality status of the entire study population (load connection scenarios) is performed. Subsequently, a reference TDD value is proposed to classify the population in terms of "sick" and "healthy" individuals. Finally, the population is classified into exposed and unexposed to the risk factor, and the risk indicators are calculated.

4.1. Description of the epidemiological power quality status

The study population consists of 169 different load scenarios or individuals. Every individual is exposed to 12 different levels of the risk factor; therefore, the total population contains 2028 individuals or load scenarios. Every load scenario is simulated in the ATP-EMTP software, computing current and voltage signals. These signals are processed to calculate the electrical variables and the harmonic levels at the PCC. Table 4 shows the statistical indicators of voltage, current and harmonic distortion observed at the PCC for the 2028 load connection scenarios.

Table 4. Power quality conditions at the PCC.
 | Vrms (V) | Irms (A) | THDv (%) | TDD (%)
Average | 119.96 | 54.28 | 0.27 | 16.82
Percentile 75 | 119.97 | 71.29 | 0.33 | 22.36
Percentile 90 | 119.98 | 89.68 | 0.48 | 30.45
Percentile 95 | 119.99 | 97.81 | 0.52 | 32.43
Source: The authors.

From Table 4, the average voltage distortion THDv is about 0.27%, which is very low compared with the reference value in the IEEE 519 standard (5%). However, the average total demand distortion TDD is 16.82%, which, according to Table 1, exceeds the TDD limit (15%). Therefore, the possibility of critical harmonic levels due to the connection of loads could be very high. In the following, the epidemiological methodology is applied to estimate the impact of connecting the L4 load on the TDD level observed at the PCC.

4.2. Risk factor identification and association

Figure 3. Risk factor identification process. Source: The authors.

According to Fig. 3, the individuals of the population are split into two groups: exposed and unexposed to the risk factor. The unexposed individuals are the load scenarios where the risk factor L4 is not connected; the exposed individuals are the load scenarios where L4 is connected. Subsequently, a reference value of current distortion TDD = 15% is defined, and the individuals with critical current distortion conditions are identified in the exposed and unexposed groups. Furthermore, as L4 can have different load current conditions, the risk factor can have several levels of exposure. In order to verify the individual impact of each of these 12 exposure levels on the occurrence of critical harmonic distortion, every level is considered as an exposed group and the incidence is calculated for each of them. Table 5 shows the incidence values for the 12 different exposure levels of the risk factor; level 1* indicates the particular unexposed case.

Table 5. Incidence ratio of the exposed and unexposed groups.
Exposure level | Exposed individuals | Critical individuals | Incidence ratio IR
1* | 169 | 45 | 0.266
2 | 169 | 43 | 0.254
3 | 169 | 26 | 0.153
4 | 169 | 44 | 0.260
5 | 169 | 104 | 0.615
6 | 169 | 104 | 0.615
7 | 169 | 115 | 0.680
8 | 169 | 117 | 0.692
9 | 169 | 169 | 1.000
10 | 169 | 169 | 1.000
11 | 169 | 169 | 1.000
12 | 169 | 169 | 1.000
Source: The authors.

According to Table 5, the unexposed group 1* has 169 individuals, of which 45 have TDD values above the threshold (critical individuals). Therefore, the incidence ratio of high harmonics in this population, according to equation (3), is:



$$IR_1 = \frac{45 \ \text{critical individuals}}{169 \ \text{total individuals}} = 0.266 \quad (8)$$

In the case of exposure level 8, the incidence ratio of critical distortion is calculated as follows:

$$IR_8 = \frac{117 \ \text{critical individuals}}{169 \ \text{total individuals}} = 0.692 \quad (9)$$

Fig. 4 shows the average and 95th percentile values of TDD (%) for each exposure level. From Fig. 4, level 1* (unexposed) has an average TDD value of around 16.18%; this current distortion is caused by the presence of non-linear loads other than L4 (the risk factor). For exposure levels between 2 and 4, the average TDD decreases from 16.14% to 15.35%. For exposure levels between 4 and 12, the average TDD increases from 15.35% to 23.36%.

Figure 4. TDD values for the risk factor exposure levels. Source: The authors.

In order to analyze the association between the levels of exposure to the risk factor and the presence of critical TDD scenarios, the attributable risk (AR), relative risk (RR) and proportion of exposed attributable risk (PEAR) are calculated from equations (5), (4) and (6). The results for each exposure level are shown in Table 6.

Table 6. Risk indices of the exposed and unexposed groups.
Exposure level | Incidence ratio IR | Attributable risk AR | Relative risk RR | Exposed attributable risk PEAR
1* | 0.266 | 0.000 | 1.000 | 0.000
2 | 0.254 | -0.012 | 0.956 | -0.047
3 | 0.153 | -0.112 | 0.578 | -0.731
4 | 0.260 | -0.006 | 0.978 | -0.023
5 | 0.615 | 0.349 | 2.311 | 0.567
6 | 0.615 | 0.349 | 2.311 | 0.567
7 | 0.680 | 0.414 | 2.556 | 0.609
8 | 0.692 | 0.426 | 2.600 | 0.615
9 | 1.000 | 0.734 | 3.756 | 0.734
10 | 1.000 | 0.734 | 3.756 | 0.734
11 | 1.000 | 0.734 | 3.756 | 0.734
12 | 1.000 | 0.734 | 3.756 | 0.734
Source: The authors.

Fig. 5 shows the incidence of critical TDD scenarios (IR) and the attributable risk (AR), calculated according to equation (5). The attributable risk is the portion of the incidence that is due to the exposure to the risk factor. Its interpretation is as follows: for a risk factor level equal to 5, the incidence ratio of critical TDD scenarios is 0.615 and the attributable risk is 0.349; this means that 0.349 of the 0.615 is due to the risk factor under study.

Figure 5. Incidence ratio and attributable risk vs. exposure levels. Source: The authors.
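For completeness, the whole of Table 6 follows mechanically from the critical-individual counts of Table 5; a short Python sketch (counts transcribed from Table 5):

```python
critical = {1: 45, 2: 43, 3: 26, 4: 44, 5: 104, 6: 104,
            7: 115, 8: 117, 9: 169, 10: 169, 11: 169, 12: 169}
N = 169                       # individuals per exposure level
ir1 = critical[1] / N         # level 1* is the unexposed group

for level, count in critical.items():
    ir = count / N            # eq. (3)
    ar = ir - ir1             # eq. (5)
    rr = ir / ir1             # eq. (4)
    pear = ar / ir            # eq. (6)
    print(f"{level:>2}  IR={ir:.3f}  AR={ar:+.3f}  RR={rr:.3f}  PEAR={pear:+.3f}")
```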

The relative risk in Fig. 6 measures how big the impact of the risk factor is compared with the scenario unexposed to that factor. Its interpretation is as follows: for a risk factor level equal to 5, the risk of critical TDD scenarios increases 2.311 times compared with the unexposed scenario.

Figure 6. Relative risk vs. exposure levels. Source: The authors.

Fig. 7 shows the proportion of exposed attributable risk. This indicator shows the proportion of the risk that is due to the exposure to the factor under study. For level 5, the following interpretation holds: 56.7% of the critical TDD scenarios are due to the factor exposure, and the rest of the risk is due to the presence of other factors. For the particular case of exposure level 3, the factor L4 is protective against 73.1% of the occurrences of critical TDD scenarios.

Figure 7. Exposed attributable risk vs. exposure levels. Source: The authors.

According to Figs. 5 and 6, the impact depends on the level of exposure to the risk factor; it therefore depends on the current level of the disturbing load L4. Due to the presence of harmonics induced by other loads, the risk factor could interact with them, and the result of this interaction is the TDD level on the main feeder. An example of this is level 4, wherein the interaction between the risk factor and the other disturbances decreases the risk of critical TDD levels on the feeder; in this case, the interaction with L4 acts as a protective factor. Moreover, in cases 5 to 12, the interaction of L4 increases the risk of high TDD levels; thus, L4 acts as a risk factor and the higher the current, the greater the risk of critical TDD scenarios.

4.3. Prevention of new cases

In order to prevent or reduce the risk of high TDD levels, two possibilities are proposed. First, the exposure to the risk factor could be reduced; in this case, scenarios 5 to 12, where the risk of high TDD levels increases significantly, need to be avoided. Another option is to connect loads whose interaction with the present loads decreases the risk of TDD values above the desired limits. This methodology can be extended to analyze complex load interactions. From this perspective, epidemiology offers tools to verify the interaction between variables and to evaluate complex power quality problems with a new approach.

5. Conclusions

A methodology based on epidemiological analysis for assessing risk factors and incidence ratios of harmonic distortion in a distribution network was proposed. The current harmonic emission risks at the PCC due to the connection of disturbing loads were analyzed. Multiple load connection scenarios were simulated using Monte Carlo algorithms. From the simulation results, potential risk factors for critical harmonic indicators were identified, and the load connection scenarios were classified into exposed and non-exposed to these risk factors. Finally, the incidence ratio of harmonics, the attributable risk, the relative risk and the exposed attributable risk were calculated, allowing the identification of critical load connection scenarios. The results show an interesting application of epidemiological methods in characterizing the vulnerability of the grid under harmonic pollution, and the proposed methodology may easily be extended to other types of power quality disturbances.

Acknowledgements

The authors would like to thank COLCIENCIAS, CODENSA S.A. E.S.P., the Universidad Nacional de Colombia and the PAAS-UN Research Group for the support given during the execution of the research project SILICE III (Smart Cities: Design of a Smart Microgrid).

References

[1] Su, H.J., Chang, G.W., Ranade, S. and Lu, H.J., Modeling and simulation of an AC microgrid for harmonics study, 2012 IEEE 15th International Conference on Harmonics and Quality of Power (ICHQP), pp. 668-672, 17-20 June, 2012.
[2] Mikkili, S. and Panda, A.K., Mitigation of harmonics using fuzzy logic controlled shunt active power filter with different membership functions by instantaneous power theory, Students Conference on Engineering and Systems, pp. 1-6, 2012. DOI: 10.1109/sces.2012.6199103
[3] Huang, X., Nie, P. and Gong, H., A new assessment method of customer harmonic emission level, Asia-Pacific Power and Energy Engineering Conference, pp. 1-5, 2010. DOI: 10.1109/appeec.2010.5449329
[4] Siahkali, H., Power quality indexes for continue and discrete disturbances in a distribution area, IEEE 2nd International Power and Energy Conference, pp. 678-683, 2008. DOI: 10.1109/pecon.2008.4762561
[5] Chung, I.Y., Won, D.J., Kim, J.M., Ahn, S.J., Moon, S.I., Seo, J.C. and Choe, J.W., Development of power quality diagnosis system for power quality improvement, IEEE Power Engineering Society General Meeting, 2, pp. 1256-1261, 2003.
[6] Mertens, E., Dias, L., Fernandes, E., Bonatto, B., Abreu, J. and Arango, H., Evaluation and trends of power quality indices in distribution system, 9th International Conference on Electrical Power Quality and Utilization (EPQU 2007), pp. 1-6, 2007. DOI: 10.1109/epqu.2007.4424212
[7] Jabbar, R. and Al-Dabbagh, M., Impact of compact fluorescent lamp on power quality, Power Engineering Transactions, pp. 1-5, 2008.
[8] Matvoz, D., Impact of compact fluorescent lamps on the electric power network, Harmonics and Quality of Power, pp. 1-6, 2008. DOI: 10.1109/ichqp.2008.4668864
[9] Putrus, G.A., Suwanapingkarl, P., Johnston, D., Bentley, E.C. and Narayana, M., Impact of electric vehicles on power distribution networks, IEEE Vehicle Power and Propulsion Conference, pp. 827-831, 2009. DOI: 10.1109/vppc.2009.5289760
[10] Richardson, P. and Flynn, D., Impact assessment of varying penetrations of electric vehicles on low voltage distribution systems, IEEE Power and Energy Society, pp. 1-6, 2010. DOI: 10.1109/pes.2010.5589940
[11] Britton, T., Stochastic epidemic models: A survey, Mathematical Biosciences, Elsevier, Department of Mathematics, Stockholm University, SE-10691 Stockholm, Sweden, pp. 24-35, 2010.
[12] Altowaijri, S., Mahmoud, R. and Williams, J., A quantitative model of grid systems performance in healthcare organizations, 2010 International Conference on Intelligent Systems, Modelling and Simulation, pp. 431-436, 2010. DOI: 10.1109/ISMS.2010.84
[13] Corley, C.D. and Mikler, A.R., A computational framework to study public health epidemiology, International Joint Conference on Bioinformatics, Systems Biology and Intelligent Computing, pp. 360-363, 2009. DOI: 10.1109/ijcbs.2009.83
[14] Liu, Z. and Lee, D., Coping with instant messaging worms, statistical modeling and analysis, 15th IEEE Workshop on Local & Metropolitan Area Networks, pp. 194-199, 2007. DOI: 10.1109/lanman.2007.4295998
[15] Hashim, F., Munasinghe, K.S. and Jamalipour, A., Biologically inspired anomaly detection and security control frameworks for complex heterogeneous networks, IEEE Transactions on Network and Service Management, pp. 268-281, 2010. DOI: 10.1109/tnsm.2010.1012.0360
[16] Chao-Sheng, F., Zhi-Guang, Q. and Ding, Y., Modeling passive worm propagation in mobile P2P networks, International Conference on Communications, Circuits and Systems (ICCCAS), pp. 241-244, 2010. DOI: 10.1109/ICCCAS.2010.5582004
[17] Bowei, S., The spread of malware on the WiFi network: Epidemiology model and behaviour evaluation, 2009 1st International Conference on Information Science and Engineering (ICISE), pp. 1916-1918, 2009.
[18] Woodward, M., Epidemiology, study design and data analysis, 2nd ed., Boca Raton, Chapman & Hall/CRC, 699 P., 2005.
[19] Norma Técnica Colombiana NTC 5001, Calidad de la potencia eléctrica. Límites y metodología de evaluación en punto de conexión común, ICONTEC, 2008.
[20] IEEE Recommended practice and requirements for harmonic control in electric power systems, IEEE Std 519-2014 (Revision of IEEE Std 519-1992), pp. 1-29, 2014.

L.E. Gallego, received his BSc. in 1999, MSc. in 2002 and PhD in 2008 from the Universidad Nacional de Colombia, Bogotá, Colombia. He has been a researcher in the Research Program of Acquisition and Analysis of Electromagnetic Signals of the Universidad Nacional de Colombia (PAAS-UN) since 2000, working on research projects mainly related to power quality, lightning protection and power markets. He is involved in teaching activities related to power quality and computational intelligence. His research interests include power quality analysis, power markets and computational intelligence applied to power systems modelling.

A. Pavas, received his BSc. in 2003 and MSc. in 2005 in Electrical Engineering from the Universidad Nacional de Colombia and finished his PhD in Electrical Engineering at the same institution. He has been a researcher in the Program of Acquisition and Analysis of Electromagnetic Signals of the Universidad Nacional de Colombia (PAAS-UN Group) since 2001, where he has proposed and executed research projects related to power quality and lightning protection. He is a lecturer at the Universidad Nacional de Colombia in the areas of electric machinery and power quality. His research interests include power quality, electromagnetic compatibility and probabilistic aspects of power quality.

M.F. Romero, received his BSc. in 2006 and MSc. in 2011 in Electrical Engineering from the Universidad Nacional de Colombia, Bogotá, Colombia. He is currently pursuing a PhD in Electrical Engineering at the same institution. He has been a researcher in the Program of Acquisition and Analysis of Electromagnetic Signals (PAAS) at the Universidad Nacional de Colombia since 2005. His research interests include power quality, power systems analysis and distribution network design.




Generalities about design and operation of microgrids

Juan Manuel Rey-López a, Pedro Pablo Vergara-Barrios b, Germán Alfonso Osma-Pinto c & Gabriel Ordóñez-Plata d

a Esc. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. juanmrey@uis.edu.co
b Esc. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. pvergarabarrios@gmail.com
c Esc. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. german.osma@gmail.com
d Esc. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. gaby@uis.edu.co

Received: March 23rd, 2014. Received in revised form: February 17th, 2015. Accepted: July 14th, 2015.

Abstract
The need for new generation systems has motivated the development of microgrids. This new concept may provide significant benefits to transmission and distribution networks, such as loss reduction and a higher degree of efficiency and reliability. This paper presents generalities about microgrids, including their general structure and different topologies; an original methodology to facilitate their design and evaluation is also proposed. Finally, the design of the microgrid located at the Parque Tecnológico de Guatiguará of the Universidad Industrial de Santander is analyzed, including an operation analysis for different load and generation stages, the performance of the storage systems, the interaction with the grid and an energy balance for the whole system.
Keywords: energy design; energy implementation; energy simulation; microgrids; photovoltaic and wind system.

Generalidades sobre diseño y operación de micro-redes Resumen La necesidad de nuevos sistemas de generación ha motivado el desarrollo de las micro-redes. Este nuevo concepto traerá consigo significantes beneficios a las redes de transmisión y distribución como reducción de pérdidas, mayor grado de eficiencia y confiabilidad. Este artículo presenta generalidades sobre las micro-redes, incluyendo su estructura general y diferentes topologías; además se propone una metodología original para facilitar su diseño y evaluación. Finalmente, se analiza el diseño de la micro-red localizada en el Parque Tecnológico de Guatiguará de la Universidad Industrial de Santander, incluyendo un análisis de operación para diferentes escenarios de operación y carga, la operación del sistema de almacenamiento, la interacción con la red eléctrica y un balance energético para todo el sistema. Palabras clave: diseño energético; implementación energética; simulación energética; micro-redes, sistema fotovoltaico y eólico.

1. Introduction

The electric energy generation industry is headed towards reducing its dependence on fossil fuels through the integration of technologies that are friendlier to the environment. This is due, in part, to the integration of distributed systems based on renewable energy sources. This strategy seeks to address new requirements of generation, operation and energy supply, and motivates the modification of the current structure of the transmission and distribution system, in order to change it from a centralized to a decentralized structure [1]. A particular case of these systems is the smart electrical microgrid, where a consumer with in situ generation can also act as a small-scale generator thanks to bidirectional power flow.

Microgrids are characterized by a scheme of electric energy supply based on multiple energy sources, known as hybrid generation. The selection of the energy generation systems depends on the geographical and climatic conditions and the availability of conventional energy sources at the site, as well as on the overall financial characteristics of the project. Their operation is based on managing the energy supply by monitoring demand and generation, which allows: the mitigation of the environmental impact by prioritizing renewable over conventional generation; the reduction of the energy consumption peaks drawn from the grid by making use of local generation; and the increase of the reliability of the energy supply by avoiding the effect of interruptions in transmission grids [2], among others.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 109-119. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48586



For the design of an electric microgrid, information about the demand to be supplied and the energy potential for in situ generation is required. The system configuration is then set, from which the dimensioning of the components and electrical installations is made, and finally the energy and financial analyses can be carried out. Unfortunately, the development of the steps mentioned above can be confusing, since there is no sequential approach that takes into detailed account the various guidelines for the proper design of a microgrid. For this reason, the aim of this work is to meet this need. To achieve it, the paper begins with the generalities and types of microgrids. Next, the proposed design methodology and additional technical considerations are presented. Based on this, the operation of a microgrid to be located in university facilities is shown, in order to facilitate the understanding of microgrid operation, for which various generation and load scenarios were considered. Finally, the conclusions of the work are presented.

Figure 1. General structure of a microgrid. Source: The authors.

2. General structure of a microgrid

A microgrid is defined as a system consisting of generation sources, storage equipment and electrically connected loads, intended to meet a determined energy demand. It can operate connected to the main grid at medium or low voltage, or isolated from it [2]. Energy sources in a microgrid can be renewable (e.g. photovoltaic, wind, thermal and geothermal generation, biomass, tidal and cogeneration), conventional in situ (e.g. diesel thermal plants) or the electric grid itself [3]. The current trend focuses on the use of renewable energies in the majority of cases, leaving conventional generation for exceptional cases, as [4] details. Fig. 1 shows the general structure of a microgrid, formed by different energy generation systems (conventional and unconventional), an energy storage system, and power management units (e.g. converter, grid-tied inverter, pure inverter, regulator [5]) for the system operation and the possible connection to the grid. The most advanced microgrids have a power management strategy (PMS) and an energy management strategy (EMS) [6], which are described in the information flow diagram (Fig. 2). These management functions require a communication infrastructure consisting of smart metering equipment [4,7]. The information is analyzed in real time to determine the load, prioritize the renewable generation, calculate the power flow, make connections to the grid, operate conventional generators, and store energy and use it according to the operation criteria. Although storage systems increase the investment of the project, their implementation ensures a continuous supply to the load.

Figure 2. Information flow and functions of a real time PMS/EMS for a microgrid. Source: [6].
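As a rough illustration of the kind of real-time decision a PMS/EMS makes, the sketch below implements one dispatch step with an assumed priority order (renewables, then battery, then grid); the function, its names and its thresholds are illustrative, not the strategy of [6].

```python
def dispatch_step(load_kw, renewable_kw, soc, soc_min=0.3, soc_max=1.0):
    """One illustrative EMS step: prioritize renewables, then battery, then grid.
    Returns (battery_kw, grid_kw); positive battery_kw discharges the bank."""
    balance = load_kw - renewable_kw
    if balance <= 0:
        # Surplus renewable energy: charge the battery if it has headroom,
        # otherwise export the remainder to the grid (negative grid_kw).
        charge = -balance if soc < soc_max else 0.0
        return -charge, balance + charge
    # Deficit: discharge the battery down to its reserve, buy the rest.
    from_battery = balance if soc > soc_min else 0.0
    return from_battery, balance - from_battery

print(dispatch_step(load_kw=3.0, renewable_kw=1.2, soc=0.8))  # -> (1.8, 0.0)
```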

Table 1. Microgrid configurations.
Criteria | Configurations
Bus | DC bus; AC bus; mixed bus
Interaction with the grid | Remote; complement; support
Back-up energy system | Grid; emergency plant; batteries
Types of energy sources | Renewables; conventional; hybrid
Number of energy sources | Individual; hybrid generation
Types of loads | DC loads; AC loads; AC and DC loads
Source: The authors.

3. Microgrid topologies

To set the proper design of a microgrid, it is necessary to determine its configuration according to the needs to be satisfied.


Microgrids can be designed according to criteria such as the connection with the grid, the support provided, the types and number of energy sources, and the load. The most common configurations are shown in Table 1. The first two classes are described below.

3.1. According to the type of bus

Microgrids can be classified according to the type of bus through which the energy exchange happens: direct current (DC), alternating current (AC) or mixed [2], which depends on the load. Fig. 3a) shows the basic structure of a DC-bus microgrid, in which an AC source requires a rectifier to supply power. Similarly, Fig. 3b) shows an AC-bus microgrid, in which a DC source requires an inverter to supply the AC loads. Both types of microgrids can be connected to the grid, taking into account the signal conversion required to supply the loads. Fig. 4 shows examples of mixed-bus microgrids, which describe configurations for AC and DC loads.

Figure 3. Microgrid topology by bus kind: a) basic structure of a DC bus; b) basic structure of an AC bus. Source: The authors.

Figure 4. Mixed-bus microgrid: (a) without grid connection; (b) with grid connection. Source: The authors.

3.2. According to the level of interaction with the grid

The interaction of a microgrid with the main grid and the loads allows microgrids to be classified into three types: remote, complementary and support microgrids [8]. Remote microgrids are those located in distant areas or islands where interconnection to the main grid is too expensive; natural gas, diesel, biomass or the renewable resources available in the area are often used. The grid-connected microgrids for particular or complementary applications are those used to meet a given load and which, through their connection to the grid, increase the reliability of the energy supply. Support microgrids are also grid-connected, although they generate a considerable percentage of the energy; their installation is ideal in areas with limited availability of fossil fuels.

4. Methodology for the design of microgrids

The design of a microgrid is complex due to factors related to the renewable and conventional generation, the system configuration, the dimensioning and selection of components, and the analysis of operation. Most of the guidelines required for the design of the generation systems, the energy management and the integration with the grid associated with a microgrid are scattered across different sources, while the technical requirements of the electrical installations are found in codes, standards and/or regulations of general application [9,10]. Additionally, the financial viability analysis of the microgrid must be provided by the designer [9,10]. Therefore, it was considered necessary to establish a sequential approach as a tool for the assisted design of microgrids that explains its most important considerations. This methodology consists of five phases, presented in Fig. 5: Phase A, energy potential (solar potential, wind potential, availability of energy grids); Phase B, generalities of the system (energy needs, system configuration, design restrictions, dimensioning of the system); Phase C, equipment selection (generation components, conditioning and management units, energy storage, monitoring and communication components); Phase D, electrical installations and communications (conductors and ducts, protections, earthing system and boards, graphic representation); and Phase E, system behavior (energy analysis, financial analysis). This methodological approach is based on the design experience of three (3) microgrids to be implemented on the central campus and at the Parque Tecnológico Guatiguará (PTG) of the Universidad Industrial de Santander, based on research carried out by Osma [9], and Rey and Vergara [10].

Figure 5. Structure of the proposed methodology for the design of microgrids. Source: The authors.


4.1. Energy potential

A microgrid can incorporate both renewable energy sources (e.g. photovoltaic, wind, biomass) and conventional sources (e.g. diesel fuel, gasoline), and can also have support from the electricity and natural gas networks.

4.1.1. Solar potential

The historical global solar radiation must be quantified, either by consulting solar maps or by monitoring at the site. These data allow typical and extreme operating conditions of the photovoltaic panels to be analyzed, as well as the potential benefit of static or dynamic solar tracking.

4.1.2. Wind potential

The wind potential must be established from monitoring the speed and direction of the wind, and is expressed using a wind rose and a histogram of the frequency of the wind speed. Data analysis allows the energy potential per square meter to be established, along with the utility that this resource could offer, either for energy generation or for natural ventilation.

4.1.3. Availability of other energy sources

The access to biomass, natural gas, diesel fuel and gasoline, among others, must be studied. It is important to note that these sources involve thermal processes and, consequently, generate greenhouse gases.

4.2. Generalities of the microgrid

In this phase, the dimensioning of the system is made based on the study of the energy demand, the required microgrid configuration and the design restrictions.

4.2.1. Energy needs

The quantification of the demand is made based on load monitoring and the desired technical characteristics of the microgrid (e.g. autonomy and reliability).

4.2.2. Microgrid configuration

The configuration of a microgrid is defined primarily according to the energy needs and the energy sources available on site.

4.2.3. Design restrictions

The maximum installed capacity (CIMax) of each generation system can be restricted according to the energy demand on site, the initial investment and the area:

$$C_{IMax} = \min\left(C_{ID},\; C_{III},\; C_{IA}\right) \quad (1)$$

Based on the energy needs to be met, the installed capacity for the energy demand (CID) of the generation systems is established. The installed capacity according to the initial investment (CIII) is calculated based on the value per Watt installed; Table 2 presents, as an example, the associated costs in Colombia for three types of systems. Regarding the boundary area of the project, an analysis should be undertaken per generation system. For example, according to [9], the maximum installed capacity due to the available area (CIA) for a photovoltaic system can be expressed by (2), where A is the area in square meters:

$$C_{IA} = 140 \cdot A \quad (2)$$

Table 2. Restriction by area and initial investment.
System | Cost per Watt installed
PV | From 7.0 to 10.0 dollars
Wind | From 5.0 to 8.0 dollars
Cogeneration | From 2.0 to 4.0 dollars
Source: The authors.
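A small sketch of the restriction (1), using the cost ranges of Table 2; the demand capacity, budget and area inputs are hypothetical:

```python
def max_installed_capacity_pv(c_id_w, budget_usd, area_m2, usd_per_w=8.5):
    """Eq. (1): PV capacity bounded by demand (C_ID), budget (C_III) and
    available area (C_IA, eq. 2: 140 W per square meter of available area)."""
    c_iii = budget_usd / usd_per_w   # Table 2: PV costs 7.0-10.0 USD per W installed
    c_ia = 140.0 * area_m2           # eq. (2)
    return min(c_id_w, c_iii, c_ia)

# Example: 5 kW demand, a USD 30,000 budget and 30 m2 of roof.
print(max_installed_capacity_pv(5000, 30000, 30))  # min(5000, ~3529, 4200) ≈ 3529 W
```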

4.2.4. Dimensioning of the system

The next step is the dimensioning of the generation systems, as shown in [9,10], thereby establishing a block diagram of the microgrid that associates the electrical characteristics (power, voltage and current) of each component and the system flows. For batteries, the desired autonomy time is additionally taken into account. Finally, the normal operating values and limits of the system and of each component are recorded in the block diagram, in order to provide the necessary information for the selection of the equipment to be used.

4.3. Equipment selection

Below are technical considerations for the equipment selection; the financial factor can also be included in the analysis.

4.3.1. Generation components

Photovoltaic panels should be selected according to the operating voltage level settings. They will be of the on-grid type for a grid-tied system, or of the off-grid type in the case of stand-alone systems. On the other hand, if there is no area restriction, one should look for photovoltaic panels whose price per installed Watt is less than USD 2.5/W, which is typical for panels with an efficiency of approximately 14%; the panels are responsible for about 60% of the cost of a generation system. The selection of micro wind turbines should be based on an analysis of the power generated, using the power curve, the characteristics of the turbine and the site wind resource; in addition, the noise level of the units should be considered. The wind turbines for micro-generation (<3 kW) available on the market are equipped with a power conditioning stage, so they only require a simple connection to an operating bus of 24 VDC or 48 VDC. In the case of small-scale cogeneration, Capstone units are recognized as the best choice; regarding conventional thermal generation, plants of high efficiency and low noise should be sought.



4.3.2. Conditioning and management units

These units allow the successful use of the energy generated, the adjustment of the voltage level, and the interaction with other units and with the load. Their selection must be made according to the configuration of the microgrid and the power generation sources. Some of these units are described in Table 3; the most relevant parameters to be evaluated can be found in [9,10].

Table 3. Conditioning units according to the type of system.
Unit | Function
Grid-tied DC/AC inverter | Synchronizes with and injects energy into the grid. Optimizes the photovoltaic panels' generation.
Off-grid DC/AC inverter | Supplies AC power with a pure sinusoidal signal. Optimizes the photovoltaic panels' generation.
Regulator (charge controller or converter) | Manages the operation of photovoltaic panels, battery and load. Optimizes the photovoltaic panels' generation.
DC/DC converter | Performs the reduction or elevation of the DC voltage.
Flexpower One / Flexpower Two | Allow load supply by three sources: photovoltaic panels, batteries and a support system (electrical grid or alternate generator). Inject the surplus energy into the main grid.
NPS 1000 | Allows load supply by three sources: photovoltaic panels, batteries and the main grid. Requires an external regulator.
STATYS-100 | Static transfer system that guarantees uninterrupted supply to the load from the management of the distributed energy sources and the grid.
Source: The authors.

4.3.3. Energy storage (batteries)

The storage system configuration depends on the nominal system voltage (Vsyst), the battery voltage and the nominal operating current required. The capacity of the battery bank (Cbb) is set as the higher of two requirements: the maximum charge current (CBcharge) and the maximum discharge current (CBdischarge). Because the charging process must be done at a rate no greater than 20% of the nominal battery capacity, CBcharge can be calculated according to [9] as expressed in (3), i.e., the bank capacity must be at least five times the maximum charging current:

$$C_{B_{charge}} = \frac{I_{charge,max}}{0.2} = 5\,I_{charge,max} \quad (3)$$

The nominal discharge capacity is determined according to [9] from (4), and considers a compensation factor for quick discharge (FCDR), the depth of discharge (PD) and the system efficiency (η).

4.3.4. Monitoring and communication components

These components are intended to facilitate the tracking and control of the operation of the system, as well as to provide sufficient data for its energy analysis. Some management units have these functions integrated, which reduces the required infrastructure. It is recommended that the information be provided in an open format, to facilitate the development of analysis and visualization applications.

4.4. Electrical and complementary installations

This phase includes the dimensioning of the conductors, ducts, protections and earthing system.

4.4.1. Conductors and ducts

In Colombia, what is indicated in NTC 2050 should be applied, particularly Sections 220 and 310. For the DC areas of the microgrid, the indications of Section 690 can be used; this requires the use of unipolar cables of types SE, UF and USE. The use of flexible cable is recommended when mobile parts exist, such as photovoltaic trackers, in order to withstand the mechanical stress (see Section 400 of NTC 2050). In terms of voltage regulation between the feeder and the final load, literal 210-19d) must be complied with.

4.4.2. Protections

A microgrid must have different means to safeguard the lives of individuals and the integrity of its components, as presented in Table 4.

Table 4. Protections in a microgrid.
Protection type | Device
Against overvoltages | Voltage limiter (NTC 2050, Section 280); AC voltage limiter (NTC 2050, Section 280)
Against overcurrents and disconnect means | Fuses (Arts. 690-1, 690-8, 690-9); automatic breakers (Subsection 690-C); manual breakers (Section 240)
Against inverse current flux | Diodes (Art. 690-2)
Source: The authors.

4.4.3. Earthing system and boards

The earthing system should be in accordance with NTC 2050 (Section 250 and Subsection 690-E). The system boards must comply with Sections 370, 373 and 384 and Article 110-16 of NTC 2050.

4.4.4. Graphic representation

This consists of block diagrams of the microgrid, and it aims to describe the system components and their technical characteristics. Additionally, it involves the process of elaborating plant electrical drawings, details and line diagrams.
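Tying back to the storage sizing of Section 4.3.3, the sketch below contrasts the charge-side rule of (3) with a schematic discharge-side estimate; since the exact form of (4) is given in [9], the discharge expression and all factor values here (FCDR, PD, η, autonomy) are assumptions for illustration.

```python
def bank_capacity_ah(i_charge_max_a, e_day_wh, v_syst=48.0, autonomy_days=1,
                     fcdr=1.1, pd=0.6, eta=0.85):
    """Battery bank capacity: the larger of the charging requirement (eq. 3,
    charge rate limited to 20% of nominal capacity) and a schematic
    discharge-side estimate combining FCDR, PD and efficiency (cf. eq. 4)."""
    cb_charge = i_charge_max_a / 0.2                                  # eq. (3)
    cb_discharge = (e_day_wh * autonomy_days * fcdr) / (v_syst * pd * eta)
    return max(cb_charge, cb_discharge)

print(bank_capacity_ah(i_charge_max_a=30.0, e_day_wh=6000.0))  # ≈ 270 Ah
```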



4.5. System behavior

The system behavior can be described by examining the energy analysis and the financial analysis.

4.5.1. Energy analysis

The energy analysis refers to understanding what, how, where and how much energy is consumed. The quantification must be made from the modeling of the microgrid as a system based on a graph, described in interrelated blocks (components) [9]. The modeling can be exhaustive, when the mathematical formulation is based on the detailed operation of each component, or simplified, when efficiency ratios between output and input variables are considered. Subsequently, with monitored operation data, the energy analysis can be made directly.

4.5.2. Financial analysis

If the financial viability of the microgrid is of interest, it can be analyzed as an investment project, which requires the projection of the cash flow during its lifetime and the determination of indicators such as the net present value (NPV) and the internal rate of return (IRR).

5. Complementary technical considerations

The proposed methodology considers the design of the microgrid under general conditions; if the designer wishes to establish a system at a more technical level, the following issues need to be addressed.

5.1. Solar tracking

Solar trackers are devices that serve to increase the generation of a photovoltaic system, allowing a better uptake of solar radiation over the same area [11]. They can be static or dynamic; the latter can operate passively (absence of electrical and/or mechanical elements) or actively, for example with servomotors [12-14]. Static solar tracking is maximized when considering an inclination close to the latitude and an orientation opposite to its direction. For Colombia, this increment is less than 2% due to the country's proximity to the equator [9]. With respect to dynamic tracking, the solar radiation increment for Colombia is between 20% and 30%.

5.2. Temperature effect on photovoltaic generation

The operation of photovoltaic panels in a tropical environment is significantly different from what is described in the technical specification sheets, due to the increased operating temperature of the semiconductor material [9]. This produces variations in the current and voltage operating values of the panel, to the point of reducing the power generation by up to 15%, mainly because the voltage decreases. Due to such variations, the system operating values and the dimensioning of the components should be adjusted. The effect of temperature on the generation of a specific PV panel can be analyzed as stated in [9,10].

5.3. Limitations of the DC current supply

Microgrids with DC loads involve a higher level of efficiency due to the omission of the inversion and rectification processes. However, the inflexibility in the voltage ranges of some devices and their lower performance compared to their AC counterparts mean that, at the moment, their use does not represent lower energy losses [9].

5.4. Wind potential for low wind speeds

It is recommended to avoid the use of micro wind turbines when the wind potential is less than 40 kWh/m² per year (typical of places where the wind speed is below 2 m/s), since most wind turbines do not operate in such conditions [9].

6. Considerations for microgrid operation

In order to describe the possible energy behavior of a microgrid, an analysis is undertaken for the case of the microgrid of the Energy Integration Laboratory (EIL) located at the PTG [10] in Piedecuesta, Santander. The general structure of this microgrid can be seen in Fig. 6, which also shows the nominal values of the equipment for the proposed design. This experimental microgrid aims to appropriate technology and to study operation scenarios to support teacher training and future research, which is why a battery bank has been considered despite the stable connection to the grid. Below are considerations on the modeling of the microgrid components in order to establish its performance in real time, using MATLAB® to do so.

Figure 6. Structure and nominal values of the PTG microgrid. Source: The authors.

6.1. Behavior of the solar radiation and wind

The weather station installed has allowed it to be concluded that the average daily solar radiation at the PTG is 4.27 kWh/m². The average peak value is 678 W/m² and the maximum peak value is 994 W/m². Furthermore, it is observed that the wind potential is low, since almost 60% of the time the speed is between 0 and 1 m/s, while only 20% of the time does the speed exceed 3 m/s, the values with greater potential for power generation.

6.2. Photovoltaic array and MPPT regulator model

The PV panel model considers radiation and temperature as inputs to establish the voltage and current values of the panel operation. In Fig. 7 one can see the characteristic curves of the selected panel for different values of radiation. The maximum power value is obtained for a radiation of 1000 W/m² at 25 °C (STC conditions). The maximum operating condition for a panel, for a given solar radiation and ambient temperature, is calculated from (5), depending on the maximum power voltage given by (6) and the maximum power current given by (7) [16].

Figure 7. Characteristic operating curves for the 225 W panel: (a) current (A) vs. voltage (V); (b) power (W) vs. voltage (V). Source: The authors.

Table 5. Simulation parameters for the PV system. Thermal Voltage * (V) Number of cell per 60 panel Open circuit voltage 37.3 (V) Short circuit voltage 8.13 (A) Series resistance of a * (Ω) cell Current at SCT 7.59 (A) conditions Insolation ** ( ⁄ ) Thermal Coefficient at * (1/°C) ∝ maximum output current Reference temperature * (°C). Cell temperature * (°C). Cell temperature at 47 (°C). SCT Temperature ** (°C). * Calculated by the model given by [17] using G and Ta as an input data. **Input data, measured in the PTG. Source: The authors.

P_mp = V_mp · I_mp    (5)

where the maximum power voltage V_mp (6) is obtained from the panel model of [17] as a function of the thermal voltage, the series resistance and the cell temperature, and the maximum power current (7) scales with the insolation and the cell temperature:

I_mp = I_STC · (G / G_STC) · (1 + α (T_c - T_ref))    (7)

with G the measured insolation (W/m2), I_STC the current at STC conditions, α the thermal coefficient of the maximum output current, T_c the cell temperature and T_ref the reference temperature (Table 5).

Table 5 summarizes the values used for the simulation of the photovoltaic system. As the photovoltaic system of the microgrid consists of three panels in parallel, the output power is three times the value of P_mp, as given by (8):

P_PV = 3 P_mp    (8)

To ensure the operation of the photovoltaic system at maximum power, an MPPT regulator is connected [17]. The output power of the regulator is given by (9):

P_reg = η_reg · P_PV    (9)

where P_PV corresponds to the power of the array (W), and η_reg corresponds to the efficiency of the equipment, specified by the manufacturer at 99%. It is important to note that this is valid as long as the input voltage of the array is within the operating voltage range of the regulator and the output current does not exceed the maximum limit specified for the equipment's operation.

6.3. Wind turbine model

The wind turbine model considers the instantaneous wind speed [7] as input data. The power output of the wind turbine is given by (10):

P_WT = 0, for υ < υ_i or υ > υ_c
P_WT = P_n (υ - υ_i) / (υ_n - υ_i), for υ_i ≤ υ < υ_n
P_WT = P_n, for υ_n ≤ υ ≤ υ_c    (10)

where υ is the wind speed (m/s); υ_i is the initial (cut-in) speed, equal to 0.5 m/s; υ_c is the cutting (cut-out) speed, equal to 5 m/s; υ_n is the nominal speed, equal to 2.5 m/s; and P_n is the nominal power of the equipment (W), equal to 1 000 W, according to the manufacturer's data.

6.4. Inverter model

Under the assumptions made for the simulation, the output power of the inverter is modeled according to [15] as shown in (11), based on the input power P_in and the efficiency η as a function of the normalized input power, or operating point, P_in / P_n,inv, where P_n,inv is the nominal power of the equipment, which corresponds to 3.6 kW:

P_inv = η(P_in / P_n,inv) · P_in    (11)
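A compact way to implement the piecewise curve in (10) is sketched below; the function and parameter names are illustrative, with the cut-in, nominal and cut-out speeds taken from the text, and the linear ramp between cut-in and nominal speed being an assumption consistent with the shape of (10).

```python
# Minimal sketch of the piecewise wind turbine curve of eq. (10), using the
# speeds quoted in the text (cut-in 0.5 m/s, nominal 2.5 m/s, cut-out 5 m/s,
# nominal power 1000 W).
def wind_power(v, v_in=0.5, v_n=2.5, v_out=5.0, p_n=1000.0):
    if v < v_in or v > v_out:
        return 0.0                              # turbine stopped
    if v < v_n:
        return p_n * (v - v_in) / (v_n - v_in)  # ramp region (assumed linear)
    return p_n                                  # rated region
```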

6.5. Battery model

To describe the charging and discharging processes of the batteries, the model presented in [18] and [19] is considered, which is set by the expressions for the charge (12), the nominal current (13), the capacity in a given case (14) and the state of charge (15):

Q = I · Δt    (12)

I_nom = C_nom / R_nom    (13)

C = 1.67 C_nom / (1 + 0.67 (|I| / I_nom)^0.9) · (1 + 0.005 ΔT)    (14)

SOC = Q / C    (15)

where Q is the quantity of battery charge (Ah), I is the current of the battery (A), Δt is the time over which the battery acquires the charge (h), I_nom is the nominal current (A), C_nom is the nominal storage capacity (Ah), R_nom is the nominal regime of charge and discharge of the battery (h), ΔT is the accumulator heating (presumed identical for all elements) with respect to the ambient temperature (°C), and SOC is the state of charge. Three states are considered: charging, saturation and discharging, according to the ranges described in Table 6, where V_ec is the end-of-charge voltage (V), given by (16):

V_ec = [2.45 + 2.011 ln(1 + |I| / I_nom)] · (1 - 0.002 ΔT)    (16)

Table 6. Arrangements and functioning of batteries.
Regime | Conditions
Charge | I > 0 and SOC < 1
Saturation | I > 0 and SOC = 1
Discharge | I < 0
Source: The authors.

The expressions describing the voltage of the batteries during the different intervals are shown in (17), (18) and (19). The model of the battery voltage for the charging regime is:

V_ch = [2 + 0.16 SOC] + (|I| / I_nom) · [6 / (1 + |I|^0.86) + 0.48 / (1 - SOC)^1.2 + 0.036] · (1 - 0.025 ΔT)    (17)

The model of the battery voltage for the saturation regime is given in (18), and the model of the battery voltage for the discharge regime is:

V_dc = [1.965 + 0.12 SOC] - (|I| / I_nom) · [4 / (1 + |I|^1.3) + 0.27 / SOC^1.5 + 0.02] · (1 - 0.007 ΔT)    (19)

The model can generate a discontinuity in the battery voltage between the saturation and discharge zones. To ensure the continuity of the function, it is possible to linearize an interval that connects the discontinuous points of the function [20].

6.6. Charge controller

The main function of the charge controller is to manage the charging and discharging processes of the batteries, as well as to prevent the deep overcharges and overdischarges that limit their lifetime. This equipment can be modeled as in (20), with an efficiency equal to 97.5%, according to the manufacturer:

P_out = η_cc · P_in    (20)

6.7. Static transfer switch - STS

The main function of the STS is to allow the transfer between the generation of the microgrid and the public grid in order to meet the different load conditions. This transfer depends on the availability of the generation and storage systems, and allows an increase in the reliability of the power supply. The modeling can be done from (21), with an efficiency equal to 98.7%, according to the manufacturer:

P_out = η_STS · P_in    (21)
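To make the charge-tracking loop concrete, here is a minimal sketch of how (12)-(15) and the voltage expressions (17) and (19) can be stepped in time; it is not the authors' MATLAB implementation, and the capacity and regime values are illustrative assumptions.

```python
# Minimal sketch of the battery bookkeeping of eqs. (12)-(15) and the
# charge/discharge voltages (17)/(19). Parameters are illustrative;
# voltages are per cell, as in lead-acid models of this family.
def step_battery(q_ah, i_a, dt_h, c_nom=150.0, r_nom=10.0, d_temp=0.0):
    i_nom = c_nom / r_nom                                   # eq. (13)
    q_ah = max(0.0, q_ah + i_a * dt_h)                      # eq. (12)
    c = 1.67 * c_nom / (1 + 0.67 * (abs(i_a) / i_nom) ** 0.9) \
        * (1 + 0.005 * d_temp)                              # eq. (14)
    soc = min(q_ah / c, 1.0)                                # eq. (15)
    if i_a >= 0:                                            # charging, eq. (17)
        v = (2 + 0.16 * soc) + (abs(i_a) / i_nom) * (
            6 / (1 + abs(i_a) ** 0.86)
            + 0.48 / (1 - min(soc, 0.999)) ** 1.2
            + 0.036) * (1 - 0.025 * d_temp)
    else:                                                   # discharging, eq. (19)
        v = (1.965 + 0.12 * soc) - (abs(i_a) / i_nom) * (
            4 / (1 + abs(i_a) ** 1.3)
            + 0.27 / max(soc, 1e-3) ** 1.5
            + 0.02) * (1 - 0.007 * d_temp)
    return q_ah, soc, v
```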

6.8. Operation analysis

This analysis allows operation scenarios to be established and the power flows associated with them to be calculated, and with them the efficiency values, the charging limits of the batteries and the interaction with the grid.

Figure 8. Performance scenarios. Source: The authors.
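The scenarios of Fig. 8 (detailed at the start of Section 7) reduce to a simple priority rule: serve the load from in situ generation first, then from the batteries, and otherwise transfer to the grid via the STS. A minimal sketch of that rule follows; the names and the battery power limit are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dispatch logic behind the scenarios of Fig. 8:
# (a) generation > load, the surplus charges the batteries;
# (b) generation plus batteries cover the load;
# (c) otherwise the STS transfers the load to the grid.
def dispatch(p_gen, p_load, p_batt_max):
    if p_gen >= p_load:
        return "a", p_gen - p_load, 0.0   # surplus to batteries (W)
    deficit = p_load - p_gen
    if deficit <= p_batt_max:
        return "b", -deficit, 0.0         # batteries discharge
    return "c", 0.0, p_load               # grid supplies the load
```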




Figure 9. Efficiency of the PV array at the location of the microgrid. Source: The authors.

Figure 11. Battery SOC for different load values. Source: The authors.

Figure 10. Histogram of the power generation of the microgrid designed at the PTG. Source: The authors.

Fig. 8 shows the flow diagrams corresponding to three scenarios of the experimental microgrid: (a) the in situ generation of the microgrid is greater than the load; (b) the load must be served by in situ generation and batteries; and (c) the grid supplies the load because the generation in situ and from batteries is not enough.

7. Study results

Below, the simulation results on generation, storage and energy balance of the microgrid located at the PTG are presented.

7.1. Energy generation

The analysis of the solar resource allowed it to be determined that the photovoltaic generation is very favorable, since radiation values close to 1 000 W/m2 are reached during the day. It was also found that the wind potential of the PTG is not significant, as the average wind speed is in the order of 1 m/s; however, it is still possible to use micro wind generators. Fig. 9 shows the variation of the efficiency of the PV array. The maximum efficiency achieved is 13.37%, allowing a maximum generation of 668 W. Under these

conditions, it is estimated that the average daily generation of the array is around 2.60 kWh. It was found that the photovoltaic system injects energy into the system only when the solar radiation exceeds 15 W/m2, since the minimum operating voltage of the regulator is reached only under this condition. Because the PV generator only operates 12 hours a day, only the wind system is available during the remaining time. Fig. 10 shows the histogram of total renewable generation for the total hours measured as part of this study. It is observed that for 48.94% of the time, the power generated by the microgrid is negligible. This is because the photovoltaic system only operates for 12 hours a day, and during over 41.3% of the annual wind hours a speed of 0.5 m/s was not exceeded [10]. However, during the day it is possible to reach maximums of 1 340.2 W. To increase the generation capacity of the microgrid, it is possible to evaluate an expansion of the PV system or to use conventional generation systems that supply the load when the renewable generation systems cannot provide the required energy. For the proposed design of the microgrid at the EIL, a 6 kW diesel generator is included, which can operate when the non-conventional generation systems cannot provide the power required by the load. The energy analysis presented in this paper does not include this equipment.

7.2. Storage

With regard to the storage system, Fig. 11 shows the behavior of the batteries' SOC for constant loads of 100 W and 500 W. The charging process occurs because the excess power generated by the microgrid sources with respect to the load is stored in the batteries. They supply power to the load when the sources do not generate enough energy, and they store the surpluses that accumulate during each daily cycle. It is observed that the SOC reaches a value of 100% in all daily cycles for a load of 100 W, while in discharge processes it remains above 75%. For the load of 500 W, the SOC never reaches 100%, nor does it drop to the lowest allowed SOC value (60%). The latter occurs because the STS unit determines that




the combined supply of renewable generation and batteries is not sufficient to supply the load, and thus it transfers to the grid.

7.3. Energy balance

Fig. 12 shows the power values of the sources and the batteries of the microgrid for a period of 72 hours when the load has a constant value of 315 W. The generation behavior follows from the generation models as well as from the solar radiation and wind on site. The behavior of the power stored (negative) and delivered (positive) by the batteries is due to the excess or shortage of generated power. From the simulation it was found that for loads of less than 315 W the storage system is able to provide the energy required by the load, so the STS does not need to transfer the supply to the grid. This type of analysis allows the power provided by the grid to be estimated, as well as the conditions under which the microgrid is not capable of providing the required power to be identified. Fig. 13 shows the power behavior of the microgrid for a load of 500 W. Because the load is too high to allow the batteries to charge fully in all cycles, the STS commutes, allowing supply directly from the grid. If, within the design and operation conditions of the microgrid, the power consumed from the grid is to be minimized, a similar analysis makes it possible to determine the required generation

Figure 12. Energy analysis of the microgrid for 3 days with a 315 W load. Source: The authors.

capacity of the generation systems, the storage characteristics of the batteries and other operational aspects needed to satisfy this requirement.

Figure 13. Energy analysis of the microgrid for 3 days with a 500 W load. Source: The authors.

8. Conclusions

The study of microgrids can have an important impact if it focuses on solving problems such as the supply of non-interconnected zones, since microgrids can use the in situ energy potential and require a low investment in comparison with the infrastructure needed for interconnection with the grid. The design and implementation of pilot microgrid projects in universities favors technological appropriation in these applications, especially for the environmental conditions of their area of social influence.

In general, the design of a microgrid depends on the designer's expertise in the interpretation of information such as energy needs and the available energy potential, as well as on his or her technical command of topics such as renewable energy systems and energy management for such applications. The design methodology presented is a first effort to better establish the scheme design and dimensioning of microgrids. This methodological approach can be improved by implementing it in a computer-aided design tool for optimization, which would mean a lower financial cost and a better energetic use of the system, taking into consideration both energy and financial analyses. In addition, such a tool could automatically perform the dimensioning of the electrical installations.

Ad hoc applications for studying energy behavior, based on a modular approach and simplified modeling of the components of a microgrid, allow energy analyses to be performed for different load and generation scenarios, as was the case presented in this document. Facilitating and standardizing the analysis of microgrid energy behavior requires the development of applications that incorporate graphical interfaces to set the desired configuration and display results. As a preliminary step, the generation and load analysis scenarios should be defined, as well as the relevant indicators of the quality of a microgrid.

References

[1] Hamidi, V., Smith, K.S. and Wilson, R.C., Smart grid technology review within the transmission and distribution sector, in: 2010 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT Europe), 2010, pp. 1-8. DOI: 10.1109/ISGTEUROPE.2010.5638950
[2] Alfonso, J., Microredes, una alternativa micro para un mundo cada vez más macro, Energías Renovables Magazine, Madrid, España, pp. 18-22, 2009.
[3] Basak, P., Chowdhury, S., Chowdhury, S.P., Member, S. and Dey, S.H., Simulation of microgrid in the perspective of integration of distributed energy resources, 2011.
[4] Fossati, J.P., Revisión bibliográfica sobre microredes inteligentes, Memoria de trabajos de difusión científica y técnica, 9, pp. 13-20, 2011.
[5] Jamil, M., Hussain, B., Abu-Sara, M., Boltryk, R.J. and Sharkh, S.M., Microgrid power electronic converters: State of the art and future challenges, in: Universities Power Engineering Conference (UPEC), 2009. Proceedings of the 44th International, pp. 1-5, Sept. 2009.
[6] Katiraei, F., Iravani, R., Hatziargyriou, N. and Dimeas, A., Microgrids management, IEEE Power and Energy Magazine, 6 (3), pp. 54-65, 2008. DOI: 10.1109/MPE.2008.918702
[7] Meiqin, M., Shujuan, S. and Chang, L., Economic analysis of the microgrid with multi-energy and electric vehicles, in: 8th International Conference on Power Electronics - ECCE Asia, 2011, pp. 2067-2072.
[8] Ma, C. and Hou, Y., Classified overview of microgrids and development in China, in: IEEE International Conference EnergyTech 2012, pp. 1-6.
[9] Osma, G., Uso racional de la energía a partir del diseño de aplicaciones sostenibles en el Edificio Eléctrica II de la Universidad Industrial de Santander, M.S. Thesis, Electric, Electronic and Telecommunications School, Universidad Industrial de Santander, Colombia, 2011.
[10] Rey, J. and Vergara, P., Diseño de una microred de baja tensión para el Laboratorio de Integración Energética del Parque Tecnológico de Guatiguará, B.S. Thesis, Electric, Electronic and Telecommunications School, Universidad Industrial de Santander, Colombia, 2012.
[11] Rosiek, S. and Batlles, F.J., Integration of the solar thermal energy in the construction: Analysis of the solar-assisted air-conditioning system installed in CIESOL building, Renewable Energy, 34 (6), pp. 1423-1431, 2009. DOI: 10.1016/j.renene.2008.11.021
[12] Sungur, C., Multi-axes sun-tracking system with PLC control for photovoltaic panels in Turkey, Renewable Energy, 34 (4), pp. 1119-1125, 2009. DOI: 10.1016/j.renene.2008.06.020
[13] Sefa, İ., Demirtas, M. and Çolak, İ., Application of one-axis sun tracking system, Energy Conversion and Management, 50 (11), pp. 2709-2718, 2009. DOI: 10.1016/j.enconman.2009.06.018
[14] Alata, M., Alnimr, M. and Qaroush, Y., Developing a multipurpose sun tracking system using fuzzy control, Energy Conversion and Management, 46 (7-8), pp. 1229-1245, 2005. DOI: 10.1016/j.enconman.2004.06.013
[15] Biczel, P. and Michalski, L., Simulink models of power electronic converters for DC microgrid simulation, in: 2009 Compatibility and Power Electronics, pp. 161-165, 2009.
[16] Zerhouni, F.Z., Zerhouni, M.H., Benmessaoud, M.T., Stambouli, A.B. and Naouer, O.E.M., Proposed methods to increase the output efficiency of a photovoltaic system, Acta Polytechnica Hungarica, 7 (2), pp. 55-70, 2010.
[17] Cai, W., Ren, H., Jiao, Y., Cai, M. and Cheng, X., Analysis and simulation for grid-connected photovoltaic system based on MATLAB, in: 2011 International Conference on Electrical and Control Engineering, 2011, 1, pp. 63-66. DOI: 10.1109/ICECENG.2011.6056813
[18] Copetti, J.B., Lorenzo, E. and Chenlo, F., A general battery model for PV system simulation, Progress in Photovoltaics: Research and Applications, 1, pp. 283-292, 1993.
[19] Gergaud, O., Robin, G., Multon, B. and Ben Ahmed, H., Energy modeling of a lead-acid battery within hybrid wind-photovoltaic systems, in: 2003 European Power Electronics Conference, 2003, pp. 1-10.
[20] Murillo, D.G., Modeling and analysis of photovoltaic system, Ph.D. Thesis, Department of Telematic Engineering, Universidad Politécnica de Cataluña, España, 2003.

P.P. Vergara-Barrios, received his BSc. Eng. in Electronic Engineering in 2012 from the Universidad Industrial de Santander, Bucaramanga, Colombia. From 2012 to 2013, he was a professor and junior researcher at the Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, UIS. His research interests include: simulation, modeling, design and operation of microgrids, and solar and wind systems.

G.A. Osma-Pinto, received his BSc. in Electrical Engineering in 2007, and his MSc. in Electrical Engineering in 2011, from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia. He is currently a PhD candidate at this university and a researcher at the Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, UIS. His research interests include: green buildings, optimization of energy applications, automation, energy and microgrids.

G. Ordóñez-Plata, received his BSc. degree in Electrical Engineering from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia, in 1985. He received his PhD in Industrial Engineering from the Universidad Pontificia Comillas, Madrid, Spain, in 1993. Currently, he is a professor at the Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones at the UIS, Colombia. His research interests include: electrical measurements, power quality, smart grids, smart metering, and competence-based education.

J.M. Rey-López, received his BSc. Eng in Electrical Engineering in 2012, from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia. From 2012 to 2013, he was a lecturer at the Universidad Industrial de Santander. Currently, he is an assistant professor at the Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, UIS, Bucaramanga, Colombia. His research interests include: simulation, modeling, design and operations of microgrids and power electronics applications.




Energy considerations of social dwellings in Colombia according to the NZEB concept

Germán Alfonso Osma-Pinto a, David Andrés Sarmiento-Nova b, Nelly Catherine Barbosa-Calderón c & Gabriel Ordóñez-Plata d

a E. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. german.osma@gmail.com
b E. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. vrdeoscuro@gmail.com
c E. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. ncatherine.barbosac@hotmail.com
d E. de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia. gaby@uis.edu.co

Received: April 29th, 2014. Received in revised form: February 17th, 2015. Accepted: July 3rd, 2015.

Abstract
In this paper, the characteristics and definitions of NZEBs are studied. In particular, the methods for calculating the balance for each concept and methodology are analyzed, taking into account the interaction of the NZEB with the energy grid, the emissions produced by energy consumption and the introduction of the primary energy concept as an indicator of balance. Highly energy-efficient appliances are the main interest of this paper due to their importance and level of use in tropical regions. How these appliances can reduce energy consumption is described, as well as their impact on electrical performance, which would be of significant benefit in Colombia if they could be applied on a massive scale in projects related to Viviendas de Interés Social - VIS (social dwellings) in the long term.
Keywords: NZEB, NZEB balance, emission, site, source, on-grid, off-grid, energy efficient appliances, passive applications.

Consideraciones energéticas de viviendas de interés social en Colombia según el concepto NZEB

Resumen
En el presente artículo se estudian las características y definiciones de una construcción tipo Net Zero Energy. En particular, las formas para calcular el balance y la metodología para realizarlo, teniendo en cuenta la interacción de la construcción con la red de energía, las emisiones generadas por el consumo de energía y la introducción del concepto de energía primaria como indicador de balance. Las aplicaciones energéticas altamente eficientes representan gran interés en este artículo debido a la importancia que tiene su uso en las regiones tropicales. Se describe cómo las aplicaciones podrían disminuir el consumo de energía y el impacto en el comportamiento energético, siendo un beneficio significativo en Colombia si pudiesen ser aplicadas masivamente en proyectos de viviendas de interés social - VIS en el largo plazo.
Palabras clave: Net Zero Energy Building, balance, emisiones, en sitio, fuente, conectado a la red, autónomo, aplicaciones energéticamente eficientes, aplicaciones pasivas.

1. Introduction

In Colombia, since the Census of 2005, there has been a recorded housing deficit of close to 3.8 million households [1]. This represents a social problem that has become a challenge for the national, departmental and local governments, and it is the reason why two types of housing have been defined in order to address the housing deficit of the most

vulnerable population: vivienda de interes social (VIS) and vivienda de interes prioritario (VIP). [1]. The VIS is a dwelling unit whose value does not exceed 135 SMMLV (Spanish acronym for the legal monthly minimum wage in Colombia) or about USD 42,000, while the VIP must not exceed a value of 70 SMMLV or about USD 22,000 [1]. These buildings are destined for citizens who earn less than 4 SMMLV (about USD 1,200) [2].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 120-130. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48587



Given that solving the housing deficit requires a massive construction program, the environmental impact of this initiative could be significantly detrimental: buildings constructed in the traditional way are responsible for approximately 40% of the world's energy consumption [3]. Unfortunately, in Colombia there is a lack of criteria for designing energy-efficient homes as a proven alternative [4], which is one of the aspects that makes undertaking projects with minimum energy consumption from the grid more difficult. For this reason, this document intends to provide information to fill the identified gap. In order to do this, the concepts and approaches of NZEB are addressed first; then, the influence of the climatic zones of Colombia on the energy performance of homes is discussed. Based on this, considerations for a set of energy applications characteristic of NZE housing with potential implementation in tropical environments are established. Finally, a set of conclusions of this work is presented.

Therefore, there is a need to find useful and financially viable strategies that allow building design guidelines to be redirected and make it possible to adopt new technologies, in order to make buildings more environmentally friendly. One possible alternative is the development of building projects based on the Net Zero Energy (NZE) approach.

2. Generalities of net zero energy buildings

Figure 1. Schematic of a site generation system connected to the network. Source: The authors.

Figure 2. Schematic of energy flow: (1) Egen is higher than Econ; (2) Econ is higher than Egen. Source: The authors.

2.1. Methodologies for calculating the balance

Nowadays, some building projects are undertaken to promote the rational use of energy: these are called "green buildings" [5,6], "smart buildings" [7], "low energy buildings" [8,9], "sustainable buildings" [10], etc. A particular case of these types of buildings is the Net Zero Energy Building (NZEB), referring to a building with net zero energy consumption from energy grids. To accomplish this, the demand for energy must first be reduced through the use of efficient electrical appliances. Then, it is necessary to establish an on-site generation system to compensate for the resulting energy consumption [11-17]. These types of buildings have different approaches, as shown in Table 1. From the revised literature [23,27-30], the definitions associated with an NZEB network are presented in Fig. 1. In the definition of a system it is important to take into account aspects such as on-site or off-site generation, and whether it covers a single building or a group of buildings.

The balance is the comparative study of the factors that play a role in the energy process of a system. The main objective of any NZEB is to produce a balance, which is achieved by looking for the correct way to improve the energy processes. The period chosen for the balance calculations of a building is called the energy balance period [31]. It is important to understand that the energy that enters or leaves the system must be characterized, considering that the energy consumed (Econ) does not always coincide with the energy demanded from the network (Edem) and, similarly, the energy generated (Egen) does not always coincide with the energy exported (Eex). This happens when it is possible to use the energy produced by on-site generation for self-consumption, making the demanded energy less than the consumed energy and the exported energy less than the generated energy. Fig. 2(1) illustrates the case in which the energy consumed is lower than the generated energy, so there is no grid power demand (Edem = 0). Fig. 2(2) shows the case when the consumed energy is greater than the energy generated on site,

Table 1. NZEB concept approaches.
Term | Definition | Reference
NZEB | A building with greatly reduced energy needs that offsets all its energy use from renewable resources. | [18-20]
LC-ZEB | Facilities where the primary energy used and the embodied energy within their constituent materials is equal to or less than the renewable energy produced inside the building during its life. | [21-23]
NZEB-Grid | Facility connected to the grid that finds a balance between energy sold and purchased power. | [24]
PEB | Facility that generates more renewable energy than it consumes from the grid. | [16,25,26]
Source: The authors.

Table 2. Balance types as defined by NZEB.
Type of Balance | Definition | Indicator
Site ZEB | Balance between the energy generated on-site and the energy consumed in the facility. | Electric power
Source ZEB | Balance between the primary energy consumed in the facility and the power generated on-site. | Primary energy
Cost ZEB | Balance between the cost generated by energy consumption and the economic profits generated by energy exports. | Cost of energy
Emission ZEB | Balance between the amount of emissions produced by the power consumption and the amount of emissions avoided thanks to the energy from on-site generation. | Carbon emissions
Source: The authors.



so the missing energy is required from the network. This indicates that no power is exported. The energy balance of a dwelling is produced by offsetting its consumption with the energy from on-site generation. Based on the type of balance and indicator, Table 2 describes the most common concepts, as presented in [12,15,23,32]. Now, each indicator is explained in detail.

2.1.1. Electric power

The analysis is based on making the building's demand for electric power from the network (Edem) equal to the exported energy (Eex) [33]. From the balance shown in Equation (1), the condition shown in Equation (2) is obtained:

E_con - E_gen = E_dem - E_ex    (1)

E_con = E_gen    (2)

An assessment of this type is easily verified through on-site measurements. The downside of this method is that a unit of electrical energy has the same value no matter what type of generation process the energy has gone through.

2.1.2. Primary energy

This indicator seeks to equal the amount of primary energy that the building demands from the network (E_prim,dem) with the primary energy exports (E_prim,ex), seeking a net consumption of primary energy equal to zero. With primary energy, it becomes possible to consider the different types of energy (e.g., thermal and electrical) in the calculations, since this approach integrates the losses of the energy chain [34,35]. The net primary energy is calculated from Equation (3) [35]:

E_prim,neta = E_prim,dem - E_prim,ex    (3)

where,

E_prim,dem = Σi (E_dem,i · f_dem,i)    (4)

E_prim,ex = Σi (E_ex,i · f_ex,i)    (5)

Here E_dem,i is the energy delivered by the power grid i, E_ex,i is the energy exported to the power grid i, f_dem,i is the primary energy factor for electricity supplied by grid i and f_ex,i is the primary energy factor for energy exported to grid i. The electrical energy generated and consumed cannot be compared as in Equation (1) and Equation (2), because the primary energy associated does not only come from electricity but also from other energy sources such as natural gas or wood. The conversion factors are linked to the type of energy and power that is supplied. Typical values taken from [35,36] can be observed in Table 3.

Table 3. Primary energy factors.
Energy carrier | Primary factor | K - CO2 [kg/MWh]
Fuel: Fuel Oil, Liquefied Petroleum Gas | 1.1 | 330
Fuel: Natural Gas | 1.1 | 277
Fuel: Anthracite, Coal | 1.1 | 394
Fuel: Lignite | 1.2 | 433
Fuel: Wood | 1.2 | 4
Electric power: Electricity from hydropower plant | 1.5 | 7
Electric power: Electricity from nuclear power | 2.8 | 16
Electric power: Electrical energy mix | 3.31 | 1340
Electric power: Electricity from coal power plant | 4.05 | 617
Renewable energy: Electricity and solar thermal | - | -
Source: Adapted from [35,36].

2.1.3. Carbon emissions

With this indicator it is intended that the system exports as much emission-free energy as needed to offset the amount of emissions produced by the amount of energy required. The mass of CO2 is calculated from the energy demanded (E_dem,i) and the energy exported (E_ex,i) for each grid, as shown in (6):

M_CO2 = Σi (E_dem,i · K_dem,i) - Σi (E_ex,i · K_ex,i)    (6)

where K_dem,i is the CO2 emission coefficient of the energy delivered by grid i, and K_ex,i is the CO2 emission coefficient of the energy exported to grid i. K_dem,i and K_ex,i will always be of equal value if there is only one energy supply system, regardless of the on-site generation. The coefficients for the calculation of CO2 emissions are presented in Table 3.

2.1.4. Energy cost

With this indicator it is intended that the system, at the end of each period, has a net cost equal to zero [33]. This can be calculated as described in Equation (7):

C_n = Σi (E_dem,i · C_dem,i) - Σi (E_ex,i · C_ex,i)    (7)

where C_dem,i is the cost of the energy delivered by grid i and C_ex,i is the price paid for the energy exported to grid i. The costs of both exported and demanded energy are determined by the entity that governs the energy price in each country; if they are equal, the balance reduces to the case of the electric power indicator.
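The four indicators of Section 2.1 share the same structure: a weighted difference between the energy demanded from and exported to each grid. The sketch below illustrates this under assumed inputs; the function name, the example values and the use of a single grid are illustrative, not taken from the paper.

```python
# Minimal sketch of the NZEB balance indicators: eqs. (1)-(2) (site energy),
# (3)-(5) (primary energy), (6) (CO2) and (7) (cost) all reduce to a weighted
# difference between demanded and exported energy per grid i.
def net_balance(e_dem, e_ex, w_dem=None, w_ex=None):
    """e_dem/e_ex: energy per grid (kWh); w_dem/w_ex: per-grid factors
    (primary energy factor, kg CO2/kWh or price). Unweighted -> site ZEB."""
    w_dem = w_dem or [1.0] * len(e_dem)
    w_ex = w_ex or [1.0] * len(e_ex)
    return (sum(d * w for d, w in zip(e_dem, w_dem))
            - sum(x * w for x, w in zip(e_ex, w_ex)))

# Example: source ZEB using the electrical energy mix factor of Table 3.
net_primary = net_balance([3500.0], [3400.0], w_dem=[3.31], w_ex=[3.31])
```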




2.2. Base configurations

NZEBs are classified into on-grid and off-grid according to whether or not they are interconnected with the grid [29]. In the case of an off-grid NZEB, the energy consumption must be supplied by the on-site generation, which describes its permanent balance condition. According to [12,16,34,37-40], it is possible to consider the connection to the supply grid as an energy storage system because of the bidirectional flow of energy, so the addition of batteries would not be required. Fig. 3 shows three basic configurations for the operation of buildings constructed under the NZE concept. The advantages and disadvantages of each form of connection are shown in Table 4.

Figure 3. Configurations of a Net Zero Energy dwelling: (1) on-grid with energy storage units; (2) on-grid without batteries; (3) off-grid. Source: The authors.

Table 4. Connection advantages and disadvantages.
Configuration | Advantages | Disadvantages
(a) | Guarantees reliability | It requires a higher initial investment
(b) | Cheaper | There is no backup system in case the supply network fails
(c) | Works in interconnected and non-interconnected areas (autonomous) | Requires a high storage capacity and a high initial investment
Source: The authors.

3. Technical aspects that have influence on the energy performance of a social dwelling, considering the variety of thermal levels in Colombia

The residential electricity demand in Colombia depends on sociocultural, socioeconomic, political, climatic (variety of thermal levels) [41] and geographic [42] factors, among others. These factors characterize the electricity demand of homes, and so they influence the design of energy efficient systems.

3.1. Energy performance of the residential sector in Colombia

The analysis of the energy performance of a house located in Colombia should take into consideration factors relevant to residential energy design, such as the socioeconomic strata.

3.1.1. Social dwelling location according to the socioeconomic strata

Currently, residential property in Colombia is classified into six socioeconomic strata: low-low (1), low (2), medium-low (3), medium (4), medium-high (5) and high (6) [43]. The VIS and VIP are buildings designed to guarantee the right to dwelling of low-income families, which are located primarily in strata 1, 2 and 3, and have subsidised public services [44]. It is worth mentioning that these strata contain the major percentage of the country's inhabitants (87%), as can be seen in Fig. 4. However, it must be taken into account that the VIS are intended for people who have a monthly income of less than 4 SMMLV and are not allocated according to socioeconomic strata [2], although there is a correlation. Table 5 shows the average monthly income per home in dollars and pesos for each stratum, and the equivalent number of SMMLV. This projection was made with the income values found in [46] for 2008 and with the consumer price index found in [47] for 2012. According to Table 5, it is possible to observe that the candidate households for the VIS are located primarily in strata 1 and 2.

Figure 4. Number of inhabitants (millions) by strata. 2012 projection. Source: [45].

Table 5. Average monthly income per home - 2012.
Social strata | Dollars [USD$] | Pesos [COP$] | Equivalent salaries
1 | 639.76 | 1 150 455 | 2.03
2 | 909.76 | 1 635 994 | 2.89
3 | 1 415.43 | 2 545 329 | 4.49
4 | 3 062.57 | 5 507 307 | 9.72
5 | 3 241.08 | 5 828 313 | 10.28
6 | 5 681.73 | 10 217 228 | 18.03
Source: Adapted from [46-49].



Figure 5. Yearly electricity consumption of all residential users per strata, 2012 (GWh/year). Source: [50,51].

Figure 6. Average monthly electricity consumption per user, 2012 (kWh), strata 1 to 6. Source: The authors.

Figure 8. Average temperatures (°C) of five Colombian cities: Barranquilla, Cali, Bucaramanga, Medellín and Bogotá. Source: [52].

Figure 9. Colombian housing that has energy appliances, 2012 (Bogotá, Atlántico, Antioquia). Source: [53].

3.1.2. Behavior of the electricity demand according to the possible strata where the VIS are located

Once the strata where the VIS could possibly be located have been identified, the behavior of the energy demand of the country's residential sector per stratum is described. Fig. 5 shows the annual electricity consumption per stratum and the percentage of the total consumption (23 748.04 GWh/year). This shows that about 85% of the total demand is consumed in the low-low, low and medium-low strata. Furthermore, Fig. 6 shows the average monthly consumption per user for each stratum. The highest and lowest consumption occur in strata two and six, respectively.

3.1.3. Thermal levels influence the potential electrical appliances that are customizable in Colombia

Colombia is located in a tropical area, therefore it has significant solar radiation throughout the year; moreover, it has a high relief characteristic of the Andes mountain range, and for this reason altitude is a meaningful climatic factor that gives rise to thermal levels [52]. Fig. 7 shows the thermal levels associated with the temperature range and the elevation above sea level. Some Colombian cities are also given as examples with their corresponding altitude and temperature values.

Figure 7. Thermal levels in Colombia. Source: [52].

3.1.4. Thermal levels classification with the identification of the potential electrical appliances in VIS

In Colombia, the climatic diversity found within the same country makes it difficult to classify the climates [52]; it is easier to do so by municipality. Thus, to represent the several scenarios of climatological and energy behavior, five cities were selected based on three aspects:
 Principal cities where the operation areas managed by the meteorological inspection, measurement and vigilance system of the IDEAM are located.
 From the cities found in [52], the five with the highest number of families with electricity availability according to the DANE [47]: Bogotá, Medellín, Cali, Barranquilla and Bucaramanga.
 Last, the thermal levels of the chosen cities.
Fig. 8 shows the average temperature for each one of the selected cities, with which the cities can be classified into a specific thermal level considering the information given in Fig. 7. Barranquilla and Cali fall under the hot climate




category, Medellín and Bucaramanga under the mild climate, and Bogotá under the cold climate. Once the chosen cities have been classified into their different thermal levels, it is possible to identify the energy appliances that have a relevant usability in the house. Fig. 9 shows the percentage of housing in Bogotá, the Atlántico region (Guajira, Cesar, Magdalena, Atlántico, Bolívar, Sucre and Córdoba) and the Antioquia Department that, in 2012, bought some kind of electrical or gas household appliance. These three areas represent the cold, mild and hot climates, respectively. The need to climatize the house environment in hot and mild climates such as the Atlántico region and the Antioquia Department, making use of fans and air conditioning, is relevant. The use of domestic hot water in cold and mild climates, such as Bogotá and the Antioquia Department respectively, is also important. For the identification of the energy appliances with potential implementation in NZE homes for the different thermal levels, the methodology given in [40,54-59] is used. Such a methodology begins with the reduction of the energy demand through passive building design, takes into account the energy efficiency of the appliances and the rational use of energy, and lastly identifies the micro-generation systems based on renewable energies that are adaptable to the house.

3.2. Recommendations for low energy consumption with use of passive appliances and energy efficient systems

The recommendations are given taking into account the passive and active energy appliances that could be adaptable in a social dwelling, highlighting those that are frequently used according to the thermal level, as shown in Fig. 9.

3.2.1.

Construction materials

The construction materials should include renewable ones whose production emits low amounts of CO2 to the atmosphere. Table 6 describes the materials that are considered most adequate for social dwelling construction in terms of thermal inertia and their adaptability to the different climates [60]. The regional materials have a manufacturing process that has been transmitted from one generation to the next, while modern materials are industrialized. Materials with a high thermal inertia maintain a stable temperature and take a long time to let heat pass to the inside of the building; they are recommended for cold and mild climates, acquiring heat in the morning which is then conducted into the dwelling at night. If a material has a low thermal inertia this time is short, which is recommended in hot climates because the heat gain must be avoided [60]. Also, the house should have thermal insulation in walls, floors, windows and roofs (green roofs) and can use eaves, blinds, cantilevers and shadows to avoid the solar gain in hot climates.

3.2.2.

Strategic orientation and location

For mild and hot climates, it is recommended that the facade orientation is in a NORTH-SOUTH direction and

Table 6. Construction materials most usual in social dwellings.
Element (type) | Cold | Mild | Hot
Walls (regional) | Rammed earth and adobe | Guadua mat with clay and adobe | Guadua mat with clay and adobe
Walls (modern) | Units of masonry and plastering | Units of masonry and plastering | Units of masonry and plastering
Roofs (regional) | Structure in wood and clay tile | Wood and tile | Wood and zinc tile
Roofs (modern) | Fiber cement roof tile and clay tile | Fiber cement roof tile and clay tile | Fiber cement roof tile, clay tile and zinc tile
Floors (regional) | Wood | Wood | Tile
Floors (modern) | Carpet, wood and vinyl | Carpet, wood, vinyl and ceramic floors | Cement tile and ceramic tablet
Windows (regional) | Wooden swing middle vertical tilt | Wooden vertical tilt aperture | Wood with large vertical tilt opening
Windows (modern) | Aluminum, glass and angle | Aluminum, glass and angle | Openwork, wood, foil and burlap
Source: Adapted from [60].

located in such a way that it is protected from the sun and exposed to the cold winds. In the cold climate, the façade must be in a NORTHEAST-SOUTHEAST or EAST-WEST direction and located in such a way that it is exposed to the sun and protected from the cold winds [60].

3.2.3. Natural ventilation and lighting

As a passive technique, significant window areas are typically installed to facilitate the airflow through the housing, which is designed so that there is natural cross ventilation. For natural lighting, the passive techniques that can be used are solar tubes [39], skylights [61], low-growing trees that let light into the house, and large window areas.

3.2.4.

Energy efficient systems on a VIS

The housing design must reduce energy consumption by replacing conventional systems with high-efficiency systems. Table 7 compares the two kinds of systems that may be used in a dwelling.

3.3. Energy generation in situ

To implement an energy generation system, the energy supply, the project scale and the sector of the economy to which it is directed must be considered. Table 8 describes the power generation options according to the generation site and the origin of the resources. In Table 9, the generation systems are classified according to the installed capacity, depending on the scale, on the residential or industrial application and on the way the energy is supplied. It is important to observe that photovoltaic and wind generation systems can be adapted to the residential level, according to the installed capacity; however, this depends on the wind speed and the solar radiation at the generation site.



Table 7. Comparison between conventional and efficient systems found in a dwelling.
System | Conventional | Efficient
Lighting | Incandescent bulbs convert 5% of the energy used into light. | The LED bulb generates energy savings of up to 90% and the compact fluorescent of up to 80%.
Refrigeration | Low efficiency of conventional refrigerators and side freezers. | Efficient refrigerators can consume 20% less energy than required by some standards.
Space heating and DHW | Central heating system of low efficiency, fireplaces, gas water heater, electric shower. | Radiant floor can generate energy savings of 52%, while heat pumps save 30% to 40% over conventional systems. Solar water heater.
Ventilation | Mechanical ventilation, heat extractors and low efficiency air conditioners. | Reversible heat pump, room coolers.
Cooking | Stoves based on gaseous fuel reach an efficiency of 65%, and on liquid fuel of 50%-55%. | Furnaces that consume electricity present a yield of 65%-70%.
Source: The authors.

Table 8. Description of the on-site and off-site energy supply systems.
Option | Energy supply | Description | Example
1 | On-site | Power generation in the building footprint from site resources. | Photovoltaic panels and solar collectors installed on the building roof or façade.
2 | On-site | Power generation near the building footprint (open and parking areas); the resources are on-site. | Small wind turbines.
3 | Off-site | Power generation in the building with resources proceeding from off-site. | Biomass, ethanol, biofuel, waste treated at the site.
4 | Off-site | Off-site power generation, partly financed by the homeowner. | The owner invests in remote wind farms, solar plants and wind turbines.
5 | Off-site | Off-site power generation, sold to the owner through a certified renewable supply contract. | Power grid trade.
Source: The authors.

Table 10. Average monthly global solar radiation for different zones of the country.
Departments or Regions | Insolation [kWh/m2]
Arauca, Casanare, Meta, Boyacá, Vichada and the Caribbean Coast, including San Andrés y Providencia Islands | 5.0 - 6.0
Orinoquía, Santander and North Santander, Cundinamarca, Tolima, Huila, Cauca and Valle del Cauca | 4.5 - 5.5
Chocó, Nariño and Putumayo | 3.0 - 4.0
Source: Adapted from [62].

Figure 10. Solar energy trends, technologies and uses. Source: The authors.

Figure 11. Photovoltaic systems according to the connection type. Source: The authors.

Table 9. Classification of renewable power generation systems according to the generation capacity.
Renewable energy | Scale
Wind | Small scale: <2.5 kW (on-site)
Wind | Medium scale: 2.5 kW - 20 MW
Wind | Large scale: >20 MW (off-site)
PV | Residential system: <10 kW (on-site)
PV | Industrial buildings: 10 kW - 100 kW
PV | Small industrial plants: 100 kW - 1 MW
PV | Large plants: >1 MW (off-site)
Source: The authors.

Generally, Colombia shows a monthly average global solar radiation that is roughly uniform throughout the year, between 4.0 kWh/m2 and 4.5 kWh/m2 over most of its territory. However, some areas stand out, as Table 10 shows. Typically, solar energy has three technology trends, photovoltaic, thermal and passive [63], all of which are easily implemented in residential areas, as seen in Fig. 10.
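As a quick illustration of what the insolation ranges of Table 10 imply for a rooftop system, the following back-of-the-envelope sketch estimates monthly PV output; the array area, module efficiency and performance ratio are illustrative assumptions, not values from the paper.

```python
# Rough monthly PV yield estimate from daily insolation (Table 10 reports
# 4.0-4.5 kWh/m2/day over most of Colombia). Area, efficiency and
# performance ratio below are illustrative assumptions.
def monthly_pv_yield_kwh(insolation_kwh_m2_day, area_m2=10.0,
                         efficiency=0.15, performance_ratio=0.75, days=30):
    return insolation_kwh_m2_day * area_m2 * efficiency \
        * performance_ratio * days

print(monthly_pv_yield_kwh(4.5))  # ~152 kWh for the assumed 10 m2 array
```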

3.3.1. Photovoltaic energy

This application is based on photovoltaic panels placed strategically to capture solar radiation. The most commonly used systems are shown in Fig. 11.

3.3.2. Solar thermal energy

Solar thermal energy is mainly used in homes as a source of heat for space conditioning through the use of radiant floors, and for heating domestic hot water and swimming pools [64]. Additionally, it has other uses according to the temperature required, as seen in Fig. 12.

3.3.3. Wind energy

Colombian winds, mainly located on the peninsula of La Guajira, are among the best in South America for wind power generation, reaching speeds of around 10 m/s.




According to [65], the annual wind power density at a height of 20 meters has values between 216 W/m2 and 343 W/m2 in areas of the Departments of La Guajira and Atlántico, and between Quindío and Tolima. Also, in some areas of the Department of La Guajira, wind density values of between 343 W/m2 and 729 W/m2 can be reached. It is important to highlight that at higher altitudes and at different times of the year the energy densities can increase considerably; however, the wind resource in Colombia is localized, and not all regions enjoy wind speeds suitable for generation.

Figure 12. Temperature ranges, applications and uses of the solar thermal energy. Source: The authors.

Figure 15. Classification of buildings constructed under the NZEB concept according to the type of construction. Source: [67].

Figure 16. Passive appliances in NZEB (frequency of use). NOTE: A. Orientation; B. Thermal insulation; C. Thermal mass, sustainable materials; D. Natural lighting; E. Natural ventilation. Source: The authors.

Figure 13. Net zero energy buildings around the world. Source: [66].

Figure 17. Active appliances in NZEB (frequency of use). NOTE: F. Lighting; G. Ventilation; H. Domestic hot water; I. Heating; J. Efficient appliances. Source: The authors.

Figure 14. Classification of buildings constructed under the NZEB concept according to the weather classification. Source: [67].

4. Case study

In [66], a database with information on about 300 Net Zero Energy building case studies is presented; they are distributed around the world as shown in Fig. 13. In Fig. 14 it is possible to see a classification of the NZEB cases found in [67] depending on the climate, and in Fig. 15 according to the different types of construction. Moreover, 42 case studies were compiled and classified to identify the most used passive appliances, active appliances and generation technologies. According to Fig. 16, the passive appliance that has the most use in




Figure 18. Generation technologies in NZEB (frequency of use). NOTE: K. Photovoltaic; L. Wind energy; M. Solar thermal; N. Biomass. Source: The authors.

NZEB is thermal insulation, with 84% of the buildings studied applying it. It is also noteworthy that passive techniques such as the use of sustainable materials, natural lighting and natural ventilation are implemented in more than half of the buildings, while strategic orientation is an application that is not used very often in this type of construction. Fig. 17 shows that the frequency of use of active and efficient appliances in the buildings is high, especially heating systems, due to the majority of reported cases being located in countries with seasonal climate variability. Fig. 18 shows that the use of solar energy to produce heat or electricity has a great reception in NZEB; however, the use of wind energy and other technologies for on-site generation is still very limited.

5.

Conclusions

The concept of NZE homes involves covering the basic energy consumption needs that occur in residential buildings using highly efficient energy applications. Homes around the world have been able to reduce their energy consumption in such a way that no resident comfort is lost, thanks to new technologies developed to cover the basic needs in these homes.

A building can achieve the NZE energy balance while being off-grid, but this type of autonomous housing requires a large investment due to the cost of energy storage systems. Therefore, internationally, on-grid buildings are preferred, and autonomous housing design is reserved for non-interconnected areas that have difficult access to supply networks. In on-grid NZE buildings, the bidirectionality of the energy grids carries great weight when a balance compensation is sought, because energy injection into the grid is required. In buildings designed under the cost ZEB methodology, it will always be necessary to ensure that payment is made for the energy that is exported to the network, because only in this way can a balance be produced.

The NZE concept applied to social dwellings could generate a very positive impact on society and the environment. This can be seen through the analysis made of the behavior of the country's energy demand according to the socioeconomic strata. It was possible to infer

that the candidate homes for social dwellings are found primarily in strata one and two, and that the major residential energy demand comes from the same strata. This would generate a reduction of the energy consumption in the residential sector and in the low strata.

The influence of the thermal levels on the energy consumption of the electrical appliances used in the home has a substantial impact, since they cause an increase or decrease in energy demand. Thus, it can be known what kind of appliances have a major energy demand as the thermal level varies from place to place, as well as how they could be replaced by other, more efficient appliances, considering those whose frequent use could meet the basic needs of social dwellings.

In the residential sector, the utilization of the solar resource for energy generation through the use of photovoltaic panels and solar collectors has great utility. Colombia enjoys this resource in abundant quantities; therefore, on-site energy generation, which fulfills one of the main features of an NZE home (energy generation using renewable sources for energy demand compensation), should be encouraged.

In the design of an NZE home it is important to establish the NZEB concept under which it is intended to operate and the type of balance to be achieved, giving way to the correct sizing of the efficient energy systems and of the alternative energy source that best suits the place. Also, it is very important to design a home that is financially feasible and especially oriented to social dwellings.

Based on the case studies mentioned, it is possible to see that regardless of the type of NZEB, the implementation of solar energy (photovoltaic and thermal) is preferred over wind energy and biomass for on-site energy generation. It is also clear that the use of the NZE design concept in buildings located in Latin America is still very scarce, as only two cases were found in this database: one in Argentina and one in Costa Rica.

References

[1] Secretaria de Planeación Departamental del Valle del Cauca and Gobernación del Valle del Cauca, Vivienda de interés social en el Valle del Cauca. Normas e instrumentos de gestión, 2011, pp. 1-57.
[2] Ministerio de Ambiente, Vivienda y Desarrollo Territorial, Procedimientos en vivienda de interés social, Serie Guías de Asistencia Técnica para Vivienda de Interés Social, vol. 4, pp. 1-73, 2011.
[3] Pless, S. and Torcellini, P., Net-zero energy buildings: A classification system based on renewable energy supply options, National Renewable Energy Laboratory, June, pp. 1-14, 2010.
[4] Gangolells, M., Casals, M., Gassó, S., Forcada, N., Roca, X. and Fuertes, A., A methodology for predicting the severity of environmental impacts related to the construction process of residential buildings, Building and Environment, 44 (3), pp. 558-571, 2009. DOI: 10.1016/j.buildenv.2008.05.001
[5] Yudelson, J., Green building A to Z. Understanding the language of green building, New Society Publishers, 2007, 241 P.
[6] Paul, W.L. and Taylor, P.A., A comparison of occupant comfort and satisfaction between a green building and a conventional building, Building and Environment, 43 (11), pp. 1858-1870, 2008. DOI: 10.1016/j.buildenv.2007.11.006
[7] Wang, Z., Wang, L., Dounis, A.I. and Yang, R., Multi-agent control system with information fusion based comfort model for smart buildings, Applied Energy, 99, pp. 247-254, 2012. DOI: 10.1016/j.apenergy.2012.05.020
[8] Malet-Damour, B., Garde, F., David, M. and Prasad, D., Impact of the climate on the design of low-energy buildings for Australia and Reunion Island, in: 12th Conference of International Building Performance Simulation Association, Proceedings of Building Simulation, pp. 2867-2873, 2011.
[9] Rodrigues, L.T., Gillott, M. and Tetlow, D., Summer overheating potential in a low-energy steel frame house in future climate scenarios, Sustainable Cities and Society, 7, pp. 1-15, 2013. DOI: 10.1016/j.scs.2012.03.004
[10] Cooper, P., Liu, X., Kosasih, P.B. and Yan, R., Modelling net zero energy options for a sustainable buildings research centre, in: 12th Conference of International Building Performance Simulation Association, Sydney, Proceedings of Building Simulation, pp. 2799-2806, 2011.
[11] Abdalla, G., Maas, G.E.R. and Huyghe, J., Barriers to Zero Energy Construction (ZEC): technically possible, why not succeed yet?, in: PLEA 2009 - 26th Conference on Passive and Low Energy Architecture, Quebec City, Canada, June, pp. 1-3, 2009.
[12] Torcellini, P., Pless, S., Deru, M. and Crawley, D., Zero energy buildings: A critical look at the definition, National Renewable Energy Laboratory, U.S. Department of Energy, pp. 1-12, 2006.
[13] Miller, W. and Buys, L., Anatomy of a sub-tropical Positive Energy Home (PEH), Solar Energy, 86 (1), pp. 231-241, 2012. DOI: 10.1016/j.solener.2011.09.028
[14] Lenoir, A., Thellier, F. and Garde, F., Towards net zero energy buildings in hot climate, Part 2: Experimental feedback, ASHRAE Transactions, 117, pp. 1-8, 2011.
[15] Attia, S. and De Herde, A., Defining zero energy buildings from a cradle to cradle approach, in: PLEA 2011 - 27th Conference on Passive and Low Energy Architecture, July, pp. 205-210, 2011.
[16] Kolokotsa, D., Rovas, D., Kosmatopoulos, E. and Kalaitzakis, K., A roadmap towards intelligent net zero- and positive-energy buildings, Solar Energy, 85 (12), pp. 3067-3084, 2011. DOI: 10.1016/j.solener.2010.09.001
[17] Bojić, M., Nikolić, N., Nikolić, D., Skerlić, J. and Miletić, I., Toward a positive-net-energy residential building in Serbian conditions, Applied Energy, 88 (7), pp. 2407-2419, 2011. DOI: 10.1016/j.apenergy.2011.01.011
[18] Gilijamse, W., Zero-energy houses in the Netherlands, in: Building Simulation, IBPSA, Madison, Wisconsin, pp. 276-283, 1995.
[19] Iqbal, M.T., A feasibility study of a zero energy home in Newfoundland, Renewable Energy, 29 (2), pp. 277-289, 2004. DOI: 10.1016/S0960-1481(03)00192-7
[20] European Parliament and the Council of the European Union, Energy performance of buildings. Legislative resolution, Official Journal of the European Union, April, pp. 263-291, 2009.
[21] Hernandez, P. and Kenny, P., From net energy to zero energy buildings: Defining life cycle zero energy buildings (LC-ZEB), Energy and Buildings, 42 (6), pp. 815-821, 2010. DOI: 10.1016/j.enbuild.2009.12.001
[22] Marszal, A.J. and Heiselberg, P., Life cycle cost analysis of a multi-storey residential net zero energy building in Denmark, Energy, 36 (9), pp. 5600-5609, 2011. DOI: 10.1016/j.energy.2011.07.010
[23] Srinivasan, R.S., Braham, W.W., Campbell, D.E. and Curcija, C.D., Re(De)fining net zero energy: Renewable emergy balance in environmental building design, Building and Environment, 47, pp. 300-315, 2012. DOI: 10.1016/j.buildenv.2011.07.010
[24] Vale, R.V.B., The new autonomous house: Design and planning for sustainability, Thames & Hudson, 2002, pp. 1-256.
[25] Thiers, S. and Peuportier, B., Energy and environmental assessment of two high energy performance residential buildings, Building and Environment, 51, pp. 276-284, 2012. DOI: 10.1016/j.buildenv.2011.11.018
[26] Atanasiu, B., Ecofys Germany GmbH and Danish Building Research Institute (SBi), Principles for nearly zero-energy buildings. Paving the way for effective implementation of policy requirements, Building Performance Institute Europe (BPIE), 2011, pp. 1-99.
[27] Kurnitski, J., Allard, F., Braham, D., Goeders, G., Heiselberg, P., Jagemar, L., Kosonen, R., Lebrun, J., Mazzarella, L., Railio, J., Seppänen, O., Schmidt, M. and Virta, M., How to define nearly net zero energy buildings nZEB, REHVA, Federation of European Heating, Ventilation and Air-conditioning Associations, pp. 1-18, 2012.
[28] Booth, S., Barnett, J., Burman, K., Hambrick, J. and Westby, R., Net zero energy military installations: A guide to assessment and planning, National Renewable Energy Laboratory, pp. 1-48, 2010. DOI: 10.2172/986668
[29] Marszal, A.J. and Heiselberg, P., A literature review of Zero Energy Buildings (ZEB) definitions, 2009.
[30] Sartori, I., Napolitano, A. and Voss, K., Net zero energy buildings: A consistent definition framework, Energy and Buildings, 48, pp. 220-232, 2012. DOI: 10.1016/j.enbuild.2012.01.032
[31] Marszal, A.J., Life cycle cost optimization of a BOLIG+ Zero Energy Building, Aalborg University, Denmark, 2012, pp. 1-57.
[32] Carlisle, N., Geet, O.V. and Pless, S., Definition of a 'Zero Net Energy' community, National Renewable Energy Laboratory, November, pp. 1-14, 2009.
[33] Kurnitski, J., Saari, A., Kalamees, T., Vuolle, M., Niemelä, J. and Tark, T., Cost optimal and nearly zero (nZEB) energy performance calculations for residential buildings with REHVA definition for nZEB national implementation, Energy and Buildings, 43 (11), pp. 3279-3288, 2011. DOI: 10.1016/j.enbuild.2011.08.033
[34] Marszal, A.J., Heiselberg, P., Bourrelle, J.S., Musall, E., Voss, K., Sartori, I. and Napolitano, A., Zero Energy Building - A review of definitions and calculation methodologies, Energy and Buildings, 43 (4), pp. 971-979, 2011. DOI: 10.1016/j.enbuild.2010.12.022
[35] European Comission and European Free Trade Association, Energy performance of buildings - Overall energy use, CO2 emissions and definition of energy ratings, European Standard, pp. 1-45, 2006.
[36] Deutsches Institut für Normung (DIN), Energy efficiency of buildings - Calculation of the energy needs, delivered energy and primary energy for heating, cooling, ventilation, domestic hot water and lighting - Part 1: General balancing procedures, terms and definitions, zoning and evaluation, pp. 1-66, 2007.
[37] Salvador, M. and Grieu, S., Methodology for the design of energy production and storage systems in buildings: Minimization of the energy impact on the electricity grid, Energy and Buildings, 47, pp. 659-673, 2012. DOI: 10.1016/j.enbuild.2012.01.006
[38] Voss, K., Musall, E. and Lichtmeb, M., From low-energy to net zero-energy buildings: Status and perspectives, Journal of Green Building, 6 (1), pp. 46-57, 2011. DOI: 10.3992/jgb.6.1.46
[39] Kılkış, Ş., A net-zero building application and its role in exergy-aware local energy strategies for sustainability, Energy Conversion and Management, 63, pp. 208-217, 2012. DOI: 10.1016/j.enconman.2012.02.029
[40] Fong, K.F. and Lee, C.K., Towards net zero energy design for low-rise residential buildings in subtropical Hong Kong, Applied Energy, 93, pp. 686-694, 2012. DOI: 10.1016/j.apenergy.2012.01.006
[41] Ministerio de Minas y Energía, Memorias al Congreso de la República. Sector Energía Eléctrica, 2010.
[42] Ramos-Niembro, G., Fiscal, R., Maqueda, M., Sada, J. and Buitrón, H., Variables que influyen en el consumo de energía eléctrica, México, Aplicaciones Tecnológicas, pp. 11-18, 1999.
[43] Congreso de la República de Colombia, Ley 689 / 2001, modificatoria de la Ley 142 / 1994. El régimen tarifario de las empresas de servicios públicos, artículo 16, 2001. [Online]. [Accessed: June 2nd, 2013]. Available at: http://www.ccconsumidores.org.co/index.php?option=com_content&view=section&layout=blog&id=8&Itemid=126
[44] Consejo Nacional de Política Económica y Social, Plan de acción para la focalización de los subsidios para servicios públicos domiciliarios.
[45]
[46]
[47]
Documento Conpes, República de Colombia, Departamento Nacional de Planeación, pp. 1-30, 2005. DANE, Proyección de población, 2012. [Online]. [Accessed: 03-Jul2013]. Available at: http://www.dane.gov.co/index.php?option=com_content&view=arti cle &id=75&Itemid=72. Ministerios de Comunicaciones. República de Colombia, Resumen ejecutivo. Impacto socioeconómico de la implementación de la televisión digital terrestre en Colombia. pp. 1-17, 2008. DANE, Censo general, 2005. [Online]. [Accessed: 02-Jun-2013]. Available at:


Osma-Pinto et al / DYNA, pp. 120-130. August, 2015.

[48] [49] [50]

[51] [52] [53]

[54] [55] [56] [57] [58] [59]

[60] [61] [62] [63] [64] [65] [66]

[67]

http://www.dane.gov.co/index.php?option=com_content&view=arti cle &id=307&Itemid=124. Ministerio de Trabajo, Salario minimo mensual legal vigente, 2012. [Online]. [Accessed: 02Jun-2013] Available: http://www.mintrabajo.gov.co/ Banco de la República de Colombia, Serie histórica empalmada de datos. Promedio anual, 2012. [Online]. Available at: http://www.banrep.gov.co/es/-estadisticas. [Accessed: 02-Jun-2013]. Sistema Único de Información de Servicios Públicos, Consolidado de energía por empresa y departamento. [Online]. Available: http://reportes.sui.gov.co/reportes/SUI_ReporteEnergia.htm. [Accessed: 02-Jun-2013]. XM S.A. E.S.P, Comportamiento de la demanda de energía eléctrica, 2012. [Online]. [Accessed: 02-Jun-2013]. Available at: https://www.xm.com.co/Pages/Home.aspx. Instituto de Hidrología Meteorología y Estudios Ambientales, Atlas Climatológico Nacional. 2005, pp. 1-184. DANE, Encuesta Nacional de Calidad de Vida, 2012. [Online]. [Accessed: 02-Jun-2012]. Available at: http://www.dane.gov.co/index.php?option=com_content&view=arti cle &id=2314&Itemid=66. Bessoudo, M., Mcminn, J., Theaker, I. and Webber, D., Approaching Net-Zero Energy, Canadian Architect, pp. 28-31, 2011. Charron, R. and Athienitis, A., Design and optimization of net zero energy solar homes, ASHRAE Transactions, 112 (2), pp. 285-296, 2006. Laws, J., Getting to zero, Occupational health & safety (Waco, Tex.), 81 (1), pp. 32-34, 2012. Proskiw, G. and Proskiw Engineering Ltd, Identifying affordable net zero energy housing solutions, 2010. Hoque, S., Net zero energy homes: An evaluation of two homes in the northeastern United States, Journal of Green Building, 5 (2), pp. 7990, 2007. DOI: 10.3992/jgb.5.2.79 Attia, S., Hamdy, M., O’Brien, W. and Carlucci, S., Assessing gaps and needs for integrating building performance optimization tools in net zero energy buildings design, Energy and Buildings, 60, pp. 110124, 2013. DOI: 10.1016/j.enbuild.2013.01.016 Ministerio de Ambiente Vivienda y Desarrollo Territorial, Los materiales en la construcción de vivienda de interés social, Serie Guías de Asistencia Técnica para Vivienda de Interés Social, vol. 2, pp. 1-42, 2011. Panao, M.J.N.O. and Gonçalves, H.J.P., Solar XXI building: Proof of concept or a concept to be proved?, Renewable Energy, 36 (10), pp. 2703-2710, 2011. DOI: 10.1016/j.renene.2011.03.002 Unidad de Planeación Minero Energética and IDEAM, Mapas de radiación solar global sobre una superficie plana, in Atlas de Radiación Solar de Colombia, 2005, pp. 25-40. Instituto de Tecnología y Formación, en: Méndez-Muñiz, J.M. and Cuervo-García, R., Energía solar fotovoltaica, Segunda ed. FC Editorial, 2005, pp. 1-245. Perales-Benito, T., Instalación de paneles solares térmicos, Cuarta Ed. Creaciones Copyright, 2007, pp. 1-146. Unidad de Planeación Minero Energética and IDEAM, Densidad de energía eólica a 20 y 50 metros de altura, in Atlas de Viento y Energía Eólica en Colombia, 2006, pp. 75-102. Reserch for Energy Optimized Building, Net zero energy builging map of international project.” [Online]. [Accessed: 21-Jul-2013] Available at: http://www.enob.info/en/net-zero-energybuildings/map/. Aalborg University, NZEB Demonstration Buildings, International Conference Towards Net Zero Energy Buildings. [Online]. [Accessed: 02-Jun-2013]. Available at: www.zeb.aau.dk.

G.A. Osma-Pinto, received his BSc. in Electrical and Industrial Engineering in 2007, and his MSc. in Electrical Engineering in 2011, from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia. He is currently a PhD candidate at the same university and a researcher in the School of Electrical Engineering at UIS. His research interests include: green buildings, energy applications, automation, exergy, and micro-grids.

D.A. Sarmiento-Nova, received his BSc. degree in Electrical Engineering in 2013, from the Universidad Industrial de Santander, Bucaramanga, Colombia. He is currently an MSc. student in the School of Electrical and Computer Engineering at the University of Campinas - UNICAMP, Brazil.

N.C. Barbosa-Calderón, received her BSc. degree in Electrical Engineering in 2013, from the Universidad Industrial de Santander, Bucaramanga, Colombia. Since 2013 she has been working as an electrical engineer in the oil sector, focusing mainly on the areas of quality and engineering. She has worked as a contractor for companies such as Ecopetrol and Occidental Andina (OXY).

G. Ordóñez-Plata, received his BSc. in Electrical Engineering from the Universidad Industrial de Santander (UIS), Bucaramanga, Colombia, in 1985, and his PhD in Industrial Engineering from the Universidad Pontificia Comillas, Madrid, Spain, in 1993. He is currently a professor in the School of Electrical Engineering at UIS. His research interests include: electrical measurements, power quality, smart grids, smart metering, and competency-based education.


Área Curricular de Ingeniería Eléctrica e Ingeniería de Control Oferta de Posgrados

Maestría en Ingeniería - Ingeniería Eléctrica Mayor información:

E-mail: ingelcontro_med@unal.edu.co Teléfono: (57-4) 425 52 64


Measurement-based exponential recovery load model: Development and validation

Luis Rodríguez-García a, Sandra Pérez-Londoño b & Juan Mora-Flórez c

a Programa de Ingeniería Eléctrica, Universidad Tecnológica de Pereira, Pereira, Colombia. luferodriguez@utp.edu.co
b Programa de Ingeniería Eléctrica, Universidad Tecnológica de Pereira, Pereira, Colombia. saperez@utp.edu.co
c Programa de Ingeniería Eléctrica, Universidad Tecnológica de Pereira, Pereira, Colombia. jjmora@utp.edu.co

Received: April 29th, 2014. Received in revised form: February 20th, 2015. Accepted: July 3rd, 2015.

Abstract Load modeling is an important task in power system stability analysis and control. Taking this into account, this paper presents the development of dynamic load models using a measurement-based load modeling strategy and an improved particle swarm optimization algorithm. To accomplish this objective, a measurement-based parameter estimation method is used to identify an exponential recovery load model. Measurements are obtained by performing dynamic simulations of an IEEE 30-bus test system under several disturbances and, additionally, a cross-validation technique is applied to analyze the generalization capability of the load model. Adequate load modeling improves the comprehension of load behavior and the capability of reproducing transient events in power systems. Keywords: Measurement-based load modeling, particle swarm optimization, parameter estimation, exponential recovery load model.

Modelo de carga de recuperación exponencial basado en mediciones: Desarrollo y validación Resumen El modelado de carga es una tarea importante en el análisis de estabilidad y el control de un sistema de potencia. Teniendo en cuenta esto, en este artículo se presenta el desarrollo de modelos dinámicos de carga empleando una estrategia basada en mediciones y un algoritmo mejorado de optimización por enjambre de partículas. Para lograr este objetivo, se emplea un método de estimación de parámetros basado en mediciones para la identificación de un modelo de carga de recuperación exponencial. Las mediciones se obtienen mediante simulación dinámica de un sistema de prueba IEEE de 30 barras ante diferentes perturbaciones y se realiza adicionalmente un análisis mediante la técnica de validación cruzada para estudiar la capacidad de generalización del modelo de carga. Un modelado de carga adecuado mejora la comprensión del comportamiento de la carga y la capacidad de reproducir eventos transitorios en los sistemas de potencia. Palabras clave: modelado de carga basado en mediciones, optimización por enjambre de partículas, estimación de parámetros, modelo de carga de recuperación exponencial.

1. Introduction

Adequate planning and safe operation of power systems are strongly related to adequate knowledge of all of the elements connected to the network. The quality of the model of each component in the power system considerably affects the fidelity of simulations. Specifically, the load model represents a continuous challenge for power system analysis, due to its stochastic and time-varying characteristics [1]. Load characteristics play an important role in determining the voltage stability of a power system. For this reason, the selection of appropriate load models is one of the most challenging tasks in these studies. Different alternatives, such as static or dynamic load models, can be obtained using a component-based approach, by gathering information on individual load characteristics and then aggregating them into one single load. Another strategy, known as the measurement-based approach, uses measurements acquired at the load substation (such as voltage, active and reactive power measurements) when the power system is under a disturbance [2-3].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 131-140. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48588



These measurements are used to tune the parameters of a model structure, reducing the mismatch between the real power system and the model response to a minimum [3-4]. Over recent years, research has focused on measurement-based load modeling, due to the availability of monitoring devices connected to power systems. To develop a load model based on measurements, two requirements must be defined: a structure to represent the load characteristics and a strategy to estimate the model parameters. Several authors have proposed different strategies for load parameter estimation, where search algorithms have been used [5-9]. Examples include search algorithms based on statistical techniques, such as Least Squares (LS) parameter estimation [5] and Weighted Least Squares (WLS) parameter estimation [6]. One of the difficulties of these methods is convergence to a local minimum and high sensitivity to the initial values of the parameters (e.g., proper weights). Other categories include search algorithms that use metaheuristic techniques such as Simulated Annealing [7], Genetic Algorithms [8], and Neural Networks [9], among others. The particle swarm optimization (PSO) algorithm is an evolutionary optimization technique that has been used in several applications related to power systems [10-13]. An application to measurement-based load modeling is shown in [14], where a parameter estimation process is solved using a standard PSO algorithm for a composite load with a high penetration of distributed generation. Regarding load characteristics, the use of dynamic load models has become increasingly popular compared to static load models, although the computational cost required for their development is sometimes higher, which reduces model applicability. Although static load models, such as ZIP or exponential, are widely used for power system stability analysis, those models are unable to reproduce the dynamic response of the load in the case of transient events and often show deviations from field measurements [15]. This indicates the inadequacy of static load models in the presence of characteristics such as under-voltage motor protection, thermostatic control, under-load tap changers or slow voltage recovery, which require dynamic models for an adequate representation [16]. One of the dynamic load models used in the literature is the Exponential Recovery Load (ERL) model. This model is mostly used to approximate loads that recover slowly over a time period, from several seconds to tens of minutes, and it has been adopted in some papers for voltage stability analysis [17-18]. Another structure used to represent load dynamic behavior is the Composite Load (CL) model [19]. However, this structure has a disadvantage compared to the ERL model, relating to the number of parameters to be estimated when the measurement-based approach is applied: the ERL model has fewer parameters to estimate. Although defining a load model structure and an estimation method is enough to obtain a load model, it is very important to determine its generalization capability. A load model built from specific data measurements may not be adequate, since it may lack the ability to fit new data measurements. Therefore, it is necessary to validate the load model adaptability.

However, this analysis is not fully addressed in most of the papers that use the ERL model. According to the above, this paper focuses on developing a methodology for dynamic load modeling, whose results may be applied to voltage stability studies. This methodology is founded on measurement-based load modeling, and determines a load representation using exponential recovery load models, whose parameters are estimated using load measurements and an improved PSO algorithm. Finally, the generalization capability of such a load model is analyzed by applying a modification of the cross-validation technique. This paper is organized as follows: first, the load model is described in Section 2. A measurement-based approach for load modeling is presented in Section 3. Next, the optimization technique for the parameter estimation process is presented in Section 4. Section 5 contains the proposed methodology for the development of load models based on measurements. Simulation results, validation and discussion are presented in Section 6, and finally, conclusions are given in Section 7.

2. Dynamic load modeling

Several dynamic load model structures have been proposed, and most of them are only applicable to a particular system or to a specific study case [2]. Two of the most widely used dynamic load models are the Exponential Recovery Load (ERL) model [4] and the Composite Load (CL) model [19], which are specially designed for specific applications. For example, the ERL model is commonly used to approximate loads that slowly recover over a time period, which varies from several seconds to tens of minutes. Meanwhile, the CL model is employed in cases where Induction Motors (IM) are a dominant component. According to [2], due to the fact that IM are responsible for the consumption of approximately 60% to 70% of the total energy supplied by a power system, the CL model will quite often be applicable, but this is not a mandatory condition. Other characteristics must be evaluated in the load model selection, such as low computational effort, availability of measurements, and comparisons to determine whether the selected model structure is better than others for a specific set of measurements.

2.1. Composite load model

This structure is proposed in [19] and has been the object of many studies, where the load is represented as a combination of a static ZIP load for the representation of the load static behavior, and a third-order induction motor model for the representation of the load dynamic behavior. The dynamic part of the load model is represented using the third-order induction motor equations, as shown in (1) and (2), where E'_d is the d-axis internal EMF, E'_q is the q-axis internal EMF, ω is the mechanical speed, i_ds and i_qs are the d-axis and q-axis stator currents, respectively; R_s, X_s, X_m, R_r, X_r are the stator resistance, stator reactance, magnetizing reactance, rotor resistance and rotor reactance, respectively; H is the inertia constant and T_m is the mechanical load torque.




In the standard third-order formulation (described in detail in [19]), the equations are:

\frac{dE'_d}{dt} = -\frac{1}{T'}\left[E'_d + (X - X')\,i_{qs}\right] - (\omega - \omega_s)\,E'_q ; \qquad \frac{dE'_q}{dt} = -\frac{1}{T'}\left[E'_q - (X - X')\,i_{ds}\right] + (\omega - \omega_s)\,E'_d \qquad (1)

\frac{d\omega}{dt} = \frac{1}{2H}\left(E'_d\,i_{ds} + E'_q\,i_{qs} - T_m\right) \qquad (2)

where X = X_s + X_m is the open-circuit reactance, X' = X_s + X_m X_r/(X_m + X_r) is the transient reactance, T' = (X_r + X_m)/R_r is the rotor open-circuit time constant, and ω_s is the synchronous speed.

The static part of the load model is represented using a static ZIP load model, where the load is modeled as a combination of constant impedance, constant current and constant power loads, as presented in (3).

P_{ZIP} = P_{baseZIP}\left[P_p + P_i\left(\frac{V}{V_0}\right) + P_z\left(\frac{V}{V_0}\right)^2\right] ; \qquad Q_{ZIP} = Q_{baseZIP}\left[Q_p + Q_i\left(\frac{V}{V_0}\right) + Q_z\left(\frac{V}{V_0}\right)^2\right] \qquad (3)

where P_baseZIP, Q_baseZIP are the steady-state ZIP load active and reactive power; P_p, P_i, P_z are the constant power, constant current and constant impedance coefficients for active power, respectively; and Q_p, Q_i, Q_z are the constant power, constant current and constant impedance coefficients for reactive power, respectively. A detailed description of this model is presented in [19]. Using this load model structure, there are eleven parameters to be estimated: R_s, X_s, X_m, R_r, X_r, H, T_m, P_p, P_i, Q_p, Q_i. The first seven parameters correspond to induction motor parameters, and the last four correspond to static load parameters (the constant impedance coefficients follow from the constraints P_z = 1 - P_p - P_i and Q_z = 1 - Q_p - Q_i).

2.2. Exponential recovery load model

The exponential recovery load model, proposed in [17], is based on the exponential response of the active and reactive power after a step disturbance at the bus voltage. Non-linear first-order equations are deployed for the representation of the load response, as shown in (4) and (5).

T_p\frac{dx_p}{dt} = -x_p + P_0\left(\frac{V}{V_0}\right)^{N_{ps}} - P_0\left(\frac{V}{V_0}\right)^{N_{pt}} ; \qquad P_l = x_p + P_0\left(\frac{V}{V_0}\right)^{N_{pt}} \qquad (4)

T_q\frac{dx_q}{dt} = -x_q + Q_0\left(\frac{V}{V_0}\right)^{N_{qs}} - Q_0\left(\frac{V}{V_0}\right)^{N_{qt}} ; \qquad Q_l = x_q + Q_0\left(\frac{V}{V_0}\right)^{N_{qt}} \qquad (5)

where V_0, P_0, Q_0 are the nominal bus voltage, active power and reactive power at the load bus, respectively; x_p, x_q are state variables related to the active and reactive power dynamics; T_p, T_q are the time constants of the exponential recovery response; N_ps, N_qs are exponents related to the steady-state load response; and N_pt, N_qt are exponents related to the transient load response. Based on (4) and (5), there are six parameters in total for the ERL model that must be estimated: T_p, N_ps, N_pt, T_q, N_qs, N_qt. In the next section, the measurement-based approach used to derive these load parameters is presented.
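To make the ERL dynamics of (4) concrete, the following is a minimal Python sketch (not part of the original paper) that integrates the active power state equation with forward Euler for a given voltage trace; the parameter names mirror Table 3 and the default values are illustrative only.

import numpy as np

def erl_active_power(v, t, P0=1.0, V0=1.0, Tp=0.17, Nps=1.07, Npt=2.07):
    # Active power response P(t) of the ERL model, eq. (4), for a voltage trace v(t)
    dt = t[1] - t[0]
    xp = 0.0                          # state variable, zero in the pre-fault steady state
    p = np.empty_like(v)
    for k, vk in enumerate(v):
        ps = P0 * (vk / V0) ** Nps    # steady-state component
        pt = P0 * (vk / V0) ** Npt    # transient component
        p[k] = xp + pt                # load power, eq. (4)
        xp += dt / Tp * (-xp + ps - pt)
    return p

# usage: a 0.1 p.u. voltage sag applied at t = 1 s
t = np.arange(0.0, 10.0, 1e-3)
v = np.where(t < 1.0, 1.0, 0.9)
P = erl_active_power(v, t)

Since N_pt > N_ps, the power drops sharply at the sag and then recovers exponentially towards its steady-state value, which is the characteristic ERL behavior.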

3. The Measurement-based load modeling approach

The measurement-based load modeling approach is developed as a two-stage procedure. Once measurements are obtained, a model structure is selected to represent the load behavior. Next, a parameter estimation method is applied to determine the set of parameters that minimizes the difference between the measured data from the power system and the simulated data from the load model [16,20], as shown in Fig. 1. After an event occurs in the power system, a dataset of voltage V, active power P and reactive power Q is acquired from the load response. The voltage measurements can be used to calculate the load model response for active and reactive power, denoted as P_sim and Q_sim, respectively. Both measured and calculated values are used to evaluate the objective function of the optimization problem. Using this value, the optimization algorithm updates the load parameters θ to minimize the objective function [21]. In this paper, the PSO algorithm was used for this purpose.

Figure 1. Block diagram of the parameter estimation process for load modeling. Source: [21]

4. Particle Swarm Optimization (PSO)

The PSO algorithm is a meta-heuristic optimization technique based on the behavior of birds and fish. Individuals of the population (particles) move around in a multidimensional search space, combining local search methods and global search methods, as the particles' positions are adjusted according to their own experience and the experience of their neighboring particles, tending towards the best position encountered by themselves or their neighbors [22-23].

The initial population for the optimization algorithm is generated randomly or heuristically. Each particle is represented by a position vector, a speed vector and a fitness value. Initial speeds are generated randomly, and each speed vector is updated at every iteration using information about the relative position of the particle with respect to the best position found by itself and the best position found by the swarm, as represented in (6).

Figure 2. Illustration of the speed restart procedure. Source: Authors

v_i^{k+1} = w\,v_i^k + c_1 r_1\left(p_i^{best} - x_i^k\right) + c_2 r_2\left(g^{best} - x_i^k\right) \qquad (6)

where v_i^{k+1} is the speed of particle i at the current iteration, v_i^k is the speed of particle i at the previous iteration, p_i^best is the best position found by the particle until the current iteration, g^best is the best position found by the swarm until the current iteration, and x_i^k is the position of particle i at the current iteration. The particle speed is thus calculated as a function of the speed at the previous iteration, the relative position of the particle with respect to the best position found by itself, and the relative position of the particle with respect to the best position found by the swarm, where these components are weighted by the inertia weighting factor w, the cognitive coefficient c_1 and the social coefficient c_2. Variables r_1 and r_2 are uniformly distributed random numbers.

5. Proposed methodology

The methodology used in this paper is described below.

5.1. Selected load model

As previously mentioned, obtaining a load model using measurements requires, as an initial step, the definition of the model structure. When the CL model is used to represent the load dynamic characteristics under system disturbances, it is necessary to estimate eleven parameters, according to the model described in Section 2.1. Instead, when the ERL model is used, only six parameters must be estimated, which represents a considerable difference in computational time. For this reason, an ERL model is used in this paper for the representation of the dynamic load response. A parameter estimation method is then applied as a second stage; in this case, an improved PSO algorithm was used.

5.2. Improved PSO algorithm

It is observed that during the PSO evolution, the particle speeds start with high values to improve the exploration of the search space. Later, as the evolution process continues, the speeds of the particles decrease as they move towards the optimal solution. However, the magnitudes of the speed vectors may decrease in such a way that particles remain static in the search space [24]. This implies that considerable

computational time is required, but the solutions may not improve after some iterations. For this reason, a speed restart mechanism is applied to generate a new set of speed vectors when the particle speeds reach low values. If the maximum of all the speed vector components is lower than a tolerance, the particles are moved around their current average position, using the vectors of the relative position of each particle with respect to the average position as the new set of speeds. By means of this mechanism, particles are moved based on the calculated speed vectors, so better solutions may be found through the exploration of new areas of the search space. Fig. 2 illustrates the speed restart procedure. The PSO algorithm implemented in this paper, featuring the speed restart mechanism explained above, is presented below (a short illustrative code sketch is given at the end of this subsection):

Initial adjustments: define the number of particles N, the inertia weighting factor w, the cognitive coefficient c_1 and the social coefficient c_2.
Step 1: Generate an initial set of N particles and speeds.
Step 2: Evaluate the fitness function of each particle at its current position and initialize the information of the best particle positions and the best swarm position.
Step 3: Move the particles using the speed vectors generated in Step 1.
Step 4: For the new positions, evaluate the fitness function. If a particle finds a better solution than the value stored in its own memory, the best value and position of the particle are updated. If the swarm finds a better fitness function value than the best value stored in the swarm memory, the best value and position of the swarm are updated.
Step 5: Check the speed restart criterion. If true, go to Step 6; otherwise, go to Step 7.
Step 6: Determine the average position of the particles, calculate the vectors of the relative position of the particles with respect to the average position, and move the particles using the calculated vectors. Go to Step 8.
Step 7: Calculate a new set of speeds using (6) and move the particles.
Step 8: Check the stopping criterion. If the stopping criterion is satisfied, stop. Otherwise, go to Step 4.

Considering the PSO algorithm, the initial population is generated using uniformly distributed random numbers within specified intervals, as recommended in [25] and also adjusted experimentally. These intervals are shown in Table 1.
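As an illustration of Steps 1 to 8, the following Python sketch implements the improved PSO loop with the speed restart mechanism. It is a simplified reading of the algorithm above, not the authors' code; fitness stands for the objective function of eq. (7), and the no-improvement stopping criterion of Section 5.3 is omitted for brevity.

import numpy as np

def improved_pso(fitness, lo, hi, n=30, w=0.7, c1=1.5, c2=1.5,
                 max_iter=1000, v_tol=1e-6):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = np.random.uniform(lo, hi, (n, dim))           # Step 1: initial particles
    v = 0.1 * (hi - lo) * np.random.uniform(-1.0, 1.0, (n, dim))
    f = np.apply_along_axis(fitness, 1, x)            # Step 2: initial fitness
    pbest, fp = x.copy(), f.copy()
    gbest = pbest[fp.argmin()].copy()
    for _ in range(max_iter):
        x = x + v                                     # Step 3: move particles
        f = np.apply_along_axis(fitness, 1, x)        # Step 4: update memories
        better = f < fp
        pbest[better], fp[better] = x[better], f[better]
        gbest = pbest[fp.argmin()].copy()
        if np.abs(v).max() < v_tol:                   # Steps 5-6: speed restart
            v = x - x.mean(axis=0)                    # move around the centroid
        else:                                         # Step 7: velocity update, eq. (6)
            r1, r2 = np.random.rand(), np.random.rand()
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return gbest                                      # Step 8 reduced to max_iter here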



Table 1. Intervals for initial population.
Parameter   Minimum value   Maximum value
Tp          0.001           0.1
Nps         0               3
Npt         0               3
Tq          0.001           0.1
Nqs         0               3
Nqt         0               3
Source: [25]

Table 2. Parameters of the PSO algorithm.
Parameter                                                        Value
Inertia weighting factor (w)                                     0.7
Cognitive coefficient (c_1)                                      1.5
Social coefficient (c_2)                                         1.5
Maximum number of iterations                                     1000
Number of iterations to stop the algorithm if the solution
is not improved                                                  50
Source: Authors

5.3. The objective function and stopping criterion

The optimization process aims to minimize the sum of the squared differences between the measured data and the simulated data. The objective function is shown in (7).

\min f(\theta) = \frac{1}{N}\sum_{k=1}^{N}\left[\left(P_k - P_{sim,k}\right)^2 + \left(Q_k - Q_{sim,k}\right)^2\right] \qquad (7)

where N is the number of samples. A penalty factor is added to the fitness function if any component of the particle vector is lower than zero. The stopping criterion for the optimization process is based on monitoring the evolution of the best solution of the swarm; if this solution does not improve after a specified number of iterations, the algorithm stops. Moreover, the algorithm is stopped if a maximum number of iterations is reached.

5.4. Initialization of the PSO algorithm

The parameters of the PSO algorithm were determined by exhaustive testing, and those that presented the best performance are summarized in Table 2. The inertia weighting factor was varied from 0.5 to 1.0, and the cognitive and social coefficients were varied from 1.0 to 2.0 [12]. For the purposes of this paper, the parameters of different load models are identified using the PSO algorithm and measurements of voltage, active and reactive power after a disturbance. Once the models are estimated, a validation test is carried out in order to analyze the model response using datasets that were not included in the estimation process.

5.5. The cross-validation technique for load model validation

Once the parameters of the exponential recovery load model are estimated with the above-proposed methodology, it is necessary to validate its capability to fit new measurements obtained under other perturbations. At this point, the cross-validation technique is adopted. The cross-validation method is the technique most commonly used to determine errors in classification problems [26-27]. A modification of the cross-validation procedure is used in this paper to analyze the load model response using disturbances that were not considered during its development. The algorithm is described below:

Step 1: Simulate M disturbances and, for each case, store the voltage, active power and reactive power measurements at the study bus. Let i = 1.
Step 2: Use the i-th disturbance data as the training dataset, and develop a load model.
Step 3: Validate the load model response using the M - 1 remaining datasets and obtain a fitting error.
Step 4: If each set of measurements has been used for training, stop. Otherwise, let i = i + 1 and go to Step 2.

Figs. 3(a) and 3(b) summarize Step 2 and Step 3 of the algorithm, respectively.

Figure 3. Cross-validation procedure for load modeling. Source: Authors

6. Results and discussion

Tests were carried out on the modified IEEE 30-bus test system shown in Fig. 4. In order to simulate the load dynamic behavior, induction motors are included at buses 23, 29 and 30. Simulations are performed using NEPLAN®. The purpose is to represent each of those composite loads using exponential recovery load models, which are estimated using the methodology exposed above.

Figure 4. Modified IEEE 30-bus test system. Source: NEPLAN

The results are organized as follows: initially, the parameters of the ERL models are estimated at the study buses (23, 29 and 30) for a specific perturbation. Then, the validation of each one of the load models is presented for another perturbation. Finally, to check the generalization capability of the ERL model obtained for bus 30, the cross-validation method is applied.

6.1. Estimation of load models at study buses

Initially, the parameters of the exponential recovery load models are estimated for each load using measurements after a three-phase fault at line 24-25, with a duration of 200 ms and a fault resistance of 0.5 Ω. Voltage measurements at buses 23, 29 and 30 during the disturbance are depicted in Fig. 5. The parameters estimated using the proposed methodology are summarized in Table 3; the estimation error is also included as an indicator of estimation accuracy. Using the load model parameters of Table 3, comparisons between the real load measurements and the estimated load responses are shown in Figs. 6, 7 and 8 for loads 23, 29 and 30, respectively. From the results obtained, it can be seen that the developed load models fit accurately the measurements obtained from the power system, for both steady-state and transient conditions. For the cases considered, the optimization procedure was successful, even when the study buses are located near or far from the disturbance location.
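For illustration, a minimal sketch of the fitness evaluation of eq. (7), including the penalty for negative parameters mentioned in Section 5.3. Here erl_response is a hypothetical wrapper (not defined in the paper) around the ERL sketch of Section 2.2 that returns both the simulated active and reactive power for a parameter vector theta.

import numpy as np

def fitness(theta, v, t, p_meas, q_meas, penalty=1e6):
    # Penalize infeasible particles with negative components
    if np.any(np.asarray(theta) < 0):
        return penalty
    p_sim, q_sim = erl_response(theta, v, t)   # assumed simulator of eqs. (4)-(5)
    # eq. (7): mean squared mismatch of active and reactive power
    return np.mean((p_meas - p_sim) ** 2 + (q_meas - q_sim) ** 2)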

Figure 5. Voltage measurements at buses 23, 29 and 30 during the disturbance for load model estimation. Source: Authors

Table 3. Estimated parameters for dynamic load models at study buses.
Parameter   Load 23       Load 29       Load 30
Tp          0.17230682    0.01203057    0.00688679
Nps         1.06776851    0.7198354     0.98292799
Npt         2.07306465    2.36677772    2.37985285
Tq          0.17925739    0.01231673    0.00788255
Nqs         1.1783639     1.25871283    1.28242865
Nqt         2.07488259    2.20233816    2.19255439
Error       1.34E-05      2.69E-05      1.79E-05
Source: Authors
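As a usage example (illustrative, not from the paper), the bus-30 active power parameters of Table 3 can be fed to the erl_active_power sketch of Section 2.2 to approximate the response of Fig. 8:

P30 = erl_active_power(v, t, Tp=0.00688679, Nps=0.98292799, Npt=2.37985285)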

6.2. Validation test of the developed load models

A validation test is carried out in order to check the load model responses under a different disturbance. An outage is simulated for the transformer connected between buses 27 and 28. Voltage measurements at the study buses during this disturbance are depicted in Fig. 9.

Figure 6. Comparison between estimated load response and real load measurements at bus 23. Source: Authors




Figure 7. Comparison between estimated load response and real load measurements at bus 29. Source: Authors

Figure 9. Voltage measurements at buses 23, 29 and 30 for validation test. Source: Authors

Figure 8. Comparison between estimated load response and real load measurements at bus 30. Source: Authors

Using the parameters calculated in Section 6.1, the load model responses and their comparisons with the real load behavior are given in Figs. 10, 11 and 12. From Figs. 10 to 12, the load models respond adequately for the active and reactive power of each load. Table 4 summarizes the validation fitting errors for each load. Considering the nature and characteristics of the load response, it was expected that the exponential recovery load model would adjust accurately to the load response under a different disturbance scenario.

Figure 10. Comparison between estimated load response and load measurements at bus 23. Source: Authors

6.3. Generalization capability analysis

One important aspect related to load modeling is the model generalization capability. After a load model is obtained, low estimation errors are expected, indicating that the load model accurately fits the measurements used for the model development. Nevertheless, it is necessary to analyze the capability of the load model to represent the load behavior under disturbances that were not considered during the estimation process. A specific validation test was carried out in Section 6.2; however, an analysis of generalization is required for different sets of disturbance measurement data.

Table 4. Validation errors for each load.
Load 23     Load 29     Load 30
2.14E-06    1.47E-05    8.12E-06
Source: Authors

Figure 11. Comparison between estimated load response and load measurements at bus 29. Source: Authors

Figure 13. Voltage measurements for cross validation test. Source: Authors

Figure 12. Comparison between estimated load response and load measurements at bus 30. Source: Authors

A cross-validation test is carried out to verify the generalization capabilities of the developed load models. Five disturbances are simulated and the voltage, active and reactive power measurements at bus 30 are stored:

Data 1: Outage of line 29-30.
Data 2: Three-phase fault at bus 14, duration of 180 ms, fault resistance of 2 Ω.
Data 3: Excitation loss of a synchronous motor at bus 5.
Data 4: Tap change of the transformer connecting buses 27 and 28.
Data 5: Three-phase fault at bus 25, duration of 150 ms, fault resistance of 5 Ω.

Voltage measurements at bus 30 for the simulated disturbances are depicted in Fig. 13. The cross-validation process is described below. Initially, a load model is obtained using the measurement set Data 1.


Table 5. Estimation errors.
Load model at bus 30    Estimation error
Model 1                 3.18E-09
Model 2                 0.00000322
Model 3                 1.54E-09
Model 4                 5.77E-08
Model 5                 0.00012223
Source: Authors

Table 6. Cross-validation errors for each model.
           Data 1      Data 2       Data 3      Data 4       Data 5
Model 1    -           3.06E-05     4.15E-08    4.59E-05     0.00121373
Model 2    5.05E-06    -            2.16E-06    1.45E-06     0.00068257
Model 3    1.28E-07    3.90E-05     -           6.22E-05     0.00130358
Model 4    4.58E-06    5.54E-06     1.99E-06    -            0.00073917
Model 5    5.90E-05    0.00017086   2.07E-05    0.00031594   -
Source: Authors

Subsequently, using the parameters obtained by means of Data 1, the load model response is validated using the other measurement sets, and an error index is calculated for each validation test. The procedure is then repeated by obtaining a new load model using the measurement set Data 2 and validating it with the other measurement sets, and so on successively for each dataset. Table 5 contains the estimation error for each of the obtained models. The highest estimation error is obtained for Model 5, which is the scenario with the greatest voltage variation. In this case, it has to be considered that the exponential recovery load model has limitations in representing the load response after considerable voltage variations. Table 6 summarizes the cross-validation errors. Each model is validated with the remaining data and a validation error is calculated (rows represent the studied load model, and columns represent the dataset used for validation). From Table 6, it is clear that the load model response in a validation test is accurate in most cases. A higher error occurs when Data 5 is used for validation; this indicates that a load model obtained using small-disturbance data may not be accurate enough to represent the load behavior for disturbances where voltage variations are greater. On the other hand, as Model 5 presents the highest error during the estimation stage, high validation errors are obtained in comparison with the other models. This implies that load models developed using considerable-voltage-variation disturbance data are not necessarily more accurate, and their generalization capability may be limited for these disturbances.
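A minimal sketch of the modified cross-validation of Section 5.5 as applied in this section: each dataset trains one model and the remaining datasets score it. Here estimate and validate are placeholders for the PSO-based parameter estimation and the fitting-error evaluation of eq. (7).

def cross_validate(datasets):
    errors = {}
    for i, train in enumerate(datasets):
        theta = estimate(train)                  # Step 2: fit model i
        errors[i] = {j: validate(theta, d)       # Step 3: score on the rest
                     for j, d in enumerate(datasets) if j != i}
    return errors                                # entries correspond to Table 6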

7. Concluding remarks

The measurement-based load modeling approach has certain advantages over component-based approaches, because the first strategy allows a more accurate representation of the electrical load characteristics. For this reason, a methodology for dynamic load modeling based on measurements, which uses the ERL model and an improved PSO algorithm, is developed, and its generalization capabilities are also studied.

Furthermore, ERL models can be developed using a set of measurements obtained at one specific operating condition and can be applied to other power system operating conditions with satisfactory results. This is an important advantage, because measurements for load modeling can be obtained from normal operational disturbances, and thus additional and harmful disturbances are not strictly required for the development of accurate load models. However, it is important to highlight that this is a partial conclusion, because the generalization capability of the load model under changes in system configuration or load composition is not analyzed in this paper and is proposed as future research.

Another major contribution of this paper is to demonstrate that an exponential recovery load model can be satisfactorily used when a specific bus has a considerable number of induction motors. This represents an advantage compared to the CL model, which has a high number of parameters to be estimated. In general terms, exponential recovery load models are accurate in representing the load response at different operating scenarios; however, this response is limited when voltage variations are considerable (greater than 20% with respect to the nominal voltage).

References

[1] Morison, K., Hamadani, H. and Wang, L., Practical issues in load modeling for voltage stability studies, Proceedings of IEEE Power Engineering Society General Meeting, pp. 1392-1397, 2003. DOI: 10.1109/pes.2003.1267355
[2] Kundur, P., Power system stability and control, 1st Ed., USA, McGraw-Hill, 1994.
[3] IEEE Task Force on Load Representation for Dynamic Performance, Load representation for dynamic performance analysis (of power systems), IEEE Transactions on Power Systems, 8 (2), pp. 472-482, 1993. DOI: 10.1109/59.260837
[4] Choi, B.-K., Chiang, H.-D., Li, Y., Li, H., Chen, Y.-T., Huang, D.-H. and Lauby, M., Measurement-based dynamic load models: Derivation, comparison and validation, IEEE Transactions on Power Systems, 21 (3), pp. 1276-1283, 2006. DOI: 10.1109/TPWRS.2006.876700
[5] Liu, Q.S., Chen, Y.P. and Duan, D.F., The load modeling and parameter identification for voltage stability analysis, International Conference on Power System Technology, pp. 2030-2033, 2002.
[6] Hiskens, I.A., Nonlinear dynamic model evaluation from disturbance measurements, IEEE Transactions on Power Systems, 16 (4), pp. 702-710, 2001. DOI: 10.1109/59.962416
[7] Knyazkin, V., Cañizares, C. and Söder, L., On the parameter estimation and modeling of aggregate power system loads, IEEE Transactions on Power Systems, 19 (2), pp. 1023-1031, 2004. DOI: 10.1109/TPWRS.2003.821634
[8] Ju, P., Handschin, E. and Karlsson, D., Nonlinear dynamic load modelling: Model and parameter estimation, IEEE Transactions on Power Systems, 11 (4), pp. 1689-1697, 1996. DOI: 10.1109/59.544629
[9] Hiyama, T., Tokieda, M. and Hubbi, W., Artificial neural network based dynamic load modeling, IEEE Transactions on Power Systems, 12 (4), pp. 1576-1583, 1997. DOI: 10.1109/59.627861
[10] Del Valle, Y., Venayagamoorthy, G.K., Mohagheghi, S., Hernandez, J.C. and Harley, R.G., Particle swarm optimization: Basic concepts, variants and applications in power systems, IEEE Transactions on Evolutionary Computation, 12 (2), pp. 171-195, 2008. DOI: 10.1109/TEVC.2007.896686
[11] Yoshida, H., Kawata, K., Fukuyama, Y., Takayama, S. and Nakanishi, Y., A particle swarm optimization for reactive power and voltage control considering voltage security assessment, IEEE Transactions on Power Systems, 15 (4), pp. 1232-1239, 2000. DOI: 10.1109/59.898095



[12] Huang, C.-M., Huang, C.-J. and Wang, M.-L., A particle swarm optimization to identifying the ARMAX model for short-term load forecasting, IEEE Transactions on Power Systems, 20 (2), pp. 1126-1133, 2005. DOI: 10.1109/TPWRS.2005.846106
[13] Panigrahi, B.K., Ravikumar-Pandi, V. and Das, S., Adaptive particle swarm optimization approach for static and dynamic economic load dispatch, Energy Conversion and Management, 49 (6), pp. 1407-1415, 2008. DOI: 10.1016/j.enconman.2007.12.023
[14] Rodriguez-Garcia, L., Perez-Londono, S. and Mora-Florez, J., A methodology for composite load modeling in power systems considering distributed generation, Proceedings of IEEE/PES Transmission and Distribution: Latin America Conference and Exposition (T&D-LA), pp. 1-6, 2012. DOI: 10.1109/tdcla.2012.6319122
[15] Kosterev, D.N., Taylor, C.W. and Mittelstadt, W.A., Model validation for the August 10, 1996 WSCC system outage, IEEE Transactions on Power Systems, 14 (3), pp. 967-979, 1999. DOI: 10.1109/59.780909
[16] Concordia, C. and Ihara, S., Load representation in power system stability studies, IEEE Transactions on Power Apparatus and Systems, PAS-101 (4), pp. 969-977, 1982. DOI: 10.1109/TPAS.1982.317163
[17] Karlsson, D. and Hill, D.J., Modelling and identification of nonlinear dynamic loads in power systems, IEEE Transactions on Power Systems, 9 (1), pp. 157-166, 1994. DOI: 10.1109/59.317546
[18] Zeng, Y.G., Berizzi, G. and Marannino, P., Voltage stability analysis considering dynamic load model, Proceedings of the 4th International Conference on Advances in Power System Control, Operation and Management, APSCOM-97, Hong Kong, 1, pp. 396-401, 1997. DOI: 10.1049/cp:19971866
[19] He, R.M., Ma, J. and Hill, D., Composite load modeling via measurement approach, IEEE Transactions on Power Systems, 21 (2), pp. 663-672, 2006. DOI: 10.1109/TPWRS.2006.873130
[20] IEEE Task Force on Load Representation for Dynamic Performance, Standard load models for power flow and dynamic performance simulation, IEEE Transactions on Power Systems, 10 (3), pp. 1302-1313, 1995. DOI: 10.1109/59.466523
[21] Bai, H., Zhang, P. and Ajjarapu, V., A novel parameter identification approach via hybrid learning for aggregate load modeling, IEEE Transactions on Power Systems, 24 (3), pp. 1145-1154, 2009. DOI: 10.1109/TPWRS.2009.2022984
[22] Olariu, S. and Zomaya, A., Handbook of bioinspired algorithms and applications, Chapman and Hall/CRC, 2005. DOI: 10.1201/9781420035063
[23] Gómez-González, M., Sistema de generación eléctrica con pila de combustible de óxido sólido alimentado con residuos forestales y su optimización mediante algoritmos basados en nubes de partículas, PhD. Thesis, Universidad Nacional de Educación a Distancia, España, 2008.
[24] Abdelhalim, M.B. and Habib, S.E.-D., Particle swarm optimization for HW/SW partitioning [online]. 2009 [Accessed: 11-Apr-2014]. Available at: http://www.intechopen.com/books/particle_swarm_optimization/particle_swarm_optimization_for_hw_sw_partitioning
[25] Navarro, I.R., Dynamic power system load - Estimation of parameters from operational data, PhD. Thesis, Lund University, Sweden, 2005.
[26] Yang, K., Wang, H., Dai, G., Hu, S., Zhang, Y. and Xu, J., Determining the repeat number of cross-validation, Proceedings of the International Conference on Biomedical Engineering and Informatics (BMEI), pp. 1706-1710, 2011. DOI: 10.1109/bmei.2011.6098566
[27] Webb, A.R., Statistical pattern recognition, 2nd Ed., England: John Wiley & Sons, 2002. DOI: 10.1002/0470854774

L. Rodríguez-García, received his BSc. in Electrical Engineering from the Universidad Tecnológica de Pereira (UTP), Pereira, Colombia, in 2011, and his MSc. in Electrical Engineering from the same university in 2014. His areas of interest include power system analysis, load modeling, voltage stability analysis and the application of optimization techniques to electrical engineering studies. Mr. Rodríguez-García is a member of ICE3 (Col) Research Group on Power Quality and System Stability.

S. Pérez-Londoño, received her BSc. in Electrical Engineering from the Universidad Tecnológica de Pereira (UTP), Pereira, Colombia, in 2000, her MSc. in Electrical Engineering from UTP in 2005, and her PhD. degree in Engineering from the Universidad Nacional de Colombia in 2014. Currently, she is an associate professor at the Electrical Engineering School, Universidad Tecnológica de Pereira, Pereira, Colombia. Her areas of interest include electric machines, power system stability, fault analysis and soft computing techniques. Mrs. Pérez-Londoño is a member of ICE3 (Col) Research Group on Power Quality and System Stability.

J. Mora-Flórez, received his BSc. in Electrical Engineering from the Universidad Industrial de Santander (UIS), Colombia, in 1996, his MSc. in Electrical Power from UIS in 2001, his MSc. in Information Technologies from the University of Girona (UdG), Spain, in 2003, and his PhD. in Information Technologies and Electrical Engineering from UdG in 2006. Currently, he is an associate professor at the Electrical Engineering School at the Universidad Tecnológica de Pereira, Colombia. His areas of interest include power quality, transient analysis, protective relaying and soft computing techniques.


Área Curricular de Ingeniería de Sistemas e Informática Oferta de Posgrados

Especialización en Sistemas
Especialización en Mercados de Energía
Maestría en Ingeniería - Ingeniería de Sistemas
Doctorado en Ingeniería - Sistemas e Informática
Mayor información:
E-mail: acsei_med@unal.edu.co
Teléfono: (57-4) 425 5365


Complete power distribution system representation and state determination for fault location

Andres Felipe Panesso-Hernández a, Juan Mora-Flórez b & Sandra Pérez-Londoño c

a Programa de Ingeniería Eléctrica, Universidad de la Salle, Bogotá, Colombia. afpanesso@unisalle.edu.co
b Programa de Ingeniería Eléctrica, Universidad Tecnológica de Pereira, Pereira, Colombia. jjmora@utp.edu.co
c Programa de Ingeniería Eléctrica, Universidad Tecnológica de Pereira, Pereira, Colombia. saperez@utp.edu.co

Received: April 29th, 2014. Received in revised form: February 13th, 2015. Accepted: July 17th, 2015.

Abstract The impedance-based approaches for fault location in power distribution systems determine a faulted line section. Next, these approaches require the voltages and currents at one or both ends of the line section to exactly determine the fault location. This is a challenge because, in most power distribution systems, measurements are only available at the main substation. This document presents a modeling proposal for the power distribution system and an easily implemented method to estimate the voltages and currents at the faulted line section, using the measurements at the main substation; the line, load and transformer parameters; other series- and shunt-connected devices; and the power system topology. The approach proposed here is tested using a fault locator based on superimposed components, where the distance estimation error is lower than 1.5% in all of the cases. Keywords: Fault location; forward sweep-based method; modeling; power distribution systems.

Representación completa del sistema de distribución de energía y determinación del estado para localización de fallas Resumen Las propuestas basadas en la estimación de la impedancia para localización de fallas, sirven para determinar la sección en falla. Posteriormente, éstas requieren de la estimación de tensiones y corrientes en uno o ambos terminales de la sección de línea, para determinar exactamente la localización de la falla. En la mayoría de los sistemas de distribución, esto es un reto, debido a que únicamente existen medidas en la subestación principal. Este documento contiene una propuesta de modelado del sistema de distribución y un método de fácil implementación para estimar las tensiones y las corrientes en la sección de línea bajo falla, a partir de las medidas en la subestación principal, la línea, los parámetros de la carga, el transformador y demás elementos conectados en serie y en paralelo en el sistema de potencia y la topología del sistema. La propuesta aquí presentada se probó utilizando un método de localización basado en las componentes superimpuestas, donde el error en la estimación de la distancia es menor a 1.5% en todo los casos. Palabras clave: Localización de fallas; método de barrido hacia delante, modelado, sistemas de distribución.

1. Introduction

Power quality has become fundamental over recent years, due to regulatory demands and customer requirements. Continuity has become important, and fault location in power distribution systems has attained remarkable attention. In the case of Colombia, the regulatory energy and gas commission (CREG) defines the requirements of electricity service continuity for power distribution systems by considering two indexes (IRAD and ITAD) [1].

The adequate location of permanent faults helps to reduce two aspects: the restoration time and the number of faults. The first is achieved by the timely indication of the place of the fault, reducing the searching time. The second is reached by establishing an adequate preventive maintenance schedule at the critical sectors, identified as those that have a high fault incidence.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 141-149. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48589



There are several fault location alternatives; however, considering the low instrumentation of most power distribution systems, the most suitable fault location methods are those based on the circuit model and the measurements of voltage and current at the main power substation, which allow an efficient estimation of the faulted node [2-4]. One of the disadvantages of these methods is the lack of reliable strategies for the exact determination of the voltages and currents at the faulted section. There are approaches to estimate the apparent power system reactance using the information available at the substation [5]. These additionally include capacitive effects [6], approaches to determine the load current downstream of the fault section [7], and alternatives to take into account variations in load size [8], among others. All of these approaches use approximations to estimate the voltages and currents at the faulted section, which reduce the locator performance. To estimate the voltages and currents at the faulted line section, simple power flow methods are used. These are classified into two groups: those that adapt the methods normally used in power systems, and the sweep methods, which are the most commonly used [9]. The main problem is the requirement of a complete knowledge of the analyzed power distribution system. This paper is devoted to presenting an approach that allows the voltages and currents at the faulted line sections to be estimated, using the line impedance, the load admittance and the phase measurements at the substation of the analyzed power system. It also considers the non-homogeneity along the line sections.

2. Basic aspects considered

2.1. Power system modeling

An adequate modeling strategy plays an important role in avoiding oversimplifications, in order to obtain reliable estimations of the voltages and currents at any node of the power distribution system. The transmission parameters are widely used to describe the characteristics of an overhead line or a cable [12]. In the cited reference, the transmission parameters A, B, C and D are well described as, respectively, the ratio of the open-circuit voltages, the negative transference impedance during short-circuit, the transference admittance in open circuit, and the negative ratio of the short-circuit currents. Power lines in a distribution system can also be modeled using lumped parameters in a one-line π equivalent circuit [13]. This model is suitable for power distribution systems to increase the performance of fault locators, provided all of the parameters are available. Eqs. (1) to (3) allow the voltages and currents in a π modeled line to be obtained.

V_S = A\,V_R + B\,I_R \qquad (1)

I_S = C\,V_R + D\,I_R \qquad (2)

\begin{bmatrix} V_S \\ I_S \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} V_R \\ I_R \end{bmatrix} \qquad (3)

Where the sub-indexes S and R represent the sending and the receiving ends, and the constants A, B, C and D are given in (4).

A = 1 + \frac{Z\,Y}{2}, \quad B = Z, \quad C = Y\left(1 + \frac{Z\,Y}{4}\right), \quad D = 1 + \frac{Z\,Y}{2} \qquad (4)
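The two-port relations above translate directly into code. The following Python sketch (an illustration, not part of the paper) computes the ABCD constants of eq. (4) for a π line section and propagates receiving-end phasors to the sending end with eqs. (1) and (2); Z and Y are the total series impedance and shunt admittance of the section, given as complex numbers.

def abcd_pi(Z, Y):
    # Transmission constants of the one-line pi model, eq. (4)
    A = 1 + Z * Y / 2
    B = Z
    C = Y * (1 + Z * Y / 4)
    D = 1 + Z * Y / 2
    return A, B, C, D

def sending_end(Vr, Ir, Z, Y):
    # Eqs. (1) and (2): sending-end voltage and current phasors
    A, B, C, D = abcd_pi(Z, Y)
    return A * Vr + B * Ir, C * Vr + D * Ir

Sections connected in series can be chained by feeding the output of one call into the next, which is the basis of the sweep used later in the paper.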

2.2. Superimposed components used for fault location

Normally, power distribution systems have a radial topology, heterogeneous line sections, tapped loads, single- and three-phase laterals, and measurements only available at the substation, and they usually have high fault rates. To deal with faults, location methods are proposed. These location approaches are normally defined using the currents and voltages at the faulted line section to determine the fault distance, based on measurements at one or both terminals. Fig. 1 presents a power system at the pre-fault condition, while Fig. 2 shows the faulted line section during a fault. In these figures, the sub-indexes p and f represent the pre-fault and fault situations, Rf represents the fault resistance, m is the fault distance in per unit, Zs is the equivalent source impedance, If is the fault current and Vs is the source equivalent. Precise information about voltage and current per phase (or only at the faulted phase) at the initial node of the faulted section is required to obtain the distance to the fault. The superimposed components are proposed for fault location as presented in [10] and used in [11]. This method is based on the substation measurements, to determine the values of the superimposed components of voltage and current at any node along the power line, and it is widely used by other fault location methods, such as the one proposed by Novosel et al. in [11]. A superimposed component is defined as the difference between the values of voltage and current during the fault and during the pre-fault steady state. The equivalent circuit is presented in Fig. 3, and the method is based on the fact that the superimposed components of the non-faulted phases take their minimum value at the faulted node F [2]. The fault location method presented in [11] considers the effects of the tapped loads, from the power substation to the faulted line section. However, its performance depends on an adequate estimation of the voltages and currents at the ends of the faulted line section, during the pre-fault and fault steady states.
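A minimal sketch of the definition above, assuming the phasors are available as complex NumPy arrays; the faulted node indicator follows from the minimum of the superimposed voltage of the non-faulted phases.

import numpy as np

def superimposed(v_fault, i_fault, v_prefault, i_prefault):
    # Superimposed components: fault minus pre-fault steady-state phasors
    return v_fault - v_prefault, i_fault - i_prefault

def faulted_node(dv_healthy):
    # dv_healthy: superimposed voltage of the non-faulted phases per candidate node
    return int(np.argmin(np.abs(dv_healthy)))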

Figure 1. Simplified power system at pre-fault condition. Source: The authors



Figure 2. Simplified power system during fault condition. Source: The authors

Figure 3. Superimposed representation of the power system obtained from Figs. 1 and 2. Source: The authors

Figure 4. Model of a three-phase line section considering the mutual effect (Zab = Zba, Zac = Zca, Zbc = Zcb; Yab/2 = Yba/2, Yac/2 = Yca/2, Ybc/2 = Ycb/2). Source: The authors

3. Power system representation approach

The proposed power system modeling strategy, based on matrices of transmission parameters for fault location, is described in this section.

3.1. Proposed line modeling

The parameters of the power distribution system are obtained from the information in the utility databases. Power lines are usually represented by series models that neglect the shunt admittance; these give good approximations of power distribution systems because of the low voltage level and the short length of the line sections. However, in recent proposals [6,7,14,15] the authors show that the capacitive effect has a considerable influence on fault location. As a consequence, the capacitive effect has to be considered in cases such as lightly loaded systems and long or underground lines [16].

Power distribution systems are normally unbalanced, so the one-line model is not adequate to represent distribution lines. As a consequence, it is necessary to use a better model to represent the lines and a new methodology to estimate the system state from the voltages and currents at the substation, such as the model presented in Fig. 4, which uses a three-phase representation and includes the mutual effect. From Fig. 4, the impedance and admittance parameters of the lines along the power distribution system are given in (5):

$$[Z] = \begin{bmatrix} Z_{aa} & Z_{ab} & Z_{ac} \\ Z_{ba} & Z_{bb} & Z_{bc} \\ Z_{ca} & Z_{cb} & Z_{cc} \end{bmatrix}, \qquad [Y] = \begin{bmatrix} Y_{aa} & Y_{ab} & Y_{ac} \\ Y_{ba} & Y_{bb} & Y_{bc} \\ Y_{ca} & Y_{cb} & Y_{cc} \end{bmatrix} \qquad (5)$$

Then, the current flowing through each phase of the series branch is obtained from the receiving-end (load) quantities, starting with phase a, as presented in (6):

$$I_{La} = I_{Ra} + \frac{1}{2}\left(Y_{aa}V_{Ra} + Y_{ab}V_{Rb} + Y_{ac}V_{Rc}\right) \qquad (6)$$

Eq. (6) is generalized as presented in (7) for phases a, b and c, represented by the numbers 1, 2 and 3:

$$I_{Li} = I_{Ri} + \frac{1}{2}\sum_{j=1}^{3} Y_{ij}\,V_{Rj}, \qquad i = 1, 2, 3 \qquad (7)$$

The next step is to arrange the phase currents in matrix form, as given in (8):

$$[I_L] = [I_R] + \frac{1}{2}[Y][V_R] \qquad (8)$$

The admittance matrix is redefined as presented in (9):

$$[Y'] = \frac{1}{2}[Y] \qquad (9)$$

Having the line current IL of each phase, it is possible to find the voltages at the sending node, as presented in (10) for phase a:

$$V_{Sa} = V_{Ra} + Z_{aa}I_{La} + Z_{ab}I_{Lb} + Z_{ac}I_{Lc} \qquad (10)$$

Similarly, (11) is obtained for the other phases or wires, where i represents the corresponding phase:

$$V_{Si} = V_{Ri} + \sum_{j=1}^{3} Z_{ij}\,I_{Lj} \qquad (11)$$


Eq. (11) is arranged in matrix form and, by considering (8), the sending voltage becomes a function of the variables at the receiving node, as presented in (12), where [I] is the identity matrix:

$$[V_S] = \left([I] + \frac{1}{2}[Z][Y]\right)[V_R] + [Z][I_R] \qquad (12)$$

Likewise, the currents at the sending node are given as a function of the phase currents and the voltages. For phase a,

$$I_{Sa} = I_{La} + \frac{1}{2}\left(Y_{aa}V_{Sa} + Y_{ab}V_{Sb} + Y_{ac}V_{Sc}\right) \qquad (13)$$

and the other phases are presented in (14):

$$I_{Si} = I_{Li} + \frac{1}{2}\sum_{j=1}^{3} Y_{ij}\,V_{Sj}, \qquad i = 1, 2, 3 \qquad (14)$$

Then, the sending currents are arranged in matrix form and estimated from the variables of the receiving node, as presented in (15):

$$[I_S] = [Y]\left([I] + \frac{1}{4}[Z][Y]\right)[V_R] + \left([I] + \frac{1}{2}[Y][Z]\right)[I_R] \qquad (15)$$

In summary, [VS] and [IS] are described in (16):

$$\begin{bmatrix} [V_S] \\ [I_S] \end{bmatrix} = \begin{bmatrix} [A] & [B] \\ [C] & [D] \end{bmatrix} \begin{bmatrix} [V_R] \\ [I_R] \end{bmatrix} \qquad (16)$$

whose matrix of transmission parameters A, B, C and D is given in (17):

$$[A] = [I] + \frac{1}{2}[Z][Y], \quad [B] = [Z], \quad [C] = [Y]\left([I] + \frac{1}{4}[Z][Y]\right), \quad [D] = [I] + \frac{1}{2}[Y][Z] \qquad (17)$$

Eq. (16) is a generalized expression that involves the voltages and currents at the sending and receiving nodes, and the transmission matrix elements shown in (17) are analogous to the scalar elements presented in (4). Following the development from (6) to (17), it is possible to derive a general procedure that considers power lines with a neutral wire and the ground effect (Carson's correction). In this case the resulting matrices A, B, C and D in (17) have an n × n dimension and, therefore, it may be necessary to reduce their dimensions as presented in [17] to obtain square 3×3 matrices (Kron reduction technique).
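A minimal sketch of the Kron reduction step mentioned above (our addition; the 4×4 matrix values are illustrative and are not taken from [17]):

```python
import numpy as np

# Sketch of the Kron reduction: a 4x4 phase-impedance matrix (three phases
# plus an explicit neutral, e.g. after Carson's correction) is reduced to an
# equivalent 3x3 matrix by eliminating the neutral row/column.
def kron_reduce(Z, keep=3):
    """Eliminate rows/columns keep..n-1 (e.g. the neutral conductor)."""
    Zpp = Z[:keep, :keep]
    Zpn = Z[:keep, keep:]
    Znp = Z[keep:, :keep]
    Znn = Z[keep:, keep:]
    return Zpp - Zpn @ np.linalg.inv(Znn) @ Znp

# Illustrative self and mutual terms, not data from the paper
Z4 = (0.3 + 0.8j) * np.eye(4) + (0.1 + 0.3j) * (np.ones((4, 4)) - np.eye(4))
Z_abc = kron_reduce(Z4)     # 3x3 equivalent seen by phases a, b, c
print(np.round(Z_abc, 4))
```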

3.2. Series devices modeling

The series elements are located along the system lines and can be modeled in the general form given in (4), with the matrix elements presented in (18). Capacitors, reactors, reclosers and active power compensators, among others, are represented by an equivalent impedance, as shown in Fig. 5:

$$\begin{bmatrix} [V_S] \\ [I_S] \end{bmatrix} = \begin{bmatrix} [I] & [\Psi] \\ [0] & [I] \end{bmatrix} \begin{bmatrix} [V_R] \\ [I_R] \end{bmatrix} \qquad (18)$$

Matrix [Ψ] contains the model that represents the impedance of each element, such as capacitors, inductors or any other device connected in series with the network.

Figure 5. Diagram of elements connected in series at the power distribution line. Source: The authors

3.3. Shunt devices modeling

The model of shunt-connected devices such as loads and compensators, among others, is similar to that presented for series-connected devices; the difference is that these shunt elements are exposed to the phase voltage, which is assumed equal at the sending and receiving nodes, as presented in Fig. 6. Eq. (19) presents the variation of the system current when shunt elements of admittance [YL] are connected to the radial network:

$$\begin{bmatrix} [V_S] \\ [I_S] \end{bmatrix} = \begin{bmatrix} [I] & [0] \\ [Y_L] & [I] \end{bmatrix} \begin{bmatrix} [V_R] \\ [I_R] \end{bmatrix} \qquad (19)$$

Figure 6. Diagram of a load shunt connected to the power distribution feeder. Source: The authors
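The two element models in (18) and (19) are straightforward to assemble; the following sketch (our addition, with placeholder element values) builds them as 6×6 transmission blocks:

```python
import numpy as np

# Sketch of Eqs. (18)-(19): series and shunt elements as 6x6 transmission
# blocks relating [V; I] at the two ends. Psi and YL are placeholder 3x3
# matrices (a series impedance and a shunt load admittance, assumed values).
I3, O3 = np.eye(3), np.zeros((3, 3))

def series_block(Psi):          # Eq. (18): A = I, B = Psi, C = 0, D = I
    return np.block([[I3, Psi], [O3, I3]])

def shunt_block(YL):            # Eq. (19): A = I, B = 0, C = YL, D = I
    return np.block([[I3, O3], [YL, I3]])

Psi = (0.5 + 1.2j) * I3         # e.g. a series reactor, assumed values
YL = (0.02 - 0.01j) * I3        # e.g. a tapped load, assumed values
print(series_block(Psi).shape, shunt_block(YL).shape)   # (6, 6) (6, 6)
```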


3.3.1. Tapped loads

Power distribution systems supply energy to different load types such as single-, two- and three-phase connected loads. According to the connection between the lines and the reference, the value of the load admittance changes, implying a variation of the C parameter of the transmission matrix, as presented in (19).

3.3.2. Lumped radial loads

To find the voltage and current at the faulted node (F) of the power system presented in Fig. 7, the admittance of all the equivalent laterals seen by the analyzed feeder has to be determined [18]. In the case of a fault located at the line section between nodes 8 and 9 of Fig. 7, it is necessary to know the equivalent admittance of the radial composed of the nodes 0-1-4-8-9; hence, the admittances observed by the radial at nodes 1, 4 and 8 have to be estimated.

Figure 7. Faulted power distribution system. Source: The authors

In [18] the authors propose eq. (20), which is used to obtain the equivalent admittance of a lateral at node n, considering the parameters of such a lateral:

$$[Y_n^{eq}] = [Y_n] + \left([Z_n] + [Y_{n+1}^{eq}]^{-1}\right)^{-1} \qquad (20)$$

where Yn^eq is the equivalent admittance of the loads and line sections from node n to the end node of the lateral, Yn is the admittance matrix of the other line sections located at node n, Zn is the impedance matrix of the line between nodes n and n+1, and Yn+1^eq is the equivalent admittance matrix at node n+1. For any node n that contains equivalent laterals and tapped loads, the matrix C is given by (21):

$$[C] = [Y_n^{eq}] + [Y_{Ln}] \qquad (21)$$

where YLn is the load admittance at node n. A similar technique is proposed in [6] to define the current that flows into the circuit downstream of the faulted node.
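The recursion in (20) amounts to a backward sweep along the lateral; a minimal sketch (our addition, with assumed 3×3 values) is:

```python
import numpy as np

# Sketch of Eq. (20): starting from the last node of a lateral and sweeping
# back to its root, the equivalent admittance at node n combines the local
# admittance Yn with the downstream equivalent behind the section impedance
# Zn. All values below are illustrative, not data from the paper.
def lateral_equivalent(Y_nodes, Z_sections):
    """Y_nodes[k]: shunt admittance at node k; Z_sections[k]: line k -> k+1."""
    Y_eq = Y_nodes[-1]                                       # end-of-lateral node
    for Yn, Zn in zip(Y_nodes[-2::-1], Z_sections[::-1]):
        Y_eq = Yn + np.linalg.inv(Zn + np.linalg.inv(Y_eq))  # Eq. (20)
    return Y_eq

I3 = np.eye(3)
Y_nodes = [(0.01 - 0.005j) * I3 for _ in range(4)]           # assumed loads
Z_sections = [(0.4 + 1.0j) * I3 for _ in range(3)]           # assumed sections
print(np.round(lateral_equivalent(Y_nodes, Z_sections), 5))
```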

3.4. Distribution power transformers modeling

Transformers are special elements connected in series with the lines of the distribution feeders. The inclusion of these elements causes variations in the voltage and current levels; for that reason, the applied considerations differ from those previously presented. The authors of [19,20] introduce a generalized model, in admittance matrix form, for different transformer connections, which is presented in (22):

$$\begin{bmatrix} [I_p] \\ [I_s] \end{bmatrix} = \begin{bmatrix} [Y_{pp}] & [Y_{ps}] \\ [Y_{sp}] & [Y_{ss}] \end{bmatrix} \begin{bmatrix} [V_p] \\ [V_s] \end{bmatrix} \qquad (22)$$

where the subscripts p and s indicate the primary and secondary windings, respectively. This admittance model can be turned into a transmission matrix that relates the same elements, as shown in (23):

$$\begin{bmatrix} [V_p] \\ [I_p] \end{bmatrix} = \begin{bmatrix} -[Y_{sp}]^{-1}[Y_{ss}] & [Y_{sp}]^{-1} \\ [Y_{ps}] - [Y_{pp}][Y_{sp}]^{-1}[Y_{ss}] & [Y_{pp}][Y_{sp}]^{-1} \end{bmatrix} \begin{bmatrix} [V_s] \\ [I_s] \end{bmatrix} \qquad (23)$$

Similarly to the models presented in earlier sections, in three-phase systems the submatrices A, B, C and D have 3×3 dimensions. Moreover, eqs. (24) to (26) show the YI, YII and YIII matrices, which generalize the most common connections of three-phase transformers, as shown in Table 1:

$$Y_I = y_t \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (24)$$

$$Y_{II} = \frac{y_t}{3} \begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix} \qquad (25)$$

$$Y_{III} = \frac{y_t}{\sqrt{3}} \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & 0 & -1 \end{bmatrix} \qquad (26)$$

where yt is the per-unit dispersion (leakage) admittance of the power transformer. Finally, in [19] the authors describe the admittance matrix for any connection of transformer banks, considering taps on both sides.

Table 1. Connection submatrices for voltage-reducer transformers

  Connection            Self-admittance    Mutual admittance
  Primary   Secondary   Ypp      Yss       Yps      Ysp
  Yg        Yg          YI       YI        -YI      -YI
  Yg        Y           YII      YII       -YII     -YII
  Yg        Δ           YI       YII       YIII     YIII^T
  Y         Yg          YII      YII       -YII     -YII
  Y         Y           YII      YII       -YII     -YII
  Y         Δ           YII      YII       YIII     YIII^T
  Δ         Yg          YII      YI        YIII     YIII^T
  Δ         Y           YII      YII       YIII     YIII^T
  Δ         Δ           YII      YII       -YII     -YII

Source: [19]
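A short sketch of (24)-(26) (our addition; yt is an assumed per-unit value) builds the three submatrices, from which Table 1 selects the blocks for a given connection:

```python
import numpy as np

# Sketch of Eqs. (24)-(26): the submatrices that assemble the nodal
# admittance model of a three-phase transformer, scaled by the per-unit
# leakage (dispersion) admittance yt.
def Y_I(yt):
    return yt * np.eye(3)                                   # Eq. (24)

def Y_II(yt):
    return (yt / 3) * np.array([[ 2, -1, -1],
                                [-1,  2, -1],
                                [-1, -1,  2]])              # Eq. (25)

def Y_III(yt):
    return (yt / np.sqrt(3)) * np.array([[-1,  1,  0],
                                         [ 0, -1,  1],
                                         [ 1,  0, -1]])     # Eq. (26)

yt = 0.05 - 0.5j          # assumed per-unit value, for illustration only
# Example: grounded-wye / delta (Yg-D) row of Table 1
Ypp, Yss, Yps, Ysp = Y_I(yt), Y_II(yt), Y_III(yt), Y_III(yt).T
print(np.round(Ypp, 3), np.round(Yss, 3), sep="\n")
```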


4. Forward updating method for fault location


In most power distribution systems the voltages and currents are measured at the substation, so it is necessary to obtain an expression that determines these values at the faulted line section. Applying expressions (17)-(19) and (23), every power distribution system element is modeled as a matrix that relates the voltages and currents at its input and output, as presented in (27):

$$\begin{bmatrix} [V_{S,i}] \\ [I_{S,i}] \end{bmatrix} = \begin{bmatrix} [A_i] & [B_i] \\ [C_i] & [D_i] \end{bmatrix} \begin{bmatrix} [V_{R,i}] \\ [I_{R,i}] \end{bmatrix}, \qquad \forall\, i = 1, 2, \ldots, n \qquad (27)$$

The constants A, B, C and D are estimated from the system parameters for each section or element i, as presented in Section 3. Using the parameters of the power distribution system to obtain the line impedance and susceptance matrices, it is possible to obtain the voltage and current at one end of a line knowing only the values at the other end. This is useful because, in case of a fault, there is no need to execute a three-phase power flow; without complete knowledge of the value and location of the fault impedance, such a power flow would be worthless anyway. Additionally, considering that all of the power distribution system elements are connected in cascade, it is possible to relate the substation measurements to the voltage and current values at any node of the analyzed feeder, as presented in (28):

$$\begin{bmatrix} [V_S] \\ [I_S] \end{bmatrix} = \left(\prod_{i=1}^{n} \begin{bmatrix} [A_i] & [B_i] \\ [C_i] & [D_i] \end{bmatrix}\right) \begin{bmatrix} [V_n] \\ [I_n] \end{bmatrix} \qquad (28)$$

so that the voltages and currents at node n follow from the measured substation phasors by inverting the cascaded transmission matrix.
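The cascade in (27)-(28) reduces to an ordered product of 6×6 blocks; the following sketch (our addition, with assumed section parameters and substation phasors) recovers the phasors at a downstream node:

```python
import numpy as np

# Sketch of Eqs. (27)-(28): each element between the substation and the node
# of interest contributes one 6x6 transmission block; their ordered product
# maps the node quantities to the substation ones, so the node voltage and
# current follow from the measured substation phasors by inversion.
I3 = np.eye(3)

def line_block(Z, Y):                       # three-phase pi section, Eq. (17)
    return np.block([[I3 + 0.5 * (Z @ Y), Z],
                     [Y @ (I3 + 0.25 * (Z @ Y)), I3 + 0.5 * (Y @ Z)]])

Z = (0.3 + 0.8j) * I3                       # assumed section impedance
Y = 4.0e-6j * I3                            # assumed section admittance
blocks = [line_block(Z, Y) for _ in range(3)]   # three cascaded sections

T = np.linalg.multi_dot(blocks)             # product in Eq. (28)
VS = 14.4e3 * np.exp(1j * np.radians([0, -120, 120]))   # substation voltages
IS = 80.0 * np.exp(1j * np.radians([-10, -130, 110]))   # substation currents
node = np.linalg.solve(T, np.concatenate([VS, IS]))     # [V_n; I_n]
print(np.round(np.abs(node[:3]), 1))        # voltage magnitudes at node n
```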

5. Simulation results and analysis

In this section, several tests are performed to show the results obtained by the application of the proposed forward updating method.

5.1. Test system

The 24.9 kV IEEE 34-bus test feeder presented in Fig. 8 is used to analyze the proposed approach. The test system contains a three-phase main feeder, single-phase laterals, multiple conductor gauges, and single- and three-phase tapped loads [21]. The fault distance estimation error is determined by using (29):

$$\%Error = \frac{\left|\text{Actual location} - \text{Estimated location}\right|}{\text{Length of distribution feeder}} \times 100\% \qquad (29)$$

Figure 8. IEEE 34-bus test feeder. Source: [21]

The test system in [21] presents different load models such as constant impedance, constant current and constant power. For testing purposes, the proposed forward updating method considers these same load models.

5.2. Comparison of magnitudes and phase angles

To validate the proposed approach, which is used to obtain the voltage and current per phase at any node of the power system when a fault occurs, a single-phase to ground fault (a-g) is simulated at several nodes along the main radial of the test system, considering a constant impedance load. Tables 2 and 3 present a comparison between the simulated and calculated values of voltage and current, respectively, for the faulted nodes 802, 850, 828 and 858, considering a fault resistance of 10 Ω. As noticed from the tables, the differences between the measured values and those calculated by the proposed methodology are very small. This shows that, knowing all of the parameters of the power system and having the measurements at the substation, it is possible to obtain a good estimation of the state at any node of the power system. Additional tests were performed using the constant current and constant power load models; the errors are presented in Figs. 9 to 12. According to these figures, the errors are lower than 1% in all of the cases, demonstrating the adequate performance of the proposed updating method.

Table 2. Comparison of performance results for nodal voltage between simulated and calculated values

                    Phase a                Phase b                 Phase c
  Node          |V| [kV]  θV [°]      |V| [kV]  θV [°]       |V| [kV]  θV [°]
  802   Sim.     5.7619   -88.183     14.1715   -153.294     14.0518   88.167
        Calc.    5.7619   -88.170     14.1711   -153.294     14.0518   88.166
  850   Sim.     2.3282   -79.535     15.3566   -159.769     14.1103   96.209
        Calc.    2.3256   -78.482     15.2604   -159.853     14.0619   96.096
  828   Sim.     2.1429   -78.386     15.3195   -159.809     14.0011   96.574
        Calc.    2.3648   -71.520     15.2262   -159.579     13.9178   96.139
  858   Sim.     1.4842   -74.152     14.9793   -159.710     13.4044   97.799
        Calc.    1.4740   -69.905     14.7723   -159.801     13.2845   97.543

Source: The authors

Table 3. Comparison of performance results for sending current between simulated and calculated values

                    Phase a                Phase b                 Phase c
  Node          |I| [A]   θI [°]      |I| [A]   θI [°]       |I| [A]   θI [°]
  802   Sim.    591.170   -88.666     28.8831   -177.300     26.4028   56.562
        Calc.   591.170   -88.666     28.8833   -177.300     26.4025   56.563
  850   Sim.    242.886   -80.033     29.4816    175.978     24.7570   59.883
        Calc.   242.889   -80.032     31.6663    176.414     26.6733   59.831
  828   Sim.    222.345   -78.728     26.2921    175.458     24.3448   59.970
        Calc.   216.493   -79.270     28.7681    175.410     26.3138   60.024
  858   Sim.    155.106   -74.692     26.0445    172.788     22.8112   59.008
        Calc.   155.938   -74.917     29.4898    174.134     26.4353   59.609

Source: The authors



Figure 9. Errors in voltage amplitude and angle, considering a constant current load model. Source: The authors

Figure 10. Errors in current amplitude and angle, considering a constant current load model. Source: The authors

Figure 11. Errors in voltage amplitude and angle, considering a constant power load model. Source: The authors

Figure 12. Errors in current amplitude and angle, considering a constant power load model. Source: The authors

Figure 13. Errors in fault location for single-phase to ground faults considering the proposed approach (a) and avoiding its use (b). Source: The authors


5.3. Test of the fault locator performance

Several tests of the superimposed-components-based fault locator are performed, considering single-phase, phase-to-phase and three-phase faults. The tests are performed first considering the original method proposed in [11], and then considering a modified version that includes the methodology proposed here to estimate the voltages and currents at each line section. Figs. 13 to 15 show the error of the fault locator for single-phase, phase-to-phase and three-phase faults, respectively. Each pair of figures (a and b) considers the options with and without applying the proposed approach to obtain the voltages and currents at the beginning of the faulted line section. For the three types of faults, the performance of the fault location technique is improved by applying the proposed approach to estimate the currents and voltages at the faulted section.

Figure 14. Errors in fault location for phase-to-phase faults considering the proposed approach (a) and avoiding its use (b). Source: The authors

Figure 15. Errors in fault location for three-phase faults considering the proposed approach (a) and avoiding its use (b). Source: The authors

6. Conclusions

This paper presents a method to adequately estimate the voltages and currents at faulted sections, using the values of the series impedance and shunt admittance of all elements of the power distribution system and the measurements at the main substation. This method is useful for improving the performance of fault locators because it does not require information about what is connected downstream of the faulted line section to perform an adequate localization. As demonstrated by the tests, good fault location performance is obtained in all of the 213 analyzed cases. Finally, an appropriate location of the faulted node helps to improve the continuity indexes of power distribution systems, maintaining electric power quality.

Acknowledgments

This work was developed within the ICE3 (Col) Research Group on Power Quality and System Stability. It was supported by the research project “Desarrollo de localizadores robustos de fallas paralelas de baja impedancia para sistemas de distribución de energía eléctrica - LOFADIS 2012, cod. 111056934979”, funded by the Colombian Institute for Science and Technology Development (COLCIENCIAS) and the Universidad Tecnológica de Pereira (UTP).


References

[1] Comisión Reguladora de Energía y Gas, Resolución CREG 097 de 2008.
[2] Mora-Flórez, J., Localización de faltas en sistemas de distribución de energía eléctrica usando métodos basados en el modelo y métodos basados en el conocimiento, PhD. Thesis, University of Girona, Spain, 2006.
[3] Saha, M., Izykowski, J. and Rosolowski, E., Fault location on power networks, Springer, London, 2010. DOI: 10.1007/978-1-84882-886-5
[4] Das, R., Determining the locations of faults in distribution systems, PhD. Thesis, University of Saskatchewan, Canada, 1998.
[5] Morales, G., Mora-Flórez, J. and Vargas, H., Método de localización de fallas en sistemas de distribución basado en gráficas de reactancias, Sci. Tech. 34, pp. 49-54, 2007.
[6] Salim, R.H., Salim, K. and Bretas, A., Further improvements on impedance-based fault location for power distribution systems, IET Gener. Transm. Distrib. 5 (4), pp. 467-478, 2011. DOI: 10.1049/iet-gtd.2010.0446
[7] Salim, R. et al., Extended fault-location formulation for power distribution systems, IEEE Trans. Power Delivery 24 (2), pp. 508-516, 2009. DOI: 10.1109/TPWRD.2008.2002977
[8] Lee, S.J. et al., An intelligent and efficient fault location and diagnosis scheme for radial distribution systems, IEEE Trans. Power Delivery 19 (2), pp. 524-532, 2004.
[9] Gallego, L.A., López, J.M. and Mejía, D.A., Flujo de potencia trifásico desbalanceado en sistemas de distribución con GD, Sci. Tech. 43, pp. 43-48, 2009.
[10] Aggarwal, R.K., Aslan, Y. and Johns, A.T., New concept in fault location for overhead distribution systems using superimposed components, IEE Developments in Power System Protection, Conf. 434, pp. 184-187, 1997.
[11] Novosel, D. et al., System for locating faults and estimating fault resistance in distribution networks with tapped loads, U.S. Patent 5,839,093, 1998.
[12] Dorf, R. and Svoboda, J.A., Electric circuits, 5th ed., Alfaomega, Mexico, 2000.
[13] Grainger, J.J. and Stevenson, W.D., Power system analysis, McGraw-Hill, 1996.
[14] Mirzai, M.A. and Afzalian, A.A., A novel fault-locator system; algorithm, principle and practical implementation, IEEE Trans. Power Delivery 25 (1), pp. 35-46, 2010. DOI: 10.1109/TPWRD.2009.2034809
[15] Hou, D. and Fisher, N., Deterministic high-impedance fault detection and phase selection on ungrounded distribution systems, Schweitzer Engineering Laboratories Inc., 2005. [Online]. Available at: https://www.selinc.com/literature/TechnicalPapers
[16] Kersting, W.H., Distribution system modeling and analysis, 2nd ed., CRC Press, Boca Raton, Florida, 2002.
[17] Anderson, P.M., Analysis of faulted power systems, The Iowa State University Press, 1973.
[18] Bedoya-Cadena, A.F., Mora-Flórez, J.J. and Pérez-Londoño, S.M., Estrategia de reducción para la aplicación generalizada de localizadores de fallas en sistemas de distribución de energía eléctrica, Rev. EIA 9 (17), pp. 9-19, 2012.
[19] Choque, J.L., Rodas, D. and Padilha, A., Distribution transformer modeling for application in three-phase power flow algorithm, IEEE Latin America Trans. 7 (2), pp. 196-202, 2009. DOI: 10.1109/TLA.2009.5256829
[20] Wang, Z., Chen, F. and Li, J., Implementing transformer nodal admittance matrices into backward/forward sweep-based power flow analysis for unbalanced radial distribution systems, IEEE Trans. Power Systems 19 (4), pp. 1831-1836, 2004.
[21] IEEE Distribution Systems Subcommittee, Radial test feeders, IEEE Standards Board, 1993. [Online]. Available at: http://www.ewh.ieee.org/soc/pes/dsacom/testfeeders/index.html

A.F. Panesso-Hernández, received his BSc. in Electrical Engineering in 2010, and his MSc. in Electrical Engineering in 2013, both from the Universidad Tecnológica de Pereira (UTP), Pereira, Colombia. Currently, he is a Professor in the Electrical Engineering Department, Facultad de Ingeniería, Universidad de La Salle (ULS), Bogotá, Colombia. He is also a researcher at the ICE3 (Col) Research Group on Power Quality and System Stability at UTP, and at the CALPOSALLE (Col) Research Group on Power Quality and Energy Systems at ULS. His research interests include: fault analysis and location, power quality, power distribution system issues and electric machines.

J. Mora-Flórez, received his BSc. in Electrical Engineering from the Industrial University of Santander (UIS), Bucaramanga, Colombia, in 1996; his MSc. in Electrical Power from UIS in 2001; his MSc. in Information Technologies from the University of Girona (UdG), Spain, in 2003; and his PhD. in Information Technologies and Electrical Engineering from UdG in 2006. Currently, he is a Professor at the Electrical Engineering School of the Universidad Tecnológica de Pereira, Colombia. His areas of interest include power quality, transient analysis, protective relaying and soft computing techniques. Mr. Mora-Flórez is a member of the ICE3 (Col) Research Group on Power Quality and System Stability.

S. Pérez-Londoño, received her BSc. in Electrical Engineering from the Technological University of Pereira (UTP), Colombia, in 2001; her MSc. in Electrical Engineering from UTP in 2005; and her PhD. in Electrical Engineering from the Universidad Nacional de Colombia in 2013. She is a professor at the Electrical Engineering School, Universidad Tecnológica de Pereira, Pereira, Colombia. Her areas of interest are electric machines, power system stability, fault analysis and soft computing techniques. Ms. Pérez-Londoño is a member of the ICE3 (Col) Research Group on Power Quality and System Stability.



The impact of supply voltage distortion on the harmonic current emission of non-linear loads

Ana Maria Blanco a, Sergey Yanchenko b, Jan Meyer a & Peter Schegner a

a Institute of Electrical Power Systems and High Voltage Engineering, Technische Universität Dresden, Dresden, Germany. ana.blanco@tu-dresden.de
b Moscow Power State Institute, Moscow, Russia. yanchenko_sa@mail.ru

Received: April 29th, 2014. Received in revised form: February 13th, 2015. Accepted: July 17th, 2015.

DOI: http://dx.doi.org/10.15446/dyna.v82n192.48591

Abstract
Electronic devices have a non-linear characteristic and emit harmonics into the low voltage grid. Different harmonic studies analyze their impact on the grids based on different types of harmonic models; the most used model is the constant current source. Measurements have shown that the harmonic currents emitted by electronic devices depend on the circuit topology and on the existing supply voltage distortion. This paper quantifies the impact of supply voltage distortion on the harmonic current emission of individual devices and on the summation of multiple devices. After a classification of the commonly used circuit topologies, a time-domain model is developed for each of them. Then the individual and combined impact of voltage harmonics on the harmonic current emission of the modeled devices is analyzed based on simulations. Finally, the impact of voltage distortion on the summation of multiple devices is analyzed and the accuracy of constant current source models is evaluated.

Keywords: power quality; nonlinear loads; harmonics; cancellation effect; diversity factors.


1. Introduction

The number of electronic devices used by residential, commercial and industrial customers is continuously increasing. These devices are nonlinear loads that inject harmonic currents into low voltage grids, which can lead to increased voltage distortion levels, additional loading of the neutral conductor, overheating or damage of capacitors for reactive power compensation, and equipment malfunction (e.g. [1]). Most of the studies that analyze the impact of these technologies on power quality are based on simulations that use an ideal current source to represent these loads (e.g. [2,3]). The parameters of the current source (magnitude and angle of the current harmonics) are usually obtained from measurements of the respective devices based on the specifications of international standards, like IEC 61000-3-2 [4]. These standards define a nearly undistorted voltage for the measurement of current harmonics. However, in low voltage grids the voltage distortion is much greater, usually with THD (total harmonic distortion) values between 3% and 5%, which change continuously during the day. This voltage distortion affects the current harmonics emitted by the electronic loads: depending on the voltage waveform characteristics (magnitude and phase angle of the harmonics), the harmonic emission of electronic loads can increase or decrease [5-9]. Therefore the current source may not be an adequate model for these kinds of loads.

The main aim of this paper is to analyze the effect of supply voltage distortion on the harmonic emission of the main household electronic devices and to determine the usefulness of a current source to model non-linear loads. In the first part, time-domain models of generic electronic devices are developed. The generic models correspond to single-phase switch-mode power supplies (SMPS) without power factor correction (PFC), with passive PFC and with active PFC, which are the main electronic topologies available on the market [10]. In the second part, the models are used to run several simulations in order to analyze the variation of the current harmonics emitted by single electronic devices when the magnitudes and angles of the voltage harmonics change. Finally, the results of simulations of a mixture of loads with distorted and undistorted voltage supply are analyzed to determine the impact on the cancellation effect.

2. Overview of electronic loads

The electronic loads available on the market have different circuit topologies, which lead to different current waveforms. Fig. 1 shows some current waveforms of different electronic devices measured under sinusoidal supply voltage. Electronic devices usually have an SMPS. Simple SMPS without PFC have a two-stage topology, consisting of a rectifying and an inverting stage. The rectifying stage includes a capacitor-fed diode bridge rectifier and provides a smoothed DC voltage to the inverter stage, which converts this DC voltage to a stabilized voltage/current signal of the shape and level required by the load. Examples of this kind of load are the compact fluorescent lamps (CFLs) with rated power below 25W (see Fig. 1).

Depending on the existence of standardized limits (e.g. IEC 61000-3-2 [4]), some devices also implement PFC. PFC methods can be classified as passive or active according to the utilized components. Passive PFC methods imply adding components like capacitors or inductors either to the input or the output of the rectifying stage, or using a valley-fill circuit, in order to improve the shape of the current pulse. Being a cost-effective and reliable way to lower current distortion, passive PFC methods nevertheless increase the size and weight of the SMPS by using bulky capacitors and inductors as low frequency filters. These drawbacks are not present in active PFC topologies, which utilize various DC-DC converters (boost, flyback, SEPIC) that shape the input current waveform by HF switching and a control circuit.

Figure 1. Measured current waveforms of electronic devices with different circuit topologies (undistorted supply voltage of 230V and 50Hz). Source: The authors.

Table 1. Characteristics of some electronic loads measured with an undistorted voltage waveform

  Device                  P [W]   Irms [mA]   PF     THDi [%]
  CFL 11W (no PFC)        10.66   80.7        0.57   107.6
  CFL 24W (no PFC)        22.08   156.8       0.61   109.5
  CFL 30W (active PFC)    27.36   121.6       0.97   15.27
  PC active PFC           48.09   258.3       0.80   30.24
  PC passive PFC          55.78   340.1       0.71   100.07

Source: The authors

Fig. 1 compares the current waveforms of some electronic loads without PFC (CFLs < 25W), with active PFC (a CFL of 30W and one desktop computer) and with passive PFC, when supplied with an undistorted voltage waveform. Table 1 contains some characteristics of the devices obtained when an undistorted voltage waveform was used for the measurements. Active PFC topologies have the lowest distortion and the best power factor; however, the almost sinusoidal waveform is superimposed with a high frequency component. This high frequency emission can also cause considerable network disturbances, but it is not the focus of this paper.

3. Modeling of electronic loads

In order to analyze the effect of voltage distortion on the harmonic emission of electronic devices, the three main topologies were simulated in the time domain. Time-domain modeling was selected because it allows the most detailed modeling of the device's behavior in terms of harmonic emission. The models were designed to accurately represent the low-order harmonic emission; as mentioned above, the high frequency components were not taken into consideration in this analysis. Each topology was simulated in Matlab® with the Simulink® package. To verify the models, several measurements with random voltage waveforms were made.


voltage and current waveforms of the 24W CFL obtained in one measurement (curve no. 2) and the corresponding simulation (curve no.1). As can be seen, the simulation properly represents the current waveform of the lamp. 3.2. Passive PFC Figure 2. Circuit topology of a SMPS without PFC. Source: [10]

Table 2. Circuit parameters of two CFLs of 11 and 24W

  Device     RL [kΩ]   Cp [µF]
  CFL 11W    8.18      2.8
  CFL 24W    3.98      5.5

Source: Adapted from [10]

Figure 3. Simulated (1) and measured (2) current waveforms of a 24W CFL for a particular voltage distortion (3). Source: The authors.

3.1. SMPS without PFC

The simulated circuit is shown in Fig. 2. The rectifying stage has a resistor (RS) to limit the current peak, a full diode bridge and a smoothing capacitor (CP). The inverting stage has only a resistance (RL), which represents the resonant inverter and the load (the tube, in the case of CFLs). This simplification does not affect the harmonic analysis because the resonant inverter normally runs at 10-40 kHz and therefore appears as a constant load to the source [11,12]. The values of the resistance RL and the capacitance Cp depend on the active power of the lamp, while Rs is fixed at about 10 Ω: the value of RL decreases and the value of Cp increases with the power of the lamp. Several ways exist to determine the value of RL; more information can be found e.g. in [8,10]. The value of Cp is found using an iterative process, until the error between the measured values and the simulations reaches a minimum. The values of RL and Cp for the two CFLs of 11W and 24W are presented in Table 2. The model was verified using measurements of both lamps under different voltage distortions. Fig. 3 shows the voltage and current waveforms of the 24W CFL obtained in one measurement (curve no. 2) and the corresponding simulation (curve no. 1); as can be seen, the simulation properly represents the current waveform of the lamp.
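For illustration, the following minimal time-domain sketch (our addition, not the authors' Simulink model) integrates this rectifier stage with a forward-Euler loop, using the Table 2 parameters of the 24W CFL and ideal diodes:

```python
import numpy as np

# Time-domain sketch of the no-PFC rectifier stage (Rs, diode bridge, Cp,
# equivalent load RL). Parameter values follow Table 2 for the 24W CFL
# (RL = 3.98 kOhm, Cp = 5.5 uF) with Rs ~ 10 Ohm; the Euler solver itself
# is our own minimal sketch.
Rs, RL, Cp = 10.0, 3.98e3, 5.5e-6
f, Vm = 50.0, 230.0 * np.sqrt(2)
dt = 1e-6
t = np.arange(0.0, 0.1, dt)                 # 5 mains cycles, reaches steady state

vc = 0.0                                    # DC-bus (capacitor) voltage
i_in = np.zeros_like(t)
for k, tk in enumerate(t):
    v = Vm * np.sin(2 * np.pi * f * tk)
    if abs(v) > vc:                         # bridge conducts (ideal diodes)
        i_rect = (abs(v) - vc) / Rs
        i_in[k] = np.sign(v) * i_rect       # line current keeps the source sign
    else:                                   # diodes blocked, Cp feeds the load
        i_rect = 0.0
    vc += dt * (i_rect - vc / RL) / Cp      # forward-Euler capacitor update

print(f"DC bus ~ {vc:.0f} V, peak line current ~ "
      f"{i_in[-int(0.02 / dt):].max():.3f} A")
```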

There are different ways to improve the power factor using passive elements, so there is no unique configuration that can represent these kinds of devices. Reference [13] presents a survey of most popular passive PFC methods, showing their great topological diversity. For this study only one configuration was implemented corresponding to a PC source. PCs with passive PFC tend to have one common circuit consisting of a capacitor-filtered diode bridge rectifier and a smoothing inductor acting as a passive harmonic filter. The inductor widens the current pulse, thus improving its distortion. Depending on usage behavior, the power consumption of a PC can vary in large ranges. To include this in the analysis, 2 load levels (50W and 100W) were considered. The first corresponds to normal mode of editing documents or browsing the internet, the second is dedicated to more power consuming applications, like games or video rendering [14]. The topology of PC power supply under study is shown in Fig. 4. The values of the model parameters for two load states are depicted in Table 3. Input current waveform of considered PC power supply presents a step at the beginning of the conduction interval due to parallel resonant circuit (Cf and Lf), which is used to widen current pulse at higher loads. Moreover, the current waveform of Fig. 5 is non-zero when diodes are reverse biased because of the presence of EMI filtering small capacitor CS. Superimposing simulated and experimental waveforms (Fig. 5) proves a good quality of developed model for PC power supply with passive PFC. 3.3. Active PFC The general idea of active PFC methods implies shaping the input current to resemble the waveform of the sinusoidal line voltage by means of HF switching of the boost DC-DC converter. Nevertheless, there is a certain level of current distortion inherent for all active PFC circuits [15] that involve zero-crossing distortion and non-sinusoidal shaping of the current waveform. It can be claimed that the control system of active PFC circuits defines on the whole its current harmonic spectrum.

Figure 4. Circuit topology of a SMPS with passive PFC. Topology with smoothing inductor (e.g. some PC power supplies). Source: [13]

Table 3. Circuit parameters of a SMPS with passive PFC

  Load        Rs [Ω]   Rf [Ω]   Cf [µF]   RL [kΩ]   Cs [µF]   Cp [µF]   Lf [mH]
  50W load    10       1.7      7         0.82      0.57      600       35
  100W load   10       0.89     7         0.82      0.57      600       35

Source: Adapted from [13]

Figure 5. Simulated (1) and measured (2) current waveforms of a PC power supply for a particular voltage distortion (3). Source: The authors

In the present paper, only the active PFC circuit of the 30W CFL is considered. The active PFC topology utilized in the 30W CFL consists of a diode bridge rectifier, a boost converter and a control circuit made up of voltage and current control loops (Fig. 6). To simplify and speed up the simulation, the operation of the boost DC-DC converter was modeled via an averaging approach: HF switching was neglected, and the transistor and diode were replaced by a so-called "DC transformer" involving a voltage source and a current source, V0·D and IL·D respectively, controlled by the duty cycle D. The voltage control stabilizes the output voltage by means of the transfer function GV(s):

$$G_V(s) = \frac{A_r}{s/\omega_c + 1} \qquad (1)$$

where Ar is the low frequency gain of the voltage controller and ωc = 2πfc is the corner frequency in rad/s. Current shaping is guaranteed by the current control loop, which compares the reference signal Iref from the voltage loop with the inductor current, with the following transfer function GI(s):

$$G_I(s) = K_c\,\frac{s/\omega_z + 1}{s\,(s/\omega_p + 1)} \qquad (2)$$

where Kc is the integrator gain and ωp and ωz are the HF pole and zero of the transfer function. Parameter calculation procedures for the boost converter and the voltage and current control loops (Table 4) are provided by [16].

Figure 6. Circuit topology of a SMPS with active PFC (30W CFL). Source: [15]

Table 4. Circuit parameters of a SMPS with active PFC

  Boost converter:       P = 30 W,  L = 8 mH,  Cout = 30 µF,  Cin = 0.3 µF,  RL = 5.2 kΩ
  Control system gains:  KI = 2.44,  KVin = 0.0031,  KVout = 6.3·10⁻³
  Voltage loop:          Ar = 1.5,  ωc = 1·10³ s⁻¹
  Current loop:          Kc = 1.5·10³,  ωz = 90.9 s⁻¹,  ωp = 4·10⁵ s⁻¹

Source: [16]

The comparison of the simulated and measured current waveforms of the active PFC 30W CFL presented in Fig. 7 shows that the modeled curve closely follows the average measured waveform. As already mentioned, the models are focused on low-order harmonic emission, and therefore high frequency emission is not reflected by the model.

Figure 7. Simulated (1) and measured (2) current waveform of a SMPS with active PFC for a particular voltage distortion (3). Source: The authors
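A minimal sketch of the two control transfer functions, built from the Table 4 values, with scipy.signal used here as a stand-in for the authors' Simulink implementation (our addition):

```python
from scipy import signal

# Sketch of Eqs. (1)-(2) with the Table 4 values:
# Ar = 1.5, wc = 1e3 1/s, Kc = 1.5e3, wz = 90.9 1/s, wp = 4e5 1/s.
Ar, wc = 1.5, 1.0e3
Kc, wz, wp = 1.5e3, 90.9, 4.0e5

# G_V(s) = Ar / (s/wc + 1)
GV = signal.TransferFunction([Ar], [1.0 / wc, 1.0])

# G_I(s) = Kc * (s/wz + 1) / (s * (s/wp + 1))
GI = signal.TransferFunction([Kc / wz, Kc], [1.0 / wp, 1.0, 0.0])

w, mag, phase = signal.bode(GV)
print(mag[0], phase[0])     # low-frequency gain ~ 20*log10(Ar)
```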


4. Analysis of individual electronic devices

The effect of supply voltage distortion on the current harmonic emission of the major circuit topologies was analyzed with the models developed in Simulink. The models of the 24W CFL, the PC power supply (at 50W load) and the 30W CFL were selected for the simulations because they represent the major circuit topologies: the SMPS without PFC, the SMPS with passive PFC and the SMPS with active PFC, respectively.

4.1. General framework

In order to obtain a comprehensive overview of the possible impact of voltage distortion on current harmonic emission, systematic simulations based on a set of synthetic waveforms with well-controlled variations of the magnitude and phase angle of the 3rd, 5th and 7th voltage harmonics were carried out. The mentioned harmonics were added to the test voltage individually as well as in several combinations. Usually the voltage distortion in public low voltage grids contains a characteristic set of harmonics; the harmonics often have prevailing values of magnitude and phase angle that do not follow the completely random behavior assumed in the generic study based on synthetic waveforms. Therefore, in a second step, harmonic measurements in public low voltage grids were analyzed to quantify the typical harmonic content of the supply voltage. These waveforms are used for a second set of simulations, which are finally compared with the simulations based on the synthetic waveforms. The following subsections describe the different waveform sets in detail and how they were obtained.

4.1.2. Synthetic waveforms

The synthetic waveforms were created in order to identify the effect of the 3rd, 5th and 7th voltage harmonics only, which are the dominating harmonic orders in most low voltage grids. Ten cases with different variations of these voltage harmonics were defined (see Table 5). The first six cases consider the effect of each voltage harmonic individually, with the magnitude varying between 0 and 4% and the angle varying between 0 and 90° or between 0 and 360°. The other cases consider the effect of two or three voltage harmonics at the same time. Sets of 300 different voltage waveforms were generated for each case; the magnitudes and angles of the voltage harmonics were drawn from uniform distributions within the ranges given in Table 5 (a sketch of this sampling follows the table). As an example, Fig. 8 shows the cumulative distribution of the 5th harmonic magnitudes and angles of the voltage waveforms sampled for case 4.

Table 5. Defined synthetic cases for the analysis (voltage harmonic magnitude V(h) in percent, phase angle θ(h) in degrees)

  Case   V(3)   θ(3)     V(5)   θ(5)     V(7)   θ(7)
  1      0-4    0-90     -      -        -      -
  2      0-4    0-360    -      -        -      -
  3      -      -        0-4    0-90     -      -
  4      -      -        0-4    0-360    -      -
  5      -      -        -      -        0-4    0-90
  6      -      -        -      -        0-4    0-360
  7      0-4    0-360    0-4    0-360    -      -
  8      0-4    0-360    -      -        0-4    0-360
  9      -      -        0-4    0-360    0-4    0-360
  10     0-4    0-360    0-4    0-360    0-4    0-360
  Ref.   0      -        0      -        0      -

Source: The authors
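A minimal sketch of this waveform sampling (our addition; any detail beyond Table 5, such as the number of samples per period, is our assumption), shown for case 10:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the synthetic-waveform generation for Table 5: magnitudes
# (percent of the fundamental) and phase angles of the selected harmonics
# are drawn from uniform distributions.
def synthetic_waveform(case=None, v1=230.0, f=50.0, n=2000):
    # Case 10 of Table 5 by default: 3rd, 5th, 7th, 0-4 %, 0-360 deg
    if case is None:
        case = {3: (4.0, 360.0), 5: (4.0, 360.0), 7: (4.0, 360.0)}
    t = np.arange(n) / n / f                        # one fundamental period
    v = np.sqrt(2) * v1 * np.sin(2 * np.pi * f * t)
    for h, (vmax, amax) in case.items():
        mag = rng.uniform(0.0, vmax) / 100.0 * v1   # percent of fundamental
        ang = np.radians(rng.uniform(0.0, amax))
        v += np.sqrt(2) * mag * np.sin(2 * np.pi * h * f * t + ang)
    return t, v

t, v = synthetic_waveform()
print(v.max(), v.min())
```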

To ensure comparability, each device was simulated for each case with the same set of voltage waveforms, and the THDi and the first odd current harmonics were recorded. Fig. 11a exemplarily shows the cumulative distribution of the 5th harmonic current magnitude obtained for case 4. It can be seen that the uniformly distributed 5th harmonic voltage magnitudes and angles result in a distribution of the 5th harmonic current magnitude which is unsymmetrical and non-uniform.

Figure 8. CDF of the 5th harmonic of the voltage waveforms generated for case 4. a) Magnitudes, b) Angles. Source: The authors

4.1.3. Real waveforms

The real voltage distortions were obtained from three-phase measurements at the PCC of two houses with residential customers, located in different low voltage grids in Germany. The magnitude and phase angle of the first 50 voltage harmonics were recorded at each site, every 30 minutes for one week. The selected measurement period considers the daily and weekly variations of the voltage distortion in residential areas. Fig. 9 shows the variation range of the voltage waveforms for both sites, calculated from the measured complex harmonic voltage spectra. Compared with the sinusoidal waveform (black), the measured waveforms show a flat-top characteristic, which is common for most residential low voltage grids. However, the sites have slightly different behavior. Fig. 10 shows the variation of the first odd harmonics in the complex plane, and Table 6 quantifies the 5th and 95th percentiles of the respective magnitudes and phase angles. Both sites show, in most cases, distinctive regions (clouds) for the harmonics, which confirms their non-random behavior. Site 1 shows a higher variation of harmonic angles, especially for the 3rd and 5th harmonics. These differences produce the variations in the voltage waveforms. The complete set of measurements was used in the simulations. It should be noted that the focus of this study is residential grids and the results only apply to public networks with residential customers. Industrial grids, especially, can have significantly different waveform characteristics (e.g. pointed top) and, subsequently, different variation ranges of the harmonic emission at different supply voltage distortions.

Figure 9. Voltage waveforms of two residential low voltage grids compared with a sinusoidal waveform of 230V. Source: The authors

Figure 10. Variation of voltage harmonics of a) site 1 and b) site 2. Source: The authors

Table 6. 5th and 95th percentiles of the voltage harmonic magnitudes and angles of the real voltage waveforms

  Site   Prc.   V(3) [%]   θ(3) [°]   V(5) [%]   θ(5) [°]   V(7) [%]   θ(7) [°]
  1      5      0.47       59         0.42       198        0.61       -4
         95     1.17       87         2.18       351        1.83       112
  2      5      0.53       52         1.49       182        0.99       -7.5
         95     1.25       94         2.68       226        1.86       29

Source: The authors

4.1.4. Analysis methodology

For further analysis, the difference between the simulation results and a reference case is calculated, in order to represent the impact of voltage distortion clearly and to compare the results of the different topologies. The simulation with an undistorted voltage waveform was selected as the reference (see Table 7). Fig. 11b exemplarily shows the distribution function of this difference for one of the synthetic cases. Finally, the 5th to 95th percentile range is obtained for each data set to quantify the variation of the THDI and of the current harmonics by the respective boxes (cf. Fig. 11b). It has to be noted that the absolute magnitude of a harmonic and its magnitude related to the fundamental lead to different interpretations; both have advantages and disadvantages (refer to the next subsection for more details).

Table 7. Reference values obtained with an undistorted voltage waveform

  Device        THDI [%]   I(1) [mA]   θ(1) [°]   I(5) [mA]   θ(5) [°]
  No PFC        108.6      102.3       25.09      53.2        137.08
  Passive PFC   91.9       236.3       357.07     105.2       285.41
  Active PFC    14.5       117.7       10.40      6.4         233.03

Source: The authors

Figure 11. Simulation results of the 24W CFL for case 4. a) CDF of the 5th harmonic current, b) CDF of the difference of the 5th harmonic current and the reference. Source: The authors

Figure 12. Variation of the THDI of the main topologies due to voltage distortion. Source: The authors

4.1.5. Analysis of THD

The total harmonic distortion THDI applied in this paper is defined as:

$$THD_I = \frac{I_{HAR}}{I_{FUN}} = \frac{\sqrt{\displaystyle\sum_{h \geq 2} \left(I^{(h)}\right)^2}}{I^{(1)}} \cdot 100\% \qquad (3)$$

where I(h) are the magnitudes of the current harmonics, IHAR is the total rms harmonic current and IFUN is the rms fundamental current.

Figure 13. Variation of the rms fundamental current. Source: The authors

Fig. 12 shows the variation of the THDI for the different cases. The THDI varies for all the topologies, and this variation increases for the cases with higher voltage distortion. Moreover, the THDI variation is larger if the voltage harmonic angles vary over a larger range (0°-360° compared to 0°-90°). This clearly shows the significant effect of the voltage harmonic angles on the harmonic emission of electronic devices. The real waveforms (referred to as "grid" in the figures) have lower harmonic magnitudes compared to the synthetic waveforms, but produce significant changes in the current distortion of the three topologies as well. Among the topologies, the SMPS without PFC is more sensitive to voltage distortion than the others: its THDI varies in the range of 12% to 40%, the current distortion of the SMPS with passive PFC in the range of 5% to 25%, and that of the active PFC device in the range of 1% to 5%. The topology without PFC behaves the most sensitively, while the active PFC seems to be the most robust topology in terms of supply voltage distortion. It should be noted that, for the real waveforms, the variation is slightly lower than for most of the cases with synthetic waveforms. However, the equipment with passive PFC behaves more sensitively than the equipment without PFC, which is in contrast to the simulations with synthetic waveforms.

The THDI is related to the fundamental and provides a good index for comparing the sensitivity of the different equipment. However, the THDI variation may result not only from the variation of the harmonic currents but also from the variation of the fundamental current. To avoid any misinterpretation, Figs. 13 and 14 present the variations of the total harmonic current IHAR and of the fundamental current IFUN separately. They show that the fundamental current of the passive PFC device is much more affected by the different voltage distortions than that of the other equipment, while the fundamental current of the active PFC device is almost independent of it. The total harmonic current of the passive PFC device has the highest variation, which is at least partly caused by its higher absolute current emission compared to the other devices. In the case of the limited phase angle variation of the 3rd harmonic voltage (case 1), the fundamental current and the total harmonic current decrease, while the case with the same variation of phase angles for the 5th harmonic (case 3) shows an increase of the fundamental and harmonic current. However, for the simulations with real waveforms, the total harmonic current tends to be smaller when compared to the sinusoidal case, especially for the equipment without PFC and with passive PFC. This effect proves, e.g., the common hypothesis of a "saturation" effect for the harmonic emission of electronic equipment.

Figure 14. Variation of the total harmonic current of the main electronic topologies. Source: The authors

4.2. Analysis of harmonic emission

The impact of voltage distortion on the individual harmonics is exemplarily discussed for the 5th harmonic current. Due to its dominance, the behavior of the 5th harmonic current in Fig. 15 is almost similar to that of the total harmonic current in Fig. 14. The variation of the current harmonic magnitudes is higher for the no-PFC and passive PFC technologies. Moreover, the variation of the 5th harmonic phase angle (Fig. 16) is higher for the active PFC device, while the devices without PFC and with passive PFC behave almost similarly and show a lower variation range. Generally, there is no linear link between the variation of the voltage harmonic phase angle and the current harmonic phase angle.

Another interesting aspect is the cross-interference between the harmonics. Zero cross-interference means that the nth harmonic voltage only results in a variation of the nth harmonic current, and all other harmonic currents are not affected [6]. Consequently, Figs. 15 and 16 would show harmonic current variation only for cases 3, 4, 7, 9 and 10. However, this is only satisfied for the harmonic current magnitudes of the active PFC device, which mainly results from the internal PFC control of the device. The simple SMPS without PFC as well as the passive PFC device show a significant cross-interference, which further complicates the development of accurate but simple models.

Figure 15. Variation of the 5th current harmonic magnitude due to voltage distortion. Source: The authors

Figure 16. Variation of the 5th current harmonic angles due to voltage distortion. Source: The authors

4.3. Summary

With these results it is clear that the traditional modeling of electronic devices with a constant current source may result in significant inaccuracies in the frequency domain. This applies not only to the synthetic waveforms, but also to the real waveforms based on measurements in public LV grids. The current harmonic emission depends on the voltage harmonic magnitudes and angles, and most of these dependencies are not linear. All topologies show more or less distinctive cross-interference, which means that the independent consideration of harmonic orders and their superposition are only applicable to a limited extent. If a quick harmonic study is needed, current source models, perhaps taking a typical voltage distortion into account, could be sufficient. However, depending on the particular voltage distortion in the grid, significant errors of several ten percent have to be expected. For accurate and reliable analysis it is better to use the time domain models.

5. Analysis of a group of electronic devices

The presence of different devices with different topologies at one connection point can cause a diversity of current harmonic phase angles and may subsequently lead to a lower magnitude of the vector sum than the arithmetic sum of the harmonic currents [12,17]. This is known as the cancellation effect and has a high influence on the total harmonic distortion emitted by larger groups of electronic devices into the grid. The cancellation effect is quantified by the phase angle diversity factor kp(h), individual for each harmonic [18]:
k (ph ) 

Vector sum  Arithmetic sum

( h) IVEC ( h) I ARI

I i 1 n

( h) i

(4)

I i( h)

i 1

(h)

where I i represents the harmonic current vector of the device i, n is the number of devices and h is the order of the harmonic. The diversity factor varies between 0 (perfect cancellation) and 1 (no cancellation). This study tries to give an idea on accuracy and representativeness of diversity

157


Blanco et al / DYNA 82 (192), pp. 150-159. August, 2015.

factors calculated based on individual harmonic currents measured at a particular voltage distortion [e.g. 18]. To analyze the effect of voltage harmonics on the cancellation between individual harmonic currents within a group of electronic devices, the time domain models of two devices without PFC (CFLs with 11W and 24W), one passive PFC device (PC power supply at 50W) and one active PFC device (CFL with 30W) are connected together to the same point. The diversity factor KP is calculated for each case (synthetic and real waveforms) using the same sets defined in the previous section. Analog to section 4, the 5th to 95th percentile range is calculated and presented as a box. The reference case at undistorted supply voltage is indicated by a black line. Fig. 17 shows the variation of the diversity factor of the 3rd and 5th current harmonics for each of the cases. The diversity factor of the 3rd harmonic shows only a very low variation. Moreover, the level of cancellation of this harmonic is almost independent from the voltage distortion. The diversity factor remains close to the one obtained at undistorted voltage (reference case). For the 5th harmonic (Fig. 17b) the variation is significant. Without voltage distortion the cancellation for the 5th harmonic current is very effective (0.38). In case of a 5th voltage harmonic varying in the range 0° ... 90° the efficiency increases, but for most of the other cases, a partial decrease of the diversity can be observed. The cancellation effect in case of the real waveforms is usually lower than for the sinusoidal case. Therefore, an accurate and reliable study of cancellation effect for 5th harmonic current in low voltage grids should take the voltage distortion into account. This can be achieved e.g. by introducing probabilistic aspects to the device modeling. 1

Ref case

Diversity factors 

Diversity factors 

1 0.8 0.6 0.4 0.2 0

0.6 0.4 0.2 0

1 2 3 4 5 6 7 8 9 10Grid

Ref case

0.8

Case 

1 2 3 4 5 6 7 8 9 10Grid

Case 

a) b) Figure 17. Variation of phase angle diversity factors due to voltage distortion. a) 3rd harmonic current, b) 5th harmonic current. Source: The authors


The combination of the results of all the synthetic waveforms (about 3000) into one box for each odd harmonic up to the 15th is presented in Fig. 18a, while Fig. 18b shows the results obtained with the real waveforms (about 2000). The reference case (undistorted supply voltage) is indicated by the black line. As expected, the fundamental adds up arithmetically, independent of the voltage harmonics. For harmonic orders 7 and above, the variation of the diversity factor becomes very high; even situations without any cancellation may occur. The real voltage distortions also produce high variations of the diversity factors, and for some harmonics (3rd, 9th and 13th) the cancellation decreases significantly. On the other hand, the voltage distortion of the real waveforms has a positive impact on the diversity of the 11th and 15th harmonics. Therefore, studies of the cancellation effect based on equipment measurements under undistorted supply voltage should always be interpreted with care.

Figure 18. Variation of the diversity factors for the first harmonics when a) synthetic waveforms and b) real waveforms were applied. Source: The authors

6. Conclusions

Most electronic household devices can be divided into three main circuit topologies: SMPS without PFC, SMPS with passive PFC and SMPS with active PFC. All of the topologies produce current harmonics which depend strongly on the distortion of the supply voltage. The applicability of the current source model is limited, and it can result in errors of several ten percent if the supply voltage is distorted. The current emission depends not only on the magnitude of the voltage harmonics, but also on their angles, and in most of the cases these relations are not linear. Most of the analyzed cases show a significant cross-interference; this means that changes in one voltage harmonic lead to changes in multiple current harmonics. Introducing probabilistic aspects into the current source model can improve the quality and reliability of the results. For accurate harmonic studies the use of time domain models is highly recommended.

The cancellation effect of current harmonics of order 5 and higher is highly affected by the presence of voltage harmonics. Therefore the cancellation of current harmonics not only depends on the number and type of connected devices, but also on the voltage distortion in the low-voltage grid. Subsequently, harmonic studies about the impact of electronic mass-market equipment on the grid should also take the voltage distortion into account.

References






[1] Bollen, M.H.J. and Gu, I.Y.H., Signal processing of power quality disturbances. USA: John Wiley & Sons, 2006. DOI: 10.1002/0471931314
[2] Koch, A., Myrzik, J., Wiesner, T. and Jendernalik, L., Harmonics and resonances in the low voltage grid caused by compact fluorescent lamps. Proceedings of the 14th International Conference on Harmonics and Quality of Power – ICHQP, pp. 1-6, 2010. DOI: 10.1109/ichqp.2010.5625425
[3] Korovesis, P., Vokas, G., Gonos, I. and Topalis, F., Influence of large-scale lamps on the line voltage distortion of a weak network supplied by photovoltaic station. IEEE Transactions on Power Delivery, 19 (4), pp. 1787-1793, 2004. DOI: 10.1109/TPWRD.2004.835432


[4] Electromagnetic Compatibility (EMC) – Part 3-2: Limits – Limits for harmonic current emissions (equipment input current ≤16 A per phase), IEC Standard 61000-3-2, Mar. 2009.
[5] Mansoor, A., Grady, W.M., Thallam, R.S., Doyle, M.T., Krein, S.D. and Samotyj, M.J., Effect of supply voltage harmonics on the input current of single-phase diode bridge rectifier loads. IEEE Transactions on Power Delivery, 10 (3), pp. 1416-1422, 1995. DOI: 10.1109/61.400924
[6] Cobben, S., Kling, W. and Myrzik, J., The making and purpose of harmonic fingerprints. Proceedings of the 19th International Conference on Electricity Distribution (CIRED), pp. 1-4, 2007.
[7] Nassif, A. and Acharya, J., An investigation on the harmonic attenuation effect of modern compact fluorescent lamps. Proceedings of the 13th International Conference on Harmonics and Quality of Power, pp. 1-6, 2008. DOI: 10.1109/ICHQP.2008.4668759
[8] Collin, A., Cresswell, C. and Djokic, S., Harmonic cancellation of modern switch-mode power supply load. Proceedings of the 14th International Conference on Harmonics and Quality of Power – ICHQP, pp. 1-9, 2010. DOI: 10.1109/ichqp.2010.5625422
[9] Müller, S., Meyer, J. and Schegner, P., Characterization of small photovoltaic inverters for harmonic modeling. Proceedings of the 15th International Conference on Harmonics and Quality of Power – ICHQP, pp. 659-663, 2014. DOI: 10.1109/ichqp.2014.6842929
[10] Cresswell, C., Steady state load models for power system analysis, PhD. Thesis, Department of Electrical Engineering, The University of Edinburgh, Edinburgh, 2009.
[11] Wei, Z., Watson, N.R. and Frater, L.P., Modeling of compact fluorescent lamps. Proceedings of the 13th International Conference on Harmonics and Quality of Power, pp. 1-6, 2008. DOI: 10.1109/ICHQP.2008.4668833
[12] Blanco, A., Stiegler, R. and Meyer, J., Power quality disturbances caused by modern lighting equipment (CFL and LED). Proceedings of the IEEE PowerTech, pp. 1-6, 2013. DOI: 10.1109/ptc.2013.6652431
[13] Redl, R. and Balogh, L., Power-factor correction in bridge and voltage-doubler rectifier circuits with inductors and capacitors. Proceedings of the Applied Power Electronics Conference and Exposition, pp. 466-472, 1995.
[14] Ibrahim, K.F., Newnes guide to television and video technology, Fourth Edition. Oxford: Elsevier, 2007.
[15] Basso, C.P., Switch-mode power supplies: SPICE simulations and practical designs. New York: McGraw-Hill, 2008.
[16] ON Semiconductor, Power factor correction (PFC) handbook. Choosing the right power factor controller solution. Denver: ON Semiconductor, 2011.
[17] Cuk, V., Cobben, J., Kling, W.L. and Timens, R., An analysis of diversity factors applied to harmonic emission limits of energy saving lamps. Proceedings of the 14th International Conference on Harmonics and Quality of Power – ICHQP, pp. 1-6, 2010. DOI: 10.1109/ichqp.2010.5625406
[18] Meyer, J., Schegner, P. and Heidenreich, K., Harmonic summation effects of modern lamp technologies and small electronic household equipment. Proceedings of the 21st International Conference on Electricity Distribution (CIRED), pp. 1-4, 2011.

S. Yanchenko, received his MSc. in Electrical Engineering from the Moscow Power Engineering Institute (MPEI), Moscow, Russia in 2010. Since then he has been working as a researcher at the Department of Electric Supply of Industrial Enterprises, MPEI. His main research interests involve harmonic modeling of residential nonlinear loads, including active power factor correction devices and renewables.

J. Meyer, studied Electrical Power Engineering at Technische Universität Dresden, Dresden, Germany, where he received the Dipl.-Ing. degree. He completed his PhD. with a thesis on the statistical assessment of power quality in distribution networks. Since 1995 he has been working with the Institute of Electrical Power Systems and High Voltage Engineering at the same university. He is a member of several working groups. His fields of interest are all aspects of the design and management of large power quality monitoring campaigns and the theory of network disturbances, especially harmonics.

P. Schegner, studied Electrical Power Engineering at the Darmstadt University of Technology, Germany, where he received the Dipl.-Ing. degree. After that he worked as a systems engineer in the field of power system control and became a member of the scientific staff at Saarland University, Germany, completing his PhD. with a thesis on earth-fault distance protection. He then worked as head of the development department of protection systems at AEG, Frankfurt A.M., Germany. Currently he is a professor of Electrical Power Systems at the Technische Universität Dresden, Germany.

A.M. Blanco, finished her BSc. and MSc. studies at the Departamento de Ingeniería of the Universidad Nacional de Colombia. Currently she is pursuing a PhD in Electrical Engineering at the Technische Universität Dresden, Germany. Her research interests are power quality analysis, modeling and simulation, and probabilistic aspects of power quality.




Design and construction of a reduced scale model to measure lightning induced voltages over inclined terrain

Edison Soto-Ríos a, Ernesto Pérez-González b & Javier Herrera-Murcia c

a Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, Manizales, Colombia. easotor@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. eperezg@unal.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. jherreram@unal.edu.co

Received: October 7th, 2014. Received in revised form: February 11th, 2015. Accepted: July 17th, 2015.

Abstract
In this paper the design and construction of a reduced scale model intended to be used for measuring lightning induced voltages on overhead lines placed over inclined terrain is presented. The paper includes details about the voltage impulse source developed and used to obtain fast front current surges along the reduced scale lightning channel. Although the experiment is only a first approximation of the phenomenon, it is useful to validate exact expressions and simulation results obtained from several numerical methods. Additionally, several simulations are made by means of the FDTD method, with the aim of showing the expected measurements of the reduced scale model.
Keywords: induced voltage, reduced scale model, electromagnetic, lightning, non-flat terrain, overhead lines, distribution lines.

Diseño y construcción de un modelo a escala reducida para medición de tensiones inducidas en un terreno inclinado

Resumen
En este artículo se presenta el diseño y la construcción de un modelo a escala para medir tensiones inducidas en líneas aéreas ubicadas sobre terrenos inclinados. Se muestran los detalles del desarrollo de la fuente de tensión creada para producir impulsos rápidos que representen un rayo. El experimento, aunque es una primera aproximación del fenómeno, es útil para validar métodos numéricos que son más flexibles para desarrollar configuraciones más complejas. Adicionalmente se presentan las simulaciones hechas por medio de la metodología de diferencias finitas para mostrar los resultados esperados por el modelo a escala.
Palabras clave: tensiones inducidas, modelo a escala, electromagnetismo, rayo, descargas eléctricas atmosféricas, terreno no plano, líneas aéreas, líneas de distribución.

1. Introduction

The lightning electromagnetic field causes an important effect on exposed transmission and distribution lines, phone lines and other electrical systems that are close by [1–3]. There is an important influence of the electromagnetic field on the amplitude and wave shape of the lightning induced voltages. In general, lightning electromagnetic fields have usually been calculated considering only flat terrain [1,3–5]. Nevertheless, many of these lines are placed in mountainous regions, such as the Andean and Alpine regions, which are surrounded by mountains and where the flat terrain approximation is not reasonable.

For this reason, it is important to calculate and to measure lightning electric fields and induced voltages over non-flat terrain. The lightning electromagnetic pulse (LEMP) has been studied worldwide and many theoretical studies have been undertaken. One of the most classical approaches uses analytical expressions based on the dipole technique [6–8], considering lightning as a straight vertical antenna over a perfectly conducting ground plane. In order to include the ground conductivity it is possible to use the so-called Sommerfeld integrals [9,10]. However, due to its complexity, other approaches have been developed, such as that of Cooray–Rubinstein [11–13], who derived a formula for calculating the horizontal electric field over non-perfectly conducting soil.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 160-167. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48611



Nevertheless, these approaches in general assume a flat terrain. Several numerical approaches have been used to solve Maxwell's equations and to include more complex configurations with non-homogeneous propagation media. For example, for LEMP calculations the Finite Element Method (FEM) [14,15], the Method of Moments [2,16] and the Finite Difference Time Domain (FDTD) method [17–21] have been used; the latter is used in this paper. The FDTD method for solving Maxwell's equations was introduced by Yee [22,23] and has become popular for calculating electromagnetic fields [24]. Several applications of this method have been presented to calculate lightning electromagnetic fields [17–20], in general all considering flat terrain. In the case of lightning induced voltage calculation, both the analytical equations in Uman et al. [8] and solutions based on numerical methods [1–4,25] have assumed the existence of flat terrain.

Although numerical or analytical approximations can be useful in many cases, only a few models have been developed to calculate electromagnetic fields and lightning induced voltages over non-flat terrain [26,27]. For this reason, some measurements need to be taken in order to obtain the real behavior of the phenomena. In this paper, we measure those fields and induced voltages by means of a reduced scale model, based on the experience in the construction of scale models of authors like Yokoyama [28], Paolone [29], Piantini [30] and Boaventura [31]. Although those experiments were made considering flat terrain, they were adapted to the measurement of lightning electric fields above flat and non-flat terrain configurations, over a perfect soil. The performance of the reduced scale model is simulated by means of the FDTD-3D method linked with the Agrawal et al. coupling model [32]. The knowledge generated in this work allows for the improvement of overhead line indirect lightning performance and of lightning location systems, which are both related to the improvement of power quality.

2. Reduced Scale Model

A reduced scale model is important for a) the validation of numerical codes that calculate induced voltages produced by the lightning electromagnetic pulse (LEMP) and b) the prediction of lightning-induced voltages for those cases which, due to their extreme complexity, cannot be simulated with the existing version of these codes [30]. The scale factors can be derived by applying Maxwell's equations to the real system and the reduced-scale model, and then by relating the quantities of interest in both systems [30]. The scale factors obtained when the medium is air are shown in Table 1, where p is the ratio between the lengths (and times) in the model and in the full-scale system and η is the ratio between the electric (and magnetic) fields in the model and in the full-scale system. The choice of the length scale factor depends on the available space and the minimal wave front that can be generated and measured [30,31]. The scale factors achieved for this case were p = η = 1:200. The reduced scale model was composed of the following components [30,31]:

Table 1. Scale factors: ratios between the values of the quantities in the model and in the full-scale system

Quantity                      Relation
Length (l)                    p
Time (t)                      p
Electric Field (E)            η
Magnetic Field (B)            η
Resistance (R)                1
Capacitance (C)               p
Inductance (L)                p
Impedance (Z)                 1
Propagation Velocity (v)      1
Frequency (f)                 1/p
Conductivity (σ)              1/p
Voltage (V)                   η·p
Current (I)                   η·p
Resistivity (ρ)               p
Dielectric Constant (ε)       1
Magnetic Permeability (μ)     1
Wave Length (λ)               p

Source: Adapted from [30], [31]
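To illustrate how the ratios in Table 1 are used, the sketch below converts measurements on the model back to full-scale values. It assumes the two factors p and η of Table 1 (η is the field ratio, a symbol reconstructed here) and uses illustrative measurement values; it is not the authors' code.

```python
# Scale-model conversion sketch: voltage scales as eta*p, time as p (Table 1).
P = 1.0 / 200.0      # length/time scale factor (model / full scale)
ETA = 1.0 / 200.0    # electric/magnetic field scale factor (model / full scale)

def model_to_full_scale(voltage_v, time_s):
    """Invert the model-to-full-scale ratios for a measured voltage and time."""
    return voltage_v / (ETA * P), time_s / P

# Illustrative numbers: a 0.12 V, 40 ns event on the model
v_full, t_full = model_to_full_scale(voltage_v=0.12, time_s=40e-9)
print(f"Full scale: {v_full/1e3:.1f} kV, {t_full*1e6:.0f} us")
```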

2.1. Ground Plane

The ground plane consisted of a rectangular 3 x 10 m plane composed of interconnected aluminum plates (corresponding to 600 x 2000 m in the full-scale system). Due to the scale factor of conductivity shown in Table 1, the aluminum plates can be considered as perfect electric conductors in the full-scale model. The aluminum plates were placed over an iron structure 0.8 m in height. The metallic structure was inclined at a suitable angle to verify the influence of the inclined terrain on the lightning induced voltages calculation.

2.2. Return Stroke Channel

The return stroke channel consists of a copper coil with a 1.5 cm diameter and a PVC core to give mechanical strength. The copper conductor has a 0.5 mm radius. The height of the total rod is 6 m although, due to local restrictions, it was limited to 5 m. The injection of a step voltage at the rod base has the current response shown in Figure 1.


Figure 1. Voltage injected and measured current at channel base. Source: The authors




According to Fig. 1, the travel time of the wave from the bottom of the channel to the top and back again is approximately 400 ns. The return stroke velocity is therefore:

v = 2h / Δt ≈ 3 × 10^7 m/s    (1)

The velocity measured along the channel is approximately 10% of the speed of light in free space. The surge impedance can be obtained by dividing the voltage by the current measured in the first 400 ns:

Z = V / I ≈ 3.6 kΩ    (2)

2.3. Impulse Current Generator

The current is generated when a voltage source is injected into the base of the lightning channel. The voltage source circuit is shown in Figure 2. It is composed of an alternating voltage source fed through an uninterruptible power supply (UPS), and a 2-step voltage multiplier circuit that generates a total voltage of -350 V DC (the negative polarity does not affect the measurements made of lightning induced voltages). The voltage stored in the capacitors is discharged to the lightning channel by means of a very fast switching electronic device (MOSFET STD2NK100Z). The control circuit of the MOSFET is composed of a 555 oscillator that generates a pulsed voltage with a frequency of 1 Hz and a pulse width of 4 ms (enough time for the expected duration of the measurements, approximately 200 ns). The discharge is repetitive, with the aim of reducing noise in the measurements. The constructed voltage source is shown in Figure 3.
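As a rough plausibility check of the multiplier design described above, the following sketch computes the ideal no-load output of a 2-step voltage multiplier. The 120 V RMS supply is an assumption (the mains voltage is not stated in the text), so this is only an order-of-magnitude illustration.

```python
import math

V_RMS = 120.0   # assumed supply voltage (V RMS), not stated in the text
STAGES = 2      # the 2-step multiplier described in the text

# Ideal Cockcroft-Walton-type output: each stage contributes one peak voltage
v_out = STAGES * math.sqrt(2) * V_RMS
print(f"Ideal no-load output: {v_out:.0f} V")  # ~340 V, close to the 350 V reported
```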


Figure 4. Voltage generated by the source without load. Source: The authors

The voltage generated by the source without load is shown in Figure 4. It is possible to see an approximate front time of 40 ns, equivalent to 8 μs in the full-scale system.

2.4. Overhead distribution line

The line is a copper conductor, 5 m long, 5 cm high and with a radius of 0.46 mm. At full scale the line will be equivalent to a 1 km line with 10 m height and 80 mm radius.

2.5. Measuring System

The electric field is measured by means of a ring concentric with the return stroke channel. The measured voltage at this ring is proportional to the electric field [30]. The induced voltage is measured at the beginning and at the end of the line by means of a Tektronix oscilloscope with a bandwidth of 100 MHz, 1 GS/s and 1 mV/div.

3. Methodology for calculating lightning electromagnetic fields and lightning induced voltages

Figure 2. Voltage source circuit. Source: The authors.

Figure 3. Constructed voltage source according to diagram shown in Fig. 2. Source: The authors.

In order to simulate the reduced scale model, the finite difference time domain (FDTD) method was employed in the 3-D Cartesian coordinate system [23]. The model was simulated by using a 3 x 10 m metallic plate with a very high conductivity (σ = 10^8 S/m), which was surrounded by a concrete space (corresponding to the soccer field where the model was placed, see Figure 5). The concrete conductivity was assumed to be σ = 7.14 mS/m. The grid dimensions were 15 m (in x) x 10 m (in y) x 6 m (in z) (see Figure 6) and the space step was set as Δx = Δy = Δz = 0.05 m, due to the available computational capability. The inclined terrains were represented by staircases in 3D [23]. A lightning channel was simulated with a base current characterized by a peak value of 170 mA and a front time of 20 ns, corresponding approximately to the initial estimated values of the channel base current obtained from the reduced scale model measurements. It can be represented by a Heidler function [19], with the following parameters: I0 = 160 mA, τ1 = 4.47 ns, τ2 = 200 ns.
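A sketch of the Heidler channel-base current named above is given below. The expression is the standard Heidler function; the parameters are the ones quoted in the text (with symbols reconstructed), while the steepness exponent n is not given in the text and is assumed to be 2 for illustration.

```python
import numpy as np

def heidler(t, i0=160e-3, tau1=4.47e-9, tau2=200e-9, n=2):
    """Standard Heidler function; eta is the usual peak-correction factor."""
    eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
    x = (t / tau1) ** n
    return (i0 / eta) * (x / (1.0 + x)) * np.exp(-t / tau2)

t = np.linspace(0.0, 200e-9, 2001)
i = heidler(t)
print(f"peak = {i.max()*1e3:.0f} mA at t = {t[i.argmax()]*1e9:.1f} ns")
```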




Figure 5. Reduced scale model for flat terrain. Source: The authors

Figure 6. Simulated space. The ground plane and lightning channel are at the middle of the working space. Source: The authors

The lightning channel was placed at the middle of the working space. For simplicity, a TL model was used to represent the return stroke current along the channel [8,33]. The height of the constructed channel (5 m) was simulated, as well as the return stroke velocity measured at this channel (30 m/μs, see Section 2.2). The electromagnetic field obtained by the FDTD code [22] was validated by comparing the electric field calculated by the 3D-FDTD methodology with the analytical expressions derived by Uman [8] for LEMP calculation. The vertical electric field is calculated at a distance of 1 m from the lightning channel at ground level over a perfectly conducting flat terrain, as shown in Figure 7. It is possible to see good agreement between the two methods.

The Agrawal et al. coupling model [32] will be used throughout this paper in order to calculate the lightning induced voltages on overhead lines. This model is implemented in the Yaluk Draw software [34,35], which was developed by the first two authors. This software allows the lightning performance of overhead distribution networks to be calculated, and it is based on a previous piece of software called Yaluk, linked with ATP by means of foreign MODELS. This was done thanks to the collaboration of the Power Systems research group from the University of Bologna. For non-flat terrain, Yaluk Draw has been modified to accept as input the electric fields calculated by means of the 3D-FDTD method. Due to the fact that the TEM approximation is used (derived from transmission line theory), the electric and magnetic fields can be related and it is not necessary to input the magnetic field into the program.

In order to fulfill the conditions of the coupling model for evaluating the lightning induced voltages, it is necessary to make some considerations regarding the horizontal and vertical electric fields calculated on the FDTD grid. In all cases, the electric field parallel to the line, El, and the average vertical electric field, Ev, perpendicular to the terrain were obtained [36] at the beginning and at the end of the line. For flat terrain the vertical electric field is approximately constant below the line, and the vertical electric field at the soil can be considered as an input parameter for the coupling model. For non-flat terrain there is great variability in the fields below the line, and for this reason it is preferable to calculate the average vertical electric field.
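For readers unfamiliar with the FDTD scheme used here, the following one-dimensional sketch illustrates the staggered leapfrog E-H update introduced by Yee. It is a didactic example only, not the authors' 3-D code; it reuses the 0.05 m space step mentioned above and drives the grid with an arbitrary Gaussian source.

```python
import numpy as np

c0, dz = 3e8, 0.05                 # space step as in the text (0.05 m)
dt = 0.5 * dz / c0                 # time step satisfying the Courant condition
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

nz, nt = 400, 800
ez = np.zeros(nz)                  # E-field at integer nodes
hy = np.zeros(nz - 1)              # H-field at half nodes (staggered grid)

for n in range(nt):
    # Leapfrog updates: H from the curl of E, then E from the curl of H
    hy += (dt / (mu0 * dz)) * (ez[1:] - ez[:-1])
    ez[1:-1] += (dt / (eps0 * dz)) * (hy[1:] - hy[:-1])
    # Arbitrary Gaussian source injected at the middle of the grid
    ez[nz // 2] += np.exp(-((n * dt - 40e-9) / 10e-9) ** 2)

print(f"max |Ez| = {np.abs(ez).max():.3f} (arbitrary units)")
```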

4. Simulated electric fields on the reduced scale model

Two configurations are simulated according to the arrangements to be implemented in the reduced scale model, as shown below.

4.1. Flat ground


Figure 7. Vertical electric field calculated at a distance of 1 m from the channel of the reduced scale model over a perfectly conducting flat ground. Source: The authors

The first is the configuration described in Figure 6, where a ground plane is placed in the middle of a working volume with a soil of concrete. The vertical electric field Ez measured at point P1 is shown in Figure 8.



Figure 8. Vertical electric field measured at Point P1 in the configuration presented in Figure 6. Source: The authors

4.2. Inclined terrain

The second configuration is the result of inclining the ground plane of the previous configuration at an angle α of 30°, as shown in Figure 9. With the aim of being able to make a comparison, the normal component of the electric field Ep was calculated in each simulation (for flat terrain the perpendicular electric field is equal to the vertical electric field). In Figure 10 the perpendicular electric field obtained at points P1 and P2 is compared with the vertical electric field for flat terrain. The electric field at point P2 is enhanced in magnitude compared with the case of flat terrain. This is explained by the closeness between the lightning channel and measurement point P2. The perpendicular electric field at point P1 is decreased in magnitude compared with the case of flat terrain. This can be explained by the fact that the lightning channel is farther from measurement point P1. The tendency of the results presented in Figure 10 matches the inclined lightning channel studies previously published [16,37]. This is explained because, according to the method of images, an inclined lightning channel is equivalent to a vertical lightning channel that strikes inclined terrain [25].

Figure 9. Configuration of inclined terrain. The electric fields are calculated at points P1 and P2. Source: The authors

Figure 10. Perpendicular electric field calculated at Point P1 in the configuration presented in Figure 9. Source: The authors

The trend presented above is applicable to points located to the side of lightning channels (according to other simulations not presented here). It implies that a line placed over inclined terrain has a vertical electric field profile that differs considerably from that obtained for flat terrain (due to the symmetry of the fields). For this reason, a different magnitude and wave-shape of the lightning induced voltages are expected.

5. Simulated induced voltages on the reduced scale model

In order to obtain the expected measurements of induced voltages on the reduced scale model, the previous configurations were slightly modified, with the aim of adapting them to the results found in the constructed reduced scale model. In this case the simulated peak current was 88 mA, with a front time of 40 ns; it was represented by a Heidler function [19].

5.1. Flat ground

The distribution line simulated on flat terrain is shown in Figure 11. It is 5 m long, 5 cm high and is located 0.8 m from the return stroke channel, as described in Section 2.4. The induced voltages calculated at both extremities of the line are shown in Figure 12. Some reflections and a double polarity are seen as the result of the presence of the concrete terrain that surrounds the experimental setup.

5.2. Inclined terrain

The configuration of inclined terrain to be simulated is shown in Figure 13. Essentially, it is the result of rotating the system shown in Figure 11 by an angle of 30°. The distribution line is 5 m long, placed parallel to the terrain. The induced voltage calculated at both extremities of the line is shown in Figure 14. It is possible to see a slight increment of the induced voltage compared with the flat terrain case. Following the tendency of the electric field, the induced voltage is increased at point P2 (end of line), while it is decreased at point P1 (beginning of line).




Figure 11. Configuration on flat ground to calculate lightning induced voltages on the reduced scale model. Source: The authors

Figure 14. Induced voltage at beginning and at end of line on the inclined terrain shown in Figure 13. Source: The authors

Figure 12. Induced voltage at beginning and at end of line in Figure 11. Source: The authors

Figure 13. Configuration on inclined terrain to calculate lightning induced voltages on the reduced scale model. Source: The authors

6. Conclusions

The design of a reduced scale model to measure the electric field over an inclined terrain has been presented. The results of this experiment serve as validation of numerical codes previously developed to determine the influence of the orography on lightning electromagnetic fields. This knowledge is useful for the improvement of lightning location systems and of lightning induced voltage calculations.

A modeling of the reduced scale model has been presented. The model was composed of a highly conductive ground plane inclined α degrees with respect to a flat ground surface. The lightning channel was modeled by means of a vertical antenna excited by a voltage impulse source developed from an electronic voltage multiplier system. Measurements of the normal component of the electric field and of induced voltages on an overhead line were taken from the experimental setup and then compared to simulation results obtained from an FDTD representation of the measurement scenario.

The electric field results above an inclined terrain show a significant difference with respect to the case of flat terrain. The electric field calculated at point P1 decreases its value with respect to the flat terrain case, because the point is farther from the lightning channel, while at point P2 the electric field increases its magnitude as a result of the closeness to the lightning channel. The induced voltages calculated on the reduced scale model show a small increase when compared to the flat terrain case, following the same tendency as the electric fields.

It is expected that the results obtained by using the finite difference time domain method in the 3-D Cartesian coordinate system could be validated by the constructed reduced scale model, so that reliable and real data could be used to improve the performance of distribution networks placed over non-flat terrain.

Acknowledgments

The authors would like to thank the Colombian Administrative Department of Science and Technology - COLCIENCIAS for their support of this research, and the Universidad de Antioquia for their support with the measurement devices used in this experiment.

References

[1] Nucci, C.A., Rachidi, F. and Ianoz, M.V., Lightning induced voltages on overhead lines, IEEE Trans. Electromagn. Compat., 35 (1), pp. 75-86, 1993. DOI: 10.1109/15.249398
[2] Baba, Y. and Rakov, V.A., Voltages induced on an overhead wire by lightning strikes to a nearby tall grounded object, IEEE Trans. Electromagn. Compat., 48 (1), pp. 212-224, 2006. DOI: 10.1109/TEMC.2006.870807
[3] Cooray, V. and Scuka, V., Lightning-induced overvoltages in power lines: Validity of various approximations made in overvoltage calculations, IEEE Trans. Electromagn. Compat., 40, pp. 355-363, 1998. DOI: 10.1109/15.736222
[4] Borghetti, A., Nucci, C.A. and Paolone, M., An improved procedure for the assessment of overhead line indirect lightning performance and its comparison with the IEEE Std. 1410 method, IEEE Trans. Power Deliv., 22 (1), pp. 684-692, 2007. DOI: 10.1109/TPWRD.2006.881463
[5] Yutthagowith, P., Ametani, A., Nagaoka, N. and Yoshihiro, B., Lightning-induced voltage over lossy ground by a hybrid electromagnetic circuit model method with Cooray–Rubinstein formula, 51, pp. 975-985, 2009.
[6] Master, M.J. and Uman, M.A., Transient electric and magnetic fields associated with establishing a finite electrostatic dipole, Am. J. Phys., 51, pp. 118-126, 1983. DOI: 10.1119/1.13310
[7] Rubinstein, M., Methods for calculating electromagnetic fields from a known source distribution: Application to lightning, IEEE Trans. Electromagn. Compat., 31 (2), pp. 183-189, 1989. DOI: 10.1109/15.18788
[8] Uman, M.A. and Krider, E.P., The electromagnetic radiation from a finite antenna, Am. J. Phys., 43, pp. 33-38, 1975. DOI: 10.1119/1.10027
[9] Sommerfeld, A., Über die Ausbreitung der Wellen in der drahtlosen Telegraphie, Ann. Phys., 4 (28), pp. 665-736, 1909. DOI: 10.1002/andp.19093330402
[10] Delfino, F., Procopio, R. and Rossi, M., Lightning return stroke current radiation in presence of a conducting ground: 1. Theory and numerical evaluation of the electromagnetic fields, J. Geophys. Res. Atmospheres, 113 (D5), pp. 1-10, 2008. DOI: 10.1029/2007JD008553
[11] Cooray, V., Horizontal field generated by return strokes, Radio Sci., 27 (5), pp. 529-537, 1992. DOI: 10.1029/91RS02918
[12] Rubinstein, M., An approximate formula for the calculation of the horizontal electric field from lightning at close, intermediate, and long range, IEEE Trans. Electromagn. Compat., 38 (3), pp. 531-535, 1996. DOI: 10.1109/15.536087
[13] Cooray, V., Some considerations on the 'Cooray–Rubinstein' approximation used in deriving the horizontal electric field over finitely conducting ground, IEEE Trans. Electromagn. Compat., 44 (4), pp. 560-566, 2002. DOI: 10.1109/TEMC.2002.804774
[14] Jin, J.M. and Riley, D.J., Finite element analysis of antennas and arrays. Hoboken: John Wiley & Sons, 2009.
[15] Napolitano, F., Borghetti, A., Nucci, C., Rachidi, F. and Paolone, M., Use of the full-wave finite element method for the numerical electromagnetic analysis of LEMP and its coupling to overhead lines, Proc. of the 7th Asia-Pacific International Conference on Lightning, Chengdu, China, Nov. 1-4, 2011. DOI: 10.1109/apl.2011.6110132; DOI: 10.1016/j.epsr.2012.05.002
[16] Moini, R., Sadeghi, S.H.H., Kordi, B. and Rachidi, F., An antenna-theory approach for modeling inclined lightning return stroke channels, Electr. Power Syst. Res., 76 (1), pp. 945-952, 2006. DOI: 10.1016/j.epsr.2005.10.016
[17] Cardoso, J.R. and Sartori, C.A.F., An analytical-FDTD method for near LEMP calculation, IEEE Trans. Magn., 36 (4), pp. 1631-1634, 2000. DOI: 10.1109/20.877754
[18] Yang, C. and Zhou, B., Calculation methods of electromagnetic fields very close to lightning, IEEE Trans. Electromagn. Compat., 46 (1), pp. 133-141, 2004. DOI: 10.1109/TEMC.2004.823626
[19] Heidler, F., Cvetic, J. and Stanic, B.V., Calculation of lightning current parameters, IEEE Trans. Power Deliv., 14 (2), pp. 399-404, 1999. DOI: 10.1109/61.754080
[20] Mimouni, A., Rachidi, F. and Azzouz, Z., Electromagnetic environment in the immediate vicinity of a lightning return stroke, J. Light. Res., 2, pp. 64-75, 2007.
[21] Baba, Y. and Rakov, V.A., Influence of strike object grounding on close lightning electric fields, J. Geophys. Res., 113 (D12), 2008. DOI: 10.1029/2008JD009811
[22] Yee, K.S., Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media, IEEE Trans. Antennas Propag., AP-14, pp. 302-307, 1966.
[23] Taflove, A. and Hagness, S.C., Computational electrodynamics: The finite-difference time-domain method, 3rd edition. Artech House, 2005.
[24] Wilson, C.T.R., On some determinations of the sign and magnitude of electric discharges in lightning flashes, Proc. Roy. Soc., pp. 555-574, 1916.
[25] De Conti, A., Perez, E., Soto, E., Silveira, F.H., Visacro, S. and Torres, H., Calculation of lightning-induced voltages on overhead distribution lines including insulation breakdown, IEEE Trans. Power Deliv., 25 (4), pp. 3078-3084, 2010. DOI: 10.1109/TPWRD.2010.2059050
[26] Soto, E., Perez, E. and Herrera, J., Electromagnetic field due to lightning striking on top of a cone-shaped mountain using the FDTD, IEEE Trans. Electromagn. Compat., 56 (5), pp. 1112-1120, 2014. DOI: 10.1109/TEMC.2014.2301138
[27] Soto, E., Perez, E. and Younes, C., Influence of non-flat terrain on lightning induced voltages on distribution networks, Electr. Power Syst. Res., 113, pp. 115-120, 2014. DOI: 10.1016/j.epsr.2014.02.034
[28] Yokoyama, S., Calculation of lightning induced voltages on overhead multiconductor lines, IEEE Trans. Power Appar. Syst., 103 (1), pp. 100-108, 1984. DOI: 10.1109/TPAS.1984.318583
[29] Paolone, M., Nucci, C.A. and Petrache, E., Mitigation of lightning-induced overvoltages in medium voltage distribution lines by means of periodical grounding of shielding wires and of surge arresters: Modeling and experimental validation, IEEE Trans. Power Deliv., 19 (1), pp. 423-431, 2004.
[30] Piantini, A., Janiszewski, J.M., Borghetti, A., Nucci, C.A. and Paolone, M., A scale model for the study of the LEMP response of complex power distribution networks, IEEE Trans. Power Deliv., 22 (1), pp. 710-720, 2007. DOI: 10.1109/TPWRD.2006.881410
[31] Boaventura, W. do C., Estudo da tensão induzida em linhas aéreas por descargas atmosféricas utilizando técnicas de modelo reduzido, 1990.
[32] Agrawal, A., Price, H. and Gurbaxani, S., Transient response of multiconductor transmission lines excited by a nonuniform electromagnetic field, IEEE Trans. Electromagn. Compat., 22 (2), pp. 119-129, 1980. DOI: 10.1109/TEMC.1980.303824
[33] Uman, M.A. and McLain, D., Calculations of lightning return stroke models, J. Geophys. Res., 85, pp. 1571-1583, 1985.
[34] Perez, E. y Torres, H., Modelado y experimentación de tensiones inducidas, Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, 2010.
[35] Perez, E. y Soto, E., Yaluk Draw: Software especializado para análisis del desempeño de líneas de distribución ante impacto de rayos, Avances en Ingeniería Eléctrica, 4 (1), pp. 1-8, 2013.
[36] Soto, E., Perez, E. and Herrera, J., Electromagnetic field due to lightning striking on top of a cone-shaped mountain using the FDTD, IEEE Trans. Electromagn. Compat., 73, [Online], 2014.
[37] Rameli, N., Kadir, M.Z.A.A., Izadi, M., Gomes, C. and Jasni, J., On the influence of inclined lightning channel on lightning induced voltage evaluation, in: International Conference on Lightning Protection (ICLP), Vienna, Austria, 2012. DOI: 10.1109/ICLP.2012.6344250

E. Soto-Ríos, was born in Manizales, Colombia, South America on April 29th, 1986. He received his BSc. in 2008 and MSc. in 2011 in Electrical Engineering, both from the Universidad Nacional de Colombia. At present he is a PhD student in Electrical Engineering at the Universidad Nacional de Colombia, Manizales, Colombia. His areas of interest are electromagnetic compatibility, lightning protection and lightning induced voltages.
and Herrera, J., Electromagnetic field due to lightning striking on top of a cone-shaped mountain using the FDTD, IEEE Trans. Electromagn. Compat., 73, [Online], 2014. [37] Rameli, N., Kadir, M.Z.A.A., Izadi, M., Gomes, C. and Jasni, J., On the influence of inclined lightning channel on lightning induced voltage evaluation evaluation of coupling, in: International Conference on Lightning Protection (ICLP), Vienna, Austria, 2012. DOI: 10.1109/ICLP.2012.6344250 E. Soto-Ríos, was born in Manizales, Colombia, South America on April 29th, 1986. He received his BSc. in 2008 and MSc. in 2011 in Electrical Engineering, both from the Universidad Nacional de Colombia. At present he is a PhD. Electrical Engineering Student at the Universidad Nacional de Colombia, Manizales, Colombia. His areas of interest are electromagnetic compatibility, lightning protection and lightning induced voltages.



E. Perez-González, was born in Bogota, Colombia, South America on November 21st, 1976. He received his BSc. in 1999, MSc. in 2002 and PhD. in 2006, all of them from the Universidad Nacional de Colombia, in Electrical Engineering and High Voltage Studies. He worked for one year with the Power System research group at the University of Bologna, Italy, working on lightning induced voltages. He has been working with the research group PAAS-UN in Colombia since 1998. He has been an associate professor since 2006 at the Universidad Nacional de Colombia, Medellin, Colombia. His special fields of interest include lightning phenomena analysis and power system protection.

J. Herrera-Murcia, was born in Bogota, Colombia, on July 28, 1976. He received his BSc. in 1999, MSc. in 2002 and PhD. in 2006, all of them from the Universidad Nacional de Colombia, in Electrical Engineering and High Voltage Studies. He has been involved with lightning protection systems and power system transients, with emphasis on lightning-induced voltages. He has been an associate professor since 2006 at the Universidad Nacional de Colombia, Medellin, Colombia.




Modeling dynamic procurement auctions of standardized supply contracts in electricity markets including bidders adaptation

Henry Camilo Torres-Valderrama a & Luis Eduardo Gallego-Vega b

a Departamento de Ing. Eléctrica y Electrónica, Universidad Nacional de Colombia, Bogotá, Colombia. hctorresv@unal.edu.co
b Departamento de Ingeniería Eléctrica y Electrónica, Universidad Nacional de Colombia, Bogotá, Colombia. lgallegov@unal.edu.co

Received: May 9th, 2014. Received in revised form: February 20th, 2015. Accepted: July 7th, 2015.

Abstract
Descending Clock Auctions have been increasingly used in power markets. Traditional approaches focus on discovering the bidders' best response but neglect the bidders' adaptation. This paper presents an algorithm based on decision theory to estimate the bidders' behavior throughout the auction. The proposed model uses portfolio concepts and historical data of spot markets to estimate a long-term contract supply curve. This model was applied to evaluate Colombia's Organized Market (MOR). Demand curve parameters and round size were varied to evaluate their impact on auction outputs. Results show that the demand curve has quite a small impact on bidders' decisions and that round size management is useful to avoid non-competitive bidder behavior. In addition, it is shown that the auction's starting prices strongly influence the auction's clearing prices. These results are extremely helpful for designing market structures in power markets.
Keywords: Dynamic Auction Model, Descending Clock Auction, Electric Energy Regulation, Colombian Electric Energy Market.

Modelado de subastas dinámicas para compra de contratos estandarizados en el mercado eléctrico incluyendo la adaptación de los participantes

Resumen
Las subastas de reloj descendente han incrementado su uso en los mercados de energía eléctrica. Las aproximaciones tradicionales a estas subastas se han enfocado en encontrar la mejor respuesta de los postores pero desconociendo la adaptación de ellos a lo largo de la subasta. Este artículo presenta un algoritmo basado en la teoría de decisión para estimar el comportamiento de los postores a lo largo de la subasta. El modelo propuesto usa conceptos de portafolios financieros y datos históricos sobre el mercado spot de energía eléctrica para estimar una curva de oferta de contrato de los generadores. El modelo fue utilizado para evaluar el Mercado Organizado (MOR) en Colombia. Los parámetros de la curva de demanda y el tamaño de cada ronda fueron variados para evaluar el impacto sobre la salida de la subasta. Los resultados muestran que la curva de demanda tiene un pequeño impacto sobre la adaptación de los pujadores y que el tamaño de ronda es útil para evitar los comportamientos no competitivos. Adicionalmente se muestra que los precios de inicio de la subasta tienen una gran influencia sobre los precios de cierre. Los resultados aquí presentados son útiles para el diseño de estructuras de mercado en el sector eléctrico.
Palabras clave: Modelo de Subasta Dinámica, Subasta de reloj descendente, Regulación de Energía Eléctrica, Mercado Eléctrico Colombiano

1. The problem

Auctions are an important allocation mechanism, and they have been employed to trade many goods for a long time. Nowadays, auctions are employed in many fields as one of the most important allocation mechanisms.

This has increased researchers' interest in broadening the understanding of auctions. In fact, auction modeling has been deeply explored in many areas of economics and engineering. The first approach to auction modeling comes from economic theory, using mathematical models to determine equilibrium strategies for different types of auctions [1].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 168-176. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48612



These models are useful to understand bidders' behavior and some auction features; however, mathematical models have limitations because strong assumptions are necessary to obtain a model's equilibrium. Nowadays, computational models allow some of the mathematical models' limitations to be overcome. There are many examples of computational agent-based models useful for auction design, comparison and performance evaluation. Regarding auction design, in [4] an auction mechanism is settled by using an agent-based model, and in [5] an agent-based model is proposed not only for an auction but for a fully automated negotiation system. Regarding the evaluation of auction performance, many works have been carried out based on agent-based models. Several features have been studied: for example, Shanshan Wang worked on the advantages of combinatorial auctions [6], Kim on the effect of auction repetition [7], Akkaya on the format of online auctions [8] and Sow on risk-prone evaluation by using agent-based models [9]. Moreover, these computational models have been employed to compare different auction formats. In [10] discriminatory and uniform price auctions are compared by using an experimental analysis based on a multi-agent model, and in [11] a similar comparison is done using learning agents.

In power markets, auctions have been commonly used since deregulation became a trend. In the Colombian case, there is an electricity market composed of two mechanisms: bilateral financial contracts and a spot market, which is a uniform price auction. A sealed bid auction, like a uniform price auction, is the most common auction format in electricity markets and has been widely modeled, even with agent-based learning in the Colombian case [2,3,12]. Espinoza compared different formats of sealed bid auctions by determining an equilibrium strategy for each format using econometric models [13]. Moreover, Gallego determined the bidders' strategies by exploring the historical data of the Colombian wholesale market and employing a learning algorithm [14].

Recently, a new kind of auction has been included in electricity markets: multi-round (dynamic) auctions. These kinds of auctions show a distinctive feature when compared to traditional auctions: bidders adjust their bids throughout the auction, so the problem of finding an analytic solution is harder than for sealed bid auctions. Nevertheless, it is possible to find examples of modeling dynamic auctions using computational techniques, as is shown in [15], where a multi-round English auction is modeled using genetic network programming. Furthermore, some spot electricity markets have used dynamic auctions, but the principal purpose of dynamic auctions is to trade Long Term Supply Contracts (LTSCs) [16,17]. In LTSC auctions, it is especially important to avoid the so-called winner's curse; this happens in common value auctions when the winner overpays because his estimate is higher than the other bidders' average estimate. Thus, dynamic auctions are often used to trade LTSCs, given that they allow bidders to fit their bids along the auction and thereby reduce possible overpayment [18]. Despite some LTSC auctions using a static format, like the Chilean [19] and Peruvian [20] electricity markets, most

LTSC auctions are dynamic (New Jersey [21], Illinois [22], New England [23], Brazil [19], Spain [24] and Colombia [25]). In addition, LTSC auctions differ from sealed bid auctions because the bidders' decision making involves additional aspects such as financial risk (due to spot price volatility) and generation uncertainty. Roubik [26] worked on generators' strategic behavior in LTSC auctions by using portfolio concepts. Four variables were proposed in order to understand the generators' behavior: 1) mean spot price, 2) spot price variance, 3) contract price and 4) risk aversion [26]. Other contract procurement auction models are Moreno's, who studied two static auction formats by using Bayesian equilibrium concepts [27]; Azevedo's, who also used Bayesian equilibrium but to analyze the bilateral contract auction carried out in Brazil in 2003 [28]; and Garcia-Gonzales', who modeled the bidding strategy of a wind power producer in a Descending Clock Auction [29].

A Descending Clock Auction is a dynamic procurement auction that has recently been introduced in several power markets. In short, this auction works as follows. The auctioneer calls bids in successive rounds. Each round has a maximum and a minimum price. The round's maximum price is equal to the previous round's minimum price; hence, the bid price is always descending. In every round the auctioneer adds all bids and announces the total aggregate supply. The auction ends when the aggregate supply is equal to or less than the total demand [30].

From an economic approach, Ausubel and Cramton [31] and Milgrom [32] are the main references. They used a Lyapunov function to find an equilibrium strategy, and their main conclusion is that sincere bidding by the bidders is an equilibrium of the auction game and that, starting from any price vector, the outcome converges to the competitive equilibrium. However, these models are based on strong assumptions about rationality, continuity and others that ease the analytic solution. Moreover, Ausubel, Cramton and Milgrom state that the most important feature of a dynamic auction is that the winner's curse is weakened, given that bidders can fit their bids along the auction [33]; however, the model used to demonstrate the equilibrium strategy is static and the adjustment is not evident.

From an engineering approach, models of descending clock auctions are scarce. In [34], Barroso established an optimization model for a price-taker hydrothermal GENCO to devise bidding strategies in multi-item dynamic auctions for long-term contracts. In order to fill the absence of auction models from an engineering perspective, this paper presents a methodology that models the bidders' decisions in every round in order to maximize their revenues. For this purpose, a MATLAB program based on decision theory [35] and microeconomic theory [36,37] was developed. This allows the bidders' behavior along the auction to be simulated. The bidder model uses portfolio concepts [26] and historical information about the bidders' preferences in the spot market.

This paper is organized as follows. In Section II the proposed model is presented: first the bidder model, and then the scenarios faced by the bidders in the auction and the possible rewards to a chosen strategy.




In Section III the model is applied to the Colombian Energy Market. Section IV presents some results of the model implementation. Finally, Section V summarizes the main conclusions.

2. Proposed model

In this section, the proposed descending clock auction model is presented. This model fits an auction mechanism in the Colombian power market known as MOR, and includes two main parts:
 The Bidders Model: This part focuses on representing the bidders' valuations of the product to be auctioned: a long-term energy supply contract (LTSC). Since historical information about energy contracts is not available for reasons of confidentiality, it was necessary to design a methodology to derive the valuations from the spot market's bids and financial portfolio concepts.
 Decision Making: In this part the decision making is modeled based on a set of scenarios, a set of bidding strategies and a set of rewards.
Below, both parts are described.

Figure 1. Scatter Plot of Spot Market Bids. Source: The authors

2.1. Bidders model

In a descending clock auction, the auctioneer asks bidders for their bids at every round price, i.e. bidders disclose the supply curve point by point. In fact, bidders' behavior is based on this supply curve as a representation of their LTSC valuations. In order to calculate these valuations, this paper proposes a methodology that consists of three stages: 1) summarize the spot market information through a statistical supply curve, 2) estimate the GENCO's risk aversion and, finally, 3) calculate the LTSC valuation by using a utility function that includes the expected generation (obtained from spot market information), the risk aversion and variables about the commitment period of LTSCs (expected spot price, spot price variance).

The information about LTSC bids is not available, but the information about bids on the spot market is plentiful, given that GENCOs offer a supply curve on a daily basis for this market. This curve is formed from individual generation plants' bids that must be ordered by price, in such a way that the accumulated quantity is determined by adding every unit's offered quantity. Based on every GENCO's daily bid, a statistical supply curve can be obtained by processing every daily supply curve as follows (see the sketch after Fig. 2):
i. Steps in supply curves are represented by points in a scatter plot (Fig. 1).
ii. Next, these points are clustered by price ranges. For each cluster a set of statistical measurements is calculated (i.e. mean, max, min, quartiles) with the aim of building different statistical supply curves.
iii. Finally, clusters are combined in such a way that every cluster's mean is greater than the previous one, in order to ensure the supply curve's monotonicity (Fig. 2).

Figure 2. Statistic Supply Curve of Spot Market Bids. Source: The authors
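A minimal sketch of steps i-iii is given below. The bid data are synthetic, and the monotonicity step uses a simple running maximum rather than the cluster merging described in the text; it is illustrative only, not the authors' MATLAB code.

```python
import numpy as np

# Synthetic spot-market bid points (price in $/kWh, quantity in MW)
rng = np.random.default_rng(1)
prices = rng.uniform(50.0, 300.0, 5000)
quantities = 2000.0 + 8.0 * prices + rng.normal(0.0, 300.0, 5000)

# Step ii: cluster the scatter points by price range and average each cluster
edges = np.linspace(50.0, 300.0, 26)
mids, means = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (prices >= lo) & (prices < hi)
    if mask.any():
        mids.append(0.5 * (lo + hi))
        means.append(quantities[mask].mean())

# Step iii (simplified): enforce a non-decreasing statistical supply curve
expected_g = np.maximum.accumulate(means)
print(list(zip(np.round(mids[:3], 1), np.round(expected_g[:3], 1))))
```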

Now, the resulting expected values for each price can be understood as the expected supply curve or expected generation function G(P). It is important to note that G(P) represents the total generation (MW) allocated in both markets, spot and contracts, rather than only the generation allocated in the spot market. The amount to be sold in the spot market is calculated by subtracting the GENCO's contract obligations. Also, an optimal hedge level for a given contract price can be determined by using portfolio concepts [26], as follows:
i. First, it is necessary to represent hedge preferences by utility functions, as commonly used in portfolio evaluations. This utility function allows two objectives to be balanced: expected earnings maximization and financial risk mitigation. In general terms, the GENCO's revenue is calculated using equation 1, where b is the contract price, G(b) is the expected generation at contract price b, h is the hedge level (contract sales), (1 - h) is the spot market sales and p̄ is the mean spot price.

R(b, h) = G(b) · [h · b + (1 - h) · p̄]    (1)

ii. Now, the expected value of the GENCO's revenue and its involved risk (understood as the revenue variance) can be balanced by using a Linear Mean-Variance Utility Function (LMVUF) with a risk aversion constant (γ), as in equation 2:

U(R) = G(b) · [h · b + (1 - h) · p̄] - γ · G(b)² · (1 - h)² · σp²    (2)

where σp² is the spot price variance.

iii. From the LMVUF's derivative with respect to h, the optimal hedge level can be found. This optimal hedge level depends on the estimated average spot price, its variance and the GENCO's risk preferences (γ), as stated in equation 3:

h* = 1 - (p̄ - b) / (2 · γ · σp² · G(b))    (3)

iv. Next, it is possible to estimate a hedge curve by calculating the optimal hedge level for several prices using equation 3. Finally, a Contract Supply Curve (CSC) is calculated by multiplying G(P) and the obtained hedge curve, as described in the steps below. However, some assumptions were necessary to calculate this curve. First, the generators are able to make good estimations of the average and variance of the spot price; otherwise, it is a mistake to use the historical data to calculate γ. Second, risk aversion does not present meaningful changes between commitment periods. Last but not least, generators plan their risk hedge using an LMVUF to represent their risk preferences.
Formally, the Contract Supply Curve is calculated as follows:
i. By choosing a period of time and getting its spot market historical information.
ii. By determining contract sales and spot sales, and weighing them to obtain the historical hedge level using equation 4:

h = (contract sales) / (contract sales + spot sales)    (4)

iii. By calculating the spot price mean (p̄) and the spot price variance (σp²), and assuming a contract price (b) (i.e. the average price of the contracts for the entire power market).
iv. By calculating the risk aversion, solving for γ in equation 3 as stated in equation 5:

γ = (p̄ - b) / (2 · (1 - h) · σp² · G(b))    (5)

v. By predicting the spot price mean (p̄) and variance (σp²) for the commitment period of the contract.
vi. By calculating the hedge curve h(P) as shown in equation 6:

h(P) = max{ 1 - (p̄ - P) / (2 · γ · σp² · G(P)), 0 }    (6)

vii. By determining the expected generation as a function of price, G(P), for the commitment period of the contract.
viii. By calculating the Contract Supply Curve CSC(P), multiplying point by point the curves h(P) and G(P):

CSC(P) = h(P) · G(P)    (7)

One of the most important contributions of the proposed methodology is that, once a Contract Supply Curve is estimated, it is also possible to estimate the effect of the risk aversion on it. Fig. 3 shows the risk aversion effect on a contract supply curve. The three Contract Supply Curves shown were estimated with the following parameters: p̄ = 200 $/kWh, σp² = 1000 and γ = {0.04, 0.1, 0.31}. This figure shows that the higher the risk aversion, the larger the amount of energy to be allocated in contracts for the same price.

Figure 3. Risk Aversion Effect on Contract Supply Curve. Source: The authors
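The following sketch evaluates equations 6 and 7 for a hypothetical expected generation curve G(P), using the p̄, σp² and one of the γ values quoted for Fig. 3. The G(P) shape is an assumption for illustration; the sketch is not the authors' implementation.

```python
import numpy as np

p_bar, var_p = 200.0, 1000.0        # expected spot price ($/kWh) and variance (Fig. 3)
gamma = 0.10                        # one of the risk aversion values from Fig. 3

prices = np.linspace(100.0, 300.0, 5)
g = 1000.0 + 10.0 * prices          # hypothetical expected generation G(P), MW

h = np.clip(1.0 - (p_bar - prices) / (2.0 * gamma * var_p * g), 0.0, None)  # eq. 6
csc = h * g                                                                  # eq. 7
for p, hv, q in zip(prices, h, csc):
    print(f"P = {p:5.1f}  h = {hv:.4f}  CSC = {q:7.1f} MW")
```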

2.2. Decision making: Scenarios, strategies and rewards

Unlike sealed bid auctions, in a descending clock auction the allocations and payments depend not only on a single bid, but on the bids sent in previous rounds; consequently, finding an equilibrium strategy using game theory is more difficult in this kind of auction. Thus, decision making was based on an alternative theoretical framework: decision theory. Decision theory is a set of criteria that allows someone to choose among different strategies under several feasible scenarios. When the scenarios' probabilities are known, it is called decision under risk and, consequently, decision making is based on the expected strategy revenue. The most common criterion in this kind of decision making is the expected value criterion. This criterion weights rewards by the scenarios' probabilities to find the strategy's expected value [35]. In every round of a descending clock auction, bidders face two scenarios:
i. The next round in the auction will be the last one.
ii. The next round in the auction will not be the last one.
These two scenarios are enough to understand the decision making.
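To make the expected value criterion concrete, the minimal sketch below evaluates a decision matrix of the kind shown in Table 1 further on. The rewards and the probability that the next round is the last are illustrative numbers only, not values taken from the paper.

```python
import numpy as np

# Rows: strategies S1..S3; columns: [next round is last, next round is not last]
U = np.array([[120.0,  95.0],     # S1 = CSC(P)
              [140.0,  80.0],     # S2 = CSC(P) + dQ
              [100.0, 105.0]])    # S3 = CSC(P) - dQ

p_last = 0.3                      # assumed probability that the next round is the last
probs = np.array([p_last, 1.0 - p_last])

expected = U @ probs              # expected value criterion: probability-weighted rewards
best = expected.argmax()
print(f"expected rewards: {np.round(expected, 1)}; choose S{best + 1}")
```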




next round is not the last one, they will strategically bid to improve their position in the auction. On the other hand, if bidders have certainty that the next round is the last one, they will bid in order to maximize their revenue.

In fact, bidders can choose among many possible bids, but to limit the choices it is important to balance the accuracy of the results and the time spent to get them. Therefore, it is proposed that the bidders' choices can be grouped in three strategies as follows:
i. Bidding a quantity of energy according to their CSC curves.
ii. Bidding a larger quantity of energy than the one set by their CSC curves.
iii. Bidding a smaller quantity of energy than the one set by their CSC curves.

Now, for each scenario-strategy pair there is an associated reward, i.e., for strategy i and scenario j the associated reward is Uij. When the next round is the last one (scenario 1), the reward (Ui1) is calculated from equation 2 with b equal to the round price (Pa), given that payment is actually achieved in the final round. In scenario 2 (the next round is not the last one), rewards are not real revenues given that there is no payment. However, bidders choose an optimal strategy to drive the auction toward a convenient point where they can maximize their revenues at the end of the auction. Therefore, the strategy's reward is the expected revenue derived from the current strategy.

In addition, if an auction has interdependent estimations, bidders may be motivated to send bids that differ from their real valuations. In other words, if each bidder's estimate is partially based on rivals' information, one bidder offering small quantities might induce his/her rivals to decrease their own valuations and, consequently, they may leave the auction. Furthermore, bidders may also have incentives to hold their bids: a bidder inflates their valuations in the hope of exhausting the competitors' limited budgets, and then shifts to bidding its real valuation for these goods, now facing weakened competition [30]. The chosen strategy depends on the expected strategy's impact on aggregate supply. This impact depends on the bidder's market power.

Assuming that all bidders have historical information about other bidders, they can estimate the aggregate supply for each price based on the aggregate supply for the previous round price and the historical offers for the same price. Next, based on a supply curve estimate and a demand curve, the residual demand (DR) is calculated [36]. Once the bidder has the auction situation summarized in the residual demand, it is easy to estimate the expected revenues for a given strategy.

In order to estimate the expected revenue for a given strategy, an algorithm was implemented in MATLAB. This algorithm follows these steps:
i. Calculate the probability that the current aggregate supply curve is greater than the historical aggregate supply curve for the current round price.
ii. Estimate other bidders' aggregate supply (SO) for the next round price based on historical data and the probability found at step one.
iii. For the three possible strategies (si), calculate the aggregate supply for the next round price, ASr+1|si, by adding SO and si.
iv. Calculate the probability that ASr+1|si is greater than the historical aggregate supply for the next round price.
v. Estimate other bidders' aggregate supply for the future price rounds, SOf(Pf), based on two limits: SOf(Pf)max and SOf(Pf)min.
- SOf(Pf)max is other bidders' maximum aggregate offer at a future round price (Pf). For every Pf value, SOf(Pf)max is such that the probability of SOf(Pf)max being greater than or equal to other bidders' historical aggregate supply equals Prb.
- SOf(Pf)min is other bidders' minimum aggregate offer at a future round price (Pf), and is estimated by taking the other bidders' minimum historical aggregate supply at price Pf.
vi. Estimate the maximum (DRmax) and minimum (DRmin) residual demand from SOf(Pf)min, SOf(Pf)max and the demand curve DC(Pf), as follows:

DR(Pf)max = DC(Pf) - SOf(Pf)min        (8)
DR(Pf)min = DC(Pf) - SOf(Pf)max        (9)

vii. Determine the set of possible clearing prices from DR(Pf)min, DR(Pf)max and the range of possible bids.
viii. Calculate the expected reward (Ui2) based on the obtained set of possible clearing prices.

Table 1.
Bidder's Decision Matrix in a Dynamic Auction
                   P ≠ Pclear   P = Pclear
S1 = CSC(P)        U11          U12
S2 = CSC(P) + ΔQ   U21          U22
S3 = CSC(P) - ΔQ   U31          U32
Source: The authors

Based on the expected rewards, a decision matrix can be written (Table 1). Once this matrix is established, it is necessary to estimate the probability of each scenario in order to use the expected value criterion to choose the most convenient strategy under the feasible scenarios. The probability of scenario 1 (Probsc1) (the next round being the last one) is estimated based on the historical data about the clearing price. The probability of scenario 2 is calculated as (1 - Probsc1).
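As a minimal illustration of the expected value criterion, the sketch below weights each strategy's rewards by the scenario probabilities and picks the best strategy; the reward values are placeholders, not results from the paper.

import numpy as np

def choose_strategy(U, prob_sc1):
    # U[i][j]: reward of strategy i under scenario j
    # (j = 0: next round is the last one; j = 1: it is not).
    probs = np.array([prob_sc1, 1.0 - prob_sc1])
    expected = U @ probs                  # expected reward per strategy
    return int(np.argmax(expected)), expected

U = np.array([[120.0, 90.0],              # S1 = CSC(P)       (placeholder rewards)
              [150.0, 60.0],              # S2 = CSC(P) + ΔQ
              [100.0, 95.0]])             # S3 = CSC(P) - ΔQ
best, ev = choose_strategy(U, prob_sc1=0.3)
print(best, ev)                           # index of the most convenient strategy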


3. Model application

The proposed model was applied to the Colombian power market, and specifically to a new market auction scheme proposed in the last five years. The following sections describe this application.

3.1. Colombian electricity wholesale market

The Colombian power market (known as MEM) was created in 1995 as a competitive environment for the generation and energy retail activities. This market structure assures the existence of enough sellers and buyers, avoiding any direct influence of any agent over the final energy tariffs
[12]. In addition, MEM allows the trading of energy by means of a spot market or bilateral contracts, and both choices might be represented as an energy portfolio to manage the suppliers' revenue risk. In this market, hiring 100% of power generation lets suppliers know the expected annual revenues, implying lower levels of risk. Nevertheless, the average spot price is usually higher than the average contract price, so hiring 0% of power generation maximizes the expected revenue. Consequently, an optimal hedge level must be determined in order to maximize the revenue at an acceptable risk level.

In addition, there are two kinds of final customers: regulated and non-regulated customers. The regulated customers pay a tariff that is fixed by the regulator. This tariff transfers the energy purchase cost that was paid by their incumbent retailer. On the other hand, non-regulated customers buy energy through a competitive scheme.

3.2. Organized market (MOR)

Five years ago, a new scheme to trade energy for the regulated customers was designed: the Organized Market (MOR). MOR is a descending clock auction where standard long-term energy supply obligations are traded. In other words, the auction product is a standardized contract with a fixed commitment period (one year). According to this new market structure, suppliers/retailers who want/need to sell/buy energy for regulated customers must bid exclusively through this new market scheme (MOR).

Retailers have a passive role in MOR: the auctioneer asks them for the energy needed and sets an aggregate demand curve by adding the retailers' requests. Moreover, the auctioneer fixes two prices: PP1 and PP2. PP1 is the maximum price that the auctioneer is willing to pay for the total energy demand, while PP2 is the maximum price that the auctioneer is willing to pay for only a fraction of the demand. Before the auction takes place, the available information is limited: PP2 is known, PP1 is unknown, and a range of values for the possible total demand is given instead of an accurate total demand. Thereby, bidders do not know the real demand curve; instead, they have an expectation that can be represented by lower and upper limits.

During an auction, the auctioneer sets a range of prices within which the bidders are able to offer. This range sets a variable called round size. For each round, the auctioneer sets the round size in order to balance the auction's transaction costs and the available time to fit the bidders' bids. Additionally, a suitable handling of this round size allows the chances of exercising market power to be limited. Finally, the auction ends when the aggregate supply meets the total demand curve. Then, the auctioneer discloses the final allocations.

3.3. Simulation parameters

The proposed model was applied to MOR in order to evaluate the impact of varying its parameters: PP2, PP1 and round size. Six generators were modeled and their main features are presented in Table 2. These agents choose one of the following three strategies in every round:

Table 2. Chosen Generators to Model MOR
GENCO     Plants   Risk Aversion
GENCO A   5        0.04
GENCO B   5        0.31
GENCO C   6        0.002
GENCO D   1        0.19
GENCO E   10       0.104
GENCO F   4        0.025
Source: The authors

Table 3. Different Auctions to Be Simulated
Parameter   Lower Limit   Upper Limit   Step
PP2         100           210           10
PP1min      50            PP2           10
PP1max      PP1min        PP2           10
nr          30            130           20
TD          0%            ±20%          ±5%
Source: The authors

i. Bid according to their CSC at the round price.
ii. Bid according to the 25th percentile of their historical bids at the round price.
iii. Bid according to the 75th percentile of their historical bids at the round price.

The simulation was set with the following values: average spot price (87 $/kWh) and spot price variance (1000 ($/kWh)²); PP2 was varied between 100 and 210 $/kWh and PP1 between 50 and 210 $/kWh. All these values were chosen from the historical data of Colombia's spot market. In order to vary the round size, an additional parameter was introduced: a maximum number of rounds (nr). Thus, the round size is calculated by dividing the difference between the auction's starting price and its minimum price by nr. Thereby, the higher the maximum number of rounds, the smaller the size of the rounds. The simulation scenarios are composed of 5 variables: the price PP2, the maximum number of rounds (nr), the total demand uncertainty (ΔTD) and the two PP1 limits: PP1max and PP1min. Table 3 shows the parameters for the auctions simulated using the MATLAB algorithm described above. In all the auctions, more than 15,000 scenarios were simulated.
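The parameter grid of Table 3 can be enumerated directly. The sketch below is our reconstruction of the scenario enumeration and of the round-size rule described above; the auction floor price is an assumption, and the paper's own sampling of the grid may differ.

P_MIN = 50                                     # assumed auction minimum price
scenarios = []
for pp2 in range(100, 211, 10):
    for pp1_min in range(50, pp2 + 1, 10):
        for pp1_max in range(pp1_min, pp2 + 1, 10):
            for nr in range(30, 131, 20):
                for dtd in range(-20, 21, 5):          # demand uncertainty, %
                    round_size = (pp2 - P_MIN) / nr    # price decrement per round
                    scenarios.append((pp2, pp1_min, pp1_max, nr, dtd, round_size))
print(len(scenarios))                          # well above 15,000 combinations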




4. Results and Discussion

The main model output is the auction's clearing price (Pc). However, Pc is impacted by the demand curve shape, as shown in Fig. 4. To avoid this possible bias in the analysis, a Modified Clearing Price (Pcm) was introduced to filter this effect and, accordingly, to identify the parameters' direct impact on bidders' decisions throughout the auction. The parameters' impact on Pc and Pcm is then evaluated by applying Pearson's coefficient to the simulation outputs. Table 4 summarizes the results.

Figure 4. Clearing Price (Pc) and Modified Clearing Price (Pcm). Source: The authors

Table 4. Pearson's Coefficient between Parameters and Pc or Pcm
Parameter   r with Pcm   r with Pc
PP2         0.89         0.95
PP1min      0.54         0.29
PP1max      0.74         0.51
nr          0.09         0.03
TD          0.03         0.03
Source: The authors

From Table 4, some conclusions can be inferred:
i. PP2 has the greatest impact on Pc and Pcm. Thus, the PP2 price influences the clearing price, and it also directly influences bidders' decision-making processes.
ii. PP1 has an impact on Pc but not on Pcm. Thus, despite PP1 having an impact on the clearing price, it does not have an impact on bidders' decisions.
iii. The remaining parameters do not have a strong influence on the auction's outputs.

Fig. 5 shows a sensitivity analysis of auction outputs against PP1 for different PP2 values. This figure supports the conclusion about the greater impact that PP2 has on clearing prices and hence on bidders' decisions. Likewise, Fig. 5 helps to show that the PP1 impact is limited; PP1 only impacts the auction output because PP1 defines the demand curve shape, and hence only Pc has a direct relation with PP1. Finally, an additional conclusion can be stated: PP1 does not influence bidders' decisions throughout the auction. Fig. 6 shows this fact.
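The figures in Table 4 come from correlating each varied parameter with the simulated clearing prices. A small sketch of that computation, with synthetic stand-in data since the raw simulation outputs are not reproduced here:

import numpy as np

rng = np.random.default_rng(0)
pp2 = rng.uniform(100, 210, 1000)              # parameter values across scenarios
pc = 0.9 * pp2 + rng.normal(0, 10, 1000)       # stand-in clearing prices

def pearson(x, y):
    # Pearson's coefficient: covariance normalized by both standard deviations.
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum())

print(pearson(pp2, pc))                        # close to 1, as for PP2 in Table 4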

Figure 5. Clearing Price (Pc) against PP1 price. Source: The authors

Figure 6. Modified Clearing Price (Pcm) against PP1 price. Source: The authors

5. Contributions and conclusions

Two of the paper's contributions must be highlighted. First, this paper proposes a methodology to obtain a GENCO's contract supply curve from historical data about spot prices. Barroso [34] presented a similar methodology; however, the methodology proposed in this paper summarizes the GENCOs' risk preference in one constant (γ) instead of a piecewise function. Roubik's work [26] proposed a methodology to establish the hiring level based on the GENCOs' risk preference; this paper contributes to Roubik's work to the extent that it proposes a methodology to calculate the expected generation from spot market historical data, so that the GENCO's hiring profile is calculated from spot market information. This paper proposes a methodology to summarize spot market historical data in a statistical supply curve, which allows statistical information about the GENCO's bidding profile to be obtained.

The paper's second important contribution is the modeling of GENCOs' strategic behavior throughout a dynamic auction. Models like Barroso's obtain the GENCOs' best response to the auction, but this response is independent of the auction's development. Instead, this paper presents a methodology to get the GENCOs' best response for the next round, which allows the bidders' adaptation along the auction to be understood. This modeling feature is extremely helpful for understanding bidding behaviors for the purpose of future power market designs.

Regarding the Colombian power market (MOR), some conclusions emerge from the application of the proposed model. The model allows the auction's sensitivity to several demand curve parameters and round sizes to be evaluated. This sensitivity analysis allows the following conclusions to be stated:
i. The auction's starting price PP2 has the strongest impact on the auction's clearing price.
ii. Under MOR's rules, the PP1 price does not influence bidders' decisions. However, PP1 has an important




effect on clearing prices because PP1 modifies the demand curve and, consequently, the equilibrium price.
iii. The round size strongly influences bidders' decisions throughout the auction. Hence, suitable round size management is helpful to prevent anticompetitive behaviors among bidders. However, the round size impact on the auction's clearing price is low, due to the strong relation between starting price and clearing price.

These conclusions are extremely helpful for designing market structures in power markets, given that the model allows emerging behaviors along the proposed auctions to be modeled.

References

[1] Krishna, V., Auction theory, Elsevier Science, 2009.
[2] Gallego, L., Duarte, O. and Delgadillo, A., Strategic bidding in Colombian electricity market using a multi-agent learning approach, Proceedings of Transmission and Distribution Conference Latin America, pp. 1-7, 2008. DOI: 10.1109/tdc-la.2008.4641706
[3] Delgadillo, A., Gallego, L., Duarte, O., Jimenez, D. and Camargo, M., Agent learning methodology for generators in an electricity market, Proceedings of IEEE Power and Energy Society General Meeting, pp. 1-7, 2008. DOI: 10.1109/pes.2008.4596279
[4] Xing, L. and Lan, Z., A method based on iterative combinatorial auction mechanism for resource allocation in grid multi-agent systems, Proceedings of IEEE International Conference on Intelligent Human-Machine Systems and Cybernetics IHMSC '09, pp. 36-39, 2009. DOI: 10.1109/ihmsc.2009.17
[5] Arai, S. and Miura, T., An intelligent agent for combinatorial auction, Proceedings of IEEE International Conference on Hybrid Intelligent Systems (HIS), pp. 34-39, 2011. DOI: 10.1109/his.2011.6122076
[6] Wang, S. and Song, H., A multi-agent based combinational auction model for collaborative e-procurement, Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), pp. 1108-1112, 2008.
[7] Tae Il, K., Bilsel, R.U. and Kumara, S.R.T., A reinforcement learning approach for dynamic supplier selection, Proceedings of IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), pp. 1-6, 2007.
[8] Akkaya, Y., Badur, B. and Darcan, O.N., A study on internet auctions using agent based modeling approach, Proceedings of IEEE International Conference on Management of Engineering and Technology (PICMET), pp. 1772-1778, 2009. DOI: 10.1109/picmet.2009.5261945
[9] Sow, J., Anthony, P. and Mun-Ho, Ch., Competition among intelligent agents and standard bidders with different risk behaviors in simulated English auction marketplace, Proceedings of IEEE International Conference on Systems Man and Cybernetics (SMC), pp. 1022-1028, 2010. DOI: 10.1109/icsmc.2010.5641740
[10] Liu, X. and Cao, H., Uniform price versus discriminatory auctions in bond markets: An experimental analysis based on multi-agents system, Proceedings of IEEE International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), pp. 1791-1794, 2011.
[11] Cincotti, S., Guerci, E., Ivaldi, S. and Raberto, M., Discriminatory versus uniform electricity auctions in a duopolistic competition scenario with learning agents, Proceedings of IEEE International Congress on Evolutionary Computation, pp. 2571-2578, 2006. DOI: 10.1109/cec.2006.1688629
[12] Camargo, M., Jimenez, D. and Gallego, L., Using of data mining and soft computing techniques for modeling bidding prices in power markets, Proceedings of IEEE International Conference on Intelligent System Applications to Power Systems (ISAP), pp. 1-6, 2009. DOI: 10.1109/isap.2009.5352872
[13] Espinosa, M., Una aproximación al problema de optimalidad y eficiencia en el sector eléctrico colombiano, Documentos CEDE, Ediciones Uniandes, pp. 1-58, 2009.
[14] Gallego, L. and Duarte, O., Modeling of bidding prices using soft computing techniques, Proceedings of IEEE International Conference on Transmission and Distribution: Latin America, pp. 1-7, 2008. DOI: 10.1109/tdc-la.2008.4641757
[15] Yue, Ch., Mabu, S., Chen, Y., Wang, Y. and Hirasawa, K., Agent bidding strategy of multiple round English auction based on genetic network programming, Proceedings of IEEE International Joint Conference ICCAS-SICE, pp. 3857-3862, 2009.
[16] Batlle, C. and Rodilla, P., Policy and regulatory design on security of electricity generation supply in a market-oriented environment. Problem fundamentals and analysis of regulatory mechanisms, España, Instituto de Investigación Tecnológica, 2009.
[17] Moreno, R., Bezerra, B., Mocarquer, S., Barroso, L. and Rudnick, H., Auctioning adequacy in South America through long-term contracts and options: From classic pay-as-bid to multi-item dynamic auctions, Proceedings of the IEEE PES General Meeting, pp. 1-8, 2009. DOI: 10.1109/pes.2009.5275658
[18] Cramton, P. and Sujarittanonta, P., Pricing rule in a clock auction, Decision Analysis [Online], 2009. [date of reference November 2nd of 2013]. Available at: http://pubsonline.informs.org/doi/abs/10.1287/deca.1090.0161?journalCode=deca
[19] Mocarquer, S., Barroso, L., Rudnick, H., Bezerra, B. and Pereira, M., Energy policy in Latin America: The need for a balanced approach, IEEE Power and Energy Magazine, 7 (4), pp. 26-34, 2009. DOI: 10.1109/MPE.2009.933417
[20] Camac, D., Ormeño, V. and Espinoza, L., Assuring the efficient development of electricity generation in Peru, Proceedings of IEEE General Meeting, pp. 1-5, 2006. DOI: 10.1109/pes.2006.1708977
[21] La-Casse, C. and Wininger, T., Maryland versus New Jersey: Is there a "best" competitive bid process?, The Electricity Journal, 20 (3), pp. 46-59, 2007. DOI: 10.1016/j.tej.2007.03.002
[22] de Castro, L., Negrete-Pincetic, M. and Gross, G., Product definition for future electricity supply auctions: The 2006 Illinois experience, The Electricity Journal, 21 (7), pp. 50-62, 2008. DOI: 10.1016/j.tej.2008.08.008
[23] Cramton, P. and Stoft, S., A capacity market that makes sense, The Electricity Journal, 18 (7), pp. 43-54, 2005. DOI: 10.1016/j.tej.2005.07.003
[24] Gener, A., Modelo de gestión del riesgo del suministro de último recurso de electricidad en agentes verticalmente integrados, Tesis, Universidad Pontificia de Comillas, Madrid, España, 2010.
[25] Cramton, P. and Stoft, S., Colombia firm energy market, Proceedings of Hawaii International Conference on System Sciences, pp. 1-11, 2007. DOI: 10.1109/hicss.2007.133
[26] Roubik, E. and Rudnick, H., Assessment of generators strategic behavior in long term supply contract auctions using portfolio concepts, Proceedings of IEEE Bucharest Power Tech Conference, pp. 1-7, 2009. DOI: 10.1109/ptc.2009.5281851
[27] Moreno, R., Rudnick, H. and Barroso, L., First price and second price auction modeling for energy contracts in Latin American electricity markets, Proceedings of 16th Power Systems Computation Conference, pp. 1-7, 2008.
[28] Azevedo, E. and Correia, P., Bidding strategies in Brazilian electricity auctions, International Journal of Electrical Power and Energy Systems, 28 (5), pp. 309-314, 2006. DOI: 10.1016/j.ijepes.2005.12.002
[29] Garcia-Gonzalez, J., Roma, T., Rivier, J. and Ostos, P., Optimal strategies for selling wind power in mid-term energy auctions, Proceedings of IEEE Power and Energy Society General Meeting, pp. 1-7, 2009. DOI: 10.1109/pes.2009.5276009
[30] Cramton, P., Dynamic auctions in procurement, in Dimitri, N., Piga, G. and Spagnolo, G. (Eds.), Handbook of Procurement, Cambridge, England: Cambridge University Press, 2006, pp. 220-248.
[31] Ausubel, L. and Cramton, P., Auctioning many divisible goods, Journal of the European Economic Association, 2, pp. 480-493, 2004. DOI: 10.1162/154247604323068168
[32] Milgrom, P., Putting auction theory to work, Cambridge: Cambridge University Press, pp. 1-28, 2004. DOI: 10.1017/CBO9780511813825
[33] Ausubel, L., Cramton, P., Pycia, M., Rostek, M. and Weretka, M., Demand reduction, inefficiency and revenues in multi-unit auctions, The Review of Economic Studies [Online], 2014. [date of reference November 2nd of 2013]. Available at: http://www.restud.com/paper/demand-reduction-and-inefficiency-inmulti-unit-auctions/
[34] Barroso, L., Street, A., Granville, S. and Pereira, M., Offering strategies and simulation of multi-item dynamic auctions of energy contracts, IEEE Transactions on Power Systems, 26 (4), pp. 1917-1928, 2011. DOI: 10.1109/TPWRS.2011.2112382
[35] Taha, H., Operations Research, New Jersey: Pearson Prentice Hall, 2007.
[36] Wolak, F., Measuring unilateral market power in wholesale electricity markets: The California market, 1998-2000, American Economic Association, 93 (2), pp. 425-430, 2003. DOI: 10.1257/000282803321947461
[37] Wolak, F., An empirical analysis of the impact of hedge contracts on bidding behavior in a competitive electricity market, International Economic Journal, 14 (2), pp. 1-39, 2000. DOI: 10.1080/10168730000080009

H.C. Torres-Valderrama was born in Bogotá, Colombia, in 1987. He is an MSc. and a PhD student. He completed his studies in Electrical Engineering at the Universidad Nacional de Colombia in 2010. He has been a research assistant in the Research Program of Acquisition and Analysis of Electromagnetic Signals of the Universidad Nacional de Colombia - PAAS-UN Group since 2010. His research interests are power markets, smart grids and energy efficiency.

L.E. Gallego-Vega was born in Bogotá, Colombia, in 1976. He completed his undergraduate, MSc. and PhD. studies in the Department of Electrical Engineering at the Universidad Nacional de Colombia, Bogotá, Colombia. He has been a researcher in the Research Program of Acquisition and Analysis of Electromagnetic Signals of the Universidad Nacional de Colombia - PAAS-UN since 2000, working on research projects mainly related to power quality and lightning protection. He has been involved in teaching activities related to power quality and computational intelligence. His research interests are power markets, power quality analysis and computational intelligence applied to power systems modelling.




A new method to characterize power quality disturbances resulting from expulsion and current-limiting fuse operation

Neil A. Ortiz-Silva a, Jairo Blanco-Solano b, Johann F. Petit-Suárez c, Gabriel Ordóñez-Plata d & Víctor Barrera-Núñez e

a Universidad Industrial de Santander, Bucaramanga, Colombia, neil.ortiz@cidet.org.co
b Esc. de Ing. Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia, jairo.blanco@correo.uis.edu.co
c Esc. de Ing. Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia, jfpetit@uis.edu.co
d Esc. de Ing. Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia, gaby@uis.edu.co
e Universitat de Girona, Girona, Spain, victor.barrera@udg.edu

Received: April 29th, 2014. Received in revised form: February 19th, 2015. Accepted: July 14th, 2015.

Abstract
This paper presents a new method for the characterization and diagnosis of electrical disturbances caused by fuse operation in electrical distribution systems. A set of descriptors is proposed in order to quantify the typical features of the distortions caused by the operation of expulsion and current-limiting fuses. A multivariate statistical analysis is performed to select the descriptors with the best classification profiles, and the optimal decision thresholds for classifying the disturbances are selected using machine learning algorithms. Voltage and current signals of fuse operations are obtained from ATP-EMTP simulations, as well as some real signals, all of which are used in the validation of the newly proposed algorithm, obtaining optimal performance and efficiency results. The algorithm was implemented in Matlab and its computational requirements are minimal.
Keywords: Electromagnetic disturbances; expulsion fuse; current-limiting fuse; descriptors; Multivariate Analysis of Variance (MANOVA).

Nuevo método para la caracterización de perturbaciones de la calidad de la potencia producto de la operación de fusibles de expulsión y limitadores de corriente

Resumen
Este trabajo presenta un nuevo método para la caracterización y el diagnóstico de perturbaciones eléctricas causadas por la operación de fusibles en los sistemas de distribución de energía eléctrica. Se propone un conjunto de descriptores con el propósito de cuantificar las características propias de las distorsiones originadas por la operación de fusibles de expulsión y limitadores de corriente. Para seleccionar los descriptores con los mejores perfiles clasificatorios se realiza un análisis estadístico multivariable y a través de algoritmos de aprendizaje automático se seleccionan los umbrales de decisión óptimos para los mismos. A partir de la simulación en ATP-EMTP se obtienen señales de tensión y corriente de la operación de fusibles, así como algunos registros de eventos reales, para en conjunto ser utilizados en la validación del nuevo algoritmo propuesto, obteniéndose un óptimo desempeño y eficacia en los resultados. El algoritmo fue implementado en Matlab y los requerimientos computacionales son mínimos.
Palabras clave: perturbaciones electromagnéticas; fusible de expulsión; fusible limitador de corriente; descriptores; análisis multivariante de la varianza MANOVA.

1. Introduction

The power quality concept is of great interest to both customers and electrical utilities. The study of different types of electrical disturbances and their associated causes has been the basis for recent research, which aims to prevent, control, or mitigate such disturbances. Therefore, studies related to the generation of new tools to diagnose electric power quality are currently very important. The purpose of these tools is to improve the power quality and reliability of electricity services.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 177-184. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48616



The fuse is one of the most used protective devices in any electrical system due to its simplicity and performance characteristics, in addition to its low cost compared to other protective devices. Fuse operation is reflected in distortions of the voltage and current waveforms that are recorded by power quality monitors. From these recorded signals it is possible to extract valuable and useful knowledge for the management and maintenance of electrical distribution systems. Through an analysis of the diagnosis and characterization of electrical disturbances, important results are obtained that help provide better electric service and improve power quality indices.

A comparative analysis of the effects of expulsion and current-limiting fuse (CLF) operation is presented in [1]. Some of its conclusions are that CLFs improve power quality and reduce the duration of voltage sags, although they generate overvoltages for customers near the fault location. Furthermore, the expulsion fuse presents a longer voltage sag duration and a smaller transient recovery voltage (TRV). In [2], a characterization of electrical disturbances caused by the operation of current-limiting fuses is presented. That paper was the authors' first contribution on the subject of characterizing power quality disturbances resulting from fuse operation, specifically for CLFs. The present study aims to develop a unified methodology to identify the electrical disturbances caused by expulsion fuses and CLFs. A first step is to propose a set of descriptors from the waveform characteristics caused by fuse operation. Second, a multivariate statistical analysis determines the set of relevant descriptors, which in turn identify the type of electrical disturbance. Finally, the decision rules and the algorithm are proposed.

The second section of this paper presents an overview of the expulsion fuse and CLFs. The third section presents the fuse modeling methods; the models were verified by comparing simulation results. Section four shows the formulation of the descriptors, the statistical analysis and the selection of decision rules, as well as the design and validation of the algorithm. Conclusions are given at the end of this paper.

2. The fuse overview

The fuse is a simple and reliable safety device, which has great advantages compared to other protective devices due to its ease of implementation. Fuses are current-sensitive devices with inverse-time operating characteristics [3]. They consist of a conductive element with a reduced cross section, usually surrounded by an arc extinguisher and heatsink, encapsulated in a cylindrical cartridge and provided with terminals. The fuse element inside the cartridge is formed by a wire or metal strips with a reduced section and is calibrated according to its current capacity. In this metal section a high current density is produced which, in turn, causes the fusion of the element and the opening of the connected circuit [4]. For low voltages and currents, the fuse elements are manufactured using a lead-based alloy; for higher currents, a tape-based alloy of copper or aluminum is used [1].

Figure 1. Voltage distortion caused by expulsion fuse. Source: The authors.


Figure 2. Current distortion caused by expulsion fuse. Source: The authors.

2.1. Expulsion fuse characteristics

Expulsion fuses use high-pressure gas generated by compressed tablets of boric acid in order to extinguish the arc. The fuse element is located between two contacts, one mobile and one fixed. In [5], various physical characteristics of the expulsion fuse are presented. Expulsion fuses commonly have long operation times, ranging from half a cycle to times exceeding minutes [1], as shown in Fig. 1 and Fig. 2. The waveforms show voltage sags and long periods of overcurrent. This involves the flow of large amounts of energy (I²t) through the electric circuit, thereby exposing sensitive equipment to high currents [1]. For a current-limiting fuse, in contrast, the I²t factor is much lower, due to its rapid action.

2.2. Current-limiting fuse characteristics

The current-limiting fuse is a fast-acting device that interrupts the fault current in less than half a cycle by introducing a high resistance into the circuit. Fig. 3 shows the voltage distortion caused by a current-limiting fuse. The fuse element is longer than in the expulsion fuse and is located within silica sand to focus the arc. Pressure increases along the fuse element and produces a momentary increase in resistance, limiting the fault current [4]. In addition, the operation time is reduced to a value within the first half cycle of the current waveform [3].
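Because the let-through energy I²t separates the two fuse types quantitatively, it is worth noting how it can be evaluated from a sampled current record. A minimal sketch, with a synthetic waveform and an assumed sampling rate:

import numpy as np

def let_through_energy(i, dt):
    # I²t: integral of the squared current over the disturbance,
    # approximated from the samples by the rectangle rule.
    return float(np.sum(i**2) * dt)

fs = 15360.0                                  # assumed: 256 samples/cycle at 60 Hz
t = np.arange(0, 0.5, 1 / fs)
i_fault = 8.0 * np.sin(2 * np.pi * 60 * t)    # synthetic overcurrent, p.u.
print(let_through_energy(i_fault, 1 / fs))    # grows with fault duration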




non-linear resistance characteristic after melting open. The implemented model was taken from a current-limiting fuse of 8.3 kV, 20 A. According to [1], the current-limiting fuse was modeled as a nonlinear resistance. This model was implemented in ATPDraw and simulated on two test circuits (Fig. 4), whose records were obtained for the characterization of the disturbances and for future tests of the methodology. Two radial test feeders, of 34 and 13 nodes, were used in the simulations [8], [9].

4. Methodology and implementation

Figure 4. Model of the fuse operation. Source: The authors.

3. Fuse modeling and simulation of its operation

In order to study the effects of expulsion and current-limiting fuse operation in an electrical network, the devices were modeled using their inverse-time characteristics in the fuse model implemented in ATP-EMTP (Fig. 4).

3.1. Expulsion fuse modeling

The expulsion fuse was modeled as an ideal switch whose opening time is determined by the inverse-time curve and the zero-crossing time of the instantaneous current (Fig. 4). The model meets the basic requirements of an expulsion fuse, namely:
- Overcurrent detection.
- Operating time depending on the overcurrent magnitude.
- Current interruption at a zero-crossing time.
- Opening of the protected feeder.
Fuse models were programmed in the "MODELS" package and were implemented in ATP-EMTP using the Type-94 Thevenin model [6]. Simulations were performed on two test circuits, and a set of voltage sags was obtained for the characterization of disturbances and some testing of the proposed algorithm. A system of 13 nodes [8] and one of 34 nodes [9] are the test feeders used in the simulations. Different types of faults were simulated, including single line-to-ground, double line-to-line and three-phase faults, and the voltage and current waveforms were recorded.

3.2. Current-limiting fuse modeling

Two principal parameters were taken into account in its modeling. One is the fuse's melt I²t and the other is the fuse's

The following subsections describe relevant aspects of each of the steps set for the development of the proposed methodology for the characterization of the electrical disturbances caused by expulsion and current-limiting fuses. First, the proposed descriptors are described according to the waveform characteristics they capture. The implementation of the descriptors is performed using functions programmed in MATLAB. Subsequently, the multivariate statistical analysis is presented; the aim of this step is to determine the degree of relevance of each of the descriptors and to select decision rules, based on machine learning techniques, for the design of the proposed methodology. Validation is performed using the different types of previously identified disturbances. Lastly, the proposed algorithm is applied to a set of actual and synthetic recorded data.

4.1. Formulation of descriptors

The formulation of descriptors is based on the waveforms of electrical disturbances, as well as features identified in [2]. Some of these features are the event duration, the slopes of the instantaneous and RMS overcurrent, the ratio between fault and pre-fault currents, the zero crossing at the overcurrent opening, the disturbance waveform in the instantaneous and RMS value sequences, and the percentages of variation in the RMS voltage and current. A number of descriptors that measure these and other features are postulated in this article and described below.

4.1.1. Disturbance time delta (∆TP)

The ∆TP is defined as the difference between the end point (t_end) and starting point (t_start) of the disturbance, with respect to the total number of samples per cycle (N):

∆TP = (t_end - t_start) / N        (1)

The start and end points of the disturbance were determined using a segmentation tool developed in [13]. This algorithm identifies changes in the magnitude and frequency of the signal and estimates segments according to the detected variations in these parameters. Its operation is based on tensor analysis and wavelet transforms (decomposition with the Bior 3.9 family). The proposed algorithm combines the advantages of both techniques and has become a highly efficient tool for signal segmentation.




4.1.2. Ratios of voltage and currents (RVI)

The expected behavior of the voltage and current waveforms when a fuse operates is a relative decrease and increase, respectively, of the effective (RMS) signals during a disturbance: during the disturbance the current always increases while the voltage tends to decrease. The RVI descriptor quantifies this characteristic behavior, identifying the phases involved in the network fault. RVI is a binary descriptor which defines whether a disturbance with this behavior in the voltage and current signals results from the operation of a fuse. This descriptor, unlike the others, is calculated using the RMS sequences of voltage and current of one phase; the expected result is the value 1, indicating that the current increases while the voltage decreases during the event, and the value zero otherwise. The calculation method is presented in equation 2:

%V = (Vrms,dist / Vrms,pre) × 100
%I = (Irms,dist / Irms,pre) × 100        (2)
RVI = 1 if %I > 100 and %V < 100; RVI = 0 otherwise

Figure 5. The Nearest Zero-Crossing. Source: The authors.


The values %V and %I correspond to the percentages of variation of the RMS voltage and current sequences, respectively. The set of RMS values taken into account are those located between 15% of the measurements following the start point of the disturbance and 15% of the measurements before the end of the event. Each percentage is calculated by comparing the voltage or current signal with an RMS replica signal built from an analysis of the pre-fault waveforms.

4.1.3. Nearest zero-crossing (NZC)

When a disturbance occurs, it is important to identify the time at which the disturbance is cleared. For this, the zero crossing before the opening of the fault is taken as a reference point. The NZC descriptor is formulated in order to estimate the proximity between the point of fault clearing and the nearest zero crossing. Consequently, it can be determined whether the interruption occurred at a current value different from the waveform's normal zero crossing (a distinguishing feature between a current-limiting fuse and an expulsion fuse). In summary, the NZC descriptor computes the number of samples between the end point of the disturbance and the zero crossing closest to this point, as shown in Fig. 5 and equation 3:

NZC = #samples(end point to nearest zero crossing) / #samples per cycle        (3)
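A sketch of the RVI and NZC computations as reconstructed in equations 2 and 3; the helper names are ours, the pre-fault replica is simplified to a scalar RMS value, and at least one zero crossing is assumed to exist in the record.

import numpy as np

def rvi(v_rms, i_rms, v_pre, i_pre, start, end):
    # Equation 2: trim 15% of the in-disturbance samples at each edge,
    # then compare the event RMS against the pre-fault replica.
    k = int(0.15 * (end - start))
    sl = slice(start + k, end - k)
    pct_v = 100.0 * v_rms[sl].mean() / v_pre
    pct_i = 100.0 * i_rms[sl].mean() / i_pre
    return 1 if (pct_i > 100.0 and pct_v < 100.0) else 0

def nzc(i_inst, end, n_cycle):
    # Equation 3: distance in samples from the disturbance end point to the
    # nearest zero crossing, normalized by the samples per cycle.
    signs = np.signbit(i_inst).astype(int)
    zc = np.where(np.diff(signs) != 0)[0]
    return float(np.min(np.abs(zc - end))) / n_cycle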

Figure 6. Rectangular shape coefficient. Source: The authors.

4.1.4. Rectangular shape coefficient (RSC)

The RMS voltage sequences of voltage sags generally show rectangular waveforms. The shape coefficient is used to quantify whether the sequence of effective voltage values during the event follows a rectangular sag. If the disturbance corresponds to the operation of an expulsion fuse, then RSC will have values close to unity, and values close to zero in the opposite case. The descriptor is quantified by comparing areas: the area under the curve of the disturbance is calculated and compared with the area under the curve of a perfect rectangle, using equation 4. The ideal rectangle is defined by the start and end points of the disturbance and the lowest value of the RMS voltage during this period of time, as illustrated in Fig. 6:

RSC = A_disturbance / A_rectangle        (4)
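The area comparison of equation 4 reduces to a few lines. A sketch under one plausible reading of the comparison (area under the RMS voltage segment against the ideal rectangle built on its lowest value):

import numpy as np

def rsc(v_rms, start, end):
    # Equation 4 as an area ratio; equal to 1 for a perfectly
    # rectangular RMS voltage sag.
    seg = v_rms[start:end]
    area_dist = float(np.sum(seg))
    area_rect = float(seg.min()) * len(seg)
    return area_dist / area_rect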

4.1.5. Current rise and fall time

The ePI+ and ePI- descriptors measure the speed at which the RMS current grows and decays during the disturbance.


Table 1. Complementary descriptors.
Descriptor   Definition
iTSC         Instant triangular shape coefficient
∆TP          Disturbance time delta
iPI+         Up slope of instant overcurrent
iPI-         Down slope of instant overcurrent
%iVF         Instant voltage fall percentage
iRATIO       Instant currents ratio
I²t          Dissipated energy
iRSC         Instant rectangular shape coefficient
-            Fuse operation insertion angle
NZC          The nearest zero-crossing
IZ0          Increase of zero-sequence impedance
IZ-          Increase of negative-sequence impedance
Source: The authors

Table 2. Descriptors and their relevance according to a statistical analysis.
Descriptor   Definition                               R²
∆TP          Disturbance time delta                   0.800
NZC          The nearest zero-crossing                0.643
RVI          Ratios of voltage and currents           0.873
RSC          Rectangular shape coefficient            0.581
iTSC         Instant triangular-shaped coefficient    0.639
iRSC         Instant rectangular-shaped coefficient   0.586
Source: The authors

ePI+ = ΔIrms,rise / Δt,rise ;  ePI- = ΔIrms,fall / Δt,fall        (5)


These descriptors extract important information about the transient state of the disturbance and identify the fuse operation among other long-duration disturbances. The slopes (rise and fall) of the electrical current characterize the behavior during the melting of the fuse element and the resulting arc.

4.1.6. Other descriptors proposed in [2]

Table 1 presents a summary, compiled by the authors of this work, of some descriptors proposed in [2]. This set of descriptors is primarily used for characterizing the disturbances caused by current-limiting fuses.

4.2. Multivariate analysis of variance - MANOVA

Based on the descriptors mentioned above, a multivariate statistical analysis is performed to establish the effectiveness of the descriptors and the degree of relevance of each one in the identification of events caused by fuse operation. Eighty voltage and current electrical signals (obtained from simulation and from real records) are analyzed. These waveforms were previously identified and classified into three classes of causes: current-limiting fuse operation, expulsion fuse operation and capacitor bank energization. The multivariable analysis verified the existence of groups or classes in the data, i.e., the degree of influence of each feature on the event, allowing the importance of each descriptor with respect to the origin of the event (current-limiting fuse, expulsion fuse, capacitor bank energizing) to be known. The corrected R² parameter (see Table 2, third column) indicates the degree of influence of the cause of the event on each of the exposed descriptors; corrected R² values close to unity indicate greater relevance regarding the origin of a disturbance. The descriptors with the greatest degree of influence were selected, namely those with corrected R² ≥ 0.5.

Table 3. Extracted rule set using the CN2 induction algorithm.
Rule                                                                                         Cause assignation
IF RVI = 1 AND ΔTP > 0.643 AND NZC ≤ 0.036 AND 0.76 ≤ RSC ≤ 3.02                             Cause = Expulsion fuse
IF RVI = 1 AND ΔTP ≤ 0.5313 AND NZC > 0.047 AND (0.402 < iTSC ≤ 1 OR 1.721 < iRSC ≤ 338)     Cause = Current-limiting fuse
Source: The authors
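The rule set of Table 3 maps directly onto code. A sketch of the resulting classifier, where the descriptor values are passed in as a dictionary and the thresholds are those of Table 3:

def classify(d):
    # Decision rules extracted by the CN2 algorithm (Table 3).
    if d["RVI"] == 1 and d["dTP"] > 0.643 and d["NZC"] <= 0.036 \
            and 0.76 <= d["RSC"] <= 3.02:
        return "Expulsion fuse"
    if d["RVI"] == 1 and d["dTP"] <= 0.5313 and d["NZC"] > 0.047 \
            and (0.402 < d["iTSC"] <= 1 or 1.721 < d["iRSC"] <= 338):
        return "Current-limiting fuse"
    return "No fuse operation"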

According to this analysis, the ∆TP, NZC, RVI, RSC, iTSC and iRSC descriptors were selected as the relevant descriptors for the characterization of electromagnetic disturbances caused by expulsion and current-limiting fuses.

4.3. Selection of the rules decision thresholds

The selection of thresholds and decision rules seeks to find appropriate descriptor values for the classification task. These values allow the cause of a disturbance to be classified while avoiding overlap between the disturbance groups. Data mining, a pattern and regularity recognition technique used with large databases, was used to obtain these thresholds. The CN2 algorithm was used, which is an automated learning technique based on an iterative algorithm searching for IF-THEN rules [11]. Every iteration looks for a set of descriptors covering a large number of examples in a specific class, and only some from other classes; therefore, it can be used to make a reliable prediction of the class containing the covered examples. Table 3 shows the decision rules and thresholds.

4.4. Methodology design

According to the results obtained and presented in Table 3, and in order to obtain a complete analysis tool, these results, together with those obtained in [2], are consolidated to create a methodology that allows for the identification of the disturbances caused by the operation of current-limiting and expulsion fuses (Fig. 7). The process of automatic identification is presented in Fig. 7, in which the input signals are voltages and currents (per phase). These signals are acquired from power quality monitors. From these signals, the descriptors are calculated first, and their values are then evaluated by the decision rules. The proposed algorithm has several steps, including a step for loading the signal records, segmentation of the signals, calculation of the RMS sequences, calculation of the proposed descriptors, evaluation of the decision rules and, finally, diagnosis of the fuse operation.



Figure 7. Proposed methodology for identifying disturbances related to expulsion and current-limiting fuse operation. Source: The authors.

5. Testing

Evaluation and validation of the proposed methodology were performed on a set of 50 electrical registers: 15 disturbances caused by expulsion fuses, 20 by current-limiting fuses and 15 disturbances from other causes. These registers comprise real data as well as disturbances obtained by simulation, which were used only in the validation stage and differ from those used in the MANOVA analysis stage. The results are shown in Table 4. The true positive rate (TPR) is the efficiency with which the algorithm correctly classifies a disturbance according to its cause (expulsion fuse, current-limiting fuse or no fuse). For example, as shown in Table 4, for the disturbances caused by fuse operation the TPR indicates that correctly classified disturbances make up 86% of the entire set of events.

Table 4. Confusion matrix with the results obtained from the test case.
CAUSE                       Expulsion fuse   Current-limiting fuse   No fuse operation   Total
True positives (TP)         11               17                      15                  43
False negatives (FN)        4                3                       0                   7
False positives (FP)        0                0                       9                   9
True negatives (TN)         35               30                      26                  -
True positive rate (TPR)    0.73             0.85                    1                   0.86
False positive rate (FPR)   0                0                       0.25                0.071
Source: The authors

The false positive rate (FPR) indicates the degree of confusion of the methodology when assessing the cause of a disturbance. Some results and operational details regarding the methodology are presented below for three different types of disturbance: events caused by current-limiting fuse operation (synthetic and actual signals), events caused by expulsion fuse operation (synthetic and actual signals) and events caused by capacitor bank energizing (synthetic signals).
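The per-cause rates in Table 4 follow from the usual confusion-matrix definitions, for instance:

def rates(tp, fn, fp, tn):
    # TPR = TP / (TP + FN); FPR = FP / (FP + TN).
    return tp / (tp + fn), fp / (fp + tn)

print(rates(11, 4, 0, 35))    # expulsion fuse column: TPR = 0.73, FPR = 0
print(rates(15, 0, 9, 26))    # no-fuse column: TPR = 1, FPR = 9/35 (about 0.26;
                              # reported as 0.25 in Table 4)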

5.1. A case of expulsion fuse operation CLF Operation

2.07

12.12

1.38

8.08

0.69

5. Testing

C u r r e n t in p . u

monitors. From these signals, the descriptors are calculated as priorities, where their values are evaluated in the rules decision. The proposed algorithm has several steps, including a step for loading the signals records, segmentation of the signals, calculating RMS sequences, calculating of the proposed descriptors and evaluation of the decision rules and finally diagnosis of the fuse operation.

The next record was taken from a real substation’s database, regarding a disturbance caused by expulsion fuse operation. Voltage and current waveforms are presented in Fig. 8. According to Table 5, it was concluded that the disturbance was caused by expulsion fuse operation because of its RVI descriptor value was 1, which means that there are voltage increases and current decreases during the disturbance. Moreover, its duration was 11.6 cycles (∆TP = 11.6) and its fault time was close to a zero-cross (NZC ≈ 0). Also, effective signal

V o lt a g e in p . u

Figure 7. Proposed methodology for identifying disturbances related to expulsion and current-limiting fuse operation. Source: The authors.

4.04

0

-0.69



Figure 8. Disturbance caused by expulsion fuse operation. Source: The authors.

Table 5. Descriptors for a disturbance caused by expulsion fuse operation.
Descriptor   Value
RVI          1
∆TP          11.6
NZC          0.07
RSC          1.4
Source: The authors



Also, the effective-signal disturbance waveforms were rectangular-shaped (voltage), since RSC was close to unity (RSC = 1.4).

5.2. A case of current-limiting fuse operation

A real record of a current-limiting fuse operation was taken from a substation's database. Voltage and current waveforms are presented in Fig. 9. According to Table 6, its RVI descriptor value was 1, which means that the current increases and the voltage decreases during the disturbance; therefore, there is a loss of load. Moreover, its duration was around two fifths of a cycle (∆TP = 0.3906) and its fault time was not close to a zero crossing (NZC > 0). Also, the instantaneous disturbance waveforms were triangular-shaped (current), since iTSC was close to unity (iTSC = 0.9596), and the rectangular shape (voltage) is significant (iRSC = 1.9469). It was concluded that the disturbance was caused by a current-limiting fuse operation.

Table 6. Descriptors for a disturbance caused by current-limiting fuse operation.
Descriptor   Value
RVI          1
∆TP          0.3906
iTSC         0.9596
iRSC         1.9469
NZC          0.1719
Source: The authors

Figure 9. Disturbance caused by current-limiting fuse operation. Source: The authors.

5.3. A case of capacitor bank energizing

The methodology was tested using a simulated record of capacitor bank energizing [15]. The methodology ruled out a fuse operation as the cause, due to the descriptor values (see Table 7). Voltage and current waveforms are presented in Fig. 10.

Table 7. Descriptors for a disturbance caused by capacitor bank energizing.
Descriptor   Value
RVI          0
∆TP          9.3125
NZC          0.2813
RSC          0.032
Source: The authors

Figure 10. Disturbance caused by capacitor bank energizing. Source: The authors.

This case was ruled out at the beginning of the methodology because the RVI descriptor value was zero, which means that there was no loss of load recorded.
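Feeding the Table 6 and Table 7 values through the rule sketch of section 4.3 reproduces these diagnoses; unused descriptor fields are set to placeholder zeros.

clf_case = {"RVI": 1, "dTP": 0.3906, "NZC": 0.1719, "iTSC": 0.9596,
            "iRSC": 1.9469, "RSC": 0.0}
cap_case = {"RVI": 0, "dTP": 9.3125, "NZC": 0.2813, "iTSC": 0.0,
            "iRSC": 0.0, "RSC": 0.032}
print(classify(clf_case))     # -> Current-limiting fuse
print(classify(cap_case))     # -> No fuse operation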


6. Conclusions

The proposed methodology is a useful and reliable option for electrical utilities that want to improve power quality and optimize their resources. The methodology relies on descriptors selected as relevant from the characterization of electrical disturbances caused by expulsion and current-limiting fuse operation. The combination of multivariate statistical analysis and machine learning techniques is presented as support for other studies in different areas, in order to generate methodologies for the formulation of features (descriptors) of a certain event or failure state. The effectiveness of the results depends on the quality of the signals to be analyzed; features such as the sampling rate, the distance between the recording point and the disturbance source, and noise in the signals are elements that can lead to errors in the descriptors. This study presents an innovative tool that can be integrated with other automatic methodologies and can also be used for the characterization, assessment and mitigation of electrical disturbances that affect power quality.

Acknowledgements

The authors thank the Universidad Industrial de Santander for its support of this research through the project DIEF-VIE-5567: "Methodologies for the Characterization and Diagnosis of Voltage Sags in Electrical Distribution Systems".




References

[1] Kojovic, Lj.A., Hassler, S.P., Leix, K.L., Williams, C.W. and Baker, E., Comparative analysis of expulsion and current-limiting fuse operation in distribution systems for improved power quality and protection, IEEE Transactions on Power Delivery, 13 (3), pp. 863-869, 1998. DOI: 10.1109/61.686985
[2] Ortiz-S., N., Blanco-S., J., Ordoñez-P., G. and Barrera-N., V., Characterising power quality disturbances resulting from current limiting fuse operation, Revista Ingeniería e Investigación, 31 (2), pp. 88-96, 2011.
[3] Wright, A. and Newbery, P.G., Electric fuses, IEE Power and Energy Series 49, 3rd Edition, 251 P., 2004.
[4] Angelopoulos, N., All about fuses - what you need to know but were shorted on in class, IEEE Potentials, 10 (4), pp. 34-36, 1991. DOI: 10.1109/45.127679
[5] Martín, R., Diseño de subestaciones eléctricas, McGraw-Hill, 1992, pp. 105-110.
[6] Rincon-D., J. and Urquijo-V., C., Modelamiento y simulación de protección por sobrecorriente utilizando EMTP, Tesis de pregrado, Universidad Industrial de Santander, Bucaramanga, Colombia, 1995.
[7] Zhang, L. and Bollen, M.H.J., A method for characterisation of three-phase unbalanced dips from recorded voltage waveshapes, IEEE Telecommunications Energy Conference, Chalmers University of Technology, 1999.
[8] Radial Test Feeders - IEEE Distribution System Analysis Subcommittee. [Online]. 34-bus Feeder (XLS and DOC). 17 Sept 2010. Available at: http://ewh.ieee.org/soc/pes/dsacom/testfeeders.html
[9] Radial Test Feeders - IEEE Distribution System Analysis Subcommittee. [Online]. 13-bus Feeder (XLS and DOC). 17 Sept 2010. Available at: http://ewh.ieee.org/soc/pes/dsacom/testfeeders.html
[10] Barker, H.R. and Barker, B.M., Multivariate analysis of variance (MANOVA): A practical guide to its use in scientific decision making, Alabama, 1984.
[11] Clark, P. and Boswell, R., Rule induction with CN2: Some recent improvements, Machine Learning - Proceedings of the Fifth European Conference (EWSL-91), 1991.
[12] Blanco, J. and Jagua-G., J., Metodología para el diagnóstico de la causa de hundimientos de tensión: Análisis de fallas, Tesis de pregrado, Universidad Industrial de Santander, Bucaramanga, Colombia, 2009.
[13] Blanco, J., Diseño de una metodología para la valoración de eventos causados por fallas de red e inserción de bancos de condensadores en sistemas de energía eléctrica, Tesis MSc. en Ingeniería Eléctrica, Universidad Industrial de Santander, Bucaramanga, Colombia, 2012.
[14] Vega-G., V., Detección y clasificación automática de perturbaciones que afectan la calidad de la energía eléctrica, Tesis MSc., Universidad Industrial de Santander, Bucaramanga, Colombia, 2007.
[15] Santos, J., Coury, D., Tavares, M. and Oleskovicz, M., An ATP simulation of shunt capacitor switching in an electrical distribution system, Thesis, Dept. of Electrical Engineering, University of São Paulo, São Paulo, Brazil, 2001.




J. Blanco-Solano was born in Socorro (Santander), Colombia. He received his BSc. in Electrical Engineering in 2009 and his MSc. in Electrical Engineering in 2012, both from the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia. Currently, he is pursuing a PhD in Electrical Engineering at the Universidad Industrial de Santander. His research interests include: power quality, electromagnetic transients, modeling and simulation, smart metering, smart grids and optimization.

V. Barrera-Núñez received his BSc. in 2003 and MSc. in 2006 from the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia. He received his PhD degree in Power Quality Monitoring from the University of Girona, Catalonia, Spain. From September to December 2009, he was a visiting researcher at the University of Texas, USA. His research interests are voltage sags and power quality monitoring.

N. Ortiz-Silva received his BSc. in Electrical Engineering in 2012 from the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia.

G. Ordóñez-Plata, received his BSc. in Electrical Engineering from the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia, in 1985. He received his PhD in Industrial Engineering from the Universidad Pontificia Comillas, Madrid, Spain, in 1993. Currently, he is a professor at the School of Electrical Engineering at the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia. His research interests include: signal processing, electrical measurements, power quality, technology management and competency-based education.

J.F. Petit-Suárez, was born in Villanueva (La Guajira), Colombia. He received his BSc. and MSc. in Electrical Engineering from the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia, in 1997 and 2000, respectively. He received his PhD degree in Electrical Engineering from the Universidad Carlos III de Madrid - UC3M, Madrid, Spain, in 2007. Currently, he is a professor at the Universidad Industrial de Santander - UIS, Bucaramanga, Colombia, and his areas of interest include power quality, power electronics and signal processing algorithms for electrical power systems.


Área Curricular de Ingeniería Eléctrica e Ingeniería de Control Oferta de Posgrados

Maestría en Ingeniería - Ingeniería Eléctrica Mayor información:

E-mail: ingelcontro_med@unal.edu.co Teléfono: (57-4) 425 52 64


Time-varying harmonic analysis of electric power systems with wind farms, by employing the possibility theory

Andrés Arturo Romero-Quete, Gastón Orlando Suvire, Humberto Cassiano Zini & Giuseppe Rattá

Instituto de Energía Eléctrica, Universidad Nacional de San Juan – Consejo Nacional de Investigaciones Científicas y Técnicas, San Juan, Argentina. aromero@iee.unsj.edu.ar

Received: April 29th, 2014. Received in revised form: February 10th, 2015. Accepted: July 17th, 2015.

Abstract
This paper focuses on the analysis of the connection of wind farms to the electric power system and their impact on the harmonic load-flow. A possibilistic harmonic load-flow methodology, previously developed by the authors, allows uncertainties related to linear and nonlinear load variations to be modeled. Moreover, it is well known that some types of wind turbines also produce harmonics, in fact, time-varying harmonics. The purpose of this paper is to present an improvement of the former method, in order to include the uncertainties due to wind speed variations as an input related to the power generated by the turbines. Simulations to test the proposal are performed in the IEEE 14-bus standard test system for harmonic analysis, connecting to the network a wind farm composed of ten Full Power Converter (FPC) type wind turbines. The results obtained show that uncertainty in the wind speed has a considerable effect on the uncertainties associated with the computed harmonic voltage magnitudes.
Keywords: full power converter, harmonic distortion, possibility distribution, power system, uncertainty, wind turbine.

Análisis de armónicos variando en el tiempo en sistemas eléctricos de potencia con parques eólicos, a través de la teoría de la posibilidad Resumen En este trabajo se analiza el impacto de la conexión de parques eólicos, en el flujo de cargas armónicas en un sistema de potencia. Algunos generadores eólicos producen armónicos debido a la electrónica de potencia que utilizan para su vinculación con la red. Estos armónicos son variables en el tiempo ya que se relacionan con las variaciones en la velocidad del viento. El propósito de este trabajo es presentar una mejora a la metodología para el cálculo de incertidumbre en el flujo de cargas armónicas, a través de la teoría de la posibilidad, la cual fue previamente desarrollada por los autores. La mejora consiste en incluir la incertidumbre debida a las variaciones de la velocidad del viento. Para probar la metodología, se realizan simulaciones en el sistema de prueba de 14 barras de la IEEE, conectando en una de las barras un parque eólico compuesto por diez turbinas del tipo FPC. Los resultados obtenidos muestran que la incertidumbre en la velocidad del viento tiene un efecto considerable en las incertidumbres asociadas a las magnitudes de las tensiones armónicas calculadas. Palabras clave: convertidor de potencia AC/DC – DC/AC; distorsión armónica, distribución de posibilidad, sistemas de potencia, incertidumbre, generador eólico.

1. Introduction

Electricity is a product of particular importance for the progress of society. An electric power system allows such a product to be generated, transformed and transmitted, and finally supplied to the end customers. All of society's efforts

and investments (e.g., in human resources, physical assets, research and development, etc.) in electric power systems have a common aim: "to ensure electricity in sufficient quantity in time and place, with adequate reliability at the lowest cost and controlling that environmental impacts remain within acceptable limits".

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 185-194. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.48617



To achieve this aim, the scientific and technical community specializing in power engineering has set its expectations on the development of Smart Grids. In this field, there are many issues that have only partially been solved, which deserve special attention and require intensive research and development. One such issue is related to the connection of new electricity sources based on renewable resources, e.g., solar, wind, geothermal, etc., to the power systems.

This paper focuses on the analysis of the connection of wind farms and their impact on power quality. First, the main aspects to be considered when harmonics have to be modeled in the network, including wind farms' generator fleet, are discussed and developed. Then, the general formulation of a possibilistic harmonic load-flow methodology, previously developed by the authors, which allows for modeling uncertainties related to linear and nonlinear load variations, is presented. Moreover, as is well known, some types of wind turbines also produce harmonics. Therefore, in this paper, an improvement of the formerly proposed methodology is developed in order to deal with this specific issue. In fact, with the modifications, the improved possibilistic harmonic load-flow method is able to model the uncertainty due to wind speed variations as an input related to the electric power generated by the turbines. The new proposal is capable of computing the possibility distributions of the harmonic voltage distortion at all buses of a power system. Finally, the proposed method is tested by performing simulations in the IEEE 14-bus standard test system for harmonic analysis, connecting to the network a wind farm composed of ten wind turbines of the Full Power Converter (FPC) type.

2. Time-varying harmonics in electric power systems

2.1. Fourier series and harmonics

Harmonics in power systems are a steady-state perturbation, characterized by a deviation of the voltage (or current) wave shape from the ideal sine form. Since the wave shape remains periodic, this deviation can be precisely described and quantified by decomposing the distorted wave into its Fourier series. Fourier's theory proves that any periodic function can be represented by the sum of a fundamental and a series of higher order harmonic components at frequencies that are integer multiples of the fundamental component. Such series establish a relationship between the function in the time and frequency domains [1]. The Fourier series of a periodic function f(t), with period T and angular frequency $\omega = 2\pi/T$, can be written as:

$f(t) = c^{(0)} + \sum_{h=1}^{\infty} c^{(h)} \sin(h\omega t + \varphi^{(h)})$   (1)

with $c^{(0)} = a^{(0)}/2$, $c^{(h)} = \sqrt{a^{(h)2} + b^{(h)2}}$ and $\varphi^{(h)} = \tan^{-1}(a^{(h)}/b^{(h)})$, where h is the harmonic component order, $c^{(0)}$ the magnitude of the DC component, $c^{(h)}$ the peak value of the harmonic component (h) and $\varphi^{(h)}$ the phase angle of the harmonic component (h). The Fourier coefficients $a^{(0)}$, $a^{(h)}$ and $b^{(h)}$ are given by:

$a^{(0)} = \frac{2}{T}\int_{-T/2}^{T/2} f(t)\,dt$   (2)

$a^{(h)} = \frac{2}{T}\int_{-T/2}^{T/2} f(t)\cos(h\omega t)\,dt$   (3)

$b^{(h)} = \frac{2}{T}\int_{-T/2}^{T/2} f(t)\sin(h\omega t)\,dt$   (4)
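To make eqs. (2)-(4) concrete, the following short Python sketch (our illustration, not part of the original paper) approximates the Fourier coefficients of a distorted waveform by numerical integration and prints the normalized spectrum c(h)/c(1); the waveform composition (fundamental plus fifth and seventh harmonics) is an assumption chosen to mirror Fig. 1.

    import numpy as np

    # One period of a distorted wave: fundamental + 5th + 7th harmonics
    # (amplitudes are illustrative, loosely mirroring Fig. 1).
    T = 0.02                      # period of a 50 Hz fundamental (s)
    w = 2 * np.pi / T             # angular frequency (rad/s)
    t = np.linspace(0.0, T, 2000, endpoint=False)
    f = 100*np.sin(w*t) + 20*np.sin(5*w*t) + 14*np.sin(7*w*t)

    def fourier_coeffs(y, t, T, h):
        """Approximate a(h), b(h) of eqs. (3)-(4) by trapezoidal integration."""
        w = 2 * np.pi / T
        a = (2.0 / T) * np.trapz(y * np.cos(h * w * t), t)
        b = (2.0 / T) * np.trapz(y * np.sin(h * w * t), t)
        return a, b

    c1 = np.hypot(*fourier_coeffs(f, t, T, 1))   # c(h) = sqrt(a^2 + b^2)
    for h in (1, 3, 5, 7, 9):
        a, b = fourier_coeffs(f, t, T, h)
        print(f"h={h}: c(h)/c(1) = {np.hypot(a, b) / c1:.3f}")

Running the sketch recovers ratios of 1.0, 0.2 and 0.14 at orders 1, 5 and 7, i.e., exactly the normalized spectrum defined above.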

Fig. 1 shows the decomposition of a function f(t) into its harmonic components. The harmonic spectrum is also shown, which is obtained by dividing the peak value of each harmonic component by the peak value of the fundamental component: $c^{(h)}/c^{(1)}$.

Figure 1. Representation of the Fourier series and the harmonic spectrum of a function f(t), showing the fundamental, fifth and seventh harmonic components in the time domain and the resulting spectrum up to harmonic order 13. Source: The authors

2.2. Causes of harmonics in electric power systems

Linear devices satisfy the superposition principle: if a current i(t) flows through the device when a voltage v(t) is applied, then a current k·i(t) will flow if the applied voltage is k·v(t). A non-linear load is a device for which this relationship does not hold. Fig. 2 illustrates this concept. In an ideal linear power system, both voltage and current waveforms are perfect sine waves. In real power systems, however, there are nonlinear components for which the current is not a sinusoid, even if a perfect sine voltage were applied. This non-sinusoidal current produces voltage drops in other linear or nonlinear impedances and, consequently, the voltage wave is distorted throughout the whole system. It is noted that harmonic distortion is a growing problem in power systems due to the increasing use of nonlinear loads.




Figure 2. Current harmonic distortion due to a nonlinear load: a nonlinear impedance Zn-l supplied by a voltage v(t) draws a distorted current i(t). Source: The authors

2.3. Time-varying harmonics

As mentioned above, harmonics are conceptualized as a steady-state phenomenon; that is, the waveform to be analyzed is assumed to repeat periodically from -∞ to ∞. In practical situations, however, the voltage and current distortion levels, as well as their fundamental components, are continually changing in time. This is due to the fact that the network configuration usually changes and its linear and nonlinear loads vary all the time; in addition, even if they were constant, their parameters are not usually well known. All these features make harmonic distortion a phenomenon involving uncertainty. Therefore, it is inadequate for the harmonic load-flow calculation in a real power system to be performed on a deterministic basis, i.e., assuming that all the relevant parameters are well known and non-random. In fact, these kinds of studies only provide a static and certain image of a varying and uncertain situation. Methodologies based on probability theory, with different degrees of sophistication, have been developed to deal with these uncertainties [2]. Their practical application, however, often has to deal with the lack of information needed to describe in probabilistic terms the amount and type of medium-size and small distributed nonlinear loads (NLs), as well as the composition of linear loads (LLs). This suggests that in many practical cases the available information for harmonic load flow can be better described through fuzzy measures like possibility distributions.

2.4. Effects of harmonics on the network

There are several problems related to harmonics, such as:
‐ Overheating of neutral conductors,
‐ Overheating of transformers,
‐ Overloading of capacitors for power factor correction,
‐ Problems with induction motors,
‐ Control failures in electronic devices that detect the zero crossing of the voltage wave, and others.

3. Modeling the power system components at harmonic frequencies

3.1. Model of Nonlinear Loads

Accurate modeling of nonlinear loads is a difficult issue because of their inherent complexity and because some of them are constantly changing their operational modes. In this section, basic models of nonlinear loads from a deterministic point of view are presented. In methodologies based on frequency-domain analysis, the common practice is to model nonlinear loads through harmonic current sources, whose amplitudes and phases depend on a set of parameters that characterize their harmonic behavior and their operating state. The simplest model assumes that harmonic currents do not depend on the applied voltage waveform, but only on its power frequency component. For example, for thyristor-controlled rectifiers, expressions like (5) can be used to compute the amplitude and phase of the harmonic components of the injected currents at node j:

$i_j^{(h)} = f^{(h)}(v_j^{(1)}, v_{dc}, i_{dc}, \alpha)$   (5)

where $v_j^{(1)}$ is the power frequency component of the applied voltage at node j, $v_{dc}$ and $i_{dc}$ are the voltage and current on the DC side of the rectifier, and $\alpha$ is the switching angle of the thyristors. This model can be further simplified if the parameter $\alpha$ in (5) is implicitly expressed as a function of the active ($v_{dc}i_{dc}$) and reactive power of the rectifier, $P_{rect}$ and $Q_{rect}$ respectively; thus expression (5) becomes:

$i_j^{(h)} = f^{(h)}(v_j^{(1)}, P_{rect}, Q_{rect})$   (6)
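As a hedged illustration of the simplified model in (6), the sketch below represents a rectifier-type load as a set of harmonic current sources whose magnitudes scale with the fundamental current drawn at the bus; the spectrum values and the neglect of phase angles are assumptions for illustration, not data for any real device.

    import numpy as np

    # Illustrative harmonic spectrum of a six-pulse rectifier-type load:
    # |i(h)| as a fraction of |i(1)|, with h = 6k +/- 1 (assumed values).
    SPECTRUM = {5: 0.20, 7: 0.14, 11: 0.09, 13: 0.08}

    def harmonic_injections(v1, p_rect, q_rect):
        """Very simplified instance of eq. (6): the fundamental current is
        derived from the power drawn at the bus, and each harmonic current
        is a fixed fraction of it (phases neglected for brevity)."""
        s = complex(p_rect, q_rect)          # apparent power (VA)
        i1 = abs(s) / abs(v1)                # fundamental current magnitude (A)
        return {h: r * i1 for h, r in SPECTRUM.items()}

    # Example: 400 V bus, 50 kW / 20 kvar rectifier load.
    print(harmonic_injections(v1=400.0, p_rect=50e3, q_rect=20e3))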

More detailed models have been developed for some of the most important devices based on power electronics; however, such models can be used only when the characteristics of the nonlinear device are well known. This, unfortunately, is not the most common scenario in the case of real electric power systems.

3.2. Behavior and Models at Harmonic Frequencies of the Main Elements of an Electric Power System

Modeling of electric power system elements over a wide range of frequencies is relatively well documented in the bibliography [3,6]. However, due to the importance of this topic, typical harmonic frequency models representing the elements in common power systems, such as transformers, lines and cables, shunt capacitors and loads, are described in what follows.

3.2.1. Transformers

The transformer is usually modeled as a lumped impedance which represents its leakage impedance. Such lumped impedance comprises a series inductance with resistance; this is because the frequency-dependent effects are not significant for the harmonic frequencies of common interest. However, other complex models include the nonlinear characteristics of core loss resistance, the winding stray capacitance, the core saturation, the skin effect, etc. Effects of stray capacitance are usually noticeable only for frequencies higher than 4 kHz.
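A minimal sketch of the lumped-impedance representation just described, assuming the series reactance scales linearly with the harmonic order while the resistance stays constant; the per-unit values are placeholders, not data for any real transformer.

    def transformer_impedance(r1, x1, h):
        """Lumped leakage impedance at harmonic order h: the reactance
        scales linearly with frequency while R is kept constant (the
        simple model described above; skin effect and stray capacitance
        are neglected)."""
        return complex(r1, h * x1)

    # Placeholder 50 Hz leakage data: R = 0.005 pu, X = 0.10 pu.
    for h in (1, 5, 7, 13):
        print(f"h={h}: Z = {transformer_impedance(0.005, 0.10, h):.3f} pu")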




Figure 3. Line model for harmonic analysis: a single-phase π-circuit with series impedance Zljk between nodes j and k and shunt admittances Ycj0 and Yck0. Source: The authors

Furthermore, for three-phase transformers the winding connections are important for determining the effect of the transformer on zero-sequence harmonic components; i.e., delta connections isolate these currents from one voltage level to the next. Other connections, such as zigzag windings, are used to mitigate harmonics.

3.2.2. Overhead lines and underground cables

For balanced harmonic analysis, the models of typical lines or cables can be simplified into a single-phase π-circuit, as shown in Fig. 3, which uses positive and zero sequence data. In Fig. 3, Zljk comprises a resistance and a series reactance connected between nodes j and k of the network. Moreover, the admittances Ycj0 and Yck0 represent two capacitors connected between their respective nodes and earth. The main issues in modeling lines and cables are the frequency dependence of the per-unit-length series impedance and the long-line effects. As a result, the level of detail of their models depends on the line length and the harmonic order. Thus, as a first approximation, it can be considered that the series resistance is independent of frequency, and that both the series reactance and the capacitive admittances depend linearly on the frequency. Additionally, more complex models, especially for long lines and high-frequency harmonics, consider the skin-effect correction in the series resistance and propagation effects, by associating several π-circuits in series.

3.2.3. Aggregate load model: shunt capacitors and loads

As in other power system studies, it is only practical to model an aggregate load for which reasonably good estimates of active (MW) and reactive (MVAR) power are usually readily available. However, this information is insufficient to establish an adequate load model at harmonic frequencies. In fact, the system harmonic impedance is sensitive to the model parameters, the model topology and the actual load composition. In agreement with what was previously stated, the important components of the load for harmonic analysis are:
‐ The resistive component (passive loads),
‐ The step-down transformer (distribution or service transformers),
‐ The motor components (motive loads),
‐ The non-linear loads (presented in section 3.1), and
‐ The shunt capacitors and filters (such as power factor compensation capacitors and so on).
Linear passive loads have a significant effect on the system

frequency response, primarily near resonant frequencies; i.e., the resistive component provides damping when the overall system response is near a parallel resonance (high impedance). The aggregate load model should include the distribution or service transformer. At power frequencies the effect of the distribution transformer impedance is not of concern in the analysis of the high voltage network; however, at harmonic frequencies the impedance of the transformer is practically an inductance in series with the load. Moreover, it is also important to discriminate the motor components from the overall load, because the active power absorbed by rotating machines does not exactly correspond to the value of a damping resistance. Furthermore, it is clear that the nonlinear load components determine the levels of harmonic currents injected into the system. Finally, the shunt capacitors and the stray inductance of the supply system can resonate at or near one of the harmonic frequencies. Several models to aggregate loads at harmonic frequencies have been proposed in the bibliography [3,7]. However, load representation for harmonic analysis is still an active research area. Section 6 of this paper deals with the fact that most of the magnitude and composition parameters, which are necessary to achieve an adequate load model representation, are known with uncertainty for real power systems. In effect, the influence of such uncertain parameters in a general load model is studied in [7].

3.3. Model of a wind turbine at harmonic frequencies

There are several types of wind turbines, among which are: the Fixed Speed Induction Generator (FSIG), the Doubly Fed Induction Generator (DFIG) and the Full Power Converter (FPC). Each type of system has advantages and disadvantages inherent to the technology used, all pursuing the same goal: maximum energy transfer from the wind turbine to the grid, at the lowest cost, and ensuring adequate levels of electromagnetic compatibility. However, this paper will only deal with the modeling at harmonic frequencies of the FPC-type wind turbine, as such turbines are widely used in wind farms in Latin America and have a characteristic harmonic spectrum at multiples of the grid frequency [8]. Despite this, in the future the study may be extended to other turbine models. The basic configuration of an FPC-type wind turbine is presented in Fig. 4. The two main components to be modeled at harmonic frequencies are the generator and the converter.

3.3.1. Generator

In synchronous generators, the rotating magnetic field established by a harmonic current in the stator rotates at a speed significantly different from that of the rotor. Therefore, for harmonic frequencies, the impedance is usually taken to be either the negative sequence impedance or the average of the direct and quadrature subtransient impedances [3]:



$X_2 = \frac{X_d'' + X_q''}{2}$   (7)

Figure 4. Configuration of a FPC type wind turbine. Source: The authors

Figure 5. Models at harmonic frequencies for: (a) induction generators, (b) synchronous generators. Source: The authors

In the case of induction generators, the inductance is represented through the locked rotor reactance. In both cases, the dependency of the resistance on the frequency may be important due to the skin effect and the eddy current losses; therefore, this dependence may be considered by increasing the resistance value at the fundamental frequency by $\sqrt{h}$, where h is the harmonic order.

Furthermore, in induction machines the α parameter is considered, which relates the rotor resistance R1 to the locked rotor resistance RB by α = R1/RB, and has a typical value equal to 0.5. Additionally, the slip at the harmonic frequency, sh, is given by (8):

$s_h = \frac{h\omega_1 \pm \omega_r}{h\omega_1}$   (8)

where $\omega_r$ is the rotor speed, $\omega_1$ is the fundamental angular frequency, and the sign + or - depends on the negative or positive sequence of the harmonic order considered. The models at harmonic frequencies for induction and synchronous generators are shown in Fig. 5.

Figure 6. Generated power vs. wind speed at a 2 MW turbine-generator of the FPC type (IWP-100). Source: Adapted from www.impsa.com
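The following sketch evaluates the slip of eq. (8) and one plausible reading of the Fig. 5(a) induction-machine branch, including the √h resistance correction of the preceding paragraph; the per-unit data and the α split between stator and rotor resistance are illustrative assumptions, not the authors' exact circuit values.

    import math

    def harmonic_slip(h, w1=2*math.pi*50, wr=2*math.pi*50*(1 - 0.02),
                      positive_seq=True):
        """Eq. (8): slip at harmonic order h; the sign depends on the
        sequence of the harmonic considered."""
        sign = -1.0 if positive_seq else +1.0
        return (h * w1 + sign * wr) / (h * w1)

    def induction_branch(h, rb=0.05, xb=0.20, alpha=0.5, positive_seq=True):
        """One plausible reading of the Fig. 5(a) model (an assumption):
        stator share alpha*RB plus rotor share (1-alpha)*RB/sh, with the
        sqrt(h) skin-effect correction on R, in series with the
        locked-rotor reactance jh*XB (all in per unit)."""
        sh = harmonic_slip(h, positive_seq=positive_seq)
        r = math.sqrt(h) * rb * (alpha + (1.0 - alpha) / sh)
        return complex(r, h * xb)

    # In a balanced system, orders with h % 3 == 1 are positive sequence.
    for h in (5, 7, 11, 13):
        print(h, induction_branch(h, positive_seq=(h % 3 == 1)))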

Table 1. Characteristic harmonics of a FPC wind generator.
Harmonic order (h):   5      7      11     13     17     19     23     25
iw(h)/iw(1):          0.5    0.54   0.21   0.10   0.13   0.19   0.19   0.02
Source: the authors

3.3.2. Converter

In an FPC-type wind turbine the electric parameters, which vary with the wind speed, are transformed to grid parameters by a converter. The commutation valves used in the converter are Insulated Gate Bipolar Transistors (IGBTs) with anti-parallel diodes. On the generator side, the converter transforms a three-phase AC voltage of variable amplitude and frequency into a constant DC voltage. On the grid side, the converter is responsible for converting this DC voltage into a three-phase AC voltage with constant amplitude and frequency (matching the power grid to which the unit is connected). Using digital signal processors, vector control is performed in the converter (PWM technology). The control algorithms are diverse and depend on the manufacturer, but they always seek the optimal use of the turbine-generator. Fig. 6 shows the characteristic curve of generated power versus wind speed for a commercial 2 MW generator. Although a characteristic function of the harmonic behavior of an FPC-type wind turbine-generator has not been fully defined (e.g., expressions similar to those defined in (5) and (6) for a converter), there is information, such as that presented in Table 1, which relates the harmonic current injected by the wind generator, iw(h), as a percentage of the fundamental frequency current, iw(1), (50 or 60 Hz). Note that this type of data is normally provided by the manufacturer and is obtained from power quality tests. Thus, using the information from Fig. 6 and Table 1, it is possible to model the harmonic current injected by a wind turbine-generator according to the wind speed, in this case a 2 MVA wind generator.
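Combining a power curve like that of Fig. 6 with the spectrum of Table 1 gives the wind-speed-to-harmonic-current chain just described. The sketch below assumes a piecewise-linear power curve, a 690 V terminal voltage and unity power factor, all illustrative values rather than manufacturer data.

    import numpy as np

    # Table 1 ratios iw(h)/iw(1) for the FPC generator (percent values).
    TABLE1 = {5: 0.5, 7: 0.54, 11: 0.21, 13: 0.10, 17: 0.13,
              19: 0.19, 23: 0.19, 25: 0.02}

    # Assumed piecewise-linear approximation of a Fig. 6-style curve (kW).
    SPEEDS = np.array([5.0, 8.0, 11.0, 13.0, 20.0])      # m/s
    POWERS = np.array([100., 600., 1600., 2000., 2000.])  # kW

    def injected_harmonics(v_wind, v_ll=690.0):
        """Fundamental current from the generated power (unity power
        factor assumed), then each harmonic scaled by the Table 1 percent
        ratio. Outside the 5-20 m/s window the machine is off."""
        if not (5.0 <= v_wind <= 20.0):
            return {h: 0.0 for h in TABLE1}
        p_kw = np.interp(v_wind, SPEEDS, POWERS)
        i1 = p_kw * 1e3 / (np.sqrt(3.0) * v_ll)   # A
        return {h: r / 100.0 * i1 for h, r in TABLE1.items()}

    print(injected_harmonics(9.0))   # injections at 9 m/s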

4. Possibilistic harmonic load-flow

4.1. Uncertainty management

Most real-world decisions are made within an environment in which goals, constraints, possible actions (the solution space) and consequences are not precisely known. To deal quantitatively with imprecision, techniques and concepts of probability theory are usually employed. However, it must be considered that when




uncertainty is modeled by using probability density functions (PDFs), it is implicitly accepted that imprecision of whatever nature can be compared with randomness, an assumption that may be questionable. For example, in many situations the decision maker cannot measure or express an explicit function for all determinant parameters within a decision process. This is due to various reasons, e.g., incomplete information, insufficient knowledge, lack of or inaccuracy of models, or because the cost involved in acquiring the lacking information is high relative to the improvement in results. Therefore, in those situations, it could be advisable to appeal to other types of information, namely: experience, intuition, ideology, beliefs, feelings, etc., i.e., options with a degree of subjectivity, but not of arbitrariness. Fuzzy sets and possibility theories provide a formal framework to model vague relationships or concepts, such as size, pollution, economics, satisfaction, adequacy, etc. Definitions for the formal treatment of fuzzy sets and possibility distributions can be found in [10], and applications to the specific problem of harmonic analysis in [11-14].

4.2. The possibilistic harmonic load-flow

Through references [11-14], a complete formulation for calculating the harmonic load-flow, considering uncertainties in the input parameters, has been presented. Such uncertainties are modeled by possibility distributions. Readers are encouraged to review those references to achieve a better understanding of the methodology described below.

4.2.1. General formulation for the case of possibilistic independence

The simpler case, not involving possibilistic dependences, will be considered first. For this purpose, consider the general expression of the harmonic bus voltages, i.e.:

$\mathbf{V} = \mathbf{Y}^{-1}\mathbf{I}$   (9)

where the harmonic order has not been identified for notational simplicity. When harmonic interaction is neglected (harmonic penetration approach, [3]), components $i_k$ (k = 1…n) of vector $\mathbf{I}$ do not depend on the harmonic voltages. Moreover, the diagonal elements of $\mathbf{Y}$ and the injected currents $i_k$ depend on the model of the linear and nonlinear loads, and on a number of parameters describing their magnitudes and compositions. For the sake of generality, in this section no hypotheses will be made regarding these aspects, as the development of a specific model is postponed until sections 4.2.4-4.2.7. However, by assuming that a generic vector $\mathbf{P} = (p_1, p_2, \ldots, p_{n_p})$ of $n_p$ parameters completely describes the loads of the system, all the injected currents and the bus admittance matrix in (9) can be determined accordingly, i.e.:

$\mathbf{V} = [\mathbf{Y}(\mathbf{P})]^{-1}\mathbf{I}(\mathbf{P})$   (10)

By denoting $\mathbf{Z}_j(\mathbf{P})$ the jth row of $\mathbf{Z}(\mathbf{P}) = [\mathbf{Y}(\mathbf{P})]^{-1}$, the amplitude of the harmonic voltage at a generic bus j can be written as:

$v_j(\mathbf{P}) = \mathrm{abs}[\mathbf{Z}_j(\mathbf{P})\,\mathbf{I}(\mathbf{P})]$   (11)

or, using a more general notation:

$v_j(p_1, p_2, \ldots, p_{n_p}) = f_j(p_1, p_2, \ldots, p_{n_p})$   (12)

When the $p_k$ are uncertain parameters described through fuzzy numbers $\hat{P}_1, \hat{P}_2, \ldots, \hat{P}_{n_p}$ with independent membership functions $\mu_{\hat{P}_k}(p_k)$, k = 1…$n_p$, the harmonic voltage $v_j$ becomes a possibilistic variable, expressed by a fuzzy number $\hat{V}_j$. Thus:

$\hat{V}_j = f_j(\hat{\mathbf{P}})$   (13)

where $\hat{\mathbf{P}}$ is the fuzzy Cartesian product $\hat{\mathbf{P}} = \hat{P}_1 \times \hat{P}_2 \times \cdots \times \hat{P}_{n_p}$, whose membership function is (see [13]):

$\mu_{\hat{\mathbf{P}}}(p_1, \ldots, p_{n_p}) = \min[\mu_{\hat{P}_1}(p_1), \ldots, \mu_{\hat{P}_{n_p}}(p_{n_p})]$   (14)

Accordingly, the membership function of $\hat{V}_j$ can be written as:

$\mu_{\hat{V}_j}(v_j) = \max_{v_j = f_j(p_1, \ldots, p_{n_p})} \mu_{\hat{\mathbf{P}}}(p_1, \ldots, p_{n_p})$   (15)

and by replacing (14) in (15),

$\mu_{\hat{V}_j}(v_j) = \max_{v_j = f_j(p_1, \ldots, p_{n_p})} \min[\mu_{\hat{P}_1}(p_1), \ldots, \mu_{\hat{P}_{n_p}}(p_{n_p})]$   (16)

Analogously, the following relationships can be written in terms of the α-cuts (for a definition see [10]):

${}^{\alpha}V_j = f_j({}^{\alpha}\mathbf{P})$   (17)

with ${}^{\alpha}\mathbf{P} = {}^{\alpha}P_1 \times {}^{\alpha}P_2 \times \cdots \times {}^{\alpha}P_{n_p}$. Therefore, each α-cut is the interval:

${}^{\alpha}V_j = [{}^{\alpha}v_j^-,\, {}^{\alpha}v_j^+] = \left[\min_{p_k \in {}^{\alpha}P_k} f_j(p_1, \ldots, p_{n_p}),\; \max_{p_k \in {}^{\alpha}P_k} f_j(p_1, \ldots, p_{n_p})\right]$   (18)

and can be determined solving two optimization problems (minimization and maximization, respectively).

4.2.2. General formulation for the case of possibilistic dependence

A more general situation, in which the possibilistic dependences to be modeled are due to a well-defined physical connection among fuzzy variables whose possibility distribution functions have been determined independently, is analyzed in what follows. In this general case the uncertain variables are a set of $n_p$ parameters $(p_1, p_2, \ldots, p_{n_p})$, with possibility distribution functions $\mu_{\hat{P}_k}(p_k)$, k = 1…$n_p$, such as the total active bus loads; the load due to nonlinear devices, static linear devices and induction motors; the starting current of induction motors; the kind and operating point of nonlinear devices, etc. In general, among these parameters some deterministic relationships exist (e.g., the active load due to linear and nonlinear devices must add up to the total active load) that can be expressed through a system of $n_d < n_p$ equations. In this way, $n_d$ parameters can be expressed in terms of the remaining $n_i = n_p - n_d$ as follows:

$p_{n_i+1} = g_1(\mathbf{P}_i), \;\ldots,\; p_{n_i+n_d} = g_{n_d}(\mathbf{P}_i)$   (19)

where $\mathbf{P}_i = [p_1, p_2, \ldots, p_{n_i}]$. Here it is assumed, without loss of generality, that the first $n_i$ parameters are independent, and that (19) are the expressions for the dependent ones. Now, if the possibility distribution functions $\mu_{\hat{P}_k}(p_k)$ are compatible and all of them are equally reliable, then the standard fuzzy intersection operator, [11], can validly be applied to obtain the joint distribution function; i.e.:

$\mu_{\hat{\mathbf{P}}}(\mathbf{P}_i) = \min[\mu_{\hat{P}_1}(p_1), \ldots, \mu_{\hat{P}_{n_i}}(p_{n_i}), \mu_{\hat{P}_{n_i+1}}(g_1(\mathbf{P}_i)), \ldots, \mu_{\hat{P}_{n_p}}(g_{n_d}(\mathbf{P}_i))]$   (20)

Moreover, from a deterministic viewpoint, the harmonic voltage at bus j is, in general, a function of the parameters $p_k$, with k = 1…$n_p$:

$v_j = f_j(p_1, p_2, \ldots, p_{n_p})$   (21)

that can also be written as:

$v_j = f_j(p_1, \ldots, p_{n_i}, g_1(\mathbf{P}_i), \ldots, g_{n_d}(\mathbf{P}_i))$   (22)

Therefore, according to the extension principle, the possibility distribution function of $v_j$ can be written as:

$\mathrm{pos}(v_j) = \max\{\mu_{\hat{\mathbf{P}}}(\mathbf{P}_i) : f_j(\mathbf{P}) = v_j,\; p_{n_i+1} = g_1(\mathbf{P}_i), \ldots, p_{n_p} = g_{n_d}(\mathbf{P}_i)\}$   (23)

which is a direct generalization of equation (16) above. Similarly, the α-cuts of $\hat{V}_j$ are the intervals:

${}^{\alpha}V_j = [{}^{\alpha}v_j^-,\, {}^{\alpha}v_j^+]$   (24)

and can be determined solving the following optimization problems:

${}^{\alpha}v_j^- = \min f_j(p_1, \ldots, p_{n_p})$   (25)

${}^{\alpha}v_j^+ = \max f_j(p_1, \ldots, p_{n_p})$   (26)

both subject to:

$\mu_{\hat{\mathbf{P}}}(\mathbf{P}_i) \ge \alpha, \quad p_{n_i+1} = g_1(\mathbf{P}_i), \;\ldots,\; p_{n_p} = g_{n_d}(\mathbf{P}_i)$   (27)

4.2.3. Possibilistic harmonic load-flow algorithm

The methodology to calculate the possibilistic harmonic load-flow is summarized in Fig. 7.

4.2.4. Fuzzy modeling of linear and non-linear loads

However, for its suitable implementation, it is important to establish an appropriate set of parameters $p_k$, k = 1…$n_p$, to describe the information available regarding the features of loads, nonlinear loads and wind generators, thus allowing the admittances from node to ground and the harmonic currents injected at each node to be computed. The main steps for developing a nodal model are: 1) define the circuit which models the harmonic load behavior; 2) select a set of fuzzy parameters describing the available information with respect to the composition and magnitude of loads and wind generators (these parameters are called model parameters, and differ from the electric circuit parameters identified in the first step); 3) establish the relationships between the model parameters and the circuit parameters.
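As a toy instance of the α-cut computation in (17)-(18) and (25)-(27), the sketch below propagates two triangular fuzzy parameters through a stand-in harmonic-voltage function by exhaustively sampling each α-cut box; both fuzzy numbers and the function are invented for illustration, and a real implementation would solve the two optimization problems per cut instead.

    import numpy as np

    def tri_alpha_cut(a, m, b, alpha):
        """Alpha-cut [lower, upper] of a triangular fuzzy number (a, m, b)."""
        return a + alpha * (m - a), b - alpha * (b - m)

    def f(p1, p2):
        """Stand-in for the harmonic-voltage function f_j of eq. (12)."""
        return abs(p1 * p2 / (1.0 + 0.5j * p2))

    def alpha_cut_of_output(alpha, n=101):
        """Eq. (18) by exhaustive sampling of the alpha-cut box."""
        lo1, hi1 = tri_alpha_cut(0.8, 1.0, 1.2, alpha)   # fuzzy P1 (assumed)
        lo2, hi2 = tri_alpha_cut(0.1, 0.2, 0.4, alpha)   # fuzzy P2 (assumed)
        p1, p2 = np.meshgrid(np.linspace(lo1, hi1, n),
                             np.linspace(lo2, hi2, n))
        vals = f(p1, p2)
        return vals.min(), vals.max()

    for alpha in (0.0, 0.5, 1.0):
        print(alpha, alpha_cut_of_output(alpha))

Stacking the resulting intervals over a grid of α values reconstructs the possibility distribution of the output, which is exactly how the distributions of Fig. 10 are assembled.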

Figure 7. Algorithm for the possibilistic harmonic load-flow. Inputs: i) network topology; ii) certain network parameters; iii) possibility distributions of the uncertain parameters; iv) required bus voltages: bus_j, j = 1…nb (nb: number of buses); v) required harmonic orders: h_m, m = 1…nh (nh: number of harmonic frequencies); vi) required α-cuts: α_k, k = 1…nα (α_1 = 1, …, α_nα = 0; nα: number of α-cuts). For each harmonic order, bus and α-cut, the lower and upper limits of the α_k-cut are computed by solving (25) and (26), subject to (27), and stored; the outputs are the possibility distributions of the harmonic bus voltages. Source: the authors

4.2.5. General model of the aggregated load for harmonic analysis

Several studies suggest the electric circuit shown in Fig. 8, [7], as a suitable model for harmonic load-flow calculation, where each branch represents the sum of loads with similar characteristics. This model also allows the inclusion of the wind generator as an RL branch and a harmonic current source. $X_{pfc}^{(h)}$ models capacitors for power factor correction, $R_m^{(h)}$ and $X_m^{(h)}$ model rotating machinery, and $R_p^{(h)}$ and $X_p^{(h)}$ model passive loads and small engines; $i^{(h)}$ models the harmonic currents injected by nonlinear devices (rectifiers, inverters, wind turbines, etc.); and $Z_{filters}^{(h)}$ models the RLC circuits installed to filter harmonics.

Figure 8. General model of the aggregated load at a node of the power system: the parallel branches $X_{pfc}^{(h)}$, $R_p^{(h)}$-$X_p^{(h)}$, $R_m^{(h)}$-$X_m^{(h)}$, $Z_{filters}^{(h)}$ and the current source $i^{(h)}$ connected at bus i. Source: the authors

4.2.6. Fuzzy parameters

The circuit parameters in the load model of Fig. 8 are uncertain and, in principle, could be described through their possibility functions. In practice, however, the available information usually refers to other characteristics of the load, the wind generator, and the environment, e.g., the total active and reactive power, the percentage of it due to nonlinear devices, the wind speed, etc. Table 2 shows the model parameters, which are usually available in practice. These model parameters, together with the appropriate expressions, allow the parameters of the circuit in Fig. 8 to be computed.

Table 2. Fuzzy parameters to estimate the harmonic nodal model.
Parameter: Description
ν_wind,j: Wind speed (m/s) at node j
P(1)_t,j: Active power at fundamental frequency at node j
Q(1)_t,j: Reactive power at fundamental frequency at node j
K_p,j: Composition factor of passive loads (fraction of the passive load in the total active bus load) at node j
pf_p,j: Power factor of passive loads at node j
K_m,j: Composition factor of motive loads (fraction of the motive load in the total active bus load) at node j
pf_m,j: Power factor of motive loads at node j
K_nl,j: Composition factor of nonlinear load(s) (fraction of the nonlinear load in the total active bus load) at node j
pf_nl,j: Power factor of nonlinear load(s) at node j
K_f: Installed factor, which can be estimated from the rated motor power factor pf_m, i.e., K_f = 1/pf_m (typically, K_f = 1.2)
X_LR: Locked rotor reactance in pu (typically 0.15 pu ≤ X_LR ≤ 0.25 pu)
fq_LR: Quality factor of the locked rotor circuit; from experience, this value is approximately 8
ang(h)_nl,j: Phase angles of the harmonic currents injected at node j (for the aggregate model of several small and medium power devices)
Q(1)_pfc,j: Reactive power due to the capacitors for power factor correction, at fundamental frequency, at node j
Source: the authors

4.2.7. Relationships between the model parameters and the aggregated load parameters

For each harmonic frequency, nonlinear loads are represented by current sources at their respective nodes, which are linked to the model parameters by means of expressions such as (5) and (6). In addition, the model for the harmonic current injected by a wind turbine corresponds to that presented in Section 3.3.
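A structural sketch of the Fig. 7 loop nest (harmonic orders, buses and α-cuts), with the per-cut optimization stage stubbed out; all names and the dummy solver are placeholders, not the authors' code.

    import itertools

    def possibilistic_load_flow(harmonics, buses, alphas, solve_cut):
        """Skeleton of the Fig. 7 algorithm: for every harmonic order h,
        bus j and alpha-cut, compute the lower/upper harmonic-voltage
        limits; solve_cut(h, j, alpha) stands for the min/max problems
        (25)-(26) subject to (27)."""
        results = {}
        for h, j, alpha in itertools.product(harmonics, buses, alphas):
            results[(h, j, alpha)] = solve_cut(h, j, alpha)
        return results

    # Dummy stand-in for the optimization stage, to make the loop runnable.
    demo = possibilistic_load_flow(
        harmonics=[5, 7], buses=[2, 3], alphas=[1.0, 0.5, 0.0],
        solve_cut=lambda h, j, a: (0.0, 0.01 * h / (j * (a + 1.0))))
    print(len(demo), "alpha-cut intervals computed")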




Figure 9. IEEE 14-bus standard test system for harmonic analysis, including the wind farm (WF) connected at bus 2 and harmonic sources in the network. Source: the authors

Regarding linear loads, detailed expressions that relate the model parameters to the elements of the aggregate circuit of Fig. 8 are presented in the Appendix (Section 8) of reference [13].

5. Case study

5.1. Description of the electric power system

The test system is presented in Fig. 9. The main parameters of the power system are summarized in [15]. Furthermore, in Fig. 7 of reference [13], the values of the uncertain parameters used to model the aggregated harmonic loads and sources are presented. These uncertain parameters are modeled through triangular possibility distributions, and the values reported in [11,13] are the same ones employed for this case study. The only difference is that a 20 MW wind farm of ten generators, each of 2 MW, is included at generation bus 2. Moreover, it is assumed that the wind speed is known with uncertainty and that it can vary between 5 m/s and 20 m/s. Under other wind speed conditions, the generators are turned off for safety reasons.

5.2. Results

The aim of the simulation is to compare the results obtained in two cases: i) the case in which the generator at bus 2 corresponds to a synchronous machine that does not inject harmonics into the network (this case corresponds to that reported in reference [13]), and ii) the case proposed in this article, in which the generator at bus 2 is replaced by ten FPC-type wind generators, each of 2 MW.

5.3. Discussion

From the results presented in Fig. 10, it can be concluded that the wind farm connection has an impact on the 5th-order harmonic voltage magnitudes at all buses of the power system.

Figure 10. Per unit 5th order harmonic voltage magnitude at buses 2, 3, 5, 7, 9, 10, 12 and 14. Dotted lines indicate the possibility functions obtained in [13], i.e., case i. Continuous lines indicate the possibility functions obtained in case ii, i.e., considering the wind farm at bus 2 and assessing the influence of the variability of the wind speed within the interval [5 m/s, 20 m/s]. Source: The authors

In particular, at most of the buses there is a decrease in the harmonic distortion, likely due to phasor cancellation between the harmonic currents flowing through the network. However, the situation is different at bus 3, where the harmonic voltage distortion increases considerably and reaches levels that can be intolerable from the electromagnetic compatibility point of view. It can also be noted that the possibility distributions obtained in case ii are trapezoidal. This is the result of modeling the uncertainty related to the wind speed as an interval, unlike the other uncertain parameters, which are modeled with triangular possibility distributions. It is therefore concluded that the uncertainty in the wind speed, i.e., an input parameter, has a considerable effect on the uncertainties associated with the harmonic voltage magnitudes, i.e., the outputs of the possibilistic harmonic load-flow.

6. Conclusions

This work has focused on including the FPC-type wind turbine within the methodology for calculating the possibilistic harmonic load-flow. The methodology is able to consider the uncertainties due to the deficiency of information regarding harmonic sources (wind turbines and nonlinear devices) and the magnitude and composition of linear loads. The possibility theory has been selected to model uncertainty because it seems to be an interesting alternative for situations where the information is too vague to define reliable probability distributions, a common situation in the context of harmonic studies. The possibilistic harmonic load-flow has shown the flexibility to include wind farms as harmonic sources. In fact, the reformulated method has been proven in the IEEE 14-bus standard test system for harmonic analysis. In order to evaluate the performance of the proposal, a comparison with previously published simulations, in which




wind farms were not considered, was performed in this article. The results obtained show that uncertainty in the wind speed has a considerable effect on the uncertainties associated with the computed harmonic voltage magnitudes; therefore, this uncertainty must be considered when harmonic studies are developed. It is hoped that the methodology developed contributes to the solution of real problems related to harmonics in electric power systems, for example, problems such as decision making under uncertainty, harmonic filter design, the location of harmonic filters and capacitors, and the analysis of the impact of connecting wind farms, among others. In future work, research efforts must be directed toward the development of harmonic models for other types of wind turbines, e.g., DFIG-type wind turbines, and their inclusion in the possibilistic harmonic load-flow method.

References

[1] Fourier, J., Théorie analytique de la chaleur. Chez Firmin Didot, père et fils, 1822.
[2] Ribeiro, P.F., Editor. Time-varying waveform distortions in power systems. J. Wiley & Sons Press, 2009.
[3] IEEE Task Force on Harmonics Modeling and Simulation. Tutorial on harmonics modeling and simulation. IEEE Publication TP-125-0, 1998.
[4] IEEE Std 519-1992. IEEE recommended practices and requirements for harmonic control in electrical power systems, 1992.
[5] IEEE Task Force on Harmonics Modeling and Simulation. Modeling and simulation of the propagation of harmonics in electric power networks. IEEE Transactions on Power Delivery, 11 (1), pp. 452-465, 1996.
[6] CIGRE Working Group 36-05. Harmonics, characteristic parameters, methods of study, estimates of existing values in the network. Electra, 77, pp. 35-54, 1981.
[7] Romero, A.A., Zini, H.C., Rattá, G. and Dib, R., A novel fuzzy number based method to model aggregate loads for harmonic load-flow calculation. Transmission and Distribution Conference and Exposition: Latin America, 2008 IEEE/PES, pp. 1-8, 13-15, 2008.
[8] Baroudi, J.A., Dinavahi, V. and Knight, A.M., A review of power converter topologies for wind generators. Renewable Energy, Elsevier, 32 (14), pp. 2369-2385, 2007.
[9] Tentzerakis, S.T. and Papathanassiou, S.A., An investigation of the harmonic emissions of wind turbines. IEEE Transactions on Energy Conversion, 22 (1), pp. 150-158, 2007.
[10] Zimmermann, H.J., Fuzzy set theory and its applications. Kluwer Academic Publishers, Third Edition, 1996.
[11] Romero, A.A., Zini, H.C. and Rattá, G., Modelling input parameter interactions in the possibilistic harmonic load flow. IET Generation, Transmission & Distribution, 6 (6), pp. 528-538, 2012.
[12] Romero, A.A., Zini, H.C. and Rattá, G., An overview of approaches for modelling uncertainty in harmonic load-flow. Ingeniería e Investigación, 31 (2), pp. 18-26, 2011.
[13] Romero, A.A., Zini, H.C., Rattá, G. and Dib, R., Harmonic load-flow approach based on the possibility theory. IET Generation, Transmission & Distribution, 5 (4), pp. 393-404, 2011.
[14] Romero, A.A., Zini, H.C., Rattá, G. and Dib, R., A fuzzy number based methodology for harmonic load-flow calculation, considering uncertainties. Latin American Applied Research, 38, pp. 205-212, 2008.
[15] IEEE Task Force on Harmonics Modeling and Simulation. Test systems for harmonics modeling and simulation. IEEE Transactions on Power Delivery, 14, pp. 579-585, 1999.

A.A. Romero-Quete, was born in Colombia in 1978. He received his BSc. in Electrical Engineering in 2002, from the Universidad Nacional de Colombia, and his PhD in 2009 from the Instituto de Energía Eléctrica, Universidad Nacional de San Juan (IEE-UNSJ), Argentina. He worked as a high voltage test technique engineer at the Laboratorio de Ensayos Eléctricos Industriales Fabio Chaparro, UNC, until 2003, when he was awarded a scholarship from the German Academic Exchange Service (Deutscher Akademischer Austauschdienst, DAAD) to conduct PhD studies at the IEE-UNSJ. Currently, he works as a researcher for the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) at the IEE-UNSJ, Argentina. His research interests include asset management, power quality, and high voltage test techniques.

G.O. Suvire, was born in San Juan, Argentina. He graduated in Electric Engineering from the Universidad Nacional de San Juan (UNSJ), Argentina, in 2002, and received his PhD from the same university in 2009, carrying out part of his doctorate at the COPPE institute of the Federal University of Rio de Janeiro, Brazil, supported by CAPES. From 2009 to 2011, he worked as a postdoctoral research fellow for the Argentinean National Council for Science and Technology Research (CONICET) at the Institute of Electrical Energy (IEE), UNSJ, Argentina. He is currently an assistant professor of Electrical Engineering at UNSJ, Argentina. His research interests include simulation methods, power system dynamics and control, power electronics modeling and design, and the application of wind energy and energy storage in power systems.

H.C. Zini, was born in San Juan, Argentina, on February 4, 1958. He received his BSc. degree in Electrical Engineering in 1985 from the Universidad Nacional de San Juan (UNSJ), Argentina, and his PhD degree from UNSJ in 2002. Currently, he is a senior researcher and consulting engineer with the Instituto de Energía Eléctrica of UNSJ, Argentina. In 1998, he completed a one-year postgraduate stage at CESI, Milan, Italy, working on electromagnetic transient calculation. His research interests are electromagnetic transients and power quality.

G. Rattá, was born in Italy in 1950. He received his BSc. degree in Electromechanical Engineering in 1974, from the Universidad Nacional de Cuyo, Argentina. Since 1997 he has been director of the IEE-UNSJ, Argentina. Prof. Rattá is currently an assistant professor at the UNSJ, Argentina. He is also a member of the Commission of Postgraduate Evaluation. His research interests include the transient behavior of power system components and power quality.

Área Curricular de Ingeniería de Sistemas e Informática Oferta de Posgrados

Especialización en Sistemas Especialización en Mercados de Energía Maestría en Ingeniería - Ingeniería de Sistemas Doctorado en Ingeniería- Sistema e Informática Mayor información: E-mail: acsei_med@unal.edu.co Teléfono: (57-4) 425 5365


Water quality modeling of the Medellin river in the Aburrá Valley

Lina Claudia Giraldo-B. a, Carlos Alberto Palacio b, Rubén Molina c & Rubén Alberto Agudelo d

a Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. lina.giraldo@udea.edu.co
b Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. carlos.palacio@udea.edu.co
c Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. ruben.molina@udea.edu.co
d Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. ruben.agudelo@udea.edu.co

Received: March 4th, 2014. Received in revised form: February 17th, 2015. Accepted: July 10th, 2015.

Abstract
Water quality modeling intends to represent a water body in order to assess its status and project the effects of different measures taken for its protection. This paper presents the results obtained from the Qual2Kw model implementation in the first 50 kilometers of the Aburrá-Medellín River, under its most critical water quality conditions, which correspond to low flow rates. After the model calibration, three recovery scenarios (short-term, medium-term and long-term) were evaluated. In the first scenario, sanitation was improved only in some streams, in accordance with the Plan of Sanitation and Management of Discharges that was considered. The medium- and long-term scenarios considered the operation of the new Wastewater Treatment Plant (WWTP) of the Bello municipality and an increase in sewage collection. The results obtained show the positive impact of the operation of the Bello WWTP on the balances of BOD5, dissolved oxygen and nitrogen.
Keywords: water quality modeling; Aburrá-Medellín River.

Modelación de la calidad de agua del Río Medellín en el Valle de Aburrá Resumen La modelación de la calidad del agua busca representar un cuerpo de agua con el fin de evaluar su estado y proyectar los efectos de diferentes medidas que se tomen para su protección. En este artículo se presentan los resultados obtenidos a partir de la implementación del modelo Qual2Kw en los primeros 50 kilómetros del río Aburrá-Medellín, en sus condiciones más críticas de calidad de agua, que corresponden a caudales bajos. Una vez obtenido el modelo calibrado, se evaluaron tres escenarios de recuperación (corto-2años, mediano5años y largo plazo-10años). En el primer escenario sólo se consideraron mejoras en el saneamiento de algunas quebradas de acuerdo con el Plan de Saneamiento y Manejo de Vertimientos (PSMV), mientras que los escenarios a mediano y largo plazo consideraron la operación de la Planta de Tratamiento de Aguas Residuales (PTAR) de Bello y un aumento en la colección de aguas residuales. Los resultados obtenidos evidencian el impacto positivo del funcionamiento de la PTAR de Bello en los balances de DBO5, oxígeno disuelto y Nitrógeno Palabras clave: modelación de calidad del agua, río Aburrá-Medellín.

1. Introduction

Surface water bodies, especially those that flow through large cities, are exposed to discharges from several sources when they do not have a complete sanitation system. These discharges can alter the balance of the ecosystem, leading to a loss of diversity, to an alteration of the abundance of organisms and to a deterioration in the physical and chemical quality, which ultimately results in limiting the potential uses of the

water. As a tool to evaluate different scenarios for recovering water quality, the implementation of numerical models is widely used. Numerical models for water quality and transport can reproduce the hydraulic, chemical and biological processes that occur in water bodies by using mathematical expressions that represent these phenomena of interest. These mathematical expressions are approximations, or a simplified version, of the real system, which consider its most important characteristics

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 195-202. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.42441



[1,2]. Therefore, the models are valuable tools for assessing the impact of the implementation of strategies and/or measures that seek to improve water quality [3,4,5]. The first water quality models applied to the Aburrá-Medellín River that can be found in the literature were conducted by Empresas Públicas de Medellín (EPM) as support models for several projects related to the sanitation of the river. Recently, the environmental authority in the basin of this river (Área Metropolitana del Valle de Aburrá), through its "RedRío" project, performed a water quality modeling exercise using the QUAL2K modeling software, in order to evaluate 14 different sanitation scenarios, in other words, to estimate the impact of chemical pollutants on the water body. The research presented in this paper began with the selection of the water quality model, looking for the model that best represents, or fits, the main characteristics of the river and the modeling needs. The selected model should be able to adequately represent the transport phenomena of a highly affected river with continuous discharges of organic matter; the model should also have an adequate mathematical formulation of the physical-chemical processes of interest in the research. Finally, the model should be consistent with the technical and economic capacities available in this research to acquire the data required by the selected model (both quality and discharge data). Taking into consideration the reasons mentioned above, and bearing in mind that the main goal is to have a model capable of being used as a decision tool for the river, the QUAL2Kw model, an updated version of the QUAL2K model [5,6,7], was implemented.

2. Theoretical background

The QUAL2Kw model splits the river into segments, which are then divided into small subsections known as computational elements. For each element, hydrological, thermal and mass balances for each constituent (equation 1) are performed. For each element in the model, a gain or loss of mass can be computed due to transport phenomena (advection and dispersion), loads or extractions of external sources (water intakes, wastewater discharges, among others) or internal sources (benthic oxygen demand and biochemical transformations, among others) [7]. The general formula to compute the mass balance of a constituent, ci, in section i is as follows:

$\frac{dc_i}{dt} = \frac{Q_{i-1}}{V_i}c_{i-1} - \frac{Q_i}{V_i}c_i - \frac{Q_{ab,i}}{V_i}c_i + \frac{E_{i-1}}{V_i}(c_{i-1} - c_i) + \frac{E_i}{V_i}(c_{i+1} - c_i) + \frac{W_i}{V_i} + S_i$   (1)

Eq. 1. Mass balance equation for constituent ci in section i [4], where Qi: discharge in section i (L/day), Qab,i: water intake in section i (L/day), Vi: volume of section i (L), Wi: external load of the constituent in section i (mg/day), Si: sources and sinks of the constituent due to reactions and mass transfer mechanisms (mg/L/day), Ei-1, Ei: bulk dispersion coefficients between sections i-1, i and i+1 (L/day), ci: constituent concentration in section i (mg/L) and t: time (day). Mass balance equations are solved under steady state and uniform flow, using a finite difference method.
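As a numerical aside (not part of the original paper), the sketch below marches the eq. (1) balance of a single constituent to steady state with explicit time stepping; the three-element reach, flows and loads are invented placeholders chosen only to make the balance terms visible.

    import numpy as np

    def steady_state_concentrations(Q, V, W, S, E, c_in, dt=1e-3, tol=1e-10):
        """Explicitly march the eq. (1) balance to steady state for a
        chain of n elements. Q: flow through each element (L/d), V:
        volumes (L), W: external loads (mg/d), S: internal sources/sinks
        (mg/L/d), E: bulk dispersion between neighbours (L/d), c_in:
        boundary concentration (mg/L)."""
        n = len(V)
        c = np.zeros(n)
        for _ in range(1_000_000):
            up = np.concatenate(([c_in], c[:-1]))       # upstream neighbours
            adv = Q * (up - c) / V                      # advection in/out
            disp = np.zeros(n)
            disp[:-1] += E * (c[1:] - c[:-1]) / V[:-1]  # downstream exchange
            disp[1:] += E * (c[:-1] - c[1:]) / V[1:]    # upstream exchange
            dc = adv + disp + W / V + S
            if np.max(np.abs(dc)) * dt < tol:
                break
            c = c + dt * dc
        return c

    # Invented three-element reach, one wastewater load on the middle one.
    print(steady_state_concentrations(
        Q=np.full(3, 1e8), V=np.full(3, 1e7), W=np.array([0.0, 5e8, 0.0]),
        S=np.zeros(3), E=np.full(2, 1e7), c_in=2.0))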

The final products of the modeling process are curves which show the variation of the modeled parameters along the streams. The main difference between Qual2Kw and Qual2K is that the former incorporates a genetic algorithm to find the optimal values of the kinetic parameters. The genetic algorithm tries to optimize the fit of the model to the observed data. The final software users can choose between a manual input of the values for each parameter to be used in the model and an auto-calibration functionality [8].

3. Materials and methods

3.1. Study area description

The Aburrá Valley is located in the center-south of the Department of Antioquia in Colombia, in the middle of the Central Andes mountain range. Its relative importance lies in the fact that it contains the city of Medellín, the second largest city in the country as well as the second most important. Nine other municipalities are part of this metropolitan area (Barbosa, Girardota, Copacabana, Bello, Itagüí, Sabaneta, Caldas, La Estrella and Envigado). The combined population of these cities is around 3 million inhabitants. The Aburrá-Medellín River crosses the valley from south to north. This river crosses the ten municipalities mentioned above and has become a hub for the historical development of the region [9]. The topography of the Aburrá Valley is irregular along its extension. The altitude is between 1300 and 2890 m a.s.l. Medellín, which is the most developed urban core, is located in the central part of the valley within an extension of 8 km [10]. Most of the surface streams and natural drains of the Aburrá Valley are highly polluted due to direct wastewater discharges, mainly in the central part of the valley. This situation leads to a reduction in dissolved oxygen levels and to an increase in the levels of organic matter, inorganic matter and toxic substances [11]. Water quality decreases substantially, and with it the quality of life of the people around it. The Aburrá-Medellín River integrates the municipalities of this valley. It receives about 254 tributaries of different magnitudes along its 100-kilometer length, from its source point in the top basin, in a place known as Alto de San Miguel (municipality of Caldas), to its confluence with the Río Grande. The tributaries are relatively short, with steep slopes, deep and narrow channels, high speeds and a large sediment transport capacity, which correlates with their high erosive power [12]. Monitoring stations: the monitoring stations selected in this research to be incorporated into the model are San Miguel (E1), Before Valeria (E2), After Valeria (E3), Ancón Sur without channeling (E4), Ancón Sur with channeling (E5), Before San Fernando (E6), After San Fernando (E7), Aula Ambiental (E8) and Ancón Norte (E9) (Fig. 1, Table 1). After model selection, made according to ease of handling and required information [13], historic low-flow records of the river, available since the year 2004, were used to run the model [12], since they are related to the most critical water quality conditions in a flow receiving different types of discharges [14].



Table 2. Modeling scenarios.
Scenario | PTAR Bello | Discharge of PTAR San Fernando (BOD5-TSS, mg/L) | % removal in streams (BOD5-TSS)
e0 | Not operative | 70-70 | Actual situation
e1 | Not operative | 70-70 | Variable, f(PSMV) 2014
e2 | Operative | 70-70 | 40
e3 | Operative | 50-50 | 60
Source: The authors.

Figure 1. Study Area Location. Source: [10]

Table 1. Monitoring stations used in the model.
Station | Latitude | Longitude | Distance from San Miguel (km)
Alto San Miguel (SM) | 6°02'50.4" N | -75°37'09.9" W | 0.0
Before stream La Valeria (AV) | 6°5'46.6" | -75°38'08.7" | 8.5
After stream La Valeria (DV) | 6°5'46.6" | -75°38'08.7" | 8.6
Ancón Sur without channeling (ASSC) | 6°09'07.8" | -75°37'54.9" | 15.2
Ancón Sur with channeling (ASCC) | 6°09'07.8" | -75°37'54.9" | 15.4
Before San Fernando (ASF) | 6°11'26.78" | -75°34'79.15" | 22.7
After San Fernando (DSF) | 6°11'43.5" | -75°34'53.3" | 22.8
Aula Ambiental (AA) | 6°15'51.8" | -75°34'20.4" | 31.3
Ancón Norte (AN) | 6°22'16.21" | -75°29'21.29" | 48.6
Source: The authors.

Input data into the Qual2Kw model. As a first step, the boundary conditions were set at the headwater monitoring station (San Miguel). Hydraulic data, temperature, water quality and low-flow data were also set at the other monitoring stations located downstream of the initial station. The model was configured so that the sections into which the river is split coincide with the sections defined by the environmental authority [15] as the most important monitoring points, because they allow a quick view of the state of the river. Domestic and industrial discharges, as well as tributary streams, were modeled as point sources. Since there are distributed discharges along the river that are very difficult to locate and characterize, and since they have a significant impact on the water, these distributed discharges were included in the model as unmonitored tributaries; for the mass balance, their quality was assumed to be the typical composition of residential wastewater [16].


Model implementation. The model was set up with low-flow data. The calculation time step was fixed at 11.25 minutes to avoid numerical instabilities. The numerical methods used were Euler and Newton-Raphson [17]. Level 1 was chosen to model the exchanges in the hyporheic zone, because it includes zero- and first-order equations for BOD oxidation.

Model calibration. The algorithm used to calibrate the model constants is a genetic search technique involving a set of operators that allows it to converge to an optimal solution [2,7]. To find the set of parameters that best represents the actual behavior of the river, the model was run so that the objective function (the weighted squared error between observed and simulated values, as sketched below) was minimized. The genetic parameters were set to 125 model runs per population and 126 evolutionary generations.

Water quality scenarios. Scenarios were defined according to the progress of the Sanitation and Discharge Management Master Plan (PSMV, for its acronym in Spanish). Three scenarios were considered in this research, in accordance with Decree 3930 of 2010 [18], which establishes, among other requirements, that a water resource management plan must be formulated by the environmental authority, considering projections for the short, medium and long term (2, 5 and 10 years). The first scenario was simulated with the wastewater interceptors discharging directly into the river, but assuming better collection of wastewater in the tributaries according to the PSMV goals for the year 2014. Scenarios 2 and 3 were simulated under the assumption that the south and north interceptors and the wastewater treatment plant of Bello had been built and were fully functional. Table 2 summarizes the simulated scenarios. Among the assumptions considered in the simulations, the discharge limits of the San Fernando wastewater treatment plant (WWTP), 70-70 and 50-50, mean that the discharge of the WWTP is expected not to exceed 70 mg/L of BOD5 and 70 mg/L of TSS, or 50 mg/L of each, respectively. In scenarios 2 and 3, which consider the WWTP of Bello, the percentages of wastewater collection in the tributaries that still lack proper sanitation are 40% and 60%, respectively. Additionally, in scenario 3 a 5% improvement was assumed for the streams that already have proper sanitation, through construction works that improve wastewater collection.
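A minimal sketch of the kind of objective function minimized by the auto-calibration described above is shown next; it is an illustration, not Qual2Kw's internal code, and the variables, weights and values are hypothetical.

import numpy as np

def weighted_squared_error(observed, simulated, weights):
    # Sum over water quality variables of weight * mean squared error.
    total = 0.0
    for var, obs in observed.items():
        total += weights[var] * np.mean((np.asarray(obs) - np.asarray(simulated[var])) ** 2)
    return total

observed = {"DO": [7.9, 6.1, 3.2], "BOD5": [2.0, 15.0, 40.0]}    # station data (mg/L)
simulated = {"DO": [7.7, 5.8, 3.6], "BOD5": [2.4, 13.1, 44.0]}   # model output (mg/L)
weights = {"DO": 1.0, "BOD5": 0.5}
print(weighted_squared_error(observed, simulated, weights))

A genetic algorithm would evaluate such a function for each candidate set of kinetic parameters and evolve the population towards parameter sets with lower error.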




Obtaining the equation for the BMWP/Col (Biological Monitoring Working Party index adapted for Colombia). Under the water quality conditions currently observed in the river, it is not possible for macroinvertebrates to colonize it. For this reason, growth and decay rates were not included in the computation; instead, a variation of the model was used to better represent this specific situation of the Aburrá-Medellín River. The BMWP/Col is computed by a piece of code in an Excel macro that calculates it automatically once the physicochemical variables have been computed by the software. To obtain the equation, a matrix was structured with the data collected during low-flow periods. The electrical conductivity parameter plays a major role in the processes that determine water quality in this river, so the equation to calculate the BMWP/Col must be expressed in terms of this parameter. To include the effects of conductivity in the BMWP/Col, a regression using dichotomous (dummy) variables was fitted, and to validate its representativeness the underlying assumptions had to be tested; the latter was done with Eviews [19]. The structure of this regression is sketched below.
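The following Python sketch shows the regression structure only: the transformed index (BMWP/Col)^0.3 regressed on conductivity^0.3 plus one dummy variable per monitoring station. The data here are synthetic; the paper fitted the actual regression and ran its diagnostic tests in Eviews.

import numpy as np

rng = np.random.default_rng(0)
n_stations, n_obs = 8, 5
cond = rng.uniform(50, 600, size=(n_stations, n_obs))   # conductivity (uS/cm), synthetic
station = np.repeat(np.arange(n_stations), n_obs)

X = np.column_stack([
    -cond.ravel() ** 0.3,                 # conductivity term (sign chosen so the fitted coefficient is positive)
    np.eye(n_stations)[station],          # dummies E1..E8, one per station
])
y = rng.uniform(1.5, 3.5, size=n_stations * n_obs)       # (BMWP/Col)^0.3, synthetic

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("conductivity coefficient:", coef[0])
print("station intercepts E1..E8:", coef[1:])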

Figure 3. Dissolved Oxygen simulation (mg/L), low flow. Source: The authors.

4. Results

4.1. BMWP/Col equation

In the hypothesis test, the p-value of the explanatory variable (electrical conductivity) was 0.104, corresponding to a confidence level of 89.9%. With these values it is possible to conclude that conductivity does have a significant effect on the BMWP/Col. Additionally, the p-values of all the dichotomous variables tend to zero, which indicates that they are significant and must be considered when using the equation in each section of the river.

4.2. Obtained equation

The obtained equation is shown below:

BMWP/Col = (-0.13·conductivity^0.3 + 3.59·E1 + 2.54·E2 + 2.14·E3 + 2.82·E4 + 2.90·E5 + 2.29·E6 + 2.49·E7 + 2.58·E8)^(1/0.3)   (2)
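As a usage illustration (not part of the original methodology), eq. (2) can be evaluated directly; the helper below is hypothetical, with E1..E8 encoded as a section index since exactly one dummy is 1 for each sample.

COEFS = [3.59, 2.54, 2.14, 2.82, 2.90, 2.29, 2.49, 2.58]   # dummy coefficients E1..E8

def bmwp_col(conductivity, section):
    # BMWP/Col from electrical conductivity (uS/cm) and section index 1..8.
    # Valid while the bracketed term stays positive.
    base = -0.13 * conductivity ** 0.3 + COEFS[section - 1]
    return base ** (1 / 0.3)

print(round(bmwp_col(conductivity=450.0, section=5), 1))   # ~11.6, i.e. below 15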

Figure 4. BOD5 simulation (mg/L), low flow. Source: The authors.



4.3. Calibrated Model

Figs. 2, 3 and 4 compare the model results along the river for each parameter (continuous line) against field measurements at the monitoring stations (scattered points) under low-flow conditions.

Figure 2. Discharge simulation under low flow (m³/s). Source: The authors.

4.4. Evaluation of scenarios

Once the model was properly calibrated, the following alternatives were evaluated: scenario e0, the actual water quality conditions in the Aburrá-Medellín River; scenario e1, the expected water quality conditions within a period of two years, for which the percentage of sanitation assumed in some of the tributaries corresponds to the sanitation works projected in the PSMV for the year 2014; scenario e2, the expected water quality in a five-year period; and scenario e3, the expected water quality over a ten-year period. The percentage of sanitation in the tributaries was 40% in scenario e2 and 60% in scenario e3. Discharges from the San Fernando WWTP were set to 70-70 and 50-50 for scenarios e2 and e3, respectively, assuming the future WWTP of Bello fully functional. From these scenarios it was possible to obtain the behavior of the water quality. Fig. 5 shows the variation of BOD5 along the river for the future scenarios that were run (2, 5 and 10 years), assuming pollutant load removal in scenarios e2 and e3 and the WWTP of Bello operating. In the same graph, the




Figure 7. Dissolved Oxygen (DO) profile in the river in all scenarios. Source: The authors.

Figure 5. BOD5 behavior in all scenarios. Source: The authors.

Figure 8. Electrical conductivity profile in the river in all scenarios. Source: The authors.

Figure 6. ISS behavior in all scenarios. Source: The authors.

actual scenario is presented. Comparing these scenarios, a notable recovery of the river's water quality is observed once the WWTP enters into operation. This analysis makes clear the important role the WWTP plays in the Sanitation Plan of the river, helping to recover its physicochemical water quality. Fig. 6 shows the variation of inorganic suspended solids (ISS) along the Aburrá-Medellín River according to the model predictions. In terms of ISS, the recovery is not as significant as that modeled for BOD. This can be explained by the large ISS contribution received from the tributaries; in many of them the origin is not domestic wastewater but mining in the tributaries and deforestation processes in their upper parts, so despite the removal assumed in the proposed scenarios, concentrations remain high. The DO profile along the river for the evaluated scenarios is shown in Fig. 7. A recovery can be seen near the Moravia sector (km 34), due to the closure of the interceptors located at this point of the river and the startup of the WWTP of Bello; this situation is more evident in scenarios e2 and e3 (i.e., projections at 5 and 10 years). Fig. 8 shows the results for electrical conductivity in the different simulated scenarios. The highest conductivity values correspond to scenarios e0 and e1 (current and year two), due to the low removal rates; the lowest values occur in scenarios e2 and e3 (years five and ten), where the WWTP of Bello is operating. The discharge of the future WWTP of Bello causes the conductivity peak to increase and move northwards.

Like the ISS parameter, conductivity does not show the significant recovery obtained for BOD near km 48 of the river; this is due to the lack of real data on removal percentages for this variable with the WWTP of Bello operating. Fig. 9 shows the model results for BMWP/Col in the different scenarios. A marked deterioration is observed at km 10, which corresponds to the point where the tributaries La Chuscala, La Miel and La Valeria enter the river; a recovery is then achieved near km 15. At km 23, BMWP/Col decreases again; the discharge of the WWTP San Fernando is located at this point. From there, the biological component represented by BMWP/Col shows a slight improvement until km 33, where the parameter decreases again due to the discharge of the "K-33" wastewater interceptors at this point in scenarios e0 and e1 (current and 2-year scenarios). From this point on, the variable shows a constant trend along the most critical stretch of the river.

Figure 9. BMWP/Col profile in the river in all scenarios. Source: The authors.




5. Discussion

In terms of BOD5, in the first 19 km there are no significant changes in the behavior of this parameter between the current and the proposed future scenarios, with a slight peak near km 10, where the tributaries La Miel, La Valeria and La Chuscala enter the river (Fig. 5). The La Chuscala tributary receives the discharge of a wastewater collector in the Mandaly neighborhood (municipality of Caldas), which carries a high concentration of organic matter produced by the agricultural and domestic activities in this watershed. Comparing scenarios e1, e2 and e3 with the current scenario, it is evident that the BOD5 concentration varies in the first kilometers with removal percentages between 40% and 60%, respectively. Therefore, all efforts in these kilometers should be directed towards completing the domestic wastewater collectors [3] planned by the municipality of Caldas, together with actions focused on protecting the water resource. Future scenarios show a significant decrease in BOD5 concentrations compared with the current-conditions scenario e0. The reduction observed between km 33 and km 48 is remarkable; it results from the connection of the EPM sewer interceptors to the projected WWTP of Bello, and also from the BOD5 load removal in tributaries such as La Rosa, La Madera, El Hato, La García, La Señorita, Rodas and Piedras Blancas, which are located in this section of the river. The results support the need to continue with the works to collect wastewater and with the construction of the WWTP Bello, in order to reduce the BOD5 load, and to continue cleaning the tributaries, collecting wastewater and preventing direct discharges to the water source; similar effects are reported by Holguín for the Cauca River in the city of Cali [20]. The increase in solids around km 10 can be explained by the arrival of the tributaries La Minita, La Miel, La Valeria and La Chuscala (Fig. 6). These tributaries introduce large amounts of suspended solids as a result of erosive processes in the upper parts of their watersheds and of the direct input of wastewater from the municipality of Caldas through collectors that are still disconnected from the WWTP San Fernando. The increase in inorganic suspended solids (ISS) close to km 29 is a response to the tributaries arriving in this section of the river, such as Altavista, La Picacha and La Hueso, which deliver very important solid loads to the river. The high solids content of these tributaries is due not only to erosive phenomena in the streams but also to poor land planning and soil use in the catchments; mining to extract construction materials is also intense in these tributaries. At km 33, the simulation for future scenarios (especially e3) shows a clear reduction in the solids load. This reduction can be attributed to the connection of the interceptors to the future WWTP of Bello, and therefore to a reduction of direct discharges into the Aburrá-Medellín River, as well as to the

significant reductions in discharges in the tributaries. In the base scenario (e0), the model shows no change in the amount of solids in the river. For scenarios e2 and e3 there is an increase in solids concentration, basically due to the arrival of tributaries such as El Hato, La García and La Madera; although they are modeled with a pollutant reduction, that reduction is not enough to produce a significant change in water quality, given the high solids concentrations they carry. Such concentrations are not mainly associated with domestic wastewater, but with mining for construction materials. Despite the wastewater discharges from the municipality of Caldas, dissolved oxygen remains present in the river up to km 15 (Fig. 7), because the slope of the river in this segment promotes oxygenation of the water body. Further downstream, a reduction in the oxygen concentration is observed, with the lowest concentrations around the discharges of the interceptors that transport much of the region's wastewater, near km 33 (Metro's Caribe station). From that point a slight recovery occurs, followed by another decrease due to an increased load that prevents re-oxygenation, despite the still-significant slope and the presence of hydraulic jumps in this part of the river. The most critical section of the river lies between km 33 and km 48, which is consistent with the behavior of BOD5 and solids and reflects the impact of the inflow of wastewater into the water body, deteriorating its quality [17,21]. It should be noted that, despite the improvement of sanitation in the tributaries set up in the model for scenarios e2 and e3, no significant recovery of future dissolved oxygen values is seen in the critical section. Seeking an ideal scenario with acceptable DO levels in the critical section, it was found that a minimum of 80% of the wastewater must be collected to achieve this goal. Scenarios e0 and e1 return the highest electrical conductivity values along the river, due to the low removal percentages for this variable (Fig. 8). The results show a sustained increase in concentration from km 15 to km 33, near the location of the interceptors; these results can be associated largely with the city's domestic discharges, which are not yet conveyed to a WWTP. It is also important to note that electrical conductivity is a dynamic parameter, with a different peak time at each monitoring station depending on the time of day, and closely related to human activity in the city. Scenarios e2 and e3 show a decrease in electrical conductivity, due to the projected connection of the interceptors to the WWTP of Bello, demonstrating the recovery the river would achieve in this segment with the implementation of the PSMV. However, for these two scenarios the most critical conditions are given by the discharge of the WWTP of Bello which, because of the significant volume of water discharged, generates a significant impact on the river. Overall, the impact of the more significant tributaries on the macroinvertebrate community was observed, highlighting the BMWP/Col variable as an indicator that complements the physicochemical analysis of the




river (Fig. 9). To recover the diversity and abundance of macroinvertebrates (biological quality) in the channelized segment of the river, it is necessary not only to recover the water quality of the river and its tributaries, but also to implement other strategies associated with the hydraulics of the channel [22,23]. Measures such as reducing the water velocity (a variable not considered in the calculations due to the lack of information) must be implemented. The conclusion is that, despite the operation of the WWTP, one component is still missing: the availability of natural habitats and colonization substrates, together with structures to protect them [24].

6. Conclusions

Analyzing the model results in terms of BOD5, the reduction of this parameter is clear; the highest reductions are achieved when the interceptors are simulated as connected to the future WWTP in Bello, as was done in scenarios e2 and e3. Scenario e1 only presents a small reduction in BOD5 concentration, due to the sanitation of the tributaries according to the PSMV up to the year 2014. This supports EPM's decision to prioritize investment in the construction of the treatment plant to improve the quality of the river, an approach suggested in Colombia given the limited resources available for upgrading and/or building wastewater treatment plants [25]. In terms of suspended solids, alternatives e2 and e3 allow reductions in the concentration of this parameter, mainly due to the proposed reduction in the tributaries; scenario e1 is the worst scenario in terms of solids, because of the low reductions assumed in it. It is important to note that the most critical section of the river for water quality in terms of BOD5 and ISS runs from km 29 to km 48. This situation is directly related to the arrival of main tributaries such as La Hueso, La Iguaná, Santa Elena, La Rosa, La Madera, El Hato, La García, La Señorita and La Seca, among others. Even though a certain removal of pollutants was assumed in the simulated scenarios for the sanitation of these tributaries, the assumed percentage was not enough to fully recover the river, especially in terms of ISS. These results show that, despite all the efforts that can be made in infrastructure to collect and treat wastewater, it is essential to simultaneously initiate actions focused on environmental education and culture [26], regulatory and incentive policies, proper management of quarries and mining, prevention and mitigation works to avoid erosion, and a suitable land use scheme [27]. Finally, according to the variation obtained in the BMWP/Col profile of the river in the different modeled scenarios, it is difficult to reach values higher than 15 in the urbanized section, which would allow the classification to change from "heavily contaminated" (<15) to "very polluted" (16-35) (ranges established in the biotic index) [28,29]. The model results for the different scenarios show that, even with the improvement in water quality of the river and its tributaries and the connection of the interceptors to the future WWTP of Bello, the increase in the BMWP/Col index is minimal. This situation leads to the

conclusion that, parallel to the improvement in the water quality of the tributaries, changes are also required in the hydraulics of the river to create conditions that favor the development of the entire life cycle of the organisms. It is therefore necessary to improve the availability of natural colonization substrates and of structures that enhance their protection.

Acknowledgments

We want to express our gratitude to the Universidad de Antioquia, the research group GIGA, the environmental authority (Área Metropolitana del Valle de Aburrá) and the engineers Yaneth Daza and James Londoño for their support, advice and contributions to this experimental work.

References

[1] Camacho, L. and Díaz, Metodología para la obtención de un modelo predictivo de transporte de solutos y de calidad del agua en ríos - Caso río Bogotá. Seminario Internacional La Hidroinformática en la Gestión Integrada de los Recursos Hídricos, Universidad del Valle, Instituto Cinara, 2003.
[2] Vera, I. and Lara, J., Discusión de operadores involucrados en un proceso de calibración mediante algoritmos genéticos para un modelo de calidad del agua de corrientes superficiales trabajando con la herramienta Qual2Kw. Rev. Fac. Ing., Univ. Antioquia, 50, pp. 77-86, 2009.
[3] Rodríguez, J.P., Díaz-Granados, M., Camacho, L.A., Raciny, I.C., Maksimovic, C. and McIntyre, N., Bogotá's urban drainage system: Context, research activities and perspectives. BHS 10th National Hydrology Symposium, Exeter, 2008.
[4] Chapra, S.C., Surface water-quality modeling. Waveland Press, 2008.
[5] Chapra, S.C., Pelletier, G.J. and Tao, H., QUAL2K: A modeling framework for simulating river and stream water quality (Version 2.11): Documentation and users manual. Civil and Environmental Engineering Dept., Tufts University, Medford, MA, pp. 1-109, 2008.
[6] Chapra, S.C. and Pelletier, G.J., QUAL2K: A modeling framework for simulating river and stream water quality: Documentation and user manual. Civil and Environmental Engineering Dept., Tufts University, Medford, MA, 2003.
[7] Pelletier, G.J., Chapra, S.C. and Tao, H., QUAL2Kw - A framework for modeling water quality in streams and rivers using a genetic algorithm for calibration. Environmental Modelling & Software, 21, pp. 419-425, 2006. DOI: 10.1016/j.envsoft.2005.07.002
[8] Hossain, M.A., Sujaul, I.M. and Nasly, M.A., Application of QUAL2Kw for water quality modeling in the Tunggak River, Kuantan, Pahang, Malaysia. Research Journal of Recent Sciences, 3(6), pp. 6-14, 2014.
[9] Giraldo, L., Agudelo, R. and Palacio, C., Spatial and temporal variation of nitrogen in the Medellín river. DYNA, 77(63), pp. 124-131, 2010.
[10] Área Metropolitana del Valle de Aburrá, Corantioquia, Cornare y Universidad Nacional de Colombia - Sede Medellín, Plan de ordenación y manejo de la cuenca del río Aburrá, Medellín, 2007.
[11] Rivera, J., Evaluation of organic matter in the cold river supported in qual2K version 2.07. DYNA, 78(169), pp. 131-139, 2011.
[12] Área Metropolitana del Valle de Aburrá, Universidad de Antioquia, Universidad Nacional de Colombia - Sede Medellín, Universidad Pontificia Bolivariana y Universidad de Medellín, Red de Monitoreo Ambiental en la cuenca hidrográfica del río Aburrá-Medellín en jurisdicción del Área Metropolitana - Fase III, Medellín, 2011.
[13] Fan, C., Ko, C. and Wang, W., An innovative modeling approach using Qual2K and HEC-RAS integration to assess the impact of tidal effect on river water quality simulation. Journal of Environmental Management, 90, pp. 1824-1832, 2009. DOI: 10.1016/j.jenvman.2008.11.011
[14] Chang, F., Tsai, Y., Chen, P., Coynel, A. and Vachaud, G., Modelling water quality in an urban river using hydrological factors - Data driven approaches. Journal of Environmental Management, 151, pp. 87-96, 2015. DOI: 10.1016/j.jenvman.2014.12.014
[15] Área Metropolitana del Valle de Aburrá, Universidad de Antioquia, Universidad Nacional de Colombia - Sede Medellín, Universidad Pontificia Bolivariana y Universidad de Medellín, Red de monitoreo ambiental en la cuenca hidrográfica del río Aburrá-Medellín en jurisdicción del Área Metropolitana - Fase II, Medellín, 2007.
[16] Metcalf and Eddy, Ingeniería de aguas residuales: Tratamiento, vertido y reutilización. McGraw-Hill, 1998.
[17] Kannel, P.R., Lee, S., Lee, Y.S., Kanel, S.R. and Pelletier, G.P., Application of automated QUAL2Kw for water quality modeling and management in the Bagmati river, Nepal. Ecological Modelling, 202, pp. 503-517, 2007. DOI: 10.1016/j.ecolmodel.2006.12.033
[18] Ministerio de Ambiente, Vivienda y Desarrollo Territorial, Decreto 3930 de 2010, Colombia.
[19] Gujarati, D., Econometría. McGraw-Hill, México, 2003.
[20] Holguin, J., Everaert, G., Boets, P., Galvis, A. and Goethals, P., Development and application of an integrated ecological modelling framework to analyze the impact of wastewater discharges on the ecological water quality of rivers. Environmental Modelling & Software, 48, pp. 27-39, 2013.
[21] Kuzniar, A., Kowalczk, A. and Kostuch, M., Long term water quality monitoring of a transboundary river. Pol. J. Environ. Stud., 23(3), pp. 1009-1015, 2015.
[22] Steinman, A. and McIntire, D., Effects of current velocity and light energy on the structure of periphyton assemblages in laboratory streams. Journal of Phycology, 22(3), pp. 352-361, 1986. DOI: 10.1111/j.1529-8817.1986.tb00035.x
[23] Oscoz, J. and Escala, M., Efecto de la contaminación y regulación del caudal sobre la comunidad de macroinvertebrados bentónicos del tramo bajo del río Larraun (norte de España). Ecología, 20, pp. 245-256, 2006.
[24] Gómez, N., Sierra, M.V., Cortelezzi, A. and Rodrigues, C.A., Effects of discharges from the textile industry on the biotic integrity of benthic assemblages. Ecotoxicology and Environmental Safety, 69, pp. 472-479, 2007. DOI: 10.1016/j.ecoenv.2007.03.007
[25] Camacho, L. and Cantor, M., Calibración y análisis de la capacidad predictiva de modelos de transporte de solutos en un río de montaña colombiano. Avances en Recursos Hidráulicos, 14, pp. 39-51, 2006.
[26] Espinosa, E. and Reynoso, E., La importancia de la educación no formal para el desarrollo humano sustentable en México. Revista Iberoamericana de Educación Superior, 5(12), pp. 137-155, 2014. DOI: 10.1016/S2007-2872(14)71947-X
[27] Erola, A. and Randhir, T., Watershed ecosystem modeling of land-use impacts on water quality. Ecological Modelling, 270, pp. 54-63, 2013. DOI: 10.1016/j.ecolmodel.2013.09.005
[28] Alba-Tercedor, J., Macroinvertebrados acuáticos y calidad de las aguas de los ríos. IV Simposio del Agua en Andalucía (SIAGA), Almería, España, 2, pp. 203-213, 1996.
[29] Roldán, G., Bioindicación de la calidad del agua en Colombia: Uso del método BMWP/Col. Editorial Universidad de Antioquia, Medellín, 2003.

R.A. Agudelo-García, received his BSc. in Sanitary Engineering in 1984 and his MSc. in Environmental Engineering in 1994, both from the Universidad de Antioquia, Medellín, Colombia, and an MSc. in Urban and Regional Studies in 1998. He is currently a full professor in the Sanitary Engineering Department, Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. His research interests include: water quality, solid waste and planning.

R.D. Molina, received his BSc. in Sanitary Engineering in 2006 and his MSc. in Environmental Engineering in 2009, both from the Universidad de Antioquia, Medellín, Colombia. He is currently a professor in the Sanitary Engineering Department, Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. His research interests include: water quality and programming.

C.P. Tobón, received his BSc. in Civil Engineering in 1994 from the Universidad Nacional de Colombia, Medellín, Colombia, his MSc. in Civil Engineering in 1996 from the Universidad de Los Andes, Bogotá, Colombia, and his PhD. in Engineering in 2002 from a "sandwich program" between the Universidad Nacional de Colombia, Medellín, Colombia and the University of Kiel, Germany. He is a full professor in the Facultad de Ingeniería of the Universidad de Antioquia, Medellín, Colombia. His research interests include: hydrodynamic models for simulating flow in coastal zones, rivers, groundwater and the atmosphere; oil spill and water quality modeling; design of hydraulic structures; hydrologic studies; and design of water supply and sewer networks.

L.C. Giraldo-Buitrago, received her BSc. in Sanitary Engineering in 1999, her MSc. in Environmental Engineering in 2004 and her PhD. in Engineering in 2013, all from the Universidad de Antioquia, Medellín, Colombia. She is currently a full professor in the Sanitary Engineering Department, Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia. Her research interests include: water quality, water modeling and solid waste.

Área Curricular de Medio Ambiente Oferta de Posgrados

Especialización en Aprovechamiento de Recursos Hidráulicos Especialización en Gestión Ambiental Maestría en Ingeniería Recursos Hidráulicos Maestría en Medio Ambiente y Desarrollo Doctorado en Ingeniería - Recursos Hidráulicos Doctorado Interinstitucional en Ciencias del Mar Mayor información:

E-mail: acia_med@unal.edu.co Teléfono: (57-4) 425 5105



Compressive sensing: A methodological approach to an efficient signal processing

Evelio Astaiza-Hoyos a, Pablo Emilio Jojoa-Gómez b & Héctor Fabio Bermúdez-Orozco c

a Facultad de Ingeniería, Grupo GITUQ, Universidad del Quindío, Armenia, Colombia. eastaiza@uniquindio.edu.co
b Facultad de Ingeniería Electrónica y Telecomunicaciones, Grupo GNTT, Universidad del Cauca, Popayán, Colombia. pjojoa@unicauca.edu.co
c Facultad de Ingeniería, Grupo GITUQ, Universidad del Quindío, Armenia, Colombia. hfbermudez@uniquidio.edu.co

Received: September 10th, 2014. Received in revised form: April 14th, 2015. Accepted: May 19th, 2015.

Abstract
Compressive Sensing (CS) is a new paradigm for signal acquisition and processing which integrates sampling, compression, dimensionality reduction and optimization, and which has caught the attention of many researchers. CS allows the reconstruction of signals that are sparse in a given domain from a set of measurements that could be described as incomplete, since the rate at which the signal is sampled is much lower than the Nyquist rate. This article presents an approach to address methodological issues in the field of signal processing from the perspective of CS.
Keywords: Compressive Sensing, Sampling, Compression, Optimization, Signal Processing, Methodology

Sensado compresivo: Una aproximación metodológica para un procesamiento de señales eficiente Resumen Sensado Compresivo (SC) es un nuevo paradigma para la adquisición y procesamiento de señales, el cual integra muestreo, compresión, reducción de dimensionalidad y optimización, lo cual ha captado la atención de una gran cantidad de investigadores; SC permite realizar la reconstrucción de señales dispersas en algún dominio a partir de un conjunto de mediciones que podrían denominarse incompletas debido a que la tasa a la cual la señal es muestreada es mucho menor que la tasa de Nyquist. En este artículo se presenta una aproximación metodológica para abordar problemas en el ámbito del procesamiento de señales desde la perspectiva de SC. Palabras clave: Sensado Compresivo, Muestreo, Compresión, Optimización, Procesamiento de Señales, Metodología.

1. Introduction

The main motivation in Compressive Sensing (CS) is that many real-world signals can be adequately approximated by sparse signals, that is, by a linear combination of terms from a vector basis in which only a few terms are significant. For this reason, CS is considered a promising technology that will contribute significantly to improving the way signal processing is currently performed, reducing computational costs and thereby optimizing the use of other resources such as energy. To obtain a compressed representation, one computes

the coefficients in the selected basis (for example a Fourier or a wavelet basis [1,2]) and then keeps only the largest coefficients; only these are stored, while the rest are set to zero when the compressed signal is recovered. The immediate question in this scenario is: is there a direct way to obtain the compressed version of the signal? Measuring the largest coefficients directly is impossible, since it is normally not known a priori which of them are actually the largest. However, CS provides a way to obtain the compressed version of a signal using only a small number of linear, non-adaptive measurements. Moreover, CS predicts that it is possible to recover the signal from sub-sampled measurements of the

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 203-210. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.45512




signal itself, by means of computationally efficient methods, for example convex optimization. It should be noted that arbitrarily sub-sampled linear measurements, described by the so-called sensing matrix, will in general not succeed in recovering sparse vectors. For this reason, necessary and sufficient conditions that the matrix must satisfy to allow the recovery of sparse vectors have been formulated; these conditions are quite hard to verify when the sensing matrix is deterministic, at least when the aim is to work with the minimum number of measurements. In fact, as shown in [1,2], great advances are currently obtained using random matrices, in which the entries of the matrix are independent and identically distributed according to any sub-Gaussian distribution; this makes the number of required measurements smaller than in the case of deterministic matrices, due to the sharp decay of the distribution function, which implies the concentration of the representative values of the signal. The complete process for addressing a problem under the CS paradigm consists of three fundamental parts: 1) sparse representation of the signal; 2) measurement acquisition and linear encoding; and 3) sparse recovery, or non-linear decoding. These stages are treated in detail in the rest of the article. The main objective of this article is to establish a first approximation to a methodological frame of reference that defines not only the stages in the solution of CS-based problems but also the criteria for carrying them out, based on definitions, theorems and techniques that will be provided and illustrated. The article also seeks to answer the typical questions in the CS field, such as: What characteristics or conditions must the sensing matrix satisfy to guarantee recovery of the sensed signal? How many measurements must be taken from a sparse signal to guarantee recovery? How should the most suitable recovery algorithm be selected? The structure of the article follows the questions formulated above.

$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}, \ p \in [1, \infty); \qquad \|x\|_\infty = \max_{i = 1, \dots, n} |x_i|$   (2)

Typically, signals found in practice are not sparse in themselves, but they admit a sparse representation in some basis Φ, so they can be represented as illustrated in eq. (3).

$x = \Phi d, \ \text{with} \ \|d\|_0 \le k$   (3)

For example, in the time domain a typical information signal in telecommunications looks like Fig. 1, where it is not clear that the signal is sparse. However, on taking its Fourier transform, illustrated in Fig. 2, it can be seen that most of the signal's coefficients are very small; therefore, a fairly accurate representation of the signal can be obtained by setting the very small coefficients to zero with a thresholding method, thereby obtaining a sparse representation of the signal (in this case k = 3), as the following sketch illustrates.
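A minimal numerical sketch of this thresholding idea, assuming an illustrative three-tone signal rather than the exact one in the figures:

import numpy as np

n = 1024
t = np.arange(n) / n
x = (np.sin(2 * np.pi * 50 * t)              # dense in the time domain...
     + 0.7 * np.sin(2 * np.pi * 120 * t)
     + 0.4 * np.sin(2 * np.pi * 280 * t))

X = np.fft.rfft(x)                           # ...but k = 3 sparse in frequency
threshold = 0.1 * np.abs(X).max()
X_sparse = np.where(np.abs(X) > threshold, X, 0)   # hard thresholding
x_hat = np.fft.irfft(X_sparse, n)

print("nonzero coefficients:", np.count_nonzero(X_sparse))           # -> 3
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))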

Figure 1. Information signal in the time domain: sum of three tones. Source: The authors.

2. Sparse representation of the signal

:‖ ‖

(1)

La ec. (1), denota el conjunto de todas las señales dispersas, y el operador ‖ ‖ denota la norma ℓ del vector , cuando 0 la norma definida en la ec. (2) no cumple la desigualdad triangular, por lo tanto por definición | y representa la cardinalidad del se tiene que ‖ ‖ : |

Figura 2. Representación Dispersa de la Señal de Información Mediante la Transformada de Fourier. Fuente: Autores

204


Astaiza-Hoyos et al / DYNA 82 (192), pp. 203-210. August, 2015.

Inicialmente se introduce el concepto de espacio nulo de la matriz denotado como se indica en la ec. (6).

Al medir el error de aproximación utilizando una norma ℓ , se está realizando un procedimiento que permite obtener la mejor aproximación de términos de la señal original, esta medida de error, es el eje central de la aproximación no lineal [3] no lineal porque la selección de cuales coeficientes se mantienen en la aproximación depende de la señal en sí misma. Es importante considerar que un modelo disperso es altamente no lineal, siempre que los elementos seleccionados de la base para representar una señal pueden cambiar de señal a señal, lo cual puede apreciarse cuando se realiza una combinación lineal de dos señales dispersas, combinación que en general no será dispersa, dado que los soportes de las señales no necesariamente coinciden, aunque si se puede generalizar que si , ∈ luego . ∈ En la práctica, un aspecto de gran importancia es que solo pocas señales reales son verdaderamente dispersas, pero en su gran mayoría pueden aproximarse a señales dispersas [4,5], por lo tanto, puede calcularse el error de aproximar una señal por alguna señal ∈ como se indica en la ec. (4).

min ‖

∈ ∑

Si ∈

es claro que

:

(6)

0

Dado que el interés, es recuperar todas las señales dispersas a partir de las medidas , por lo tanto, es claro que para cualquier par de vectores diferentes , ∈ , se , ya que de otra manera es debe tener que imposible diferenciar de basándose solamente en las medidas , ya que si se presenta que , luego 0 con ∈ , de donde se aprecia que permite representar de manera única a todos los vectores dispersos ∈ si y solamente si no contiene . En este artículo, para caracterizar esta vectores en propiedad se utilizará la definición de la “chispa” (spark) de una matriz [6]. Definición 1: La “chispa” (spark) de una matriz se define como el menor número de columnas de que son linealmente dependientes. La anterior definición permite plantear el siguiente teorema, el cual garantiza que no contiene vectores en . Teorema 1[6]: Para cualquier vector ∈ , existe al si y solamente si menos una señal ∈ , tal que

(4)

0 para cualquier .

(7)

2

3. Diseño de matrices de sensado

La demostración del Teorema 1 se realiza por contradicción en [6]. Dado que es la cantidad de filas de la matriz , en el peor de los casos, el rango de la matriz coincide con el número de filas, y por consiguiente 2 1, por lo tanto, se infiere que ∈ 2, 1 y por lo tanto del Teorema 1 de la ec. (7) se tiene que 2 . Cuando se trabaja con señales exactamente dispersas, la “chispa” de la matriz proporciona una caracterización completa de cuando es posible reconstruir la señal dispersa a partir de las muestras medidas; sin embargo, cuando se trabaja con señales aproximadamente dispersas se deben considerar algunas condiciones más restrictivas sobre el espacio nulo de la matriz [7], donde se debe garantizar, que no contiene vectores que sean muy compresibles, adicionalmente a vectores que sean dispersos. Por lo anterior, suponiendo que ⊂ 1,2, … , es un subconjunto de índices, y sea 1,2, … , \ . Se representa como al vector de longitud obtenido haciendo cero las entradas de indexadas por ; de forma similar, se representa como a la matriz de tamaño obtenida convirtiendo en el vector cero las columnas de indexadas por . Definición 2: Una matriz satisface la propiedad de espacio nulo de orden , si existe una constante 0 tal que

Assume the signal $x \in \mathbb{R}^n$ is k-sparse and that m linear measurements of it are taken by an acquisition system, which can be represented mathematically as indicated in eq. (5):

$y = Ax$   (5)

where A is an m × n matrix and $y \in \mathbb{R}^m$. The matrix A represents a dimensionality reduction, since it maps the signal from its original space $\mathbb{R}^n$, which is generally large, to a space $\mathbb{R}^m$, where m is in general much smaller than n. It is likewise assumed that the measurements are non-adaptive, meaning that the rows of A are fixed and do not depend on previously acquired measurements. Accordingly, two questions can be posed: How can the sensing matrix be designed so that it guarantees the preservation of the information contained in the signal x? How can the original signal x be reconstructed from the measurements y? To answer these questions — and since the primary aim of this article is a first approximation to a methodological frame of reference for applying CS — no specific design procedure is proposed; instead, the properties the sensing matrix should have in order to preserve the information and guarantee unique reconstruction of the original signal are illustrated, starting from the concepts, definitions and theorems given below.

(8)

Para todo ∈ y para todo tal que | | . Donde el operador | | denota cardinalidad. Por lo tanto, la propiedad de espacio nulo cuantifica el 205



Definition 4: Let $A : \mathbb{R}^n \to \mathbb{R}^m$ describe a sensing matrix and $\Delta : \mathbb{R}^m \to \mathbb{R}^n$ describe a sparse recovery algorithm. The pair (A, Δ) is said to be C-stable if for any $x \in \Sigma_k$ and any $e \in \mathbb{R}^m$,

$\|\Delta(Ax + e) - x\|_2 \le C\, \|e\|_2$   (11)

fact that the vectors in the null space of the matrix should not be too concentrated on a small subset of indices. For example, if a vector h is exactly k-sparse, then there exists a Λ such that $\|h_{\Lambda^c}\|_1 = 0$, and eq. (8) implies $h_\Lambda = 0$; hence, if a matrix A satisfies the null space property, the only k-sparse vector in $\mathcal{N}(A)$ is h = 0. To fully describe the implications of the null space property for the design of the sensing matrix, and consequently in the sparse recovery context, it is necessary to introduce some concepts about how the performance of sparse recovery algorithms is measured when working with signals that are in general not sparse. Considering that $\Delta : \mathbb{R}^m \to \mathbb{R}^n$ represents the signal recovery method, applying eq. (8) to the approximation error of the method and using eq. (4) leads to the expression in eq. (9):

$\|\Delta(Ax) - x\|_2 \le C\, \frac{\sigma_k(x)_1}{\sqrt{k}}$   (9)









for all $x \in \Sigma_{2k}$. The proof of Theorem 3 is found in [8]. From Theorem 3 (eq. (12)) it can be stated that, if the impact of noise on the recovered signal is to be reduced, the sensing matrix must be adjusted (one may think of scaling it) so that it satisfies the restricted isometry property, thereby adjusting the signal gain; if the magnitude of the noise is independent of the choice of A, a high signal-to-noise ratio is achieved, so that the noise may eventually be considered negligible. So far, regarding the number of measurements required to guarantee recovery of a sparse signal, we have the lower bound given by Theorem 1, which indicates that m must be at least 2k for exactly sparse signals satisfying the null space property (noise, moreover, is not considered); the idea now is therefore to identify the minimum number of measurements required to guarantee the recovery of a sparse signal based on the restricted isometry property. Theorem 4 [4]: Let A be an m × n matrix satisfying the restricted isometry property of order 2k with constant $\delta \in (0, 1/2]$. Then

a condition that guarantees exact recovery of every possible sparse signal, but also guarantees a degree of robustness in the scenario of working with non-sparse signals, depending directly on how well those signals are approximated by sparse vectors [7]. Theorem 2 [5]: Let $A : \mathbb{R}^n \to \mathbb{R}^m$ describe a sensing matrix and $\Delta : \mathbb{R}^m \to \mathbb{R}^n$ describe a sparse recovery algorithm. If the pair (A, Δ) satisfies eq. (9), then A satisfies the null space property of order 2k. The proof of the theorem is found in [7]. However, when the measurements of the sparse signal are contaminated by some type of noise, such as white noise or quantization noise, even stronger conditions must be considered, since the null space property does not account for noise. For this reason, [8] introduces the condition called the restricted isometry property, applied to the sensing matrix. Definition 3: A matrix A satisfies the restricted isometry property of order k if there exists a $\delta \in (0, 1)$ such that

$(1 - \delta)\,\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\,\|x\|_2^2$   (10)



Eq. (11) indicates that, if a small amount of noise is added to the measurements, the impact of that addition on the recovered signal must not be arbitrarily large. Theorem 3 [4]: If the pair (A, Δ) is C-stable, then

$\frac{1}{C}\,\|x\|_2 \le \|Ax\|_2$   (12)




The proof of Theorem 4 is found in [8]; the restriction $\delta \le 1/2$ is made for convenience. Eq. (13) establishes a lower bound on the number of measurements required to recover the sparse signal when the sensing matrix satisfies the restricted isometry property of order 2k:

$m \ge 0.28\, k\, \log\!\left(\frac{n}{k}\right)$   (13)

Verifying that a sensing matrix satisfies any of the properties such as the spark, the null space property or the restricted isometry property typically requires a combinatorial search over the whole space of submatrices; it is therefore preferable to use properties of the matrix that can be computed more easily, the coherence of the matrix being one of them [10,11]. Definition 5: The coherence of a matrix A, denoted


for all $x \in \Sigma_k$. If a matrix A satisfies the restricted isometry property of order 2k, then from eq. (10) it can be interpreted that the matrix preserves the distance between any pair of sparse vectors, which has fundamental implications with respect to noise. Lemma 1 [9]: Suppose A satisfies the restricted isometry property of order k with constant $\delta_k$, and let $\gamma$ be a positive integer. Then A satisfies the restricted isometry property of order $\gamma \lfloor k/2 \rfloor$ with constant smaller than $\gamma\, \delta_k$. The lower bound in the restricted isometry property is a necessary condition for the recovery of all signals x from their measurements Ax, for the same reasons that the null space condition is necessary.



that it is possible to recover a signal using any sufficiently large subset of the measurements. Another benefit is that, in practice, one is frequently interested in measuring a signal that is not sparse in its original basis but is sparse with respect to some basis Φ, so the product AΦ is required to satisfy the restricted isometry property. With a deterministic construction it would be necessary to take the basis Φ into account when constructing A, whereas a random construction avoids this consideration.
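As a quick illustration of the random route (the sizes and the leading constant are arbitrary choices, not prescriptions from this article), the following sketch draws a Gaussian sensing matrix and reports its empirical coherence, the quantity defined next in eq. (14):

import numpy as np

rng = np.random.default_rng(1)
n, k = 512, 10
m = int(np.ceil(4 * k * np.log(n / k)))       # m ~ O(k log(n/k)); constant 4 is arbitrary

A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # Gaussian sensing matrix

G = A / np.linalg.norm(A, axis=0)             # normalize columns
gram = np.abs(G.T @ G)                        # pairwise column inner products
np.fill_diagonal(gram, 0.0)
print("m =", m, "coherence =", gram.max(), "1/sqrt(m) =", 1 / np.sqrt(m))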

by $\mu(A)$, is the largest absolute value of the inner product between any pair of columns $a_i$, $a_j$ of A, defined as illustrated in eq. (14):

$\mu(A) = \max_{1 \le i < j \le n} \frac{|\langle a_i, a_j \rangle|}{\|a_i\|_2\, \|a_j\|_2}$   (14)




The Welch bound, which is a lower bound on the coherence of a matrix, is shown in [12-14]: $\mu(A) \ge \sqrt{\frac{n - m}{m(n - 1)}}$, from which it can be seen that if $n \gg m$ the lower bound is approximately $1/\sqrt{m}$. Lemma 2 [11]: For any matrix A,

$\mathrm{spark}(A) \ge 1 + \frac{1}{\mu(A)}$   (15)


4. Signal recovery algorithms

(15)

La demostración se realiza en [11]. Por medio de la combinación del teorema 1 (ec. (7)) con el lema 2 (ec. (15)) se puede proponer la siguiente condición sobre la matriz de sensado de tal manera que se garantiza unicidad de la correspondencia de la señal dispersa con sus medidas . Teorema 5[10]:Si

1

(16)

Luego, para cada vector de mediciones ∈ existe al menos una señal ∈ tal que . El teorema 5 (ec. (16)) junto con el límite de Welch proporcionan un límite superior al nivel de dispersión que garantiza unicidad utilizando coherencia, el cual se indica en la ec. (17).

(17)

De acuerdo a lo presentado hasta el momento en este artículo, puede indicarse que la construcción de la matriz de sensado es una tarea dispendiosa en la cual es de vital importancia satisfacer las propiedades que garantizan la recuperación de la señal con la menor cantidad posible de mediciones, por ello, en [5,15-17] se han estudiado técnicas de construcción de matrices de sensado determinísticas, las cuales satisfacen las propiedades de recuperación, pero requieren de gran cantidad de medidas, por ejemplo en [16], la técnica de construcción propuesta requiere que log , afortunadamente, estas limitaciones pueden ser superadas aleatorizando la construcción de la matriz de sensado. Por ejemplo, si es una matriz aleatoria de tamaño cuyas entradas son independientes e idénticamente distribuidas, con distribuciones continuas, luego 1 con probablidad 1. De manera más significativa, matrices aleatorias satisfacen la propiedad de isometría restringida con alta probabilidad, si las entradas siguen una distribución gausiana, Bernoulli o en general cualquier distribución subgausiana y requieren una cantidad de medidas log / [18]. La utilización de matrices aleatorias tiene otros beneficios adicionales tales como que las medidas son democráticas, lo cual significa,

4.1. Algoritmos basados en búsqueda codiciosa Los algoritmos basados en búsqueda codiciosa se basan en aproximaciones sucesivas de los coeficientes y del soporte de la señal, identificando de manera iterativa el soporte de la señal hasta alcanzar un criterio de convergencia, o por obtener de manera alternativa una aproximación mejorada de la señal dispersa en cada iteración considerando la falta de correspondencia con los datos medidos. El algoritmo de Búsqueda de Correspondencia Ortogonal 207


Astaiza-Hoyos et al / DYNA 82 (192), pp. 203-210. August, 2015.

(Orthogonal Matching Pursuit - OMP) [19] es tal vez el más simple al interior de esta categoría, este algoritmo inicia buscando la columna de la matriz de sensado mayormente correlacionada con las medidas, luego se repite este paso, correlacionando las columnas con la señal residual, la cual se obtiene restando las contribuciones de una estimación parcial de la señal del vector de mediciones original. El algoritmo 1 muestra la estructura formal de OMP, donde denota el operador de umbralización dura sobre , el cual establece en cero todas las entradas de excepto aquellas entradas de mayor magnitud. El criterio de parada puede consistir de un límite sobre la cantidad de iteraciones donde también se limite el número de entradas no cero del vector , o un criterio que verifique que . Algoritmo 1. Búsqueda de Correspondencia Ortogonal OMP Entradas: Matriz de sensado , vector de medidas . Inicializar: 0, , ∅. For 1; ∶ 1 hasta que se cumpla el criterio de parada ← % Estimación de la señal de forma residual ← ∪ % Agrega la mayor entrada residual del soporte. | ← , | ← 0 % Actualiza la estimación de la señal. ← % Actualiza la medición residual. Fin For Retorna:

Cuando se trata con mediciones imperfectamente dispersas (medidas contaminadas por ruido), se considera el modelo de sensado dado por la ec. (20). (20) Donde es la matriz de sensado de tamaño , ∈ es el vector de mediciones y ∈ es el vector de ruido, por lo tanto, las entradas de son las medidas de contaminadas por ruido, por lo tanto el problema de optimización de la ec. (19) se convierte en

min‖ ‖

min ‖ ‖

Los algoritmos basados en relajación convexa son otro enfoque fundamental de aproximación dispersa, los cuales reemplazan la función combinatoria ℓ con la función convexa ℓ , lo cual, convierte el problema combinatorio en un problema de optimización convexa, concretamente [35], la norma ℓ es la función convexa más aproximada a la función ℓ . El enfoque natural, desde el cual se aborda el problema de aproximación dispersa es encontrar la solución dispersa de , resolviendo

(22)

4.3. Búsqueda codiciosa o relajación convexa Los dos métodos permiten garantizar la recuperación de la señal original siempre que se cumpla con una cantidad adecuada de medidas, sin embargo, dada la naturaleza de la búsqueda codiciosa que construye y corrige progresivamente el soporte de la solución, su desempeño sobre señales que siguen una distribución que decae rápidamente es mejor que los algoritmos basados en relajación convexa, dado que requieren una menor cantidad de mediciones; por otra parte, los algoritmos basados en recuperación convexa tienden a tener un desempeño más consistente por lo cual, la calidad de la solución tiende a afectarse menos por la velocidad de caída de la distribución de la señal. La búsqueda codiciosa puede extenderse al modelo base de CS, en el cual, las señales no solamente son dispersas, sino que sus soportes se encuentran restringidos a ciertos modelos, como por ejemplo, estructuras en árbol; algunos de

(18)

Sin embargo el problema planteado en la ec. (18) es un problema combinatorio el cual en general es NP-Hard [36], y el simple hecho de trabajar con todos los soportes de cardinalidad se convierte en un problema computacional intratable, al reemplazar la norma ℓ por la norma ℓ el problema se convierte en el planteado en la ec. (19).

min‖ ‖

(21)

Los dos algoritmos o problemas de minimización son equivalentes en el sentido que la solución de un problema es también la solución del otro siempre que los parámetros y se establezcan adecuadamente; sin embargo la correspondencia entre y no se conoce de manera a priori, dependiendo de la aplicación y la información disponible, uno de los dos puede ser más fácil de obtener, lo que hace que uno de los dos problemas enunciados en las ec. (21) y ec. (22) sea preferido sobre el otro. El seleccionar adecuadamente o es un problema que es muy importante en la práctica, por lo cual, principios generales de selección incluyen 1.) Realizar presunciones estadísticas sobre y e interpretar las ec. (21) o ec. (22) como por ejemplo estimaciones de máximo a posteriori. 2.) Validación cruzada (realizar la reconstrucción a partir de un subconjunto de medidas y validar la recuperación sobre el otro subconjunto de medidas) y 3.) Encontrar los mejores valores de los parámetros sobre un conjunto de datos de prueba y utilizar estos parámetros sobre los datos actuales con ajustes apropiados, para compensar las diferencias en escala, rango dinámico, dispersión y nivel de ruido.

4.2. Algoritmos basados en relajación convexa

O de manera equivalente

OMP garantiza recuperación de la señal en exactamente iteraciones para señales exactamente dispersas libres de ruido para matrices que satisfacen la propiedad de isometría restringida [33] y matrices que cumplen la propiedad de coherencia [34]; en los dos casos, los resultados solo aplican cuando log .

min‖ ‖

(19) 208
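Como ilustración de la formulación penalizada de la ec. (22), un esbozo de umbralización suave iterativa (ISTA) en Python; el tamaño del paso y el valor de λ son supuestos ilustrativos, no parte del artículo:

```python
import numpy as np

def ista(Phi, y, lam, num_iter=200):
    """Esbozo de ISTA para min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1."""
    n = Phi.shape[1]
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(Phi, 2) ** 2   # paso <= 1/L, con L = ||Phi||_2^2
    for _ in range(num_iter):
        g = Phi.T @ (Phi @ x - y)           # gradiente del término cuadrático
        z = x - t * g                       # paso de gradiente
        # operador de umbralización suave asociado a la norma l1
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
    return x
```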


4.3. Búsqueda codiciosa o relajación convexa

Los dos métodos permiten garantizar la recuperación de la señal original siempre que se cuente con una cantidad adecuada de medidas. Sin embargo, dada la naturaleza de la búsqueda codiciosa, que construye y corrige progresivamente el soporte de la solución, su desempeño sobre señales cuyos coeficientes siguen una distribución que decae rápidamente es mejor que el de los algoritmos basados en relajación convexa, dado que requiere una menor cantidad de mediciones; por otra parte, los algoritmos basados en relajación convexa tienden a tener un desempeño más consistente, por lo cual la calidad de la solución tiende a verse menos afectada por la velocidad de caída de la distribución de la señal. La búsqueda codiciosa puede extenderse al CS basado en modelos, en el cual las señales no solamente son dispersas, sino que sus soportes se encuentran restringidos a ciertos modelos, como por ejemplo estructuras en árbol; algunos de estos modelos son difíciles de representar como modelos de optimización convexa. Por otra parte, es difícil extender la búsqueda codiciosa a funciones objetivo o funciones de energía generales, mientras que la optimización convexa acepta naturalmente funciones objetivo y restricciones de muchos tipos, especialmente si son convexas. En consecuencia, si un problema puede ser resuelto por los dos métodos, se debe examinar la velocidad de caída de la distribución de la señal.

5. Marco de referencia metodológico

Finalmente, se presenta en resumen el marco de referencia metodológico propuesto para realizar un procesamiento de señales eficiente basado en CS; la Fig. 3 ilustra las etapas y las consideraciones derivadas de las secciones anteriores. El marco de referencia metodológico propuesto se valida en [37].

Figura 3. Marco de Referencia Metodológico Propuesto. Fuente: Autores

6. Conclusiones

En este artículo se presentan los resultados más importantes para la definición de una metodología y una primera aproximación hacia un marco de referencia
metodológico para el procesamiento de señales basado en sensado compresivo, permitiendo de esta manera realizar la representación dispersa de señales reales, el diseño adecuado de matrices de sensado y la selección eficiente del algoritmo de reconstrucción adecuado, dado que la literatura actual es lo suficientemente extensa como para convertir en un gran problema el cómo abordar el procesamiento de señales desde este nuevo paradigma. Finalmente, se considera que CS es una tecnología promisoria, la cual contribuirá al mejoramiento significativo de la forma en que actualmente se realiza el procesamiento de señales, reduciendo los costos computacionales y, con ello, optimizando la utilización de otros recursos tales como la energía. Igualmente, se considera que aún es un campo en el cual se puede generar gran cantidad de aportes en el diseño de matrices de sensado aleatorias y seudoaleatorias que permitan una implementación práctica real de bajos requerimientos computacionales, además del diseño de algoritmos de recuperación de señal eficientes que, articulados a las matrices de sensado, permitan minimizar la cantidad de mediciones requeridas para garantizar estabilidad y exactitud en la recuperación de la señal.



Referencias

[1] Candès, E.J., Romberg, J. and Tao, T., Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52 (2), pp. 489-509, 2006. DOI: 10.1109/TIT.2005.862083
[2] Donoho, D.L., Compressed sensing. IEEE Trans. Inform. Theory, 52 (4), pp. 1289-1306, 2006. DOI: 10.1109/TIT.2006.871582
[3] DeVore, R.A., Nonlinear approximation. Cambridge University Press, 1998.
[4] Davenport, M., Random observations on random observations: Sparse signal acquisition and processing. PhD Thesis, Rice University, 2010.
[5] Indyk, P., Explicit constructions for compressed sensing of sparse signals. In: Proc. ACM-SIAM Symp. Discrete Algorithms (SODA), San Francisco, CA, 2008.
[6] Donoho, D. and Elad, M., Optimally sparse representation in general (nonorthogonal) dictionaries via L1 minimization. Proc. Natl. Acad. Sci., 100 (5), pp. 2197-2202, 2003.
[7] Cohen, A., Dahmen, W. and DeVore, R., Compressed sensing and best k-term approximation. J. Am. Math. Soc., 22 (1), pp. 211-231, 2009.
[8] Candès, E. and Tao, T., Decoding by linear programming. IEEE Trans. Inform. Theory, 51 (12), pp. 4203-4215, 2005. DOI: 10.1109/TIT.2005.858979
[9] Needell, D. and Tropp, J., CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26 (3), pp. 301-321, 2009.
[10] Donoho, D. and Elad, M., Optimally sparse representation in general (nonorthogonal) dictionaries via L1 minimization. Proc. Natl. Acad. Sci., 100 (5), pp. 2197-2202, 2003.
[11] Tropp, J. and Gilbert, A., Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inform. Theory, 53 (12), pp. 4655-4666, 2007. DOI: 10.1109/TIT.2007.909108
[12] Rosenfeld, M., In praise of the Gram matrix. In: The Mathematics of Paul Erdös II, R.L. Graham and J. Nešetřil, Eds. Springer Berlin Heidelberg, pp. 318-323, 1997.
[13] Strohmer, T. and Heath, R., Grassmannian frames with applications to coding and communication. Appl. Comput. Harmon. Anal., 14 (3), pp. 257-275, 2003.
[14] Welch, L., Lower bounds on the maximum cross correlation of signals. IEEE Trans. Inform. Theory, 20 (3), pp. 397-399, 1974. DOI: 10.1109/TIT.1974.1055219
[15] Bourgain, J., Dilworth, S., Ford, K., Konyagin, S. and Kutzarova, D., Explicit constructions of RIP matrices and related problems. Duke Math. J., 159 (1), pp. 145-185, 2011.
[16] DeVore, R., Deterministic constructions of compressed sensing matrices. J. Complex., 23 (4), pp. 918-925, 2007.
[17] Haupt, J., Applebaum, L. and Nowak, R., On the restricted isometry of deterministically subsampled Fourier matrices. In: 44th Annual Conference on Information Sciences and Systems (CISS), pp. 1-6, 2010. DOI: 10.1109/CISS.2010.5464880
[18] Achlioptas, D., Database-friendly random projections: Johnson-Lindenstrauss with binary coins. In: Special issue on PODS 2001 (Santa Barbara, CA), J. Comput. Syst. Sci., 66, pp. 671-687, 2003.
[19] Mallat, S. and Zhang, Z., Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process., 41 (12), pp. 3397-3415, 1993. DOI: 10.1109/78.258082
[20] Chen, S.S., Donoho, D.L. and Saunders, M.A., Atomic decomposition by basis pursuit. SIAM Rev., 43 (1), pp. 129-159, 2001.
[21] Wipf, D. and Rao, B., Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process., 52 (8), pp. 2153-2164, 2004.
[22] Schniter, P., Potter, L.C. and Ziniel, J., Fast Bayesian matching pursuit: Model uncertainty and parameter estimation for sparse linear models. In: Information Theory and Applications Workshop, pp. 326-333, 2008.
[23] Chartrand, R., Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Processing Lett., 14 (10), pp. 707-710, 2007. DOI: 10.1109/LSP.2007.898300
[24] Miller, A., Subset selection in regression. CRC Press, 2002.

[25] Becker, S., Bobin, J. and Candès, E., NESTA: A fast and accurate first-order method for sparse recovery. SIAM J. Imag. Sci., 4 (1), pp. 1-39, 2011. DOI: 10.1137/090756855
[26] Becker, S., Candès, E. and Grant, M., Templates for convex cone problems with applications to sparse signal recovery. Math. Prog. Comp., 3 (3), pp. 165-218, 2011. DOI: 10.1007/s12532-011-0029-5
[27] Ben-Haim, Z., Eldar, Y.C. and Elad, M., Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process., 58 (10), pp. 5030-5043, 2010. DOI: 10.1109/TSP.2010.2052460
[28] Donoho, D., Elad, M. and Temlyakov, V., Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52 (1), pp. 6-18, 2006. DOI: 10.1109/TIT.2005.860430
[29] Lobato, A., Ruiz, R., Quiróga, J. y Recio, A., Recuperación de señales dispersas utilizando orthogonal matching pursuit (OMP). Revista Ingeniería e Investigación, 29 (2), pp. 112-118, 2009.
[30] Gilbert, A., Li, Y., Porat, E. and Strauss, M., Approximate sparse recovery: Optimizing time and measurements. In: Proc. ACM Symp. Theory of Comput., Cambridge, MA, USA, 2010.
[31] Gilbert, A., Strauss, M., Tropp, J. and Vershynin, R., One sketch for all: Fast algorithms for compressed sensing. In: Proc. ACM Symp. Theory of Comput., San Diego, CA, USA, 2007. DOI: 10.1145/1250790.1250824
[32] Candès, E. and Tao, T., Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52 (12), pp. 5406-5425, 2006. DOI: 10.1109/TIT.2006.885507
[33] Davenport, M. and Wakin, M., Analysis of orthogonal matching pursuit using the restricted isometry property. IEEE Trans. Inform. Theory, 56 (9), pp. 4395-4401, 2010. DOI: 10.1109/TIT.2010.2054653
[34] Tropp, J., Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory, 50 (10), pp. 2231-2242, 2004. DOI: 10.1109/TIT.2004.834793
[35] Gribonval, R. and Nielsen, M., Highly sparse representations from dictionaries are unique and independent of the sparseness measure. Aalborg University, Tech. Rep., 2003.
[36] Natarajan, B.K., Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24, pp. 227-234, 1995.
[37] Astaiza, E., Jojoa, P. and Bermudez, H., Compressive spectrum sensing based on random demodulation for cognitive radio. Submitted to IEEE Trans. Prof. Commun., 2015.

E. Astaiza-Hoyos, es Ing. en Electrónica en 1998, de la Universidad del Cauca, Colombia; MSc. en Ingeniería, área de Telecomunicaciones, en 2008, de la Universidad del Cauca, Colombia. Actualmente es candidato a Dr. en Ciencias de la Electrónica. Es profesor asociado del programa de Ingeniería Electrónica de la Universidad del Quindío, Colombia, e investigador del Grupo de Investigación en Telecomunicaciones - GITUQ, de la Universidad del Quindío, Colombia. Sus áreas de interés: comunicaciones inalámbricas, sensado de espectro.

P.E. Jojoa-Gómez, es Ing. en Electrónica en 1993, de la Universidad del Cauca, Colombia; MSc. en Ingeniería, área de Sistemas Electrónicos, en 1999, de la Universidade de São Paulo, Brasil, y Dr. en Ingeniería, área de Sistemas Electrónicos, en 2003, de la Universidade de São Paulo, Brasil. Es coordinador del Grupo de Investigación en Nuevas Tecnologías de Telecomunicaciones. Áreas de interés: procesamiento digital de señales y filtraje adaptativo.

H.F. Bermúdez-Orozco, es Ing. en Electrónica en 2000, y MSc. en Electrónica y Telecomunicaciones, en 2010, de la Universidad del Cauca, Colombia.
Actualmente es estudiante de Doctorado en Ingeniería Telemática. Es profesor asociado del programa de Ingeniería Electrónica y Coordinador del Grupo de Investigación en Telecomunicaciones – GITUQ, de la Universidad del Quindío, Colombia. Sus áreas de interés: Comunicaciones inalámbricas, sistemas radiantes y propagación, modelado de tráfico de servicios telemáticos.



Settlement analysis of friction piles in consolidating soft soils Juan F. Rodríguez-Rebolledo a, Gabriel Y. Auvinet-Guichard b & Hernán E. Martínez-Carvajal a,c a

Department of Civil & Environmental Engineering, University of Brasilia, Brazil. jrodriguezr72@hotmail.com b Instituto de Ingeniería, Universidad Nacional Autónoma de México, Mexico. gauvinetg@iingen.unam.mx. c Facultad de Minas, Universidad Nacional de Colombia, Sede Medellín, Medellín, Colombia. hmartinezc30@gmail.com Received: December 8th, 2014. Received in revised form: April 10th, 2015. Accepted: April 30th, 2015.

Abstract
The paper shows how axisymmetric finite element numerical models can be used to optimize the design of friction pile foundations in an environment that is prone to regional subsidence. The study considers friction piles in typical Mexico City soft clays that are subjected to external loads and soil consolidation due to variations in piezometric conditions. The constitutive models used to numerically simulate the behavior of the clays vary from a basic elastic perfectly-plastic model to a critical state model that is able to account for the anisotropic yielding behavior of Mexico City clay. The simulations consider the long-term behavior of the internal piles within a large pile group. Keywords: friction piles; pile group; regional subsidence; numerical modeling; anisotropy; constitutive models; Mexico City clay.

Análisis de asentamientos de pilas de fricción en suelos blandos compresibles Resumen Este artículo muestra cómo modelos axisimétricos implementados en códigos de elementos finitos pueden ser usados para optimizar el diseño de cimentaciones con pilotes de fricción en ambientes susceptibles al hundimiento regional. El estudio considera pilotes de fricción instalados en el suelo blando de la Ciudad de México, sometidos a cargas externas y a la consolidación del suelo debida a variaciones en las condiciones piezométricas. Los modelos constitutivos empleados para simular el comportamiento del suelo compresible varían desde un elástico-plástico perfecto hasta uno basado en la teoría del estado crítico que toma en cuenta la plastificación anisotrópica de las arcillas de la Ciudad de México. Las simulaciones consideran el comportamiento a largo plazo de un grupo de pilotes supuesto infinito. Palabras clave: pilotes de fricción, grupos de pilotes, hundimiento regional, modelado numérico, anisotropía, modelos constitutivos, arcilla de la Ciudad de México.

1. Introduction Commonly, three main types of foundations (Fig. 1a) are used in the lacustrine zone of Mexico City [1]: box-type shallow foundation for small buildings, box-type foundation with friction piles for intermediate height buildings [2,3] and point-bearing piles for very tall or heavy structures. Friction piles transfer most of their load to the soil through skin friction. In the soft soils of Mexico City, friction piles are mainly used as a complement to box-type foundations to reduce settlements. Infrequently, they have been used to ensure the overall foundation stability (bearing capacity design). In all cases, a complex interaction between soil, piles and structure can be expected, because the foundations are
submitted to the effects of a double process of consolidation: firstly due to the load of the structure and, secondly, due to pore pressure drawdown associated to intense pumping of water from the subsoil in the urban area. Since the end of the 19th century, Mexico City Lake Zone has suffered a regional subsidence, which, in some areas, has exceeded 10m. In such conditions, point-bearing piles foundations can lead to apparent protrusion of the structure, as shown in Fig. 1b, with loss of confinement of the upper part of the piles and damage to neighboring structures. On the other hand, when not properly designed, friction pile foundations can either settle excessively or, on the contrary, protrude from the subsiding surrounding soil (Fig. 2). Some field tests were conducted on piles in a

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 211-220. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.47752



consolidating soil subjected to pore pressure drawdown [4-7], but only a few of them involve friction piles. Analytical methods for the design of friction pile foundations in these difficult conditions have been proposed [2,3,8]. The finite element method (FEM) has been increasingly used for the analysis and design of pile foundations subjected to negative skin friction [9-14]. Numerical modeling allows soil behavior complexities, soil-structure interaction and changes in the pore pressure regime to be taken into account. The aim of this paper is to demonstrate how 2D finite element analyses can be used to optimize the design of friction piles in an environment that is prone to regional subsidence. Firstly, some background information regarding typical Mexico City soil conditions and regional subsidence is provided. Next, parametric analyses on the behavior of friction piles subjected to external loads and soil consolidation due to variations in piezometric conditions are presented. The constitutive models used vary from a basic elastic perfectly-plastic Mohr-Coulomb model to S-CLAY1, a critical state model with plastic anisotropy [15,29,30]. The principles of numerical modeling and the constitutive models used are briefly described, followed by the numerical analyses and a discussion of the results.

[Figure 1 panels: a) box-type, box-type with friction piles and point-bearing piles founded in clay over the hard layer; b) the same foundation types after regional subsidence, relative to the original ground level.]
Figure 1. a) Common types of foundations used in the Lake Zone of Mexico City. b) Behavior of foundations when subjected to regional subsidence in the Lake Zone of Mexico City. Source: The authors.

Figure 2. Friction pile foundation protruding from the subsiding surrounding soil (Mexico City underground, Line 4). Source: The authors.

2. Background

2.1. Typical soil conditions

The urban area of Mexico City can be divided into three geotechnical zones [17]: Foothills (Zone I), Transition (Zone II) and Lake (Zone III), as defined in the present building code [1]. In the Foothills Zone, very compact and heterogeneous volcanic soils and lava flows are found. These materials contrast with the highly compressible soft soils of the Lake Zone. Generally, in between, a Transition Zone is found where clayey layers of lacustrine origin alternate with erratically distributed sandy alluvial deposits. The main difficulties for the foundations of high buildings are encountered in the Lake and Transition zones.

Until the end of the 18th century, the valley of Mexico was a closed basin with a number of shallow lakes, including the Texcoco and Xaltocan lakes. The valley became an open basin when the Nochistongo cut, a channel 7 km long and up to 50 m deep (dug by hand between 1637 and 1789), was completed. Progressively, the lakes were drained, mainly through the Tequisquiac and Deep Drainage (Emisor Central) tunnels, and practically disappeared. A large part of the city was built on lacustrine sediments, which are highly plastic soft clays interbedded with layers of silt, sand and sandy gravels of alluvial origin. In Fig. 3, a typical Lake Zone soil profile is presented, which was obtained from the SIG [18]. Three main clayey layers are referred to as the Upper Clay Formation (UCF), the Lower Clay Formation (LCF) and the Deep Deposits (DD). The clays of the Upper Clay Formation are separated from the Lower Clay Formation by the Hard Layer (HL), a sandy silt or clay stratum, between 2 and 3 m thick, typically lying at a depth of 29 to 35 m.


[Figure 3 panels: water content w (%), cone resistance qc (MPa), SPT blow count N and undrained shear strength cu (kPa) vs. depth (m), through the Dry crust, Crust, Upper clay formations #1-#3, Hard Layer, Lower Clay Formation and Deep Deposits.]

Figure 3. Typical soil profile in the north part of the Lake Zone in a newly developed urban area (data from [18]). Source: The authors.

Generally, a dry crust of desiccated soils and/or anthropic fill, several meters thick, is found above the Upper Clay Formation; this crust has been influenced by drying and wetting cycles due to historic fluctuations of the water table. As seen in Fig. 3, Mexico City clay has a very high water content (corresponding to a void ratio as high as 10), a low cone resistance in the CPT and a practically nil blow count in the SPT. Undrained shear strength increases with depth, with values of around 20 kPa in the upper part of the UCF and 80 kPa at the contact with the HL. This slightly overconsolidated material (the overconsolidation ratio varies from 1 to 1.3 [18]) is highly compressible.

2.2. Regional subsidence

Due mainly to the exploitation of ground-water to supply the growing population through pumping wells, Mexico City has suffered regional subsidence that in some locations exceeds 10 m. Recent data show that the rate of subsidence tends to decrease in certain areas. However, in newly developed urban zones, such as the eastern and western parts of Texcoco Lake and the former Xochimilco and Chalco lakes, the consolidation process is only in its first stage and the rate of subsidence can be as high as 0.4 m/year. Pore pressure drawdown due to the pumping of water in deep pervious strata (Hard Layer and Deep Deposits) leads to the typical piezometric profiles that are shown in Fig. 4. The resulting settlements of the Upper Clay Formation considerably affect the behavior of pile foundations.

3. Numerical modeling

3.1. General considerations

A group of friction piles, connected to an infinitely large rigid slab, is considered (Fig. 5a). The simulations deal with the
long-term behavior of the internal piles. The tributary area [19] of each internal pile is hexagonal, but it can be idealized as a circular unit cell (Fig. 5a). The radius of the tributary area is then equal to half of the center-to-center spacing between piles, R = S/2. The problem can then be modeled as axisymmetric (Fig. 5b). Given that the main interaction between piles and soil takes place within the Upper Clay Formation, the less compressible layers below the Hard Layer have not been included in the analyses. The problem was discretized using a finite element mesh with more than 2,500 fifteen-node triangular elements (total density of about 55 elements/m²). A mesh densification (from 350 to 750 elements/m²) along the pile shaft and below the rigid slab and the pile cap had to be considered [31]. Lateral boundaries were fixed in the horizontal direction, and the bottom boundary in both directions (Fig. 5b). Sensitivity studies showed that the mesh was dense enough to give accurate results and that it was not necessary to use interface elements at the soil-pile contact.

Parametric studies were performed for a foundation slab on friction piles, which were assumed to be 25 m long (finishing 4 m above the Hard Layer) and 0.5 m in diameter. The weight of a typical five- to ten-floor building was considered by applying a load of 75 kPa directly to the rigid slab. The analyses were developed in three stages. In Stage 1, the pile was placed (no installation effects were considered, only the pile weight, as discussed below) and the 75 kPa load was applied on the slab; in Stage 2 the first pore pressure drawdown was introduced (Fig. 6), simulating typical future piezometric conditions (Fig. 4a); and in Stage 3 a second pore pressure drawdown was considered (Fig. 6), representing an extreme, but possible, future piezometric condition in Mexico City (Fig. 4b). The conditions at the end of each stage of the analysis were assessed after the excess pore pressures due to the applied load and the pore pressure drawdown had completely dissipated.
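As a quick numeric illustration of the unit-cell geometry just described (pile diameter taken from the text; the spacing S/D = 5 is an example value):

$$ S = 5D = 5 \times 0.5\ \mathrm{m} = 2.5\ \mathrm{m}, \qquad R = \frac{S}{2} = 1.25\ \mathrm{m} $$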



Rodríguez-Rebolledo et al / DYNA 82 (192), pp. 211-220. August, 2015. a)

b)

Efective Stresses, kPa 0

100

200

0

300

Dry crust

200

300 W.T.

Crust

Dry crust Crust

5

5

Preconsolidation pressure

UCF1

UCF1 10

10

UCF2 Depth, m

UCF2 Depth, m

100

0

0

15

Initial effective stress 20

Pore Pressure, kPa

15

20

’z

UCF3

UCF3

25

25

30

Hard layer 30

Hard layer

Figure 4. Pore water pressure profiles in (a) east and (b) north areas of the Lake Zone, Palacio Legislativo and Torre de Tlatelolco, respectively (data from [18]). Source: The authors.
Figure 6. Initial effective stress and pore water pressure profiles for the numerical model. Source: The authors.

[Figure 5 sketch: hexagonal pile group idealized as circular unit cells of radius R, with S = 2R; rigid slab on 25 m piles in clay, ending 4 m above the hard layer; axisymmetric model.]
Figure 5. Model of a pile raft. a) Infinitely large pile raft supported by friction piles. b) Numerical model (not to scale). Source: The authors, modified from [9].

Pile driving in very soft soils results in soil disturbance: changes in soil structure and excess pore pressures around the piles. These effects are difficult to model with finite element analyses, due to the excessive mesh distortions, and are hence ignored. However, given that the combined pile weight and the frictional force at the soil-pile interface far exceed the buoyancy force, the system is not in equilibrium after pile driving, and thus additional deformations occur simply due to the pile weight. These deformations are not negligible in Mexico City clay, as demonstrated by Auvinet and Hanel [7] by means of field observations. Therefore, the effect of pile weight has been taken into account in the numerical analyses as part of Stage 1. In general, the displacements induced by installation are almost negligible in comparison with the long-term displacements in Mexico City due to the regional subsidence.
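A rough order-of-magnitude sketch of this imbalance, using the pile geometry and the unit weights quoted in this paper (γpile = 24 kN/m³; mean γ = 11.5 kN/m³ for the Upper Clay Formation), shows the pile weight alone is roughly double the weight of the displaced soil:

$$ W_{pile} = \frac{\pi D^2}{4} L \gamma_{pile} = \frac{\pi (0.5)^2}{4}(25)(24) \approx 118\ \mathrm{kN} \; \gg \; \frac{\pi (0.5)^2}{4}(25)(11.5) \approx 56\ \mathrm{kN} $$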

3.2. Constitutive models

For simulation of the clay behavior (Upper Clay Formation), elastic perfectly-plastic and hardening elasto-plastic constitutive models were used. The elastic perfectly-plastic model is the Mohr-Coulomb (MC) model commonly used in industry. The hardening elasto-plastic models include two isotropic models: the Modified Cam-Clay (MCC) model [20] and the Soft Soil (SS) model. The Soft Soil model, available in the commercial version of the PLAXIS finite element code, was inspired by the MCC model, but its yield surface and failure surface have been decoupled in order to obtain a proper K0 prediction at normally consolidated states. Hence, the ellipsoidal yield surface is much steeper than the ellipse of the MCC model (see Fig. 7), and its shape is defined by the input of an estimated K0NC (used for calculating the value of the shape parameter M*). According to Ovando-Shelley et al. [21], the measured K0NC for Mexico City clay corresponds to Jaky's K0. The failure condition in the SS model can be described by a Mohr-Coulomb failure line; consequently, on the 'dry' side of the failure line the model reverts to a non-linear elastic-perfectly plastic model with zero dilatancy. On the 'wet' side of the failure line, the SS model predicts volumetric hardening similarly to the MCC model, but this is expressed in terms of modified compression and swelling indices, and hence at large strains the results deviate from those of the MCC model. The S-CLAY1 model [15] is a critical state model which accounts for initial and plastic-strain-induced anisotropy through an inclined initial yield surface and a rotational hardening law that describes the evolution of anisotropy as a function of plastic strains. Based on data by Diaz-Rodríguez et al. [16], S-CLAY1 is able to accurately represent the extremely high anisotropy at yielding that Mexico City clay exhibits [15].
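For reference, the surfaces sketched in Fig. 7 can be written in their standard triaxial (p'-q) forms (after [20] and [15]; p'm denotes the size of the yield surface, as labeled in Figs. 10 and 11, and α its inclination, with α = 0 recovering the isotropic MCC ellipse):

$$ f_{MCC} = q^2 - M^2\,p'\,(p'_m - p') = 0, \qquad f_{S\text{-}CLAY1} = (q - \alpha p')^2 - (M^2 - \alpha^2)\,p'\,(p'_m - p') = 0 $$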



Table 1. Values for conventional soil parameters.
Layer | Depth (m) | γ (kN/m³) | φ' (°) | ν' | M | λ | κ | E' (kPa)
Dry Crust | 0-2 | 14.5 | 55 | 0.25 | -- | -- | -- | 4,825
Crust | 2-5 | 12.0 | 47 | 0.25 | -- | -- | -- | 3,444
UCF1 | 5-10 | 11.4 | 43 | 0.30 | 1.77 | 3.35 | 0.204 | 261
UCF2 | 10-15 | 11.1 | 40 | 0.30 | 1.64 | 3.30 | 0.230 | 209
UCF3 | 15-29 | 11.5 | 40 | 0.30 | 1.64 | 2.73 | 0.145 | 325
HL | 29-31 | 18.0 | 45 | 0.33 | -- | -- | -- | 10,000
γ = unit weight; φ' = effective friction angle at critical state; ν' = Poisson's ratio; M = stress ratio at critical state in the p'-q plane; λ = slope of the normal compression line in the ln p'-v plane; κ = slope of the swelling line in the ln p'-v plane; E' = Young's modulus.
Source: The authors.

[Figure 7 sketch: yield surfaces of the SS, MCC and S-CLAY1 models in the p'-q plane.]
Figure 7. The yield surfaces for the hardening elasto-plastic models. Source: The authors.

3.3. Soil properties

The soil conditions correspond to a relatively new residential area, near the former Texcoco Lake. This is practically virgin terrain with no previous loading history. A notable amount of ground investigation data was available from the [18] database, including cone penetration tests, standard penetration tests mixed with Shelby sampling, piezometric measurements, and laboratory triaxial and consolidation tests. Fig. 3 shows the considered soil profile, and Tables 1 to 3 present the values of soil constants and state parameters of the materials considered in the analyses, based on ground investigation data and laboratory testing. Due to natural variability, there was some scatter in the values, and for each layer representative mean values have been chosen. The values for φ' and M were deduced from CD and CU triaxial tests published by Marsal and Mazari, Marsal and Salazar, Lo, Alberro and Hiriart, and Villa [17,22-25], and the values for e0, λ, κ, E' and the vertical pre-overburden pressure POP were obtained from one-dimensional consolidation tests and calibrated with triaxial consolidation results published by Villa and Diaz-Rodríguez et al. [25,16]. The values of K0NC for the input of the Soft Soil model were estimated from Jaky's formula [26]. The in situ values of K0 were derived from the equation presented by Mayne and Kulhawy [27]. As explained by Wheeler et al. [15], the values for the initial inclination of the yield surface (α0) and the soil constant β can be theoretically derived from the value of the friction angle at critical state (φ'). The value for μ has been taken as the smallest value reported in [15], given that experimental results that would enable the optimization of this value are not available; as shown by Wheeler et al. [15], S-CLAY1 model predictions are not particularly sensitive to it. The soil constant μ controls the absolute rate at which the inclination of the yield surface α heads toward its current target value, and β controls the relative effectiveness of plastic deviatoric strains and plastic volumetric strains in rotating the yield surface.

Table 2. Initial values for state parameters.
Layer | e0 | K0 | K0NC | α0 | POP (kPa)
Dry Crust | -- | 1.17 | -- | -- | --
Crust | -- | 0.82 | -- | -- | --
UCF1 | 8.7 | 0.44 | 0.32 | 0.73 | 25
UCF2 | 10.0 | 0.38 | 0.36 | 0.65 | 5
UCF3 | 7.2 | 0.38 | 0.36 | 0.65 | 10
HL | -- | 0.29 | -- | -- | --
e0 = initial void ratio; K0 = in situ lateral earth pressure at rest; K0NC = lateral earth pressure at rest for normally consolidated states; α0 = initial anisotropy; POP = vertical pre-overburden pressure.
Source: The authors.
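As a back-check of the tabulated values, following Jaky's formula [26] and the relations given by Wheeler et al. [15], for UCF1 (φ' = 43°, M = 1.77):

$$ K_0^{NC} = 1 - \sin\varphi' \approx 0.32, \qquad \eta_{K_0} = \frac{3\,(1 - K_0^{NC})}{1 + 2K_0^{NC}} \approx 1.24, \qquad \alpha_0 = \frac{\eta_{K_0}^2 + 3\eta_{K_0} - M^2}{3} \approx 0.73 $$

both of which match the values reported in Tables 1 and 2.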

Table 3. Values for additional soil constants for the S-CLAY1 model.
Layer | Depth from (m) | Depth to (m) | β | μ
Dry Crust | 0 | 2 | -- | --
Crust | 2 | 5 | -- | --
UCF1 | 5 | 10 | 0.97 | 20
UCF2 | 10 | 15 | 1.02 | 20
UCF3 | 15 | 29 | 1.02 | 20
HL | 29 | 31 | -- | --
β = relative effectiveness of plastic deviatoric strains and plastic volumetric strains in rotating the yield surface; μ = absolute rate at which anisotropy heads toward its current target value.
Source: The authors.

Fig. 6 shows the initial effective vertical stress and pore-water pressure profiles that are assumed in the numerical analyses. The actual state of the pore water pressure (Fig. 6b) has been obtained from piezometers installed in thin sandy layers within the Upper Clay Formation and the Hard Layer. The water table is assumed to be at a depth of 2 m and to stay
constant. A significant pressure reduction of about 90 kPa with respect to the hydrostatic distribution was observed in the Hard Layer due to groundwater extraction. With these values and with the measured soil density of each layer, the initial effective stresses were evaluated (Fig. 6a). The preconsolidation pressure was estimated from 1-D consolidation tests (experimental results are shown with open circles). Due to historical wet-dry cycles, the dry crust, which geologically is part of the Upper Clay Formation, shows evidence of significant overconsolidation, and its behavior may be simulated using the Mohr-Coulomb model. The Upper Clay Formation exhibits evident overconsolidation because of water table variations with depth during dry and rainy seasons. Fig. 6b also shows the pore pressure distributions considered for Stages 2 and 3 of the analyses. The future pore pressure drawdown considered in each stage was assumed to be equal to the difference between the actual and the assumed profiles. This assumption results in a consolidation of the UCF of about 5 cm/year over a 20 and 40 year period for Stages 2 and 3, respectively, due to regional subsidence. This is consistent with the consolidation rates measured with superficial and deep settlement benchmarks installed in the Lake Zone of Mexico City.

3.4. Results of numerical analysis

Fig. 8 presents the predicted effective vertical displacement ratio, Yeff/Ylimit, for different normalized pile spacings (S/D), for the three stages of the analysis. The variable Yeff refers to the vertical effective displacement predicted for the piled raft, which is defined as:

$$ Y_{eff} = Y_{sub} + Y_{total} \qquad (1) $$

where Ysub is the superficial subsidence induced by pore pressure drawdown (in absence of piles) and Ytotal is the total vertical displacement predicted for the piled raft. The variable Ylimit refers to the Mexico City Code serviceability limit state for isolated structures. Negative values of Yeff mean that the piled raft is settling and positive values correspond to apparent protruding. For the first stage of the analysis (i.e. the load of the building and pile weight), Ysub =0; therefore, the effective displacement is the same as the total one, Yeff = Ytotal. It was observed that all models exhibit the same trend and the predicted settlements ratios exceed the serviceability limit state. For S/D < 3, the settlement increases due to the effect of the pile weight, since the unit weight of the soil (mean value of 11.5 kN/m3 for Upper Clay Formation) is partially substituted by the unit weight of the pile (24 kN/m3). The smallest settlements are observed for relative spacing ratio S/D in the 3-5 range. The Mohr-Coulomb model predicts the largest settlement ratio (Fig. 8). This type of model can hardly be considered as realistic since a constant value for elastic stiffness parameters is assumed, independently of the stress level and the evolution of the stress path. As shown in Fig. 9, the increase in the settlements predicted by the different constitutive models for S/D>5 is due to the increase in the number of stress points at shear

As shown in Fig. 9, the increase in the settlements predicted by the different constitutive models for S/D > 5 is due to the increase in the number of stress points at shear failure and undergoing volumetric hardening. The Mohr-Coulomb and the Soft Soil models predict a localized zone of failure below the pile tip, not detected by the critical state models (MCC and S-CLAY1), and the predicted extent of full mobilization of friction along the shaft varies between the different models.

For center-to-center spacing smaller than 5D, the results obtained with all hardening elasto-plastic models are almost identical (Fig. 8), which demonstrates that the input values have been consistently calibrated. Beyond this point, some differences between the models can be seen. The Soft Soil (SS) model predicts a much larger plastic zone than the MCC and S-CLAY1 models (Fig. 9). This is due to the differences in the predicted stress paths during the loading process. To illustrate this, two stress points have been selected for inspection for a relative pile spacing of S/D = 8: one next to the pile shaft near the tip (Fig. 10) and one just beneath the pile tip (Fig. 11). In addition to the stress paths, the yield and failure surfaces have been outlined.

Fig. 10 shows that the isotropic SS and MCC models predict totally different stress path directions when reaching failure/critical state. This is due to the Mohr-Coulomb failure assumption adopted in the SS model. For the S-CLAY1 model prediction, most of the stress path remains within the elastic region, but ultimately the critical state is reached, followed by some softening due to the effect of anisotropy. Fig. 11 shows that for the point situated at a depth of 0.8D beneath the pile tip, failure is predicted only by the SS model, as would be expected based on Fig. 9. The MCC model predicts an unrealistically high K0NC value (Fig. 11), whilst the S-CLAY1 prediction is consistent with measurements on Mexico City clay [21]. Despite the notable differences in the predicted stress paths in the soil near the pile shaft and tip, the differences in the predicted general behavior are not particularly significant (Fig. 8). It is concluded that for long-term analyses the compressibility behavior of the reinforced soil mass is more relevant than the pile-soil interaction. This explains why in this analysis soil-pile interface elements are not required, although they could actually be required for bearing capacity simulations.

Fig. 8.b shows the predicted displacement ratios for Stage 2 of the analyses (i.e. when the first pore pressure drawdown was introduced). Again, all models show the same trend, and the values of Yeff/Ylimit are closer to zero and to the serviceability limit state than those obtained for the first stage. This is because the soil has been reinforced with piles, and consequently the settlements due to the pressure drawdown are smaller in the reinforced area than in the surrounding soil. The optimum pile center-to-center spacing, corresponding to predictions that are closest to the serviceability limit state, is within the 2D to 4D range.

Further regional subsidence due to an additional decrease in pore water pressure (Stage 3 of the analyses) induces a substantial change in the predicted behavior of the pile foundation, as shown in Fig. 8.c. For example, the Soft Soil model suggests that for a pile spacing of 3D the foundation protrudes from the surrounding soil by more than twice Ylimit. On the other hand, for S/D = 10, Yeff/Ylimit equals -2.3: the foundation settles and exceeds 2.3 times Ylimit. An optimum solution would be reached with -1 < Yeff/Ylimit < 1.
Overall the results by the hardening elasto-



[Figure 8 panels: a) Stage 1, b) Stage 2, c) Stage 3; Yeff/Ylimit vs. S/D (0 to 11) for the MC, SS, MCC and S-CLAY1 models, with protrusion above and settlement below the serviceability limit state band.]

Figure 8. Predicted effective vertical displacement ratio for different S/D values and for the three analyzed stages. Source: The authors.

Overall, the results by the hardening elasto-plastic models suggest that the optimum pile spacing corresponds to values of S/D between 5.5 and 8, while the Mohr-Coulomb model suggests a range of considerably lower values, between 3 and 6. In addition, the results presented in Fig. 8 indicate that in order to reach an optimum foundation design in a subsiding environment it is important to make an accurate prediction of the future pore pressure drawdown at the site. Unfortunately, it is difficult to develop a reliable estimation of the future piezometric conditions at the site due to the changing requirements of Mexico City's water supply. Therefore, design should be based on an intermediate solution, which minimizes the inconveniences of both settlement and apparent protrusion. In the example presented, according to the S-CLAY1 model, a relative spacing of S/D = 5 appears to be adequate, since no protrusion or settlement greater than Ylimit was predicted for Stages 2 and 3. The choice of constitutive model has an equally important role for a realistic optimum foundation design. Given the non-linearity of soft soil response, the Mohr-Coulomb model is unlikely to give accurate predictions. In areas prone to regional subsidence due to groundwater extraction, accounting for stress-dependent stiffness is extremely important.
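As a sketch of the screening logic implied by this discussion (the arrays of predicted ratios below are illustrative placeholders, not the paper's results):

```python
import numpy as np

# Screening of relative pile spacings: keep those whose predicted
# Yeff/Ylimit stays inside the serviceability band [-1, 1] for the
# governing stages. The ratio arrays are illustrative placeholders.
s_over_d = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10])
ratio_stage2 = np.array([0.9, 0.6, 0.4, 0.2, -0.1, -0.4, -0.8, -1.2, -1.6])
ratio_stage3 = np.array([2.1, 1.5, 1.0, 0.8, 0.3, -0.3, -0.9, -1.6, -2.3])

ok = (np.abs(ratio_stage2) <= 1.0) & (np.abs(ratio_stage3) <= 1.0)
print("Acceptable S/D values:", s_over_d[ok])
```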



[Figure 9 panels: depth (m, 0 to -30) profiles around the pile for S/D = 4 and S/D = 8, for the MC, SS, MCC and S-CLAY1 models, marking stress points at shear failure and stress points experiencing volumetric hardening.]

Figure 9. Stress points at shear failure and experiencing volumetric hardening, predicted by different constitutive models for S/D=4 and S/D=8 (Stage 1). Source: The authors.

p’ = 75.3kPa q = 123.1kPa

100 Z = 22.5m

q, kPa

80

100

Z = 25m

90

80

80

70

70

60

60

60

50

50

50 30

40

50

60

70

80

90

Z = 22.5m Z = 25m

90

70

p’ = 26kPa q = 42.5kPa

S-CLAY1 M = 1.64 0 = 0.65 p’m = 72.4kPa

Z = 22.5m

q, kPa

Z = 25m

90

MCC M = 1.64 p’m = 95.6kPa

PILE

PILE

100

q, kPa

110

110

SS M* = 2.14 p’m = 83.9kPa

PILE

110

30

40

50

p', kPa

60

70

80

p', kPa

90

30

40

50

60

70

80

90

p', kPa

Figure 10. Stress paths for the hardening elasto-plastic models at the shaft near pile tip (Stage 1). Source: The authors.

Some differences between the hardening elasto-plastic model predictions can be observed for Stages 2 and 3. The magnitude of the total displacements becomes larger, and hence the minor differences between the models in terms of stress-strain calculations become more significant, such as the slightly different considerations for computing volumetric strains implicit in the Soft Soil model as compared to MCC and S-CLAY1 (namely the use of the modified indices λ* and κ* instead of λ and κ). The shape of the yield surface is another factor (Fig. 7). The Soft Soil model predicts the smallest displacements (i.e. the largest protrusion) because it has the largest elastic domain in the region of interest, i.e. between the K0 line and the critical state/failure line (Fig. 7). The sizes of the yield surfaces coincide initially at a stress ratio corresponding to one-dimensional loading (defined by Jaky's K0) in order to predict yield at the same value of vertical effective stress. The MCC and the S-CLAY1 models show some differences in their predictions because anisotropy influences the calculation of horizontal stresses [30].




p’ = 239kPa q = 323kPa

p’ = 195kPa q = 314kPa

130 120

q, kPa

110 100

Z = 25m

80

120

110 100 90

PILE

90

Z = 25.4m

70

Z = 25m

80

60

70

80

90

100

110

120

110 100 90

Z = 25m

80

Z = 25.4m

Z = 25.4m 70

70

50

S-CLAY1 M = 1.64 0 = 0.65 p’m = 84.0kPa

130

q, kPa

120

MCC M = 1.64 p’m = 110.9kPa

PILE

130

q, kPa

140

140

SS M* = 2.14 p’m = 97.7kPa

PILE

140

p’ = 279kPa q = 396kPa

50

60

70

80

p', kPa

90

100

110

p', kPa

120

50

60

70

80

90

100

110

120

p', kPa

Figure 11. Stress paths for the hardening elasto-plastic models below the pile tip (Stage 1). Source: The authors.

4. Conclusions

The behavior of friction pile foundations in typical Mexico City soft clays that are subjected to external loads and soil consolidation due to variations in piezometric conditions was studied using numerical analyses. Vertical displacements of the foundation for different relative spacings S/D (center-to-center spacing vs. pile diameter) were predicted using different constitutive models: an elastic perfectly-plastic Mohr-Coulomb model and three hardening elasto-plastic models. The Modified Cam Clay and Soft Soil models are isotropic models, and the S-CLAY1 model is an anisotropic model that is able to account for the high anisotropy in yielding that is typical of Mexico City clay.

For the three stages of the analyses, all hardening elasto-plastic models exhibit the same trend. The Mohr-Coulomb model, however, predicts notably larger settlements than the other models, but these predictions are unlikely to be realistic since a constant Young's modulus has to be assumed.

Some differences between the predictions by the hardening elasto-plastic models were observed when pore pressure drawdown, triggering regional subsidence, was included. This is because when the magnitude of the total displacements is large, the differences between the models become significant. Specific considerations in each model for computing volumetric strains, the shape of the yield surface and the effect of initial anisotropy can explain these differences.

The models show that, due to soil consolidation, a neutral level separating positive skin friction from negative skin friction develops on the pile shaft. The position of this level depends more on pile spacing than on the magnitude of the pore pressure drawdown. For close pile spacing, the neutral level is near the pile tip, and the piles can protrude from the consolidating surrounding soil as a result of regional subsidence.

According to the results obtained, to reach an optimum foundation design in a subsiding environment it is important to make an accurate prediction of the future pore pressure drawdown at the site. Unfortunately, due to changing hydrogeological conditions, it is difficult to develop a reliable estimation. Therefore, an optimum foundation design should be based on an intermediate solution minimizing the inconveniences of both settlement and protrusion, and on a constitutive model which is the most representative idealization of the soil behavior. For further studies, a coupling between hydrogeological and soil mechanics numerical models could be envisioned to improve foundation design; this could be achieved through advanced hydrogeological research aimed at improving long-term pore-pressure drawdown predictions.

It is shown that axisymmetric finite element numerical models can be used to optimize the design of friction pile foundations in an environment that is prone to regional subsidence.

References

[1]
Gobierno del Distrito Federal, Normas técnicas complementarias para diseño y construcción de cimentaciones, Gaceta Oficial del Distrito Federal, 6th October, VII, N 103-BIS, Mexico City, 2004, pp. 11-3. [2] Zeevaert, L., Compensated friction-pile foundation to reduce the settlements of buildings on the highly compressible volcanic clay of Mexico City, Proc. 4th ICSMFE, London, 2, pp. 81-86, 1957. [3] Zeevaert, L., Foundations problems related to ground surface subsidence in Mexico City, ASTM (STP), 322, pp. 57-66, 1963. [4] Plomp, A. and Mierlo, W.C., Special problems, effects of drainage by well points on pile foundations, Proc. 2nd ICSMFE, Rotterdam, 4, pp. 141-148, 1948. [5] Endo, M., Minou, A., Kawasaki, T. and Shibata, T., Negative skin friction acting on steel pipe pile in clay, Proc. 7th ICSMFE, Mexico City, 2, pp. 85-92, 1969. [6] Fellenius, B.H. and Broms, B.B., Negative skin friction for long piles driven in clay, Proc. 7th ICSMFE, Mexico City, 2, pp. 93-98, 1969. [7] Auvinet, G. and Hanel, J.J., Negative skin friction on piles in Mexico City clay, Proc. 10th ICSMFE, Stockholm, 2, pp. 599-604, 1981. [8] Reséndiz, D. and Auvinet, G., Analysis of pile foundations in consolidating soils, Proc. 8th ICSMFE, Moscow, 2, pp. 211-218, 1973. [9] Auvinet, G. and Rodríguez, J.F., Friction piles in consolidating soils, Proc. 15th ICSMGE, Istanbul, 2, pp. 843-846, 2001. [10] Auvinet, G. and Rodríguez, J.F., Modeling of friction piles in consolidating soils, Proc. Int. Deep Found. Cong., ASCE, Orlando, USA, pp. 224-235, 2002.

219


[11] Comodromos, E.M. and Bareka, S.V., Evaluation of negative skin friction effects in pile foundations using 3D nonlinear analysis. Comp. and Geotech. J., 32, pp. 210-221, 2005. DOI: 10.1016/j.compgeo.2005.01.006
[12] Jeong, S., Kim, S. and Briaud, J.L., Analysis of downdrag on pile groups by the finite element method. Comp. and Geotech. J., 21 (2), pp. 143-161, 1997. DOI: 10.1016/S0266-352X(97)00018-9
[13] Jeong, S., Lee, J. and Lee, C.J., Slip effect at the pile-soil interface on dragload. Comp. and Geotech. J., 31, pp. 115-126, 2004. DOI: 10.1016/j.compgeo.2004.01.009
[14] Lee, C.J. and Charles, W.W., Development of downdrag on piles and pile groups in consolidating soil. J. Geotech. and Geoenv. Engng., 130 (9), pp. 905-914, 2004. DOI: 10.1061/(ASCE)1090-0241(2004)130:9(905)
[15] Wheeler, S.J., Näätänen, A., Karstunen, M. and Lojander, M., An anisotropic elastoplastic model for soft clays. Can. Geotech. J., 40, pp. 403-418, 2003. DOI: 10.1139/t02-119
[16] Diaz-Rodríguez, J.A., Leroueil, S. and Aleman, J.D., Yielding of Mexico City clay and other natural clays. J. Geotech. Engng. ASCE, 118 (7), pp. 981-995, 1992. DOI: 10.1061/(ASCE)0733-9410(1992)118:7(981)
[17] Marsal, R.J. and Mazari, M., El subsuelo de la Ciudad de México. Tesis, Facultad de Ingeniería, UNAM, Mexico, 1959.
[18] SIG. Sistema de Información Geográfica. Laboratorio de Geoinformática, Instituto de Ingeniería, UNAM. [Online]. 2012. Available at: http://sitios.iingen.unam.mx/geoinformatica/.
[19] Schlosser, F., Jacobsen, H.M. and Juran, I., Le renforcement des sols (1). Revue Française de Géotechnique, 29, pp. 7-32, 1984.
[20] Roscoe, K.H. and Burland, J.B., On the generalized stress-strain behavior of "wet" clay. In: Heyman, J. and Leckie, F.A., Eds., Engineering Plasticity, Cambridge University Press, pp. 535-609, 1968.
[21] Ovando-Shelley, E., Trigo, M. and López-Velázquez, O., The value of K0 in Mexico City clay from laboratory and field tests. Proc. 11th PCSMFE, Foz de Iguazú, Brasil, pp. 433-439, 1999.
[22] Marsal, R.J. and Salazar, J., Pore pressure and volumetric measurement in triaxial compression tests. Proc. Research Conference on Shear Strength on Cohesive Soils, ASCE, Boulder, Colorado, pp. 965-983, 1960.
[23] Lo, K.Y., Shear strength properties of a sample of volcanic material of the Valley of Mexico. Geotechnique, 12, pp. 303-319, 1962. DOI: 10.1680/geot.1962.12.4.303
[24] Alberro, J. and Hiriart, G., Resistencia a largo plazo de las arcillas del Valle de México. Instituto de Ingeniería Series, UNAM, No. 317, 1973.
[25] Villa, R., Aplicación del principio de proporcionalidad natural para describir el comportamiento esfuerzo-deformación de la arcilla del Valle de México sometida a ensayes de compresión triaxial drenados y no drenados, en estado preconsolidado. MSc. Eng. Thesis, Programa de Maestría y Doctorado en Ingeniería, UNAM, Mexico, 2004.
[26] Jaky, J., The coefficient of earth pressure at rest. J. Society Hungarian Arch. and Eng., pp. 355-358, 1944.
[27] Mayne, P.W. and Kulhawy, F.H., K0-OCR relationships in soil. J. Geotech. Engng. ASCE, 108 (GT6), pp. 851-872, 1982.
[28] Auvinet, G. and Díaz-Mora, C., Programa de computadora para predecir movimientos verticales de cimentaciones. Instituto de Ingeniería Series, UNAM, No. 438, 1981.
[29] Karstunen, M. and Koskinen, M., Plastic anisotropy of soft reconstituted clays. Can. Geotech. J., 45, pp. 314-328, 2008.
DOI: 10.1139/T07-073
[30] Karstunen, M., Krenn, H., Wheeler, S.J., Koskinen, M. and Zentar, R., The effect of anisotropy and destructuration on the behaviour of Murro test embankment. Int. J. of Geomech., 5, pp. 87-97, 2005. DOI: 10.1061/(ASCE)1532-3641(2005)5:2(87)
[31] Diaz-Segura, E.G., Método simplificado para la estimación de la carga última de pilotes sometidos a carga vertical axial en arenas. DYNA, 80 (179), pp. 109-115, 2013.

J.F. Rodriguez-Rebolledo, graduated as a BSc. in Civil Engineering in 1996, obtained an MSc. degree in Soil Mechanics in 2001, and a PhD degree (with honors) in Civil Engineering in 2011, all from the Universidad Nacional Autónoma de México - UNAM, Mexico. As part of his PhD project, in 2008, he undertook a one-year academic placement at the University of Strathclyde, Scotland, U.K. From 1995 to 2013, he worked for the UNAM in geotechnical research projects related to numerical modeling, deep foundations and tunnels. From 1997 to 2013 he participated as a geotechnical consultant in civil engineering projects across Mexico. Currently, he is an associate professor at the Department of Civil and Environmental Engineering of the Universidade de Brasília, Brazil. From 2008 to 2013, he participated in international research projects funded by the European Commission. ORCID.ORG/0000-0003-2929-7381

G.Y. Auvinet-Guichard, graduated in 1964 from the Ecole Spéciale des Travaux Publics de Paris, France. He received his PhD degree in Engineering in 1986, from the Universidad Nacional Autónoma de México - UNAM, Mexico. He is a faculty member of the UNAM Engineering School Postgraduate Division and head of the Geotechnical Computing Laboratory of the Institute of Engineering, UNAM, Mexico. He has dedicated his research to the solution of geotechnical problems with emphasis on deep foundations in consolidating soft soils and on the application of probabilistic and geostatistical methods to Civil Engineering. He has been President of the Mexican National Society for Soil Mechanics (1991-1992) and Vice-President for North America of the International Society for Soil Mechanics and Geotechnical Engineering (2009-2013). ORCID.ORG/0000-0003-4674-1659

H.E. Martinez-Carvajal, received a BSc. Eng. in Geological Engineering in 1995 from the Universidad Nacional de Colombia in Medellín, Colombia, an MSc. degree in Soil Mechanics in 1999 from the Universidad Nacional Autónoma de México - UNAM, Mexico, and a Dr. degree in Geotechnics in 2006 from the University of Brasilia, Brazil. From 1993 to 1999 he worked for civil engineering consulting companies as a geologist. He is currently an associate professor in the Civil and Environmental Engineering Department at the University of Brasilia and professor at the Facultad de Minas of the Universidad Nacional de Colombia in Medellín, Colombia. ORCID.ORG/0000-0001-7966-1466


Área Curricular de Ingeniería Civil Oferta de Posgrados

Especialización en Vías y Transportes Especialización en Estructuras Maestría en Ingeniería - Infraestructura y Sistemas de Transporte Maestría en Ingeniería – Geotecnia Doctorado en Ingeniería - Ingeniería Civil Mayor información: E-mail: asisacic_med@unal.edu.co Teléfono: (57-4) 425 5172


Influence of biomineralization on a profile of a tropical soil affected by erosive processes Yamile Valencia-González a, José de Carvalho-Camapum b & Luis Augusto Lara-Valencia c a

a Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. yvalenc0@unal.edu.co
b Faculdade de Tecnologia, Universidade de Brasília, Brasília, Brasil. camapu@unb.br
c Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. lualarava@unal.edu.co

Received: April 3rd, 2014. Received in revised form: February 16th, 2015. Accepted: March 4th, 2015.

Abstract
Most of the soils of tropical countries, especially those in South America and Africa, are affected by erosion processes. As a result, researchers in the field of geotechnical engineering, specifically in the context of "biotechnology" or "bioengineering", have been investigating the use of microorganisms to improve the geotechnical properties and stability of soils. Using this approach, this work was developed to analyze the effect of applying a calcium-carbonate-precipitating nutrient to the native microbiota on the mitigation of erosion processes in a tropical soil profile. The methodology used in this research consisted of collecting undisturbed samples from a soil profile located in an area affected by erosion processes. In these samples, the native bacteria were identified, and it was verified that the nutrient B4 induces the precipitation of calcium carbonate. Subsequently, the soil samples were characterized physically, chemically, mineralogically and mechanically in their natural state and after the addition of the nutrient. The tests were performed at least fifteen days after treatment with the nutrient. It was concluded that the use of the nutrient B4 enabled the native bacteria present in the soil to precipitate calcium carbonate, resulting in improvements in the physical, chemical, mineralogical and mechanical properties of the soil, which allowed for the mitigation of the erosion processes that characterize the soil profile studied. The conclusions derived from the study apply not only to other tropical soil profiles subjected to erosion but also to the improvement of the geotechnical behavior of soils in general.
Keywords: Biomineralization, erosion processes, tropical soil, calcium carbonate.

Influencia de la biomineralización en un perfil de suelo tropical afectado por procesos erosivos Resumen Grande parte de los suelos de países de clima tropical, en especial de América del Sur y de África, son afectados por procesos erosivos. Pero son pocas las investigaciones en el área de geotecnia, específicamente, en el ámbito de la “biotecnología” o “bioingeniería”, que buscan, a partir de la utilización de microrganismos, mejorar las propiedades geotécnicas y de estabilidad de los suelos. Con ese enfoque, fue desarrollada esta investigación, buscando analizar el efecto que tiene la aplicación de un nutriente precipitador de carbonato de calcio, sobre la microbiota nativa en la mitigación de procesos erosivos en un perfil de suelo tropical. La metodología usada en este trabajo consistió en la toma de muestras alteradas e inalteradas de un perfil de suelo localizado en una zona afectada por procesos erosivos. En tales muestras fueron identificadas las bacterias nativas y, se determinó si el nutriente B4 induce la precipitación de carbonato de calcio. Posteriormente, fueron caracterizadas física, química, mineralógica y mecánicamente las muestras de suelo en estado natural y después de la adición del medio nutritivo. En este caso, los ensayos fueron realizados en un mínimo de quince días después del tratamiento con el nutriente. Se concluyó, que el uso del nutriente B4 posibilita a las bacterias nativas presentes en el suelo a precipitar carbonato de calcio causando una mejoría en las propiedades físicas, químicas, mineralógicas y mecánicas de los suelos estudiados, posibilitando, por tanto, en términos generales, la mitigación de los procesos erosivos que marcan el perfil de suelo estudiado. Las conclusiones originadas de este estudio se aplican no solo a otros perfiles de suelos tropicales sometidos a procesos erosivos, como a la mejoría del comportamiento geotécnico de suelos de un modo general. Palabras clave: Biomineralización, proceso erosivo, suelo tropical, carbonato de calcio.

1. Introduction

Tropical soils are influenced by climate, geology, hydrology and human action [1], generating a wide variety of profiles with

significant differences in their geological and geotechnical properties, which favor erosion. As a result, researchers in geotechnical engineering, in the context of "biotechnology" or "bioengineering", have investigated the use of microorganisms

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 221-229. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.42942



in improving the geotechnical properties and stability of soils. Most studies in biotechnology have been directed toward understanding the behavior of bacteria, their interaction with many of the minerals found in nature, or their application in the decontamination of soils and water resources and the restoration of sculptures [2]. However, in general, there have been few studies focused on the influence of bacteria on soil behavior and engineering properties [3]. In this study, we applied knowledge acquired in other areas related to bacteria to provide solutions to engineering problems using the technique of biomineralization. This technique consists of stimulating microorganisms with a nutrient to precipitate chemical compounds, thus forming minerals. In the present study, the precipitation of calcium carbonate by native bacteria in the soil, aimed at improving the physical properties, mechanical parameters and structural stability of the soil against erosion processes, was examined. This study represents an advance in the development of biotechnology applications for solving engineering problems, specifically those of erosion, resulting in a significant economic and environmental impact. In most cases, to prevent, halt or restore areas affected by erosion, control works are used, which often have high costs and/or an environmental impact that is not always negligible in other areas.

2. Soil microbiology

Research with microorganisms began in 1673 with van Leeuwenhoek but, according to [4], only gained momentum in 1857 with the studies of Louis Pasteur. However, soil microbiology had its first major contribution in the late nineteenth century, with the isolation of rhizobia. The study of soil microbes is vast and still largely unexplored, as soil is an unusual habitat in relation to other terrestrial habitats due to its heterogeneous, complex and dynamic nature. Within this complex habitat, five main groups of microorganisms exist: bacteria, actinomycetes, fungi, algae and protozoa. The bacteria are notable because they form the group of microorganisms of greatest abundance and diversity among species. The bacterial community is estimated at approximately 10⁸ to 10⁹ CFU per gram of soil [5]. Many variables influence soil bacteria, such as moisture, aeration, temperature, organic matter, acidity and the presence of inorganic nutrients. Other variables, such as crops, the season and depth, are relevant, and their combination makes them determinant [6]. The role that microorganisms play in the soil may be closely related to the soil structure. Due to the similar size of microorganisms and soil components, particularly bacterial cells and clay particles, there is the possibility of adhesion or linking of microbial cells to clay particles. The nature of this adhesion is mainly chemical and mediated by cementing substances. The rate of adherence of microorganisms to soil mineral particles is often quite considerable and may reach 90% of the population. This adhesion depends on the diameter of the particles: the smaller the diameter, the stronger the adhesion. Adhesion also depends on the nature of the microorganism and the type of clay mineral. For example, Gram-positive bacteria more easily adhere to clay minerals with predominantly negative surface charges, such as kaolinite, whereas Gram-negative bacteria attach to positively charged minerals, such as gibbsite [5].

3. Biomineralization

Biomineralization is a common process in nature by which living organisms form precipitated crystalline or amorphous minerals [7]. Biomineralization occurs through chemical reactions between specific ions or compounds as a result of the metabolic activities of an organism under certain environmental conditions. "Carbonatogenesis", in which carbonates are precipitated, is a good example of biomineralization [8]. Photosynthesis is the most common form of microbial carbonate precipitation [9]. This process is based on the metabolic utilization of dissolved CO₂, which equilibrates with HCO₃⁻ and CO₃²⁻ around the bacterium (eq. 1). This reaction induces a shift in the bicarbonate equilibrium and, subsequently, an increase in the pH of most media (eq. 2 and 3).

2HCO₃⁻ ⇌ CO₂ + CO₃²⁻ + H₂O    (1)

CO₂ + H₂O → CH₂O + O₂    (2)

CO₃²⁻ + H₂O → HCO₃⁻ + OH⁻    (3)

Another type of process that precipitates carbonates is the sulfur cycle, specifically the reduction of sulfate. The reaction begins with the dissolution of gypsum (CaSO₄·2H₂O/CaSO₄, eq. 4). In these circumstances, the organic matter can be consumed by sulfate-reducing bacteria, and sulphide and metabolic CO₂ are released (eq. 5).

CaSO₄·2H₂O → Ca²⁺ + SO₄²⁻ + 2H₂O    (4)

2CH₂O + SO₄²⁻ → H₂S + 2HCO₃⁻    (5)

The removal of the hydrogen sulphide produced (H₂S) and the resulting increase in pH are prerequisites for the precipitation of carbonates [10]. Another form of precipitation involves the nitrogen cycle and, more specifically, the ammonification of amino acids, nitrate reduction and the degradation of urea [11]. These three mechanisms have in common the production of metabolic CO₂ and ammonia (NH₃), which, in the presence of calcium ions, results in the production of ammonium ions and the release of carbonate ions. The reaction takes place according to eq. 6 and 7:

CO(NH₂)₂ + H₂O → CO₂ + 2NH₃    (6)

2NH₃ + CO₂ + H₂O → 2NH₄⁺ + CO₃²⁻    (7)

In complex natural environments, different metabolisms can combine to produce precipitation. Under such conditions, precipitation of calcium carbonate (CaCO₃) can occur from the equilibrium reaction shown in eq. 8 if soluble calcium ions (Ca²⁺) are present.

Ca²⁺ + CO₃²⁻ → CaCO₃    (8)
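To make the pH dependence behind eqs. (1)-(3) and (8) concrete, the short sketch below (our own illustration, not part of the original study) computes closed-system carbonate speciation as a function of pH, using standard dissociation constants at 25°C (pK1 = 6.35, pK2 = 10.33), which are assumed values here:

# Illustrative sketch (ours): fraction of dissolved inorganic carbon present as
# CO2(aq), HCO3- and CO3^2- at a given pH (pK1 = 6.35, pK2 = 10.33 at 25 C, assumed).
PK1, PK2 = 6.35, 10.33

def carbonate_fractions(ph):
    """Return the molar fractions (CO2(aq), HCO3-, CO3^2-) at the given pH."""
    h = 10.0 ** (-ph)
    k1, k2 = 10.0 ** (-PK1), 10.0 ** (-PK2)
    denom = h * h + k1 * h + k1 * k2
    return h * h / denom, k1 * h / denom, k1 * k2 / denom

for ph in (6.0, 7.0, 8.0, 9.0):
    co2, hco3, co3 = carbonate_fractions(ph)
    print(f"pH {ph}: CO2 {co2:.4f}, HCO3- {hco3:.4f}, CO3^2- {co3:.5f}")

Between pH 6 and pH 9 the CO₃²⁻ fraction grows by roughly three orders of magnitude, which is why the alkaline microenvironments created by the bacteria favor the precipitation of eq. (8).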



It is noteworthy that the production of CO₃²⁻ from bicarbonate (HCO₃⁻) in water is highly dependent on pH and that growth occurs under alkaline conditions. In summary, the precipitation of calcium carbonate occurs easily in alkaline environments abundant in calcium (Ca²⁺) and carbonate (CO₃²⁻) ions [8]. The main role of bacteria in the process has been linked to their ability to create alkaline environments and to increase the concentration of dissolved inorganic carbon (DIC) through various physiological activities [9]. Specific species of bacteria are capable of producing different amounts, forms and types of carbonate crystals (e.g., calcite, aragonite and dolomite) from the same synthetic medium [9]. The study of the bioprecipitation process by microorganisms, especially of calcium carbonate, began at the end of the nineteenth century with Nadson (1899-1903). Several genera and species of bacteria were isolated from the environment and systematically studied, contributing to our knowledge of the bioprecipitation processes in the production of calcium carbonate. The research conducted in [8] and [12] showed that bacteria such as Bacillus have the ability to precipitate CaCO₃ when incubated in B4, which is a nutrient composed of yeast extract, glucose and calcium acetate, at a pH of 8 and incubation temperatures of 25 to 30°C. The precipitation begins to occur after 15 days. Upon observation of the precipitates for 25 days, the number and size of the crystals were found to increase with time. The first study of the applicability of the calcium carbonate bioprecipitation process, called "bioremediation", was based on protection against the deterioration of materials used in construction, such as ornamental stones and concrete, using a nutrient medium containing bacteria [8]. Subsequent techniques, such as "bio-induration" or "biosealing", have consisted of sealing or plugging the pores of the soil through the application of nutrients and microorganisms capable of producing a biofilm, which were used to reduce the permeability of soils [13,14]. Finally, the technique of "biostabilization" improves soil properties through the addition of microorganisms and nutrients [15]. In this study, instead of adding microorganisms to the material as has been done previously, we stimulate the native microorganisms present in the tropical soil to precipitate calcium carbonate minerals through the addition of a single nutrient. This technique has a reduced environmental impact as compared to other techniques commonly used to prevent, halt or reclaim areas affected by erosion.

4. Materials and methods

The study region was an area affected by erosion, specifically by gullies. The selected profile was located 20 m from the left edge of a gully in the town of Santa Maria, Brasília, Federal District, Brazil. The study was limited to a depth of 6 m, with the profile divided into five soil layers: 0 to 1.5 m, 1.5 to 2.5 m, 2.5 to 3.5 m, 3.5 to 4.5 m and 4.5 to 6.0 m. According to [1], this profile is located in the geological unit "Sandy Metarritmito" from the Paranoá group. The "Sandy Metarritmito" is composed of

fine to medium interbedded quartzite, siltstones and slates. The maximum thickness of this structure can reach 150 m [16]. The gully geomorphologically lies within the "Contagem Plateau”. This plateau is the highest in the Federal District with an average elevation of more than 1200 m. The area has a climate with temperatures below 18°C in the coldest month and greater than 22°C in the warmest month. The average annual relative humidity is 60%. 4.1. Microbiological analyses

For the microbiological identification of the bacteria present in the soil profile, first, a 10 g sample of each layer was taken and placed in a sterile plastic container with 90 mL of peptone water to make a 1:10 dilution. Then, 1 mL of this mixture was taken and homogenized with 9 mL of 0.1% buffered water to obtain a dilution of 1:100. Each dilution was incubated in a bacteriological incubator at 25°C for 24 hours and streaked with a platinum loop onto plates containing agar and 5% sheep blood, for the growth and isolation of the bacterial population. The identification of bacterial colonies was carried out through various biochemical tests: GRAM test, 3% KOH test, catalase test, oxidase test, TSI test, oxidation/fermentation test, indole test, urease test, mannitol fermentation, nitrate reduction test, production of dihydrolases and decarboxylases of amino acids, gelatin hydrolysis test, Methyl Red test, Voges-Proskauer test, utilization of citrate, motility test, fermentation of lactose, sucrose, glucose, maltose, arabinose and trehalose, and hydrolysis of esculin. After the bacteria in the soil were identified, the samples were placed on plates containing the nutrient medium B4, which induces the precipitation of calcium carbonate. The medium components were 15 g of calcium acetate, 4 g of yeast extract, 5 g of glucose and 12 g of agar in 1 L of distilled water. In order for the bacteria to promote the precipitation of calcium carbonate, it was important to maintain the pH at about 8. To achieve this pH, NaOH was incorporated into the described solution [8]. The plates with the bacteria in B4 medium were incubated for at least 15 days at 25°C [12]. The precipitates were observed using a scanning electron microscope (SEM) to determine whether they contained calcium. After verification of the precipitation mechanism of the bacteria found in the soil, the nutrient was added to blocks of undisturbed soil so that the precipitation would occur in situ. The blocks of undisturbed soil from each layer were placed inside a container so that they would not crumble at the time of the nutrient addition. At the top of the blocks, holes about 0.4 cm in diameter and 2 cm deep were made with a 10 cm spacing, and the nutrient was introduced into the holes with a syringe. The nutrient filled 60% of the empty spaces present in each block of undisturbed soil. Subsequently, the blocks were kept for a minimum of 15 days within a "moisture chamber" (25°C and 60% relative humidity), similar to the average conditions of the erosion site selected for the study.
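Two back-of-envelope checks (our own, not the authors' procedure; the block dimensions are assumed and the calculation ignores the water already occupying part of the voids) help put the treatment in numbers: the nutrient volume needed to fill 60% of the voids of a block, and the maximum CaCO3 that one liter of B4 could yield from its 15 g of calcium acetate:

# Back-of-envelope sketch (ours); the block size below is an assumed example.
def nutrient_volume_liters(block_volume_m3, void_ratio, fill_fraction=0.60):
    """Nutrient volume (L) needed to fill `fill_fraction` of a block's void space."""
    porosity = void_ratio / (1.0 + void_ratio)  # n = e / (1 + e)
    return fill_fraction * porosity * block_volume_m3 * 1000.0  # m^3 -> L

# Example: a 0.25 x 0.25 x 0.20 m block from the 1 m layer (e = 1.855, Table 1).
print(f"nutrient: ~{nutrient_volume_liters(0.25 * 0.25 * 0.20, 1.855):.1f} L")

# Upper bound on CaCO3 from the calcium in 1 L of B4 (15 g/L calcium acetate),
# assuming every Ca2+ ends up as CaCO3 (eq. 8). Molar masses in g/mol.
M_CA_ACETATE, M_CACO3 = 158.17, 100.09
print(f"max CaCO3: ~{15.0 / M_CA_ACETATE * M_CACO3:.1f} g per liter of nutrient")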




The influence of the treatment on the properties of the soil was evidenced by running physical, chemical, mineralogical and mechanical tests on samples with and without the addition of the nutrient. These tests were performed according to the methodologies proposed by the Brazilian Association of Technical Standards (ABNT) and, in their absence, according to the American Society for Testing and Materials (ASTM).

4.2. Physical properties

For the physical characterization of the soil profile, tests were carried out to determine the moisture content, the specific weight of the grains, the Atterberg limits, the grain size with and without dispersant, and the MCT (Miniature Compressed Tropical) expedite classification tests for tropical soils [17]. The void ratio and the degree of saturation were determined for the soil samples.

4.3. Chemical analyses

These tests consisted of measurements of the pH in water and in KCl solution at a ratio of 10:25 (soil:water/solution) and the determination of the levels of calcium (Ca), sodium (Na), potassium (K), magnesium (Mg), H + Al (total acidity), organic matter (OM), organic carbon (C), nitrogen (N), phosphorus (P) and sulfur (S), as well as the cation exchange capacity (CEC), base saturation and aluminum saturation. These tests were performed according to EMBRAPA standards [18].

4.4. Mineralogical analyses

The mineralogical characterization was limited to the identification of the minerals in the layers of the soil profile by X-ray diffraction, complemented with semi-qualitative chemical analysis obtained by energy dispersive spectroscopy (EDS) using SEM.

4.5. Structure definition

Deeply weathered tropical soils, such as those used in this study, cannot be thought of as individual soil particles, because the particles are grouped together, forming aggregates that directly influence the physical properties and the mechanical and hydraulic behavior of the soil. Chemical additives can affect the structural stability of these aggregates, but biomineralization may give them greater stability by producing structural changes in the ground. The structural changes caused by biomineralization may explain certain soil behaviors. The structural characterization of the soil was made from the visual analysis of images obtained by SEM.

4.6. Mechanical behavior

Analyzing the mechanical behavior of soils is very important in defining the influence of biomineralization on the erosion stabilization of slopes. For its determination, the following parameters were determined: compressive strength, indirect tensile strength, resistance to direct shear, total and matrix suction, dual-oedometer densification and permeability. Tests were also conducted to estimate the susceptibility to erosion, such as the pinhole test and the disaggregation test.

5. Results

5.1. Microbiological characterization

From the different biochemical tests applied to the entire profile, a total of 43 types of isolated bacteria were identified, among them Bacillus spp., Pasteurella spp., Actinobacillus spp., Pseudomonas spp., Staphylococcus spp., Alcaligenes spp., Rhodococcus spp., Corynebacterium spp., Rhodococcus equi, Francisella tularensis and Enterobacter cloacae, with Bacillus spp. as the most common bacteria. In the microbiological analysis, it was found that the amount of bacteria present increased with depth. This may be linked to the increase in water pH with the depth of the soil (1 m: 5.6, 2 m: 5.8, 3 m: 5.9, 4 m: 6.1 and 5 m: 5.9). According to [19], the environment most favored by bacteria is one in which the pH is close to 8. The precipitation occurred in Petri dishes containing the B4 medium. The samples, coated with gold, were then pasted onto a sample holder (hence the presence of the gold peak in the determined chemical composition) and observed by SEM, which confirmed the presence of calcium in all of the samples (black spot in Fig. 1). Importantly, when Bacillus was present, the greatest precipitation of calcium was generated (larger precipitates in the Petri dishes), which confirms that Bacillus-type bacteria are excellent for the process of biomineralization [12].

Figure 1. Scanning electron microscopy and chemical analysis (chemical composition of the black spot) of the Bacillus spp. sample. Source: The authors

5.2. Physical characterization

The results of the physical characterization tests for the five layers of the natural soil profile are presented in Table 1, together with the results obtained for the soil treated with the B4 nutrient, which allows for an evaluation of the influence of the addition of B4 on the physical properties after a minimum of 15 days.


Table 1. Variation with depth of the physical indices of the soil profile, with and without treatment.

Without treatment
Depth (m) | γs (kN/m³) | e | Sr (%) | w (%) | TA (%) | wL (%) | PI (%) | MCT | USCS
1 | 26.67 | 1.855 | 50 | 34 | 69 | 50 | 18 | LG´ | MH
2 | 26.67 | 1.744 | 59 | 38 | 70.1 | 50 | 17 | LG´ | MH
3 | 26.77 | 1.109 | 74 | 30 | 62.6 | 48 | 16 | LA´-LG´ | ML
4 | 26.97 | 0.879 | 82 | 27 | 64.3 | 47 | 15 | LA´-LG´ | ML
5 | 26.97 | 0.851 | 87 | 27 | 63.4 | 44 | 13 | LA´-LG´ | ML

With treatment
Depth (m) | γs (kN/m³) | e | Sr (%) | w (%) | TA (%) | wL (%) | PI (%) | MCT | USCS
1 | 26.48 | 1.815 | 72 | 49 | 57.2 | 46 | 13 | LA´ | ML
2 | 26.48 | 1.719 | 81 | 52 | 56.8 | 45 | 11 | LA´ | ML
3 | 26.87 | 1.095 | 84 | 34 | 49.5 | 43 | 9 | LA´ | ML
4 | 27.16 | 0.866 | 92 | 30 | 28.7 | 37 | 7 | LA´ | ML
5 | 27.16 | 0.832 | 94 | 30 | 32.8 | 36 | 6 | LA´ | ML

γs = specific weight of grains, e = void ratio, Sr = degree of saturation, w = moisture, TA = total aggregate, wL = liquid limit, PI = plasticity index, MCT = Miniature Compressed Tropical classification, USCS = Unified Soil Classification System. Source: The authors


Figure 2. Variation of pH with depth in the soil profile before and after treatment. Source: The authors


The results in Table 1 show that the void ratio decreased only slightly with the addition of the nutrient, because the precipitates formed sparsely or as very fine fibers. To evaluate the stability of the micro-concretions present in the soil profile, the expression given by [20] for the total aggregate (TA = % clay size with dispersant - % clay size without dispersant) was used. When the nutrient was used, the TA values were lower, indicating greater stability of the micro-concretions. Using the liquid limit (wL) and the plasticity index (PI), it was observed that the plasticity of a clay, in general, may decrease as the amount of cations such as calcium increases [21]. In addition, the treatment helped to maintain soil aggregation, making the soil more granular, which generally favors the reduction of plasticity. The treatment gave stability to the aggregates, which, according to the MCT system, classifies the soil as sandy. However, based on the USCS, due to the reduced plasticity, the soil is classified as having low plasticity. The fact that the classification of the layers barely changed does not necessarily mean that the treatment did not influence the behavior of the soil; rather, it indicates a low sensitivity of the classification methods to the levels of change experienced by the soil.
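As a small numerical illustration of the total aggregate index defined above (the clay percentages below are hypothetical, chosen only to show the direction of the change):

# TA = (% clay-size with dispersant) - (% clay-size without dispersant).
# Hypothetical percentages for one layer (not measured values from this study):
ta_untreated = 72.0 - 3.0  # dispersant breaks many aggregates -> high TA
ta_treated = 40.0 - 8.0    # cemented aggregates resist dispersion -> lower TA
print(f"TA untreated: {ta_untreated:.0f} %, TA treated: {ta_treated:.0f} %")

A lower TA after treatment means that fewer aggregates are destroyed by the dispersant, i.e., the micro-concretions are more stable.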


5.3. Chemical characterization

Fig. 2 shows that, by adding the nutrient to the samples, the pH increased from slightly acidic to slightly basic (above pH 7 in water). An alkaline pH is needed to generate an environment favorable for the bacteria to precipitate calcium carbonate [22]. The generation of calcium carbonate is contingent on the existence of carbon and calcium, hence the importance of the relationship between the organic carbon content and the calcium content (Fig. 3). Samples with nutrient, at depths of less than 3 m, showed a reduction in the amounts of calcium and carbon, with a higher rate of reduction of carbon than of calcium. Beyond 3 m, the reduction of carbon continued, but the calcium increased, and in larger quantities than carbon, implying that not all of the calcium


Figure 3. Variation of carbon and calcium with depth in the soil profile before and after treatment. Source: The authors

forms calcite at these depths, and that part of the calcium must remain available, associated with minerals or as chemical complexes. Moreover, in the first 3 m there was a surplus of carbon, meaning that more calcium could still be added, because the ratio of these elements in calcite is 1:1.





Figure 4. Intensity of the main peaks of the different minerals (gibbsite, kaolinite, quartz, hematite, goethite, anatase, calcium carbonate) with depth in the soil profile for the samples without treatment. Source: The authors

Figure 6. SEM images of the soil layer at a depth of 4 m without nutrient. Source: The authors


Figure 5. Intensity of the main peaks of the different minerals (gibbsite, kaolinite, quartz, hematite, goethite, anatase, calcium carbonate) with depth in the soil profile for the treated samples. Source: The authors

5.4. Mineralogical characterization

Figs. 4 and 5 show that the original minerals were preserved when comparing the samples without nutrient to the samples with nutrient, i.e., the treatment does not affect the initial mineralogical composition. In Fig. 5, however, the mineral calcium carbonate appears (calcite at 1 m, 2 m, 4 m and 5 m, and wilkeite at 3 m), thus confirming, even in small amounts, the precipitation generated by bacterial activity with the treatment. It should be noted that the amount of kaolinite present in the soil was related to the adherence of Gram-positive bacteria, which, in the presence of the nutrient, produce the precipitation of calcium carbonate, as already shown in the initial bacterial characterization studies on Petri dishes. To confirm the formation of calcium carbonate, hydrochloric acid (HCl) was poured on the soil samples with and without nutrient. All treated soil samples showed effervescence, confirming the presence of carbonates.

5.5. Structure characterization

When the nutrient was not added, the SEM images illustrated common elements of

Figure 7. SEM images of the soil layer at a depth of 4 m with nutrient. Source: The authors

tropical soils (Fig. 6). In the soils with the nutrient, the presence of fibrous calcium-bearing precipitates that bind the soil grains, or of crystals of different fibers with a so-called "globular or botryoidal habit" (Fig. 7), was observed at all depths.

5.6. Mechanical characterization

5.6.1. Direct shear strength of saturated and natural soil

To analyze the influence of the nutrient addition on the shear strength, the cohesion and friction angle parameters were examined separately (Table 2). In almost all cases, the friction angles and cohesion in the natural and saturated states increased when passing from untreated soil to treated soil, which confirms that the generated

Table 2. Cohesion (c) and friction angle (φ, °) parameters determined from direct shear tests on saturated and natural samples, without and with nutrient (columns: Depth (m); Without nutrient: Saturated c, φ, Natural c, φ; With nutrient: Saturated c, φ, Natural c, φ).
1 10 29 7 29 12 31 5 2 2 28 23 15 22 22 26 5 2 1 2 3 24 26 6 25 29 27 1 3 4 24 36 5 35 28 37 2 3 5 23 37 8 38 37 36
Source: The authors

Figure 8. Variation in the collapse rate with strain for soil profile samples without nutrient. Source: The authors

Figure 9. Variation in the collapse rate with strain for soil profile samples with nutrient. Source: The authors

Figure 10. Variation of matrix and total suction with the degree of saturation at a depth of 4 m. Source: The authors

Figure 11. Variation of permeability with inter-aggregate void ratio for the soil profile. Source: The authors

precipitate acted as cement for the soil profile studied. The increase in friction angles and cohesion from the treatment was reflected in the increase in shear strength. The resistance increases at depths of 1 m and 2 m were the lowest. This may be related to the fact that at these depths the initial moisture content of the samples was the highest, or to the fact that at these depths the amount of kaolinite is lower, resulting in a lower adhesion of Gram-positive bacteria of the Bacillus spp. type and, consequently, a smaller amount of precipitates [22].
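To relate the tabulated parameters to the strength itself, recall the Mohr-Coulomb criterion τ = c + σn·tan(φ). The sketch below is our own back-of-envelope comparison; the normal stress and the parameter pairs are illustrative values of the magnitude reported in Table 2, not the authors' computation:

import math

# Mohr-Coulomb shear strength: tau = c + sigma_n * tan(phi).
def shear_strength(c_kpa, phi_deg, sigma_n_kpa):
    return c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

sigma_n = 100.0  # assumed normal stress, kPa
tau_untreated = shear_strength(10.0, 29.0, sigma_n)  # e.g., untreated, saturated
tau_treated = shear_strength(12.0, 31.0, sigma_n)    # e.g., treated, saturated
print(f"tau untreated ~{tau_untreated:.0f} kPa, treated ~{tau_treated:.0f} kPa")

Even the modest parameter gains shown produce a strength increase of roughly 10% at this normal stress, consistent with the cementing role attributed to the precipitate.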


5.6.2. Dual-oedometer density

Soil samples were assayed in their natural state and in a saturated state to determine the structural collapse upon inundation (collapse rate, %). There is a clear reduction in the collapse potential of the treated soils (Fig. 9) as compared to that obtained at the same depths for the untreated soils (Fig. 8).

5.6.3. Total suction and matrix suction

Fig. 10 shows a plot for the soil layer at a depth of 4 m as an example of the general behavior of the total and matrix suction observed for all of the layers. The treatment increased the total suction in the macropore region. Despite the difficulty that arises in identifying a unique behavior and


Figure 12. Variation of flow with the hydraulic gradient for a soil profile depth of 5 m with and without nutrient. Source: The authors

the influence of the treatment on the matrix suction behavior, it is worth noting the shift to the right of the air-entry point of the micropores registered for the curves as a function of the degree of saturation after the treatment. This shift indicates a reduction in macroporosity.

5.6.4. Permeability

By examining the permeability as a function of the inter-aggregate void ratio of the soil, which represents the voids in which water circulates and the carbonates precipitated, at all




Figure 13. Images of breakdown tests after 24 hours of immersion (partial and total flooding) for the soil profile: a) untreated soil, b) treated soil. Source: The authors

depths, it was found that the nutrient caused a decrease in permeability (Fig. 11).

5.6.5. Pinhole test

Fig. 12 presents the behavior with respect to internal erosion of the soil layer at a depth of 5 m. For all depths, the treatment indicated a greater closure of the pores due to the precipitation of calcium carbonate, as well as greater structural stability, since the curves obtained with treatment were quasi-linear, with little difference between the loading (→) and unloading (←) phases, and the outflows for the same gradients were lower than in the untreated soil.

5.6.6. Breakdown

The results of the breakdown test clearly illustrated the improvement in the structural stability of the soils when partially or totally inundated with water. Little or no breakdown was observed for the treated soils immersed in water. Fig. 13 shows the sample images at a depth of 3 m with and without treatment. The performance was generally the same for all depths. The pinhole and breakdown tests were used to qualitatively investigate the erodibility of the soils. The results of these experiments showed that the biomineralization caused by the treatment reduced the potential erodibility of the soils.

6. Conclusions

The technique used in this work creates new

possibilities for soil improvement, allowing for an advance in biotechnology development and reducing the possibility of environmental impact in comparison to other techniques commonly used to combat erosion and to stabilize soils. The use of biomineralization in geotechnical engineering requires the integration of professionals from different fields (e.g., microbiologists, geologists, engineers), combining the knowledge of related areas to obtain a more suitable proposal for resolving the problem. In conclusion, in the tropical soil profile studied, the native bacteria effectively used the B4 nutrient to precipitate calcium carbonate. The calcium carbonate generated by the treatment produced variations in the physical, chemical and mineralogical properties of the soil and improved its mechanical and hydraulic behavior. Changes were observed in the reduction of the void ratio, liquid limit, plasticity index, permeability, collapse rate and erodibility, in addition to increased suction and shear strength. The effect of these changes was reflected in a greater structural stability of the grains, a better performance of the aggregates and a lower deformability of the soil mass, thus pointing to the possibility of using the technique of biomineralization in erosion process control.

Acknowledgements

“Programa Bolsista da CAPES/CNPq - IEL Nacional – Brasil” and “Universidad Nacional de Colombia”.

References

[1] Lima, M., Degradação físico-química e mineralógica de maciços junto às voçorocas, Tese de Doutorado, Departamento de Engenharia Civil e Ambiental, Universidade de Brasília, Brasília, Brasil, 2003.
[2] Tiano, P., Biagiotti, L. and Mastromei, G., Bacterial bio-mediated calcite precipitation for monumental stones conservation: Methods of evaluation. Journal of Microbiological Methods, 36, pp. 139-145, 1999. DOI: 10.1016/S0167-7012(99)00019-6
[3] Martínez, G., Maya, L., Rueda, D. and Sierra, G., Aplicaciones estructurales de bacterias en la construcción de nuevas obras de infraestructura -Estabilización de suelos-, Trabajo de grado, Departamento de Ingeniería Civil, Universidad Nacional de Colombia, Medellín, Colombia, 2003.
[4] Zilli, J., Gouvea, N., da Costa, H. and Prata, M.C., Diversidade microbiana como indicador da qualidade do solo. Cadernos de Ciência & Tecnologia, Brasília, 20 (3), pp. 391-411, 2003.
[5] Cardoso, E., Tsai, S. and Neves, M.C., Microbiologia do solo, Brasil, Sociedade Brasileira de Ciência do Solo, 1992.
[6] Peña, J.J., Introducción a la microbiología del suelo, México, Libros y Editoriales S.A., 1980.
[7] Soto, A., Introducción a los biominerales y biomateriales, Chile, Universidad de Chile, 2003.
[8] Lee, Y.N., Calcite production by Bacillus amyloliquefaciens CMB01. Journal of Microbiology, 41 (4), pp. 345-348, 2003.
[9] Hammes, F. and Verstraete, W., Key roles of pH and calcium metabolism in microbial carbonate precipitation. Re/Views in Environmental Science & Bio/Technology, 1, pp. 3-7, 2002.
[10] Castanier, S., Le Métayer-Levrel, G. and Perthuisot, J.P., Ca-carbonates precipitation and limestone genesis — the microbiogeologist point of view. Sedimentary Geology, 126 (1-4), pp. 9-23, 1999. DOI: 10.1016/S0037-0738(99)00028-7

[11] Hammes, F., Boon, N., de Villiers, J., Verstraete, W. and Siciliano, S.D., Strain-specific ureolytic microbial calcium carbonate precipitation. Applied and Environmental Microbiology, 69 (8), pp. 4901-4909, 2003. DOI: 10.1128/AEM.69.8.4901-4909.2003
[12] Baskar, S., Baskar, R., Mauclaire, L. and McKenzie, J.A., Microbially induced calcite precipitation in culture experiments: Possible origin for stalactites in Sahastradhara caves, Dehradun, India. Current Science, 90 (1), pp. 58-64, 2006.
[13] Cardona, F. and Usta, M., Un estudio de la reducción de permeabilidad por la depositación de finos y bacterias en medios porosos, Trabajo de grado, Departamento de Ingeniería Civil, Universidad Nacional de Colombia, Medellín, Colombia, 2002.
[14] Whiffin, V., Lambert, J., Dert, F. and Van Ree, C., Biogrout and biosealing: Pore space engineering with bacteria. American Society of Civil Engineers, 5 (5), pp. 13-36, 2005.
[15] Gómez, E., Evaluación de las propiedades geotécnicas de suelos arenosos tratados con bacterias calcificantes, Tesis de Maestría, Departamento de Ingeniería Civil, Universidad Nacional de Colombia, Medellín, Colombia, 2006.
[16] Souza, É.R.A., Abílio-de Carvalho, O. and Fontes, R., Evolução geomorfológica do Distrito Federal, EMBRAPA, Documentos 122, Brasil, 2004, 56 P.
[17] Nogami, J. and Villibor, D., Pavimentação de baixo custo com solos lateríticos, Brasil, Ed. Vilibor, 1995.
[18] Empresa Brasileira de Pesquisa Agropecuária – EMBRAPA, Manual de Métodos de Análise de Solo, 2a edição, Brasil, Centro Nacional de Pesquisa do Solo, 1997.
[19] Brandy, N.C., Natureza e propriedades dos solos (tradução de Antonio B. Neiva Figueiredo Filho), 5a edição, Brasil, Freitas Bastos, 1979.
[20] Araki, M.S., Aspectos relativos às propriedades dos solos porosos colapsíveis do Distrito Federal, Dissertação de Mestrado, Departamento de Engenharia Civil e Ambiental, Universidade de Brasília, Brasília, Brasil, 1997.
[21] Santos, P.S., Tecnologia de argilas aplicada às argilas brasileiras, vol. 1, Fundamentos, Brasil, Edgard Blücher Ltda. and Universidade de São Paulo, 1975.
[22] Gómez, W., Gaviria, J. and Cardona, S., Evaluación de la bioestimulación frente a la atenuación natural y la bioaumentación en un suelo contaminado con una mezcla de gasolina – diesel. Revista DYNA, 76 (160), pp. 83-93, 2009.

J. Camapum de Carvalho, received his BSc. in Civil Engineering in 1978 from the University of Brasilia, Brazil, and in 1981 his MSc. degree in Geotechnics from the Federal University of Paraíba, Brazil. In 1985 he received a Dr. degree in Civil Engineering (Geotechnics) from the Institut National des Sciences Appliquées in Toulouse, France. He also undertook postdoctoral research at Université Laval in Quebec, Canada, in 1999. Currently, he is a professor in the Civil and Environmental Engineering Department of the University of Brasilia, Brazil, where he has worked since 1986. His research interests include: properties and behavior of tropical soils, influence of suction on soil-structure interaction, alternative materials for base course pavement, collapsible porous soils, and control and prevention of erosive processes.

L.A. Lara-Valencia, received his BSc. in Civil Engineering in 2005 from the Universidad Nacional de Colombia, Medellín campus, Colombia, and his MSc. and Dr. degrees in Structures and Civil Construction in 2007 and 2011, respectively, from the University of Brasilia, Brazil. Currently, he is a full professor in the Civil Engineering Department of the Universidad Nacional de Colombia, Medellín campus, Colombia. His research interests include: vibration control of structures, dynamics of structures, linear and nonlinear finite element modeling, foundations, and tropical soils.

Y. Valencia-González, received her BSc. in Civil Engineering in 2001 and her MSc. in Civil Engineering-Geotechnics in 2005, both from the Universidad Nacional de Colombia, Medellín campus, Colombia. In 2009 she received her Dr. degree in Geotechnics, followed by a year as a postdoctoral fellow, both at the University of Brasilia, Brazil. Currently, she is a full professor in the Civil Engineering Department of the Universidad Nacional de Colombia, Medellín, Colombia. Her research interests include: tropical soils, biotechnology, foundations, and vibration control.




Analysis of the investment costs in municipal wastewater treatment plants in Cundinamarca

Juan Pablo Rodríguez-Miranda a, César Augusto García-Ubaque b & Juan Carlos Penagos-Londoño c

a Faculty of the Environment, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia. jprodriguezm@udistrital.edu.co
b Faculty of Technology, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia. cagarciau@udistrital.edu.co
c Sub-General Management, Empresas Públicas de Cundinamarca S.A. E.S.P., Bogotá, Colombia. juan.penagos@epc.com.co

Received: July 30th, 2014. Received in revised form: September 19th, 2014. Accepted: May 23rd, 2015.

Abstract
Some of the most significant aspects in the selection of wastewater treatment plants are the investment costs, since they interlink the treatment level, the quality of the raw wastewater, the design flow and the purpose of the treated wastewater. Through a multivariable exponential regression analysis, data from 51 new treatment plant projects were analyzed; from that process, cost-scale elasticities were obtained that grow slowly in comparison to the design flow for each of the treatment technologies analyzed.
Keywords: Wastewater treatment, investment, treatment capacity.

Análisis de los costos de inversión en plantas de tratamiento de aguas residuales municipales en Cundinamarca Resumen Uno de los aspectos más importantes en la selección de plantas de tratamiento de aguas residuales son los costos de inversión debidos a los costos de interconexión, el nivel de tratamiento, la calidad de las aguas residuales crudas, el flujo de diseño y el destino final de las aguas residuales tratadas. A través de análisis multivariado de regresión exponencial, se analizaron datos de 51 proyectos de nuevas plantas de tratamiento. En este proceso, se obtuvieron datos de elasticidad de la escala de costos y se apreció un crecimiento lento en comparación con el flujo diseñado para cada una de las tecnologías de tratamiento analizadas. Palabras clave. Tratamiento de aguas residuales, inversión, capacidad de tratamiento.

1. Introduction The selection of municipal wastewater treatment plants (MWTP) has taken into consideration technical elements such as the concentrations of BOD5, TSS, N Total and P Total, without ignoring the population, eating habits, socioeconomic aspects, the municipal wastewater collection system and, of course, the flow and the direct costs of the MWTP project [1], to produce a high quality effluent in compliance with the regulations applicable to the discharge into the receptor body of water [2]. The above is taken into consideration when making technical decisions on wastewater management, for the purpose of decontaminating, but there is also the situation of procuring technical economic conditions, on the basis of an analysis,

empirical in many cases, where the purpose is to interlink the construction costs, the treatment level and the design flow of the MWTPs [3,4]. Approximations have been made to the economic analysis of the MWTP relating the investment or construction cost and the design flow, establishing in a general manner a regression as a function of the costs and flow of the MWTP through an exponential equation of the form CI = a ∗ Q^b [5-11], where CI is the investment or construction cost, Q is the design flow, and a and b are calculated coefficients. The constant a represents the cost of unit capacity and the constant b is considered the constant of cost-scale elasticity (always positive): if b = 1, it means that the investment costs are directly proportional to the capacity or flow of the MWTP (costs increase linearly); if b < 1, it means that the costs

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 230-238. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.44699


Table 1. Cost equations from different countries
Treatment technology | Cost function (CI) | # data | Reliability limit (l/s) | Corr. coefficient | Source
Secondary treatment | 8988 ∗ Q^… | 37 | 16,20 – 1388,9 | 0,908 | [7]
Secondary advanced and nitrification | 2790 ∗ Q^… | 11 | 34,7 – 173,6 | 0,938 | [7]
Activated sludge | 0,0031 ∗ Q^… | 6 | 115,7 – 289,3 | 0,979 | [9]
Oxidation pits | 0,0017 ∗ Q^… | 8 | 11,6 – 902,8 | 0,604 | [9]
Aerated pond | 0,0143 ∗ Q^… | 11 | 11,6 – 902,8 | 0,822 | [9]
Oxidation ponds | 0,0004 ∗ Q^… | 23 | 11,6 – 902,8 | 0,790 | [9]
Conventional secondary treatment | 0,116 ∗ Q^… | 9 | NA | 0,935 | [8]
Extended mechanical aeration | 0,206 ∗ Q^… | 35 | NA | 0,829 | [8]
Extended aeration dissolved air | 0,153 ∗ Q^… | 32 | NA | 0,808 | [8]
Primary treatment | 15,75 ∗ Q^… | NA | 1 – 4000 | 1,000 | [13]
Secondary treatment | 23,46 ∗ Q^… | NA | 1 – 5000 | 1,000 | [13]
CI in millions of dollars and Q in m³/s. Source: The authors.

advance less than proportionally to the capacity or size of the MWTP; that is, an economy of scale is present, which describes the behavior of costs according to the variable of flow or size [9]. The smaller b is in the cost function of the MWTP, the more slowly its cost grows as larger flows or capacities are considered for the MWTP; if b > 1 there would be a false economy of scale [10]. However, the literature reports values of the constant b between 0.68 and 0.954 [7]. The above has served to plan the new MWTPs, but only for the direct cost of construction or investment, approximated to the level of pre-feasibility, for the efficient assignment of resources [13]. The measurement of the impacts on the final users of a MWTP project indicates the order of priority of the investment and compares several MWTP projects without including aspects such as location, area, environmental impacts and local prices at the time of construction, among

Table 2. Cost equations for Colombia
Treatment technology | Cost function (CI) | # data | Reliability limit (l/s) | Corr. coefficient | Source
Stabilization ponds | 41915593 ∗ Q^… | 7 | 1 – 250 | NA | [14]
UASB* | 13974805 ∗ Q^… | 5 | 1 – 450 | NA | [14]
RAP** | 43108293 ∗ Q^… | 2 | 1 – 60 | NA | [14]
Extended aeration | 33826482 ∗ Q^… | 3 | 1 – 40 | NA | [14]
Secondary treatment | 2841 ∗ … 46830 ∗ … 18,34 | NA | 1000 – 14000 | 0,984 | [13]
CI in millions of Colombian pesos and Q in L/s. * Upflow Anaerobic Sludge Blanket. ** Anaerobic piston reactor (acronym in Spanish). Source: The authors.

some other aspects. Some exponential regressions have been differentiated according to the level or treatment process of the wastewaters, as shown in Table 1. In Colombia, several exponential regression analyses have been conducted [2,14-15], in which cost functions have been formulated for different treatment systems, as shown in Table 2. In the Interamerican Development Bank (BID for its acronym in Spanish) document [16], Development Objectives for the Millennium in Latin America, a universal goal was established: to reduce by half the percentage of people lacking access to drinking water and basic sanitation, something that requires great investments in each country. According to the Corporación Andina de Fomento (CAF), the infrastructure management study for Latin America [17] established that 87% of homes in the region have access to water on their property, 58% have access to a public sewer system, and 66% have toilets connected to a sewer system or a septic system. In addition, the study indicates that between 10% and 19.9% of the population in Colombia has an inadequate system for the elimination of excreta. The Pan American Health Organization's basic indicators report [18] for Latin America indicates that 80% of the population has access to an adequate supply of water and 54% of the population has access to an adequate sanitation system; in Colombia, 92% of the total population has access to sources of potable water (99% in urban areas and 72% in rural areas) and 77% of the total population has access to sanitation installations (82% in urban areas and 63% in rural areas). Other indicators establish that coverage (urban and rural) of water supply is 91% and of basic sanitation is 85% [19]. For the Department of Cundinamarca, aqueduct coverage is 81.26% and sewage system coverage is 66.08% [20]. However, an analysis of the sector establishes that in the 1990s Colombia (water coverage 88%) was above the average for Latin America and the Caribbean (85%); from 2000 to 2010 it was below the average (93%), but during the last few years, 2011 to 2013, it was again above the average for Latin America and the Caribbean [21]. The National Plan for the management of municipal wastewaters in Colombia [22] indicates that 237 Wastewater Treatment Plants (WTP) have been constructed in the country (in 21.7% of all the municipalities), which generate 67 m3/s, and that only about 10% of those WTP present adequate operating conditions. For the rest of the WTP, the actual operating and working conditions are unknown; the plan also establishes that the WTP do not treat the totality of the wastewater produced by the afferent municipalities. In 2010, an increase of 75.95 m3/s in the flow of wastewater was observed, but only 18.93 m3/s were treated, equivalent to 24.92% of the wastewater generated, in the 454 WTP constructed [23]. The most widely used technologies in the WTP are aerobic and anaerobic ponds (55%), activated sludge (22%), trickling filters (14%) and upflow anaerobic reactors (9%); the conditions of the 454 WTP built are as follows: 24% (108) in good condition, 27% (122) in regular condition, 22% (100) in deficient condition, and for 27% (124) the conditions are unknown [23]. The study [22] mentions that there should be an interrelation between the WTP, the sewage system and the receptor body, while taking


into consideration concepts such as the integral management of the hydric resource, the pressure exerted on the resource, the preservation of the basins and the potential uses of the source; however, it does not establish the environmental assessment of the WTP with respect to the hydric basins. In addition, it can be mentioned that, according to the World Bank, the per capita cost for the treatment of wastewater is US$ 100 [20]. However, the technical report on wastewater treatment systems in Colombia (baseline 2010) [24] establishes that out of the 1119 municipalities in Colombia, 490 have a WTP (43.80%), and that of the total of 556 WTP in Colombia, the greatest numbers are located in the Departments of Cundinamarca and Antioquia. The installed capacity of the WTP was 33.2 m3/s. In conclusion, the report establishes: absence of monitoring and control of the processes; no characterization of the waters is made; there is no control of the inflow or the outflow; and the design flows are unknown, so that the operation of the systems is conducted in an empirical, autonomous and routine manner. The lack of knowledge of the systems operated does not allow the careful planning of the expansion and optimization of the treatment systems, leads to the deterioration of the corrective and preventive maintenance plans for the wastewater treatment systems, and creates the absence of programs for the control of vectors and the handling of sludge, making it difficult to monitor the discharge of non-residential wastewaters and compliance with the existing regulations. In addition to the above, the analysis conducted by the Asociación Colombiana de Ingeniería Sanitaria y Ambiental (Colombian Association of Sanitary and Environmental Engineering) [25] indicates that 31% of the cities in Colombia have a WTP (29% primary treatment and 1% tertiary treatment), where the following factors have been observed: incorrect selection of technologies; high investment and operating costs (although the investment assigned to the treatment of wastewater does not reach 1% of that invested in or assigned to potable water); and very little protection of the hydric sources, jeopardizing environmental sustainability due to the pollution of hydric resources, given the now obsolete premise used in Colombia that the country's hydric resource is infinite. Therefore, the cost function model for the MWTP in the Department of Cundinamarca developed in this research was obtained through the historic costs of the MWTPs already built and of the MWTP projects under construction, with different flows or capacities and different water quality parameters (biochemical oxygen demand BOD, total suspended solids TSS, nitrogen N, phosphorus P), in different municipalities of the department. These serve as tools for planning the efficient use of resources in preventing the pollution of the hydric basins, based on the technology to be applied for the treatment of the wastewater, the investment costs and its adaptability to the local environment.

2. Methodology

2.1. Type of Research

The type of research applied to the development of the cost function model of the MWTPs can be established as having a prospective scope [26], given that the information is

recorded as the phenomenon occurs, in this case by obtaining the information on the future investment costs of the MWTP in the Department of Cundinamarca. However, according to the analysis and scope of the results, the research can also be considered quasi-experimental [26], due to the fact that there is a causal relationship (cause – effect) between the investment costs, the wastewater flow and the physical parameters of the quality of the raw wastewater, which under conditions of rigorous control of the factors may affect the result of the analysis. Additionally, with the cost functions of the MWTPs, it will be possible to foresee the efficient assignment of the investment resources for the preservation and conservation of the hydric basins in the department. For this reason, the research is also known as a forecast [27], given that future situations will be anticipated under conditions of project horizon, demand of raw wastewater, compliance with current environmental regulations for the discharge of treated wastewater, and institutional strengthening.

2.2. Data for the construction of the cost function

A Departmental Water Plan (DWP) has been implemented in Cundinamarca, whereby prefeasibility studies have been conducted for the development of investments for the installation of MWTP in the different municipalities of this territorial entity, where the idea, profile and design of the different MWTP have been foreseen, for the purpose of strengthening the institutions and conserving and managing the basins in the different provinces of the department through compliance with the current environmental regulations for the discharge of treated wastewater. In accordance with the above, to determine the cost functions of the MWTPs, the data gathering and historic costs method was applied to 51 MWTP projects constructed or under construction with different capacities or flows of wastewater (grouped according to the technologies for the treatment of wastewater, and taking the direct cost without Administration, Incidentals and Profit (AIU for its acronym in Spanish)) within the DWP and in projects executed by the Autonomous Regional Corporation (CAR for its acronym in Spanish) in Cundinamarca; that is, future investment costs or costs of monitoring the construction of the MWTPs.

2.3. Data Analysis

The cost functions of the MWTP express an exponential regression of the form CI = a ∗ Q^b1 ∗ BOD^b2 ∗ TSS^b3 ∗ N^b4 ∗ P^b5, where CI is the investment or construction cost, Q is the design flow, BOD is the biochemical oxygen demand, TSS the total suspended solids, N the nitrogen, P the phosphorus, and a and the exponents b are calculated coefficients. This model allows the establishment of the relationships between variables and the identification of the existing dependency. In other words, it is a deterministic model that excludes external factors producing fluctuations that influence the construction of the cost function, which means that it can be insufficient to explain the reality of the phenomenon. Consequently, the construction of a cost function of the MWTP should be stochastic or random, meaning that it includes unknown external information (representing the factors that affect the cost function of the MWTPs and that are not considered in


the model) through an error term ε, so that a model that better interprets reality can be specified [28].
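Estimating such a power-law cost function reduces, after taking logarithms, to an ordinary least squares regression. The following is a minimal sketch of that step, not the authors' estimation procedure; the plant records are hypothetical, and only flow and BOD are included for brevity.

```python
import numpy as np

# Hypothetical plant records: design flow Q (L/s), influent BOD (mg/L),
# and investment cost CI (COP). Values are illustrative only.
Q   = np.array([38.0, 181.2, 11.0, 7.6, 20.1, 95.2, 50.0, 123.5])
BOD = np.array([220.0, 310.0, 180.0, 250.0, 270.0, 300.0, 240.0, 260.0])
CI  = np.array([1.3e9, 9.7e9, 0.9e9, 0.8e9, 1.5e9, 4.2e9, 2.1e9, 5.0e9])

# Power-law model CI = a * Q^b1 * BOD^b2 becomes linear in log space:
# log CI = log a + b1*log Q + b2*log BOD + error
X = np.column_stack([np.ones_like(Q), np.log(Q), np.log(BOD)])
(log_a, b1, b2), *_ = np.linalg.lstsq(X, np.log(CI), rcond=None)

print(f"a = {np.exp(log_a):,.0f} COP, b1 = {b1:.3f}, b2 = {b2:.3f}")
# Prediction for a new plant, e.g. Q = 50 L/s and BOD = 260 mg/L:
print(f"CI_hat = {np.exp(log_a) * 50**b1 * 260**b2:,.0f} COP")
```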

3. Results

Out of the 51 MWTP projects in the department of Cundinamarca, 39.2% have anaerobic reactor technologies (20 MWTPs: UASB and RAP reactors and anaerobic filters), 21.6% have activated sludge technologies (11 MWTPs: extended aeration, oxidation pit and conventional reactors), 19.6% have oxidation pond technologies (10 MWTPs), 11.8% have aerated ponds (6 MWTPs) and 7.8% have advanced treatment technology with DAF (4 MWTPs). The MWTPs are distributed from 177 m.a.s.l. up to 2718 m.a.s.l., with flow rates between 0.4 L/s and 958.7 L/s according to the number of inhabitants benefited. Investment costs per MWTP range between USD$51,655 and USD$6,975,000, with per capita costs from 4 USD$/inhab to 1506 USD$/inhab, which indicates considerable dispersion, since countries such as Brazil (USD$40/inhab to USD$240/inhab), Peru (USD$90/inhab to USD$320/inhab), Mexico (USD$10/inhab to USD$220/inhab), and even Colombia (USD$10/inhab to USD$289/inhab) present per capita investment values (secondary treatment) that are low in comparison to those reported for the MWTPs of Cundinamarca [14].

Fig. 1 shows the behavior of the investment costs according to the design flow or treatment flow of the MWTPs, where the tendency is for greater flows to imply greater investment costs. In addition, the largest concentration of projects is below 200 L/s. The average flow is 73.3 L/s; only 10 MWTP projects are above this average, while the remaining 41 are below it. The average investment cost is USD$1,060,046.9; 13 MWTP projects are above this average and the remaining 38 are below it. The average investment cost per capita is 229.18 USD$/inhab.

Figure 1. Investment costs and design flow. Source: The authors.

Table 3 shows that the 20 MWTP projects with anaerobic reactor technology, in terms of investment cost per capita, all exceed the recommendation made by MAVDT (for Colombia, from USD$20/inhab to USD$40/inhab [29]) of 48 USD$/inhab, as well as the literature value [30] of 40 USD$/inhab. Even though the recommendation is to locate MWTPs with anaerobic technology below 1000 m.a.s.l. for the adequate stabilization of biogas [31], there are 13 MWTP projects above 1100 m.a.s.l. The flow range found goes from 0.4 L/s to 181.2 L/s.

Table 3. Municipal wastewater treatment plants with anaerobic technology
Design Population (Inhab) | Flow (L/s) | Investment Cost (USD$) | Investment cost per capita (USD$/Inhab) | Reference MAVDT (USD$/Inhab) | Reference (USD$/Inhab) | m.a.s.l. | TT
10.000 | 38,0 | 520.000 | 52 | 48 | 40 | 719 | (a)
42.796 | 181,2 | 6.969.656 | 163 | 45 | 40 | 2.700 | (a)
9.130 | 11,0 | 803.000 | 88 | 48 | 40 | 2.608 | (a)
490 | 2,3 | 73.638 | 150 | 48 | 40 | 1.725 | (a) (b)
382 | 1,6 | 108.626 | 284 | 48 | 40 | 980 | (a) (b)
1.943 | 6,4 | 757.579 | 390 | 48 | 40 | 1.243 | (a) (b)
2.340 | 7,6 | 669.965 | 286 | 48 | 40 | 976 | (a) (b)
108 | 0,4 | 108.151 | 1.001 | 48 | 40 | 976 | (a) (b)
381 | 1,4 | 54.677 | 144 | 48 | 40 | 976 | (a) (b)
761 | 7,1 | 51.655 | 68 | 48 | 40 | 1.400 | (a) (b)
513 | 1,7 | 74.349 | 145 | 48 | 40 | 2.270 | (a) (b)
3.845 | 10,1 | 841.158 | 219 | 48 | 40 | 2.450 | (a) (c)
473 | 1,5 | 89.811 | 190 | 48 | 40 | 453 | (a) (b)
2.111 | 7,9 | 616.795 | 292 | 48 | 40 | 1.134 | (a) (d)
4.394 | 20,1 | 884.799 | 201 | 48 | 40 | 2.180 | (a) (b)
999 | 6,2 | 1.067.206 | 1.068 | 48 | 40 | 177 | (a) (c)
981 | 3,4 | 324.421 | 331 | 48 | 40 | 1.300 | (a) (d)
183 | 0,6 | 275.686 | 1.506 | 48 | 40 | 1.543 | (a) (b)
2.343 | 4,9 | 431.103 | 184 | 48 | 40 | 1.520 | (a) (d)
546 | 4,6 | 113.054 | 207 | 48 | 40 | 1.312 | (a) (b)
TT: Treatment technology. (a) Anaerobic reactor; (b) Anaerobic filter; (c) UASB; (d) RAP. Source: The authors.

Table 4. Municipal wastewater treatment plants with activated sludge technology
Design Population (Inhab) | Flow (L/s) | Investment Cost (USD$) | Investment cost per capita (USD$/Inhab) | Reference MAVDT (USD$/Inhab) | Reference (USD$/Inhab) | m.a.s.l. | TT
108.000 | 187,0 | 6.975.000 | 65 | 36 | 70 | 2.586 | (a)
6.724 | 9,3 | 439.000 | 65 | 48 | 70 | 2.579 | (a)
16.000 | 32,0 | 2.162.500 | 135 | 45 | 70 | 2.718 | (a)
6.000 | 8,0 | 405.500 | 68 | 48 | 70 | 2.636 | (a)
10.000 | 12,0 | 843.000 | 84 | 48 | 70 | 2.601 | (a)
106.099 | 188,6 | 451.274 | 4 | 36 | 70 | 2.548 | (a)
9.060 | 27,0 | 443.000 | 49 | 48 | 70 | 2.610 | (a)
9.122 | 4,0 | 164.000 | 18 | 48 | 70 | 2.457 | (a)
8.787 | 95,2 | 1.664.186 | 189 | 48 | 70 | 987 | (a) (c)
1.984 | 38,9 | 2.778.560 | 1.400 | 48 | 70 | 1.044 | (a) (b)
3.157 | 49,2 | 788.678 | 250 | 48 | 70 | 2.376 | (a) (c)
TT: Treatment technology. (a) Activated sludge; (b) Extended aeration; (c) Oxidation pit. Source: The authors.

Table 4 shows that, out of the 11 MWTP projects with activated sludge technology, in terms of investment cost per capita only two MWTP projects fulfill the recommendations from MAVDT (for Colombia, from USD$40/inhab to USD$120/inhab [29]) and six MWTP projects fulfill the recommendation of the literature ([30]; Arceivala, 1981) of 70 USD$/inhab. The investment cost per capita for the activated sludge technology varies from 52 USD$/inhab to 1506 USD$/inhab, presenting a large dispersion of the data, since countries like Brazil (USD$170/inhab to USD$270/inhab), Mexico (USD$50/inhab to USD$85/inhab), and even Colombia (USD$110/inhab to USD$175/inhab) present values



for the investment per capita (secondary treatment) that are low in comparison with those reported for the MWTPs of Cundinamarca [14,32]. The flow range found goes from 4.0 L/s to 188.6 L/s. The vast majority of these MWTP projects are above 2000 m.a.s.l.

Table 5 shows that, out of the 10 MWTP projects using oxidation pond technology, in terms of investment cost per capita only three MWTP projects fulfill the MAVDT recommendations (for Colombia, from USD$10/inhab to USD$30/inhab [29]) and one MWTP project fulfills the recommendation made in the literature [30] of 30 USD$/inhab. The range of investment cost per capita found was USD$44/inhab to USD$132/inhab, which indicates a very high dispersion, since it is estimated that the cost per capita in Colombia could be from USD$3.9/inhab to USD$27.1/inhab [32]. The flow range found went from 9.0 L/s to 958.7 L/s. It was also observed that all these MWTP projects are located above 2500 m.a.s.l., even though the recommendation is to locate MWTPs with oxidation pond technologies below 1000 m.a.s.l.

Table 5. Municipal wastewater treatment plants with oxidation pond technology
Design Population (Inhab) | Flow (L/s) | Investment Cost (USD$) | Investment cost per capita (USD$/Inhab) | Reference MAVDT (USD$/Inhab) | Reference (USD$/Inhab) | m.a.s.l. | TT
9.287 | 35,0 | 502.000 | 54 | 48 | 30 | 2.598 | (a)
6.537 | 17,0 | 695.000 | 106 | 48 | 30 | 2.625 | (a)
32.234 | 117,0 | 3.955.000 | 123 | 45 | 30 | 2.545 | (a)
19.000 | 30,0 | 1.257.500 | 66 | 45 | 30 | 2.556 | (a)
8.000 | 50,0 | 1.058.000 | 132 | 48 | 30 | 2.657 | (a)
43.000 | 70,0 | 1.914.000 | 45 | 45 | 30 | 2.556 | (a)
141.349 | 958,7 | 2.227.683 | 16 | 36 | 30 | 2.650 | (a)
5.000 | 20,0 | 516.000 | 103 | 48 | 30 | 2.563 | (a)
8.097 | 9,0 | 360.000 | 44 | 48 | 30 | 2.595 | (a)
11.936 | 20,0 | 552.000 | 46 | 45 | 30 | 2.650 | (a)
TT: Treatment technology. (a) Oxidation ponds. Source: The authors.

Table 6 shows that, out of the six MWTP projects with aerated pond technology, in terms of investment cost per capita only three MWTP projects fulfill the MAVDT recommendations and two fulfill the recommendation made in the literature of 30 USD$/inhab [30]. The range of investment cost per capita went from USD$8/inhab to USD$117/inhab, which is a considerable dispersion, since it is estimated that the cost per capita in Colombia could be from USD$1.54/inhab to USD$3.87/inhab [31]. The flow range found went from 15.0 L/s to 861.6 L/s. It was also observed that all these MWTP projects are located above 2500 m.a.s.l.

Table 6. Municipal wastewater treatment plants with aerated pond technology
Design Population (Inhab) | Flow (L/s) | Investment Cost (USD$) | Investment cost per capita (USD$/Inhab) | Reference MAVDT (USD$/Inhab) | Reference (USD$/Inhab) | m.a.s.l. | TT
12.000 | 20,0 | 1.183.000 | 99 | 45 | 30 | 2.566 | (a)
122.814 | 861,6 | 1.299.548 | 11 | 36 | 30 | 2.652 | (a)
56.458 | 355,6 | 459.500 | 8 | 38 | 30 | 2.558 | (a)
4.100 | 17,0 | 480.000 | 117 | 48 | 30 | 2.588 | (a)
5.001 | 15,0 | 348.000 | 70 | 48 | 30 | 2.652 | (a)
22.899 | 123,5 | 969.199 | 42 | 45 | 30 | 2.606 | (a)
TT: Treatment technology. (a) Aerated pond. Source: The authors.

Table 7 shows that, out of the four MWTP projects with advanced DAF technology, in terms of investment cost per capita none fulfill the MAVDT recommendations (for Colombia, from USD$20/inhab to USD$30/inhab [29]) or the literature value [33] of 40 USD$/inhab. The range of investment cost per capita went from USD$96/inhab to USD$463/inhab, which is a high dispersion, since it is estimated that the cost per capita in Colombia could be from USD$1.35/inhab to USD$3.87/inhab [32]. The flow range went from 6.5 L/s to 25 L/s. It was also observed that all these MWTP projects are located above 2500 m.a.s.l.

Table 7. Municipal wastewater treatment plants with advanced DAF technology
Design Population (Inhab) | Flow (L/s) | Investment Cost (USD$) | Investment cost per capita (USD$/Inhab) | Reference MAVDT (USD$/Inhab) | Reference (USD$/Inhab) | m.a.s.l. | TT
1.924 | 7,4 | 890.240 | 463 | 48 | 40 | 2.580 | (a)
1.746 | 6,5 | 766.224 | 439 | 48 | 40 | 2.590 | (a)
17.086 | 25,0 | 1.647.634 | 96 | 45 | 40 | 2.600 | (a)
3.324 | 12,3 | 1.031.838 | 310 | 48 | 40 | 2.586 | (a)
TT: Treatment technology. (a) Advanced DAF. Source: The authors.

Together with the information presented above, the econometric results of the model or cost function of the MWTPs are presented for the 51 MWTP projects in the department of Cundinamarca.

3.1. First scenario

This scenario considers the value of the investment measured in Colombian pesos as the dependent variable, and characteristics of the microbasins such as population, flow, BOD, TSS, N and P as independent variables. A linear model and a log-log model are considered (Table 8).

Table 8. Econometric analysis by linear model
Dependent Variable: INVERSION_COP. Method: Least Squares. Sample: 1 51. Included observations: 51.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 9,28E+08 | 9,69E+08 | 0,958069 | 0,3431
CAUDAL | 11224012 | 2067174 | 0,592118 | 0,0467
BOD | 12127754 | 3147626 | 3,852984 | 0,0004
N | 3,46E+08 | 8,79E+08 | 0,393516 | 0,6958
TSS | -2368166 | 2756035 | -0,859266 | 0,3947
P | -1,64E+08 | 79565693 | -2,058601 | 0,0453
R-squared | 0,329383 | Mean dependent var | 2,12E+09
Adjusted R-squared | 0,254870 | S.D. dependent var | 2,84E+09
S.E. of regression | 2,45E+09 | Akaike info criterion | 46,18706
Sum squared resid | 2,70E+20 | Schwarz criterion | 46,41433
Log likelihood | -1171,770 | Hannan-Quinn criter. | 46,27391
F-statistic | 4,420484 | Durbin-Watson stat | 1,535813
Prob(F-statistic) | 0,002329
Source: The authors.


The table above shows that the model or cost function is: CI = 9.28×10^8 + 11,224,012·Q + 12,127,754·BOD, which is analyzed only for the significant variables within the model, that is, those whose probability value is less than 0.05. The investment cost of the MWTPs increases with capacity: the investment cost increases by COP$11,224,012 per each L/s of capacity or flow, and in turn increases by COP$12,127,754 per each mg/L of BOD concentration.

3.2. Second scenario

This scenario considers the value of the investment in the MWTP measured in Colombian pesos as the dependent variable and the flow as the independent variable. Table 9 shows that the model or cost function is: CI = 1.81×10^9 + 4,237,381·Q. In this cost function of the MWTPs, the investment cost increases by approximately COP$4,237,000 per each L/s of capacity or flow of the MWTP.

Table 9. Econometric analysis by linear model
Dependent Variable: INVERSION_COP. Method: Least Squares. Sample: 1 51. Included observations: 51.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 1,81E+09 | 4,17E+08 | 4,342566 | 0,0001
CAUDAL | 4237381 | 2129708 | 1,989653 | 0,0522
R-squared | 0,074751 | Mean dependent var | 2,12E+09
Adjusted R-squared | 0,055868 | S.D. dependent var | 2,84E+09
S.E. of regression | 2,76E+09 | Akaike info criterion | 46,35206
Sum squared resid | 3,73E+20 | Schwarz criterion | 46,42782
Log likelihood | -1179,978 | Hannan-Quinn criter. | 46,38101
F-statistic | 3,958720 | Durbin-Watson stat | 1,626834
Prob(F-statistic) | 0,052225
Source: The authors.

3.3. Third scenario

This scenario considers the model or cost function as a log-log, where the logarithm of the investment in the MWTP in Colombian pesos is estimated against the logarithm of the flow or capacity of the MWTP. Comparing these two equations, it can be seen that the constant is an estimator of the logarithm of the investment cost, while the estimator of the flow is the exponent of the power function. Table 10 shows that the model or cost function is: log(CI) = 19.48 + 0.49·log(Q), and the cost function is expressed as: CI = 266,264,305 · Q^0.49. The value of the coefficient expresses the elasticity of the cost with respect to the flow, which is 0.49, indicating that if the flow or capacity is increased by 1%, the investment cost of the MWTP will increase by 0.49% (less than unity).

Table 10. Econometric analysis by log-log model
Dependent Variable: LOG(INVERSION_COP). Method: Least Squares. Sample: 1 51. Included observations: 51.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 19,48016 | 0,226321 | 86,07307 | 0,0000
LOG(CAUDAL) | 0,496602 | 0,069475 | 7,147923 | 0,0000
R-squared | 0,610454 | Mean dependent var | 20,88235
Adjusted R-squared | 0,600464 | S.D. dependent var | 1,140451
S.E. of regression | 0,806047 | Akaike info criterion | 2,445077
Sum squared resid | 31,83588 | Schwarz criterion | 2,520834
Log likelihood | -60,34945 | Hannan-Quinn criter. | 2,474026
F-statistic | 51,09281 | Durbin-Watson stat | 1,588287
Prob(F-statistic) | 0,000000
Source: The authors.

3.4. Fourth scenario

This scenario only contemplates logarithms in the dependent variable, in the flow and in the BOD, taking into account that the other variables are not significant. Table 11 shows that the elasticity of the flow or capacity of the MWTP is 0.45 and that of BOD is 0.30; this is the model with the best fit (adjusted R-squared of 0.70), and the cost function is: log(CI) = 17.94 + 0.45·log(Q) + 0.30·log(BOD), or, expressed otherwise: CI = 61,836,230.21 · Q^0.458459 · BOD^0.308837.

Table 11. Econometric analysis by log-log model
Dependent Variable: LOG(INVERSION_COP). Method: Least Squares. Sample: 1 51. Included observations: 51.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 17,94784 | 1,273990 | 14,08790 | 0,0000
LOG(CAUDAL) | 0,458459 | 0,075848 | 6,044410 | 0,0000
LOG(BOD) | 0,308837 | 0,252729 | 1,222012 | 0,0277
R-squared | 0,725225 | Mean dependent var | 20,88235
Adjusted R-squared | 0,705443 | S.D. dependent var | 1,140451
S.E. of regression | 0,802020 | Akaike info criterion | 2,453656
Sum squared resid | 30,87532 | Schwarz criterion | 2,567293
Log likelihood | -59,56822 | Hannan-Quinn criter. | 2,497080
F-statistic | 26,55025 | Durbin-Watson stat | 1,619977
Prob(F-statistic) | 0,000000
Source: The authors.

Fig. 2 shows the behavior of the estimated investment cost function according to the design flow or treatment flow of the MWTP. With regard to the investment cost function, it can be mentioned that in the model applied, 70% of the data adjusts to the model or investment cost function of the MWTPs according to the independent variables of flow (Q) or capacity of the MWTP and the BOD. The constant a (COP$61,836,230.21) is the cost of unit capacity, and the cost scale elasticities b for Q (0.458459) and BOD (0.308837) mean that the costs increase proportionally less than the capacity or flow of the MWTP.
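To make these functions concrete, the first-scenario linear model and the fourth-scenario log-log model can be evaluated for a hypothetical plant. The following is a worked illustration, not part of the original analysis; the flow and BOD values are illustrative assumptions.

```python
def ci_linear(Q, BOD):
    """First-scenario model: CI = 9.28e8 + 11,224,012*Q + 12,127,754*BOD (COP)."""
    return 9.28e8 + 11_224_012 * Q + 12_127_754 * BOD

def ci_loglog(Q, BOD):
    """Fourth-scenario model: CI = 61,836,230.21 * Q^0.458459 * BOD^0.308837 (COP)."""
    return 61_836_230.21 * Q**0.458459 * BOD**0.308837

Q, BOD = 50.0, 250.0  # hypothetical design flow (L/s) and influent BOD (mg/L)
print(f"linear : {ci_linear(Q, BOD):,.0f} COP")
print(f"log-log: {ci_loglog(Q, BOD):,.0f} COP")

# The elasticity of 0.458459 implies economies of scale: doubling the flow
# multiplies the estimated cost by 2**0.458459 ~ 1.37, not by 2.
print(f"cost ratio when Q doubles: {ci_loglog(2 * Q, BOD) / ci_loglog(Q, BOD):.2f}")
```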



3.5. Models per type of technology

The investment cost functions or models described below correspond to the estimates made in accordance with the wastewater treatment technologies. The variables considered in the model are the investment cost of the MWTP as the dependent variable and the flow or capacity, BOD, N, TSS and P as independent variables. The logarithm is only applied to those variables that do not contain zero or negative values.

Figure 2. Estimated investment cost and design flow. Source: The authors.

Table 12. Econometric analysis by log-log model. Activated sludge technology
Dependent Variable: LOG(INVERSION_COP). Method: Least Squares. Sample: 1 11. Included observations: 11.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 14,33332 | 3,744969 | 3,827354 | 0,0123
LOG(CAUDAL) | 0,331351 | 0,254887 | 1,299989 | 0,0403
LOG(BOD) | 2,186059 | 1,207188 | 1,810869 | 0,0299
N | -0,186291 | 0,517564 | -0,359938 | 0,0336
LOG(TSS) | -0,961049 | 1,101546 | -0,872455 | 0,0229
LOG(P) | -0,324029 | 0,329225 | -0,984219 | 0,0702
R-squared | 0,744708 | Mean dependent var | 21,29760
Adjusted R-squared | 0,689417 | S.D. dependent var | 1,079744
S.E. of regression | 0,771533 | Akaike info criterion | 0,621576
Sum squared resid | 2,976313 | Schwarz criterion | 0,838610
Log likelihood | -8,418669 | Hannan-Quinn criter. | 2,484767
F-statistic | 2,917088 | Durbin-Watson stat | 2,197722
Prob(F-statistic) | 0,132481
Source: The authors.

Table 13. Econometric analysis by log-log model. Anaerobic reactor technology
Dependent Variable: LOG(INVERSION_COP). Method: Least Squares. Sample: 1 20. Included observations: 20.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 19,33746 | 2,762663 | 6,999573 | 0,0000
LOG(CAUDAL) | 0,888219 | 0,201254 | 4,413425 | 0,0006
LOG(BOD) | -0,488460 | 0,632031 | -0,772842 | 0,0525
N | 0,113196 | 0,539280 | 0,209902 | 0,0368
LOG(TSS) | 0,301130 | 0,463820 | 0,649239 | 0,0267
P | 0,030058 | 0,045243 | 0,664375 | 0,0172
R-squared | 0,656971 | Mean dependent var | 20,23191
Adjusted R-squared | 0,534461 | S.D. dependent var | 1,288849
S.E. of regression | 0,879387 | Akaike info criterion | 0,824142
Sum squared resid | 10,82650 | Schwarz criterion | 1,122862
Log likelihood | -22,24142 | Hannan-Quinn criter. | 2,882455
F-statistic | 5,362580 | Durbin-Watson stat | 1,723489
Prob(F-statistic) | 0,005832
Source: The authors.

Table 14. Econometric analysis by log-log model. Oxidation ponds technology
Dependent Variable: LOG(INVERSION_COP). Method: Least Squares. Sample: 1 10. Included observations: 10.
Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 21,64916 | 3,565699 | 6,071506 | 0,0037
LOG(CAUDAL) | 0,594555 | 0,231520 | 2,568048 | 0,0021
LOG(BOD) | -0,281994 | 2,416211 | -0,116709 | 0,0427
N | 0,658809 | 0,981761 | 0,671049 | 0,0389
LOG(TSS) | 0,065380 | 1,471132 | 0,044442 | 0,0567
LOG(P) | -0,933924 | 2,645121 | -0,353074 | 0,0419
R-squared | 0,698632 | Mean dependent var | 21,39443
Adjusted R-squared | 0,521922 | S.D. dependent var | 0,777915
S.E. of regression | 0,640578 | Akaike info criterion | 1,230817
Sum squared resid | 1,641359 | Schwarz criterion | 1,412368
Log likelihood | -5,154084 | Hannan-Quinn criter. | 2,031656
F-statistic | 1,854560 | Durbin-Watson stat | 1,917909
Prob(F-statistic) | 0,284669
Source: The authors.

Table 12 shows that the model or cost function for the activated sludge technology is: log(CI) = 14.33 + 0.33·log(Q) + 2.18·log(BOD) - 0.18·N - 0.96·log(TSS) - 0.32·log(P), and the cost function is expressed as: CI = 1,672,784 · Q^0.33 · BOD^2.18 · e^(-0.18·N) · TSS^(-0.96) · P^(-0.32). Table 13 shows that the model or cost function for the anaerobic reactor technology is: log(CI) = 19.33 + 0.88·log(Q) - 0.48·log(BOD) + 0.11·N + 0.30·log(TSS) + 0.03·P, and the cost function is expressed as: CI = 248,263,192 · Q^0.88 · BOD^(-0.48) · e^(0.11·N) · TSS^0.30 · e^(0.03·P). Table 14 shows that the model or cost function for the oxidation ponds technology is: log(CI) = 21.64 + 0.59·log(Q) - 0.28·log(BOD) + 0.65·N + 0.06·log(TSS) - 0.93·log(P), and the cost function is expressed as: CI = 2,501,108,824 · Q^0.59 · BOD^(-0.28) · e^(0.65·N) · TSS^0.06 · P^(-0.93).
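To see what the flow exponents of these technology-specific functions imply for scale, the short sketch below compares how each model responds to a doubling of capacity, holding the water-quality terms fixed. The exponents are taken from Tables 12-14 (with signs as read from the reported t-statistics); the comparison is illustrative only.

```python
# Flow-scaling implied by the technology-specific cost functions (CI ~ Q^b):
# with the water-quality terms held constant, doubling Q multiplies cost by 2**b.
flow_exponents = {
    "activated sludge":  0.331351,  # strong economies of scale
    "oxidation ponds":   0.594555,
    "anaerobic reactor": 0.888219,  # cost grows almost linearly with flow
}
for technology, b in flow_exponents.items():
    print(f"{technology:18s}: doubling Q multiplies cost by {2**b:.2f}")
```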

For the aerated pond technology, the model or cost function is not significant because the sample is not representative; for the advanced DAF technology, it is not possible to estimate the model or cost function because of the number of observations (four).

4. Conclusions

In terms of investment cost per capita for the MWTPs, there is evidence of a great dispersion in the data obtained. This is due to the fluctuation of the prices of materials, equipment and civil works at the time of estimating the budget, which ostensibly affects the calculated value (investment cost per capita) in comparison with the international and Latin American literature. For the most part, the investment costs are assumed indirectly by the inhabitants of the populations benefited, through financing by the national government, and afterwards these costs are assumed directly through the charges for the operation and maintenance of the MWTP. The investment cost functions of the MWTPs formulated here are an appropriate tool for the selection of the


decontamination model applied and, in addition, serve as an economic indicator for decision making and even for the evaluation of several treatment alternatives. This is because the investment cost of a MWTP is not simply proportional to the design flow, but rather depends mainly on the characteristics of the raw wastewater to be treated; consequently, the greater the quantity of contaminants removed from the wastewater, the greater the environmental damage prevented and, therefore, the greater the benefit from the wastewater treatment process, because the entire treatment train of the MWTP is utilized. The cost functions obtained to estimate long-term economies of scale in the MWTPs of the department of Cundinamarca are specific to the capacities or flows of the MWTPs considered in the analysis. The estimated cost scale elasticities (Q: 0.458459 and BOD: 0.308837) mean that the investment costs rise more slowly than the capacity or flow of the MWTPs analyzed. Cost functions were obtained for the activated sludge, oxidation pond and anaerobic reactor technologies, while, given the number of observations or data for MWTP projects with aerated pond or advanced DAF technologies, the cost function was not significant in the estimation of the model.

Acknowledgements

The authors wish to thank Empresas Públicas de Cundinamarca S.A. E.S.P. of the Government of Cundinamarca, Colombia, for the information provided on the MWTP investment projects within the Departmental Water Plan (PDA for its acronym in Spanish) as part of the agreement signed with the Universidad Distrital Francisco José de Caldas. Likewise, the authors express their gratitude to tenured professor Vidal Fernando Peñaranda Galvis, of the Environmental and Natural Resources Faculty of the Universidad Distrital Francisco José de Caldas, for his review of the document.

References

[1] Molinos, M., Hernández, F. and Sala, R., Economic feasibility study for wastewater treatment: A cost-benefit analysis. Science of the Total Environment, 408 (20), pp. 4396-4402, 2010.
[2] Rodríguez, J.P., Selección técnico económica del sistema de depuración de aguas residuales: Aplicando la evaluación de la descontaminación hídrica. Tecnología del Agua, 306 (29), pp. 22-21, 2009.
[3] Posada, E., Mojica, D., Pino, N., Bustamante, C. and Monzón, A., Establecimiento de índices de calidad ambiental de ríos con bases en el comportamiento del oxígeno disuelto y de la temperatura: Aplicación al caso del río Medellín, en el Valle de Aburrá. Revista DYNA, 80 (181), pp. 192-200, 2013.
[4] Restrepo, G., Ríos, L., Marín, J., Montoya, J. and Velásquez, J., Evaluación del tratamiento fotocatalítico de aguas residuales industriales empleando energía solar. Revista DYNA, 75 (155), pp. 145-153, 2008.
[5] United States Environmental Protection Agency - USEPA, Construction cost for municipal wastewater treatment plants: 1973-1977. Washington: U.S. Environmental Protection Agency, EPA/430/977014, 1978.
[6] United States Environmental Protection Agency - USEPA, Construction cost for municipal wastewater treatment plants: 1973-1978. Washington: U.S. Environmental Protection Agency, EPA/430/980003, 1980.
[7] Friedler, E. and Pisanty, E., Effects of design flow and treatment level on construction and operation costs of municipal wastewater treatment plants and their implications on policy making. Water Research, 40 (20), pp. 3751-3758, 2006. DOI: 10.1016/j.watres.2006.08.015
[8] Tsagarakis, K.P., Mara, D.D. and Angelakis, A.N., Application of cost criteria for selection of municipal wastewater treatment systems. Water, Air and Soil Pollution, 142 (1-4), pp. 187-210, 2003. DOI: 10.1023/A:1022032232487
[9] Singhirunnusorn, W. and Stenstrom, M.K., A critical analysis of economic factors for diverse wastewater treatment processes: Case studies in Thailand. Sustainable Environmental Research, 20 (4), pp. 263-268, 2010.
[10] Hernández-Sancho, F., Molinos-Senante, M. and Sala-Garrido, R., Cost modelling for wastewater treatment processes. Desalination, 268 (1), pp. 1-5, 2011. DOI: 10.1016/j.desal.2010.09.042
[11] Revollo, D. and Londoño, G., Análisis de las economías de escala y alcance en los servicios de acueducto y alcantarillado en Colombia. Desarrollo y Sociedad, 66, pp. 145-182, 2010.
[12] Valenzuela, L.C., Evaluación económica y metodología de minimización de costos para proyectos de sistemas de agua potable. Desarrollo y Sociedad, [Online]. 19, pp. 147-171, 1987. Available at: http://www.academia.edu/11320913/Evaluaci%C3%B3n_Econ%C3%B3mica_y_Metodolog%C3%ADa_de_Minimizaci%C3%B3n_de_Costos_para_Proyectos_de_Sistemas_de_Agua_Potable
[13] Engin, G.O. and Demir, I., Cost analysis of alternative methods for wastewater handling in small communities. Journal of Environmental Management, 79 (4), pp. 357-363, 2006. DOI: 10.1016/j.jenvman.2005.07.011
[14] Aristizábal, O.L., Referencias para costos de inversión en plantas de tratamiento de aguas residuales, en: Pinilla, J.I., Las ciudades y el agua: Ingenieros de EPM investigan sobre los sistemas hídricos urbanos. Medellín: Universidad de Medellín, 2011, pp. 337-348.
[15] Buitrago, C.A. and Giraldo, E., Evaluación técnica y económica de sistemas de tratamiento de aguas residuales. Bogotá: Universidad de los Andes, 1994.
[16] Banco Interamericano de Desarrollo - BID, Los objetivos de desarrollo del milenio en América Latina y el Caribe: Retos, acciones y compromisos. Washington D.C.: Banco Interamericano de Desarrollo, 2004.
[17] Corporación Andina de Fomento - CAF, Caminos para el futuro: Gestión de la infraestructura en América Latina. Caracas: Corporación Andina de Fomento, 2009.
[18] Organización Panamericana de la Salud - OPS, Indicadores básicos: Situación de salud en las Américas. Washington: Organización Panamericana de la Salud, 2012.
[19] Sánchez, J., Estudio sectorial de los sistemas de acueducto y alcantarillado en Colombia. Memorias 56 Congreso Internacional: Agua, Saneamiento, Ambiente y Energía Renovable, ACODAL, Tomo I, [Online]. 2013, pp. 1-34. Available at: http://www.acodal.org.co/acodiario2012/imagenes/docweb56/Memorias%20Congreso%20Acodal/Presentaciones/Jaime%20S%e1nchez%20De%20Guzm%e1n%20-%20Superservicios.pdf
[20] Marín, C. and Serna, M., Análisis del costo per cápita de ampliación de cobertura a nivel nacional de los servicios de acueducto y alcantarillado. Revista Regulación de Agua Potable y Saneamiento Básico, 16, pp. 153-190, 2011.
[21] Asociación Colombiana de Ingeniería Sanitaria y Ambiental - ACODAL, Agua potable y saneamiento: Colombia se rezaga en el contexto latinoamericano. Bogotá: ACODAL, 2013.
[22] Ministerio de Ambiente, Vivienda y Desarrollo Territorial - MAVDT, Plan nacional de manejo de aguas residuales municipales en Colombia. Bogotá D.C.: MAVDT, 2004.
[23] Ministerio de Ambiente, Vivienda y Desarrollo Territorial - MAVDT, Política nacional para la gestión integral del recurso hídrico. Bogotá D.C.: MAVDT, 2010.
[24] Superintendencia de Servicios Públicos Domiciliarios, Informe técnico sobre sistemas de tratamiento de aguas residuales en Colombia - línea base 2010. Bogotá D.C.: Superintendencia de Servicios Públicos Domiciliarios, 2012.
[25] Asociación Colombiana de Ingeniería Sanitaria y Ambiental - ACODAL, Aguas residuales y ciudades. Bogotá: ACODAL, 2013.
[26] Vergel, G., Metodología. Un manual para la elaboración de diseños y proyectos de investigación. Compilación y ampliación temática. Barranquilla: Publicaciones Corporación UNICOSTA, 1997.
[27] Hurtado, J., Metodología de la investigación holística. Caracas: Fundación SYPAL, 2000.
[28] Gallego, J. and Pinilla, M., Modelo econométrico básico: Teoría y conceptos. Madrid: Académica Española, 2012.
[29] Salas, D., Zapata, M. and Guerrero, J., Modelo de costos para el tratamiento de las aguas residuales en la región. Scientia et Technica, 13 (37), pp. 591-596, 2007.
[30] Von Sperling, M., Comparison among the most frequently used systems for wastewater treatment in developing countries. Water Science and Technology, 53 (3), pp. 59-62, 1996. DOI: 10.1016/0273-1223(96)00301-0
[31] Ghazy, M.R., Dockhorn, T. and Dichtl, N., Economic and environmental assessment of sewage sludge treatment processes application in Egypt. International Water Technology Journal, 1 (2), pp. 1-17, 2011.
[32] Sarmiento, G., Documento de manejo de aguas residuales urbanas y tasas retributivas. Memorias Seminario Internacional sobre Tratamiento de Aguas Residuales y Biosólidos, Fundación Universitaria de Boyacá, Tomo I, 2000.
[33] Fresenius, W. and Schneider, W., Manual de disposición de aguas residuales. Lima: Centro Panamericano de Ingeniería Sanitaria y Ciencias Ambientales (CEPIS/OPS/OMS), 1991.

J.P. Rodríguez-Miranda is a Sanitary and Environmental Engineer, MSc. in Environmental Engineering and PhD (c) in Engineering. He is an associate professor at the Universidad Distrital Francisco José de Caldas, Bogotá, Colombia, and director of the research group AQUAFORMAT.

C.A. García-Ubaque is a Civil Engineer and PhD in Engineering. He is an associate professor at the Universidad Distrital Francisco José de Caldas, Bogotá, Colombia, and head of the research group GIICUD.

J.C. Penagos-Londoño is a Civil Engineer and MSc. in Civil Engineering. He is Deputy General Manager of Empresas Públicas de Cundinamarca S.A. E.S.P.



Fast pyrolysis of biomass: A review of relevant aspects. Part I: Parametric study

Jorge Iván Montoya a, Farid Chejne-Janna a & Manuel Garcia-Pérez b

a Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. fchejne@unal.edu.co, jimontoy@unal.edu.co
b Washington State University, Pullman, USA. mgarcia-perez@wsu.edu

Received: July 31st, 2014. Received in revised form: December 10th, 2014. Accepted: July 13th, 2015.

Abstract Recent years have witnessed a growing interest in developing biofuels from biomass by thermochemical processes like fast pyrolysis as a promising alternative to supply ever-growing energy consumption. However, the fast pyrolysis process is complex, involving changes in phase, mass, energy, and momentum transport phenomena which are all strongly coupled with the reaction rate. Despite many studies in the area, there is no agreement in the literature regarding the reaction mechanisms. Furthermore, no detailed universally applicable phenomenological models have been proposed to describe the main physical and chemical processes occurring within a particle of biomass. This has led to difficulties in reactor design and pilot industrial scale operation, stunting the popularization of the technology. This paper reviews relevant topics to help researchers gain a better understanding of how to address the modeling of biomass pyrolysis. Keywords: Fast pyrolysis, biomass, heating rate, particle size, temperature, mineral matter, kinetics, modeling.

Pirólisis rápida de biomasas: Una revisión de los aspectos relevantes. Parte I: Estudio paramétrico Resumen Existe gran interés en el desarrollo de biocombustibles a partir de biomasa mediante procesos termoquímicos, que ha ido creciendo en los últimos años como alternativa promisoria para satisfacer parcialmente el consumo creciente de energía. Sin embargo, el proceso de pirolisis rápida es complejo, e involucra cambios de fase y fenómenos de transferencia de masa, energía, cantidad de movimiento, fuertemente acoplados con las tasas de reacción. A pesar de numerosos estudios realizados en el área, no hay consenso respecto a mecanismos de reacción, ni se han propuesto modelos fenomenológicos detallados para describir los procesos físicos y químicos que ocurren dentro de una partícula de biomasa, esto ha traído dificultades en el diseño y operación de reactores a escala piloto e industrial, dando lugar a la popularización de la tecnología. Este trabajo presenta un estudio de diferentes líneas de investigación, para ayudar a los investigadores a obtener una mejor comprensión del tema. Palabras clave: Pirólisis rápida, biomasa, tasa de calentamiento, tamaño de partícula, temperatura, material mineral, cinética, modelado.

1. Introduction The worldwide estimated energy consumption in recent years is of the order 515 EJ yr -1 (1018 J/year), 80% of which is supplied from petroleum fuels. This consumption tends to increase over time due to phenomena associated with population growth and increasing demands from emerging countries such as Brazil, Russia, India and China (BRIC) [1,2]. Biomass, solar radiation, wind, water, and geothermal are potential alternative energy sources that can be used in the near future. However, biomass is the only renewable

source of carbon for biofuels production and the only alternative available which can be quickly inserted into the biofuels markets matrix [3]. Another important developing country, Colombia, has an availability of about 72 million tons/year of biomass mainly consisting of sugarcane bagasse, banana, corn, coffee, rice, and oil palm waste. The energy potential from these is close to 332 million GJ yr-1, equivalent to 7.4% of the total energy consumption in the country [4]. The latter statistic shows that there is a realistic opportunity to use waste biomass for biofuels production in Colombia.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 239-248. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.44701



Figure 1. Van Krevelen diagram. Source: [3]

Figure 2. General structure of lignocellulosic materials. Source: [16]

Fast pyrolysis of agro-industrial residues for bio-oil production is a promising technology. The development of this technology is relevant and appropriate because it does not compete with food security, unlike the transesterification and fermentation processes commonly used to produce ethanol and biodiesel. Additionally, it can be easily incorporated into existing petroleum refining infrastructure, achieving a reduction in operating costs [5-8]. Pyrolysis processes involve heating biomass feedstocks to temperatures between 400-600 °C in an inert or oxygen-free atmosphere to produce bio-oil, bio-char, and non-condensable gases with high calorific value (rich in CO, CH4, H2) [9]. Bio-oil can be integrated into refineries as a raw material; bio-char can be used in the pyrolysis process to supply reaction heat or sold for other purposes; the gas can be recirculated into the reactor to provide heat, or used in subsequent processes like catalytic methane reforming or Fischer-Tropsch synthesis [10]. The pyrolysis char product is a solid residue similar to high-rank coal, as seen in the Van Krevelen diagram in Fig. 1; as pyrolysis takes place, the volatiles are released and the bio-char is obtained with a high carbon content and a low oxygen/carbon ratio similar to high-rank coals. Biomass is made up of a mixture of polymers with different physicochemical structures, so thermal decomposition proceeds by stages. Initially, hemicellulose, a weak structure of lower molecular weight, breaks down in the temperature range 200-300 °C, followed by cellulose, which decomposes between 300-450 °C, and finally lignin, which breaks down between 250-500 °C and is the major contributor to char production at the end of pyrolysis [11-14]. Depending on operating conditions, pyrolysis can be classified as slow pyrolysis or fast pyrolysis. The former is used for obtaining char as the main product (50-75% yield), bio-oil (15-20% yield) and non-condensable gases (20-30% yield). On the other hand, fast pyrolysis processes mainly produce bio-oil (50-70% yield), whose yield is limited by the type of biomass, operating temperature, gas residence time in the reactor, heating rate, particle size, and mineral content.

High heating rates, more than 1000 °C s-1, are required to achieve high selectivity towards the production of bio-oil. A particle size of between 1-5 mm is required to obtain high heating rates, together with reactor temperatures between 400-600 °C and gas residence times of less than 1 s. There is extensive and varied information about what type of biomass to use, operating conditions, and reactor type (fixed bed, fluidized bed, rotating cone, ablative reactor, and others), and these have not been agreed upon among the different authors. Therefore, the main parameters affecting bio-oil production by fast pyrolysis of biomass (operational parameters such as temperature, heating rate, particle size, gas and solid residence time, and initial biomass moisture, and structural parameters such as biomass type, crystallinity, and mineral material content) are described in this first review.

2. Biomass Structure

As shown in Fig. 2, biomass can be defined as a heterogeneous mixture of polymers mixed with a small fraction of inert material. The organic fraction contains three major macromolecules: cellulose, hemicellulose, and lignin. Some biomasses also have a lower amount of lipids, pectin, and extractives, which do not exceed 10% w w-1 [15]. Cellulose corresponds to about 40-60% of the total biomass weight, hemicellulose to 15-25%, and lignin to 15-25% [16-18]. Cellulose is a linear, mostly crystalline polymer whose structural unit is glucose, linked through β-D glycosidic bonds. These units stick together to form chains with more than 10,000 structural units (a.k.a. the degree of polymerization). Cellulose polymers form fibers inside biomass, which provide structure to the cell walls. Hemicellulose is an amorphous polymeric structure, shorter than cellulose (degree of polymerization around 100) and branched; it is made up mainly of sugars with 5 and 6 carbons per unit (e.g. xylose, glucose). On the other hand, lignin is a complex non-crystalline macromolecule composed of a variety of aromatic constituents such as sinapyl, coniferyl, and coumaryl alcohols (see Fig. 2) linked by β-ether linkages [18]. Lignin and hemicellulose are responsible for keeping the cellulose fibers attached.


Mineral matter content depends on the kind of biomass; it comprises between 2-25% of the total solid weight and commonly consists of minerals such as Na, K, Ca, Mg, Mn, Co, Zn, and Cu, present as oxides or as salts such as chlorides, carbonates, phosphates and sulphates [19-20]. These minerals play an important role in the thermal decomposition of biomass, acting as catalysts of some reactions such as biomass dehydration and catalytic cracking of the volatiles.

2.1. Physicochemical nature of biomass

It has been observed under slow heating rates (1-100 °C/min) [21-22] that the maximum weight loss for hemicellulose is 80% (w w-1) at 258 °C; for cellulose, it is close to 94% (w w-1) at 400 °C; and for lignin only 54% (w w-1) is decomposed at 900 °C. There are few studies determining how volatile production changes due to other compounds in the biomass (proteins, fatty acids, sugars). Xiu-John et al. [22] confirmed through thermogravimetric studies that the presence of extractives catalyzes reactions that increase aldehydes and acid compounds (acetic acid, formic acid), while their absence increases water and carbon dioxide production. The presence of extractives in biomass favors bio-oil production (they can be easily volatilized and condensed), but also modifies its quality; for example, the fatty acids present in biomass favor the formation of bio-oil with high viscosity, less oxygen content, and high calorific value. Other structural parameters that have been scarcely studied, but are of great importance, include the degree of crystallinity, bond orientation, and degree of polymerization of the cellulose. These parameters affect the chemical composition of the volatiles released during pyrolysis. For example, Ponder et al. [23] studied the effects of the orientation and position of glucan links (cellulose constituent units) on the production of volatiles. They observed that position and bond orientation do not have a significant effect on levoglucosan yields (the main compound obtained from the devolatilization of cellulose). It is important to note that this study is only an approximation, since they used low and medium molecular weight glycosides (150 Dalton) whose behavior is different from that of cellulose molecules (degree of polymerization > 1000). Mettler et al. [24] demonstrated that a high degree of polymerization in cellulose increases the production of levoglucosan and decreases the formation of char. Researchers believe that small molecules or species with low degrees of polymerization are closed chemical structures like glucose rings that break off and form new linear structures, which then undergo decomposition reactions producing light compounds, whereas it is easier to break the glycosidic bonds joining the glucose units present in larger molecules. The maximum stress points, and therefore those most feasible to fracture during pyrolysis, are precisely those located at the interface of the amorphous and crystalline structures [25]. Therefore, the maximum temperature required for cellulose decomposition is greater for more crystalline structures [26]. This is mainly due to the extra energy needed to destroy the structure formed in the crystal lattices by hydrogen bonds. In addition, the ring structure of glucose is easily maintained in crystal structures due to the network formed by hydrogen bonds. The crystalline lattice acts as a thermal energy sink; therefore, the fragmentation of amorphous cellulose produces more oligomers (e.g. cellobiosan, cellotriosan) than monomers (glucose, levoglucosan, cellobiosan).

2.2. Particle size

All types of biomass are poor heat conductors, so it is easy to find a temperature profile inside a particle when it is submitted to fast heating. This profile is more remarkable in bigger particles; thus, biomass size and shape affect the residence time of volatiles inside the particle, favoring cracking reactions that reduce the yield of condensable gases [15,17-28]. Therefore, secondary reactions between char and volatiles take place more easily in bigger particles, which increases the yield of char at the expense of bio-oil production [28]. In general, particle size is directly related to the heating rate, and in fast pyrolysis processes the formation of bio-oils is favored by high heating rates, which are obtained with small particle sizes (<1 mm) [15,20,28-32]. It should be clarified that in industrial or pilot-scale processes, the costs associated with grinding to reduce the biomass size are high, and this may limit the profitability of the process. Bridgwater et al. [32] have reviewed the particle size specifications for different pyrolysis technologies: for example, particle sizes of less than 200 mm are recommended for jet rotary cone technology, less than 6 mm for fluidized bed and circulating bed reactors, and less than 10 cm for rotary disc ablative processes. In the literature there is no unanimous agreement on the ideal particle size to increase the production of bio-oil. Shen, J. et al. [33] found that bio-oil yields obtained from mallee (Australian eucalyptus) in a fluidized bed reactor decrease as the particle diameter increases from 0.3 to 1.5 mm. Beige et al. [35] reported that the maximum bio-oil yield obtained from fixed-bed pyrolysis of safflower is reached with a 0.42 mm particle size. Onay et al. [36] and Nurul et al. [37] have reported ideal particle sizes of between 0.85 to 200 mm for fixed-bed pyrolysis of rapeseed, whereby bio-oil yields of up to 60% (w w-1) are obtained.
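The impact of particle size on internal gradients can be illustrated with an order-of-magnitude estimate. The sketch below computes a Biot number and an internal conduction time for three particle sizes; all property values are assumed, generic figures for wood-like biomass in a fluidized bed, not data from the studies cited above.

```python
# Order-of-magnitude check of internal temperature gradients in a particle.
k   = 0.1     # thermal conductivity, W/(m*K)      (assumed, wood-like)
rho = 500.0   # particle density, kg/m^3           (assumed)
cp  = 1500.0  # specific heat capacity, J/(kg*K)   (assumed)
h   = 500.0   # external heat transfer, W/(m^2*K)  (assumed, fluidized bed)

for d_mm in (0.1, 1.0, 5.0):
    L   = (d_mm / 1000.0) / 2.0   # characteristic length ~ particle radius, m
    Bi  = h * L / k               # Biot number: internal vs. external resistance
    tau = rho * cp * L**2 / k     # internal conduction time scale, s
    print(f"d = {d_mm:4.1f} mm -> Bi = {Bi:5.2f}, conduction time ~ {tau:7.3f} s")
```

With these assumed values, only sub-millimetre particles keep the Biot number near or below unity, which is consistent with the small particle sizes (<1 mm) recommended above for fast pyrolysis.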


2.3. Initial moisture content of biomass

The initial moisture content of some biomasses can be high, close to 90% w w-1, which requires an increased energy supply, affecting the energy efficiency of the process [33] and the costs associated with biomass drying. The most economical and widely used energy source for drying biomass is solar radiation, which can reduce the moisture content to values between 12 and 3% w w-1, depending on atmospheric conditions. The quality and yield of the bio-oil recovered in fast pyrolysis processes are directly related to the initial moisture content of the biomass. A high moisture content produces a bio-oil with low viscosity and high stability towards the decomposition of anhydrous sugars and the dissolution of polar fragments, and it physically looks like a homogeneous phase [34]. However, the direct


use is restricted due to ignition and combustion problems in burners and boilers caused by the high water content.

2.4. Mineral material content

Biomasses contain traces of mineral matter (K, Na, Ca, Zn, Cu, P, Mg, Mn, ...), usually oxides or salts such as chlorides, sulphates, carbonates and phosphates [35]. The proportions of these minerals depend on the kind of biomass and on the storage, collection, drying, and grinding conditions. The content of inorganic material is an important parameter for the study of side reactions; generally, as the mineral content in biomass increases, the bio-oil yield decreases and gas and char production increases [11,15,16,20,31-32,36-37]. The latter is due to the catalytic effect of the mineral matter on dehydration and decarboxylation reactions. In recent years, numerous works have been published aimed at understanding the effect of mineral matter on final product distribution [19,35,38-41]. A purely theoretical study to determine the effect of inorganic material on the fast pyrolysis process is not feasible yet. Raveendran et al. [38] evaluated the effect of mineral content on the yields of volatiles and char for 20 different biomasses, which were studied with and without a demineralization step (acid wash). Comparing the experimental results, it was found that higher char yields are obtained for the biomass without washing, in comparison with demineralized biomasses. In corn cob pyrolysis, washing produced a 57% decrease in the char yield and a 21% increase in the oil yield. Despite the good results obtained, testing was performed in a fixed bed reactor with the temperature set at 500 °C and particle sizes between 100 and 250 mm (10 and 25 cm). Under these conditions, intra-particle transfer phenomena are likely [15,20,28], and it is unclear whether the variations in the results are due to the catalytic effects of the minerals or are associated with resistance to heat and mass transfer. Another study was undertaken by Patwardhan et al. [39], who evaluated the effect of mineral matter on the yields of char and volatiles obtained during pyrolysis of Avicel microcrystalline cellulose. In contrast to the experiments carried out by Raveendran [38], the authors used particle sizes of 50 microns to minimize the effects of heat and mass transfer, and the experiments were conducted in a Frontier free-fall pyrolysis micro-reactor coupled to a gas chromatograph, ensuring high heating rates (5000 °C/s). In this study, the authors impregnated cellulose with solutions of 0.5 to 5% w/w of salts such as NaCl, KCl, MgCl2·6H2O, CaCl2, Ca(OH)2, Ca(NO3)2, CaCO3, and CaHPO4 to evaluate which of these minerals exerts the greater influence on the global production of volatiles: levoglucosan, acetol, furaldehyde, 5-HMF (hydroxymethylfurfural), glycolaldehyde, and formic acid. They determined that low concentrations, 0.5% w/w, increase the char yield by up to 10% and also increase the yield of low molecular weight species such as acetol, glycolaldehyde, and formic acid, while levoglucosan levels decrease. They found that the minerals with the greatest impact on volatile yields were, in order: K⁺, Na⁺, Ca²⁺, Mg²⁺, Cl⁻, NO₃⁻, OH⁻, CO₃²⁻, PO₄³⁻. Despite the fact that the experiments were carefully prepared, these behaviors cannot be extrapolated directly to biomass, because the interaction with cellulose can be

changed by the presence of lignin and hemicellulose, and the cellulose in biomass has different degrees of crystallinity that may interact differently with minerals. Yang, H. et al. [35] studied the effect of KCl, K2CO3, Na2CO3, CaMg(CO3)2, Fe2O3 and Al2O3 on the yield of char produced by slow pyrolysis of cellulose, hemicellulose, and pure lignin. They used weight ratios of 0.1 between the inorganic and the organic component (cellulose, hemicellulose, and lignin). According to the authors, none of the minerals studied, except K2CO3, had any appreciable effect on the yield of char, in contrast with the work reported by Shimomura et al. [42], Patwardhan et al. [19], Richards et al. [43], Nik-Azar et al. [44], and Zsuzsa et al. [45]. It has also been found that the presence of alkali and alkaline earth metals promotes dehydration reactions of cellulose and lignin. These reactions decrease the levoglucosan yield and increase char production. In contrast, adding anions (chlorides, nitrates, sulfates) increases the levoglucosan yield and decreases char yields [45]. For example, sodium restricts both the cellulose and hemicellulose transglycosylation reactions, and facilitates demethoxylation, demethylation, and dehydration of lignin [46]. The presence of alkali metals improves the interaction with functional groups -COOH and -OH, forming alkali-oxygen clusters that promote cracking reactions [47], and the presence of silicon also decreases the production of volatiles [43]. To study the effects of mineral matter on the volatile and char yields, samples of biomass (natural or synthetic) are subjected to washing with deionized water and acidic solutions (HCl, H2SO4, HNO3, H3PO4), or to the addition of minerals in solution (NaNO3, KNO3, Ca(NO3)2, etc.) [48-49]. These methods of pre-treatment of raw materials have limitations, which makes it difficult to explain the results. Washing with strong acids, for example, dissolves hemicellulose and cellulose (hydrolysis) almost entirely [45,49]; therefore, the results obtained are due to combined effects, and it would not be clear which of the two effects exerts the greater influence on yields. Techniques for mineral addition by impregnation are simple physical adsorptions on the particle surface; therefore, the mineral dispersion is different from that found in virgin biomass (where minerals are distributed throughout the entire volume). Another difficulty with impregnation techniques is that cations and anions are added simultaneously to the biomass (impregnation uses saline solutions), and in these cases the cations and anions may have different interactions with the volatiles and char [53]. In some biomasses, not all minerals can be removed by washing (acid, alkaline, or neutral). These metals can be chemically bonded to the organic matrix, forming associations with oxygen-rich functional groups (-COOH, -COH, -OH) or lignin phenolic groups [49,54]. In these cases, ion exchange techniques have been effective to demineralize biomasses or to study the effect of an isolated cation (with anions removed). In this technique, the sample is initially washed with an acid solution to remove the unbound minerals present in the virgin biomass. Then, to remove chemically bonded minerals, the sample is immersed in a concentrated saline solution (nitrates, sulphates) in which the anions exert a strong attraction on the electrophilic cations


bound to the biomass. This facilitates the exchange of cations between the salt and the biomass. Finally, washing with distilled water removes all anions and cations on the surface of the biomass. Few studies have been reported with this methodology [49,52]. Although it may isolate the effects of specific anions and assess the impact of a single metal, the pre- and post-treatment processes alter the original biomass structure; therefore, the results can be limited and must be explained carefully.

2.5. Volatiles and solids residence time

It is important to take into account the residence time of volatiles and solids inside the reactor to ensure high yields of bio-oil. It is generally accepted that the residence time of volatiles must be low (a few seconds) to minimize cracking reactions [19,29,33]. The residence times of volatiles and solids are usually different. High biomass residence times are needed to ensure complete devolatilization, while the volatiles residence time should be short to minimize secondary reactions [14,41,55]. These times are directly related to the operating conditions (carrier gas flow) and the technology used in the pyrolysis process; for instance, the residence time of solids in fixed bed reactors is high (t → ∞), while in fluid bed reactors the residence time of both solids and volatiles tends to be very short (a few seconds), depending on the height of the reactor and the gas flow, as the estimate below illustrates. From the operational viewpoint, the residence time of volatiles and solids can be manipulated with the carrier gas flow or with vacuum (Pyrovac technology). Short residence times of volatiles inside the reactor, however, may be counterproductive for several reasons. Firstly, they can promote solids entrainment (high carrier gas flow) and therefore reduce biomass conversion, promote the obstruction of cyclones and even contaminate the bio-oil recovered in the condensation steps. Besides this, the particles may not achieve high heating rates because the reactor is cooled by forced convection. The residence time of volatiles is also associated with heat exchanger dimensions; very low residence times (inside the reactor) imply designs with large condenser volumes or multi-stage cooling systems, which bring higher investment costs.
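As a rough operational estimate (with assumed, illustrative reactor values, not a cited design), the volatiles residence time in a fluidized bed scales as the freeboard height divided by the superficial gas velocity:

```python
# Rough estimate of volatiles residence time in a fluidized bed: t ~ H / u.
H = 0.5   # freeboard height above the bed, m  (assumed)
u = 0.6   # superficial gas velocity, m/s      (assumed)
print(f"volatiles residence time ~ {H / u:.2f} s")  # ~0.8 s, within the <1 s target
```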

2.6. Reaction temperature

The devolatilization process is sequential and depends on the temperature reached by the particle (see Fig. 3). The first stage, between 20-120 °C, corresponds to water evaporation from the biomass; subsequently, between 120-300 °C, there is a region with no appreciable biomass weight loss, wherein small quantities of light gases such as CO, CO2 and steam are released, coming mainly from dehydration and decarboxylation reactions of R-OH groups belonging to hemicellulose and lignin [18,28,56,57]. As for cellulose, it has been demonstrated [26,31,58-60] that between 200-350 °C the depolymerization reactions give rise to oligomers and anhydrous sugars (levoglucosan, cellobiosan, cellotriosan, glucose, etc.), known by many researchers as "active cellulose", "molten cellulose" or simply the "intermediate liquid compound" [61].


Figure 3. Stages of the thermal decomposition of biomass. Source: The authors

The greatest weight loss occurs between 300 and 400 °C (about 80%), corresponding to the maximum rate of volatiles release. In this stage, random fragmentation of cellulose, hemicellulose, and glycosidic bonds generates volatile compounds with a high oxygen content, leaving a carbonaceous residue known as char [27,50]. At temperatures above 400 °C, CO and CO2 are released by depolymerization reactions from the lignin-rich aromatic carbonaceous matrix. The effect of temperature on bio-oil production is well understood and widely explored in the literature. It has been found that the maximum bio-oil yield is obtained between 400-500 °C, depending on the type of biomass [8,19]. At higher temperatures, volatile cracking reactions are promoted, which reduce the yield of bio-oil and increase the production of non-condensable gases [28,33,62,63]. The effect of temperature on the production of bio-oil by fast pyrolysis of cellulose (Avicel) and maple in fluidized bed and entrained flow reactors was studied by Scott et al. [64]. They held the particle size constant at 100 microns and the volatiles residence time at 500 ms, with process temperatures between 400-700 °C. The authors found that the maximum production of bio-oil was achieved in the fluidized bed reactor at 450 °C, with yields of 80% and 90% for maple and cellulose respectively, while for the entrained flow reactor the yields were 70% and 60%, respectively, at 600 °C. Heo et al. [65] pyrolyzed sawdust with a particle diameter of 0.7 mm in a fluid bed reactor to estimate the temperature effect on bio-oil yields. They found that the best performance is achieved at 450 °C (60% w/w). Similar results are reported in [51-53]. There is a parabolic behavior of bio-oil production with temperature: between 400-450 °C the yield of bio-oil increases because the rate of biomass devolatilization increases (reflected in char reduction), while at temperatures > 450 °C the bio-oil yield decreases because of volatile cracking (reflected in higher yields of non-condensable gases). Other studies have reported the effect of temperature on the yield of pyrolysis bio-oils [61,32,53]. In addition to altering the production of bio-oil, changes in temperature affect the composition of the volatiles released during devolatilization [54]. At low temperatures (<300 °C), most volatiles derive from the decomposition of hemicellulose and cellulose links, resulting in levoglucosan, levoglucosenone, hydroxymethylfurfural, acetic acid, acetone, guaiacol, glyoxal,


methanol, and formic acid. Fu et al. [55] demonstrated the presence of formic acid, methanol, ethane, ethylene, formaldehyde, acetone, hydrogen cyanide, carbon monoxide, and carbon dioxide during the pyrolysis of cotton stalk, corn, and rice in a temperature range of 300-400 °C. At temperatures above 400 °C, they found benzene, carboxylic acids, phenols, p-cresol, furans, and furfural in noteworthy amounts (>1% w/w) [19]. As the temperature rises, the yield of these oxygenated volatile compounds decreases and more thermally stable polyhydroxy aromatic (PHA) compounds begin to be produced. For instance, the percentage of aldehydes, alcohols, paraffins, and acetone gradually decreases at temperatures between 580 and 800 °C, and new compounds are generated, such as benzene, naphthalene, cresol, toluene, and PHAs such as pyrene, phenanthrene, and anthracene, among others [20,56-57]. However, these temperatures are higher than the conditions normally used in fast pyrolysis processes (500-550 °C). Despite all the studies conducted to evaluate the effect of temperature on the yields of char, bio-oil, and non-condensable gases, there is ambiguity in the definition of the temperature of pyrolysis. Unfortunately, in both laboratory-scale and industrial-scale pyrolysis equipment, it is not possible to directly measure the temperature of the biomass particles; it is generally assumed to be equal to the reactor temperature [28,58], or in some cases is estimated by mathematical modeling [59-60]. However, these assumptions are in most cases far removed from reality, with temperature mismatches (thermal lag) as high as 100 °C. For slow pyrolysis conditions with small particle sizes (less than 1 mm), the assumption is valid due to the longer heating times, which make it easy to achieve thermal equilibrium between the biomass particles and the surroundings [61-63]. In general, the temperature lags are greater when the heating is undertaken at high temperatures or high heating rates. This thermal lag causes errors in estimating the actual particle temperature, which can lead to overestimation of kinetic parameters such as the activation energy and the reaction rate coefficient [62-63].

2.7. Heating Rate

The heating rate of biomass particles is perhaps the main parameter that distinguishes slow pyrolysis processes from fast pyrolysis; it determines, in other words, the time it takes for the biomass to reach the reaction temperature. It has been reported that heating rates of the order of 1-100 °C/min correspond to slow pyrolysis processes [15,20], while heating rates higher than 1000 °C/min are required for fast pyrolysis processes [64-65]. High heating rates promote cellulose and hemicellulose depolymerization reactions and minimize the residence time of volatiles inside the particle, and hence secondary reactions such as cracking; the condensable gases are released quickly [66], achieving high yields of bio-oil and the lowest production of char. Chaiwat et al. [67] propose a set of competitive reactions between cellulose depolymerization and dehydration, as shown in Fig. 4. These results are supported by Agarwal et al. [68] who, through

Figure 4. Effect of the heating rate on the mechanism of thermal decomposition of cellulose. Source: [67]

computer simulation, studied the changes affecting cellulose during the fast pyrolysis process. The authors found that there is a change in the type of hydrogen bond in cellulose structures. Low temperatures and low heating rates promote intra-chain hydrogen bonds of cellulose functional groups, increasing the probability of the collisions that produce a dehydration reaction. At high heating rates, inter-chain hydrogen bonds are stronger, achieving a greater separation between the cellulose molecules and thus decreasing the possibility of collisions that facilitate the dehydration reaction. There are many reports in the literature pointing out the importance of the effects of the heating rate on the yields of bio-oil and char. Onay [69] studied the fast pyrolysis of safflower seeds in a fixed-bed reactor, evaluating the effect of the heating rate on the yields of bio-oil and biochar for 3-gram samples (particle size 1.25 mm) and heating ramps of 100, 300, and 800 °C/min. The results show that for heating rates above 300 °C/min, the maximum bio-oil yield is 55% and the minimum char yield is 17%. Thangalazhy et al. [70] conducted a study at the micro-scale level, using a pyroprobe (Pyroprobe model 5200, CDS Analytical Inc., Oxford, PA) to evaluate the effect of temperature and heating rate on the yield and the distribution of some compounds in the bio-oil. The experiments were carried out with pine and grass, heating the pyrolyzer filament at 50, 100, 500, 1000, and 2000 °C/s. An interesting result was that at constant temperature (550 °C), regardless of the filament heating rate, the final product distribution was the same (30% bio-oil yield for pine wood and 17% for forage grass). Because biomass is resistant to heat transfer (poor thermal conductivity), its heating rate differs from that of the filament, and under these conditions the particle was always heated at about 50 °C/s. These experiments were carefully prepared; however, the yields are far lower than those usually reported by other researchers [19,69,71-72]. It was not possible to recover the entire sample of bio-oil due to condensation in the tubing that connected the probe to the GC-MS, or to cracking reactions in


the injection port and the GC oven. Hoekstra et al. [73] developed laboratory-scale "wire mesh reactor" equipment that reached controlled heating rates of up to 7000 °C/s. This achieved bio-oil yields of about 84% (w/w) and 5% char. The authors attribute these results to the precise temperature control, low residence time, and instantaneous cooling of volatiles, which minimize the effects of secondary reactions when compared to traditional systems such as fluidized, packed, and circulating beds, in which the heating rate and temperature can be neither measured nor controlled. The heating rate also plays an important role in the quality of the bio-oil and the structure of the char obtained. Bio-oils with lower moisture content can be obtained at higher heating rates, primarily due to the inhibition of secondary reactions such as volatile dehydration and cracking [74]; both the water-soluble fractions (formic acid, methanol, acetic acid) and the heavy fractions (rich in phenol and its derivatives) increase, and CO and CO2 also increase at high heating rates [32]. The contents of ketones, levoglucosan, phenol, and toluene also go up with the heating rate for the fast pyrolysis of pine wood and grass when the heating rate increases from 50 to 1000 °C/s; higher heating rates do not show a marked effect on the yield of each sub-product [70]. With high heating rates, a biochar with smaller pore volume and specific surface area is obtained. Such heating rates can generate significant pressure gradients between the inside and outside of the particle, because the volatiles are produced quickly and cannot be evacuated instantly. This causes some internal structures to crack, increasing the final proportion of macropores relative to micropores. Besides this, the biomass passes through a melting (metaplastic) phase, which warps the internal structures and blocks the pores [54-55]. Although this parameter is very important for defining the conditions of fast pyrolysis, we highlight that it cannot be measured experimentally; its estimation is limited to theoretical models. Often these estimates are subjective, since they may depend on the device used (reactor), the model assumptions, and the experimental conditions (kind of biomass, carrier gas flow, and particle size), making it difficult to compare the experimental results for global yields and kinetic parameters published in scientific papers [76]. Regarding this last point, there is ambiguity as to the value of the heating rate. In most cases, the heating rate reported is the controller setpoint of the heating elements (electrical resistance, radiation lamp, hot plate), rather than a particle heating rate, assuming linear heating represented mathematically as T = βt + T0, with β, t, and T0 the heating rate, the time, and the initial temperature of the particle, respectively [13,26,57]. This simplification can be valid under certain experimental conditions, such as those used in thermogravimetric analysis, or in pyrolysis under kinetic control conditions (small particle sizes, <100 µm). Fig. 5 shows that for large particles (>1 mm), temperature gradients exist and the heating rate at each point is different, modifying the reactivity patterns at each point within the biomass (the reaction rate depends on the heating rate and the reaction temperature).
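To make the distinction concrete (our arithmetic, using only the linear-heating expression above, not data from the cited studies), the time for a particle to go from T0 = 25 °C to T = 500 °C is

t = \frac{T - T_0}{\beta}

which gives t ≈ 28.5 s at β = 1000 °C/min (the slow/fast boundary cited above) but only t ≈ 0.5 s at β = 1000 °C/s, illustrating how strongly the assumed β conditions any kinetic parameters fitted to it.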

Table 1. Main devices used in the study of fast pyrolysis processes.

Drop Tube: max. temp. 1500 °C; heating rate 10^4-10^5 °C/s; time 0.1-4 s; pressure vacuum-69 atm; particle size: powder. Remarks: High heating rates. It is not possible to measure particle temperatures. Moderate gas residence time. Errors of 5% in closing mass balances.
Wire Mesh: max. temp. 1500 °C; heating rate 1000 °C/s; time 1-3600 s; pressure vacuum-69 atm; particle size: powder. Remarks: Uncertainty in particle temperature measurement. Short residence times of volatiles. Low uncertainty in the mass balances.
TGA: max. temp. 1200 °C; heating rate 1 °C/s; time: hours; pressure vacuum-2 atm; particle size < 2 mm. Remarks: Low heating rates and temperatures. High volatile residence time. Good estimation of sample temperature. Online monitoring of sample weight.
Radiation: max. temp. 2000 °C; heating rate 10^4-10^6 °C/s; time: ms; pressure 1 atm; particle size: powder. Remarks: High heating rates. Inaccurate temperature measurement.
Shock Tubes: max. temp. 2400 °C; heating rate 10^6 °C/s; time: µs; pressure: vacuum; particle size: fine. Remarks: Complete destruction of volatiles (side reactions). Temperature measurement uncertainty. Direct measurement of volatiles and light gases.
Pyroprobe (electrical resistance): max. temp. 700 °C; heating rate 20000 °C/s; time 10-3600 s; pressure vacuum-1 atm; particle size: fine. Remarks: Indirect measurement of the sample temperature. The heating rate on the sample is currently not reported with the quartz tube sample holder.
Fluidized Bed: max. temp. 1200 °C; heating rate 1000-10000 °C/s; time 1-3600 s; pressure 1-100 atm; particle size: mm. Remarks: High uncertainties in the closure of mass balances. Uncertain volatile quantification. It is not possible to accurately determine particle temperature and residence time.
Drive Bed: max. temp. 700-800 °C; heating rate 1000-10000 °C/s; time 1-100 s; particle size 1-5 mm. Remarks: Accurate temperature measurement of the particle. High volatile residence times. Volatile cracking reactions.
Thin Film: max. temp. 1000 °C; heating rate 10^4-10^6 °C/s; time: ms; pressure vacuum-2 atm; particle size: µm. Remarks: Solids temperature measurement is not possible. Difficult volatile monitoring. Bio-oil contamination with sand.
Ablative: max. temp. 600 °C; heating rate not reported; time: s; pressure 1 atm; particle size: cm-m. Remarks: Difficult estimation of solids temperature. Higher volatile residence times.
Auger: max. temp. 600 °C; heating rate not reported; time: min; pressure 1 atm; particle size: chips. Remarks: The estimation of solid temperature is difficult.
Vortex: max. temp. 600 °C; heating rate not reported; time: min; pressure 1 atm; particle size 1-5 mm. Remarks: It favors the formation of hot spots in the sample (uneven heating). It is not possible to make precise sample temperature control.
Microwave: power 500-2000 W; heating rate not reported; time: min; particle size: pulverized. Remarks: Selective heating of polar groups.

Source: The authors



Figure 5. Temperature profile within a biomass particle. Source: The authors

This consideration has been ignored in many studies [26,37,58-61], where the same reaction rate expression is normally used to describe the reactivity of the whole particle. Obviously, this can cause problems when predicting the final product distribution, because each point inside the particle has different kinetic parameters due to the changes in heating rates and temperatures. For fast pyrolysis studies, there are many techniques and devices that can ensure high heating rate conditions, such as PY-MBMS (Pyrolysis-Molecular Beam Mass Spectrometry), TGA (thermogravimetric analyzers), flash radiation lamps, wire mesh reactors, shock tubes, and free-fall (drop tube) reactors, among others. Table 1 summarizes the main characteristics of these devices. Of these, the most commonly used in kinetic studies are the free-fall reactors, Py-MBMS reactors, and wire mesh (hot wire) reactors [62,63]. Within the academic community, wire mesh reactors have gained acceptance over other technologies for their versatility (they can be used to study almost any kind of biomass and coal without many modifications to the reactor), the directness of the temperature measurement (this datum is important for the estimation of kinetic parameters), the shortness of the volatiles' residence time (<1 s), the minimization of side reactions, and the fact that they allow kinetic control.
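A standard way to make the kinetic-control criterion concrete (our illustrative estimate, not taken from the cited studies) is the Biot number, which compares external heat supply to internal conduction:

\mathrm{Bi} = \frac{h\,L_c}{\lambda}

With values typical of a fluidized bed (h ≈ 500 W/m²K) and of wood (λ ≈ 0.1 W/mK), a 100 µm particle gives Bi ≈ 0.5, close to the lumped (kinetic-control) regime, whereas a 1 mm particle gives Bi ≈ 5, so internal temperature gradients of the kind sketched in Fig. 5 are unavoidable.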

3. Final remarks

Several investigations have been carried out to produce clean energy based on biomass fast pyrolysis. In this first part, the paper presented a general review of the effect of the main operational parameters on the global product distribution (bio-oil, bio-char, and syngas); a good understanding of the process enables the maximization of pyrolysis products. The following conclusions and recommendations can be drawn:
• Bio-oil production by pyrolysis is still an immature technology and is not commercially viable yet. Bio-oil production technologies have to overcome many technical and economic challenges to compete with traditional fossil fuels.
• Along with the technology, a proper selection of biomass is also critical for achieving high yields of bio-oil. Biomasses with a high cellulose content may be suitable, because of the greater amount of bio-oil derived from them. Similarly, biomass with a low water content is desirable to reduce drying costs and improve oil quality.
• In general, cellulose and hemicellulose tend to produce the largest amount of volatiles; lignin is difficult to break up even at high temperatures; it normally retains its structure and is associated with char production.
• Temperature and heating rate are the most important parameters in biomass pyrolysis. Average temperatures between 500 and 550 °C usually maximize the bio-oil yield. Temperatures below or above this range favor the production of char and gas, respectively.
• A combination of parameters such as moderate reaction temperatures (500 °C), high heating rates (>1000 °C/min), and short residence times of volatiles (less than 2 s) maximizes the bio-oil yield.
• The initial moisture, the mineral matter content, and the flow and kind of carrier gas are parameters directly related to the promotion of secondary reactions that adversely affect the production of condensable volatiles and maximize the char yield.
• The carrier gas reduces the residence time of condensable volatiles, which helps to minimize the cracking and repolymerization reactions of the vapors. Furthermore, it is important to ensure a rapid quenching of these vapors to recover as much of the bio-oil as possible.
• Small particles (<1 mm) are preferred for achieving high heating rates, a uniform temperature distribution, and higher bio-oil yields. Larger particles tend to decrease the biomass conversion and favor char formation, mainly because mass and energy transfer restrictions encourage the participation of secondary reactions.
• For fast pyrolysis studies, there are many techniques and devices that can ensure high heating rate conditions. Wire mesh and thin film reactors have gained acceptance within the academic community for kinetic studies (laboratory scale only). To achieve high bio-oil production, fluidized bed reactors are preferred.
Most devices used in parametric studies of pyrolysis processes have limitations in terms of temperature measurement and online compound quantification, due to the short reaction times and the interference of products with the recorded signals (changes in the emissivity of the sample, tar deposition on lenses, and wide molecular weight distributions). Further research is needed along this line to better understand the primary and secondary reactions and their coupling with the energy, mass, and momentum transfer processes in and out of the particles. A second review article will be presented to deepen the understanding of the phenomena involved in the process of biomass fast pyrolysis. The second paper will focus on the reaction models proposed in scientific papers and on particle-level modeling of the formation of an intermediate liquid phase within the particle during the degradation of biomass.


Acknowledgments

The authors acknowledge COLCIENCIAS for funding the doctoral studies of chemical engineer Jorge Iván Montoya Arbeláez. Very special thanks to Professor Manuel Garcia-Perez for his valuable comments in discussions aimed at understanding pyrolysis processes.

References

[1] International Energy Agency, Key World Energy, OECD Publishing, 2011.
[2] Marín, E.C. and Mahecha, H.S., Biocombustibles y autosuficiencia energética, DYNA, 76 (158), pp. 101-110, 2009.
[3] WEC - World Energy Council, 2010 Survey of Energy Resources, World Energy Council, 2010, 618 P.
[4] Atlas Biomasas - UPME.
[5] Anex, R.P. et al., Techno-economic comparison of biomass-to-transportation fuels via pyrolysis, gasification, and biochemical pathways, Fuel, 89, pp. S29-S35, 2010. DOI: 10.1016/j.fuel.2010.07.015
[6] Trippe, F., Fröhling, M., Schultmann, F., Stahl, R. and Henrich, E., Techno-economic analysis of fast pyrolysis as a process step within biomass-to-liquid fuel production, Waste and Biomass Valorization, 1 (4), pp. 415-430, 2010. DOI: 10.1007/s12649-010-9039-1
[7] Girods, P., Dufour, A., Rogaume, Y., Rogaume, C. and Zoulalian, A., Comparison of gasification and pyrolysis of thermal pre-treated wood board waste, J. Anal. Appl. Pyrolysis, 85 (1-2), pp. 171-183, 2009. DOI: 10.1016/j.jaap.2008.11.014
[8] Tirado, M.C., Cohen, M.J., Aberman, J., Meerman, N. and Thompson, B., Addressing the challenges of climate change and biofuel production for food and nutrition security, Food Res. Int., 43 (7), pp. 1729-1744, 2010. DOI: 10.1016/j.foodres.2010.03.010
[9] Damartzis, T. and Zabaniotou, A., Thermochemical conversion of biomass to second generation biofuels through integrated process design - A review, Renew. Sustain. Energy Rev., 15 (1), pp. 366-378, 2011. DOI: 10.1016/j.rser.2010.08.003
[10] Bahng, M.-K., Mukarakate, C., Robichaud, D.J. and Nimlos, M.R., Current technologies for analysis of biomass thermochemical processing: A review, Anal. Chim. Acta, 651 (2), pp. 117-138, 2009. DOI: 10.1016/j.aca.2009.08.016
[11] Ranzi, E., Cuoci, A., Faravelli, T., Frassoldati, A., Migliavacca, G., Pierucci, S. and Sommariva, S., Chemical kinetics of biomass pyrolysis, Energy & Fuels, 22 (6), pp. 4292-4300, 2008. DOI: 10.1021/ef800551t
[12] Degroot, W.F., Rahman, M.D., Pan, W.-P. and Richards, G.N., First chemical events in pyrolysis of wood, Journal of Analytical and Applied Pyrolysis, 13 (3), pp. 221-231, 1988. DOI: 10.1016/0165-2370(88)80024-X
[13] Couhert, C., Commandre, J.-M. and Salvador, S., Is it possible to predict gas yields of any biomass after rapid pyrolysis at high temperature from its composition in cellulose, hemicellulose and lignin?, Fuel, 88 (3), pp. 408-417, 2009. DOI: 10.1016/j.fuel.2008.09.019
[14] Branca, C. and Di Blasi, C., Multistep mechanism for the devolatilization of biomass fast pyrolysis oils, Ind. Eng. Chem. Res., 45 (17), pp. 5891-5899, 2006. DOI: 10.1021/ie060161x
[15] de Wild, P., Reith, H. and Heeres, E., Biomass pyrolysis for chemicals, Biofuels, 2 (2), pp. 185-208, 2011. DOI: 10.4155/bfs.10.88
[16] Alonso, D.M., Wettstein, S.G. and Dumesic, J.A., Bimetallic catalysts for upgrading of biomass to fuels and chemicals, Chem. Soc. Rev., 41 (24), pp. 8075-8098, 2012. DOI: 10.1039/c2cs35188a
[17] van de Weerdhof, M.W., Modeling the pyrolysis process of biomass particles, Eindhoven University of Technology.
[18] Klein, M.T. and Virk, P.S., Modeling of lignin thermolysis, Energy & Fuels, 22 (4), pp. 2175-2182, 2008. DOI: 10.1021/ef800285f
[19] Patwardhan, P.R., Satrio, J.A., Brown, R.C. and Shanks, B.H., Influence of inorganic salts on the primary pyrolysis products of cellulose, Bioresour. Technol., 101 (12), pp. 4646-4655, 2010. DOI: 10.1016/j.biortech.2010.01.112
[20] Akhtar, J. and Saidina-Amin, N., A review on operating parameters for optimum liquid oil yield in biomass pyrolysis, Renew. Sustain. Energy Rev., 16 (7), pp. 5101-5109, 2012. DOI: 10.1016/j.rser.2012.05.033
[21] Gani, A. and Naruse, I., Effect of cellulose and lignin content on pyrolysis and combustion characteristics for several types of biomass, Renewable Energy, 32 (4), pp. 649-661, 2007. DOI: 10.1016/j.renene.2006.02.017
[22] Nakagawa-Izumi, A. and Kuroda, K., Analytical pyrolysis of lignin: Products stemming from β-5 substructures, Organic Geochemistry, 37 (6), pp. 665-673, 2006. DOI: 10.1016/j.orggeochem.2006.01.012
[23] Ponder, G.R., Stevenson, T.T. and Richards, G.N., Influence of linkage position and orientation in pyrolysis of polysaccharides: A study of several glucans, Journal of Analytical and Applied Pyrolysis, 22 (3), pp. 217-229, 1992. DOI: 10.1016/0165-2370(92)85015-D
[24] Mettler, M.S., Paulsen, A.D., Vlachos, D.G. and Dauenhauer, P.J., The chain length effect in pyrolysis: Bridging the gap between glucose and cellulose, Green Chem., 14, pp. 1284-1294, 2012. DOI: 10.1039/c2gc35184f
[25] Broido, A., Molecular weight decrease in the early pyrolysis of crystalline and amorphous cellulose, J. Appl. Polym. Sci., 17, pp. 3627-3635, 1973. DOI: 10.1002/app.1973.070171207
[26] Liu, D., Yu, Y. and Wu, H., Differences in water-soluble intermediates from slow pyrolysis of amorphous and crystalline cellulose, Energy & Fuels, 27 (3), pp. 1371-1380, 2013. DOI: 10.1021/ef301823g
[27] Melgar, A., Borge, D. and Pérez, J.F., Estudio cinético del proceso de devolatilización de biomasa lignocelulósica mediante análisis termogravimétrico para tamaños de partícula de 2 a 19 mm, DYNA, 75 (155), pp. 123-131, 2008.
[28] Basu, P., Biomass Gasification and Pyrolysis, Elsevier Inc., 2010.
[29] Meier, D., van de Beld, B., Bridgwater, A.V., Elliott, D.C., Oasmaa, A. and Preto, F., State-of-the-art of fast pyrolysis in IEA bioenergy member countries, Renew. Sustain. Energy Rev., 20, pp. 619-641, 2013. DOI: 10.1016/j.rser.2012.11.061
[30] Boutin, O., Ferrer, M. and Lédé, J., Flash pyrolysis of cellulose pellets submitted to a concentrated radiation: Experiments and modelling, Chemical Engineering Science, 57 (1), pp. 15-25, 2002. DOI: 10.1016/S0009-2509(01)00360-8
[31] Gómez, A., Klose, W. and Rincón, S., Pirólisis de biomasa. Cuesco de palma de aceite, Bogotá, 2008.
[32] Bridgwater, A., Fast pyrolysis processes for biomass, Renewable and Sustainable Energy Reviews, 4 (1), pp. 1-73, 2000. DOI: 10.1016/S1364-0321(99)00007-6
[33] Shen, J., Wang, X.-S., Garcia-Perez, M., Mourant, D., Rhodes, M.J. and Li, C.-Z., Effects of particle size on the fast pyrolysis of oil mallee woody biomass, Fuel, 88 (10), pp. 1810-1817, 2009. DOI: 10.1016/j.fuel.2009.05.001
[34] Demirbas, A., Effect of initial moisture content on the yields of oily products from pyrolysis of biomass, J. Anal. Appl. Pyrolysis, 71 (2), pp. 803-815, 2004. DOI: 10.1016/j.jaap.2003.10.008
[35] Zheng, C., Liang, D., Yang, H., Yan, R., Chen, H. and Lee, D., Influence of mineral matter on pyrolysis of palm oil wastes, Combustion and Flame, 146 (4), pp. 605-611, 2006. DOI: 10.1016/j.combustflame.2006.07.006
[36] Cuevas, R. and Lamas, J., Advances and development at the Union Fenosa pyrolysis plant, in: 8th Conference on Biomass for Environment, Agriculture and Industry, 1995.
[37] Williams, A.W., An investigation of the kinetics for the fast pyrolysis of loblolly pine woody biomass, Georgia Institute of Technology, 2011.
[38] Raveendran, K., Ganesh, A. and Khilar, K.C., Influence of mineral matter on biomass pyrolysis characteristics, Fuel, 74 (12), pp. 1812-1822, 1995. DOI: 10.1016/0016-2361(95)80013-8
[39] Sulaiman, F. and Abdullah, N., Optimum conditions for maximising pyrolysis liquids of oil palm empty fruit bunches, Energy, 36 (5), pp. 2352-2359, 2011. DOI: 10.1016/j.energy.2010.12.067
[40] White, J.E., Catallo, W.J. and Legendre, B.L., Biomass pyrolysis kinetics: A comparative critical review with relevant agricultural residue case studies, J. Anal. Appl. Pyrolysis, 91 (1), pp. 1-33, 2011. DOI: 10.1016/j.jaap.2011.01.004
[41] Valette, J., Blin, J., Collard, F.-X. and Bensakhria, A., Influence of impregnated metal on the pyrolysis conversion of biomass constituents, Journal of Analytical and Applied Pyrolysis, 95, pp. 213-226, 2012. DOI: 10.1016/j.jaap.2012.02.009
[42] Shimomura, K., Watanabe, H. and Okazaki, K., Effect of high CO2 concentration on char formation through mineral reaction during biomass pyrolysis, Proceedings of the Combustion Institute, 2012.
[43] Richards, G.N. and Zheng, G., Influence of metal ions and of salts on products from pyrolysis of wood: Applications to thermochemical processing of newsprint and biomass, Journal of Analytical and Applied Pyrolysis, 21 (1-2), pp. 133-146, 1991. DOI: 10.1016/0165-2370(91)80021-Y
[44] Sohrabi, M., Hajaligol, M.R., Nik-Azar, M. and Dabir, B., Mineral matter effects in rapid pyrolysis of beech wood, Fuel Processing Technology, 51 (1-2), pp. 7-17, 1997. DOI: 10.1016/S0378-3820(96)01074-0
[45] Mayer, Z.A., Apfelbacher, A. and Hornung, A., A comparative study on the pyrolysis of metal- and ash-enriched wood and the combustion properties of the gained char, Journal of Analytical and Applied Pyrolysis, 96, pp. 196-202, 2012. DOI: 10.1016/j.jaap.2012.04.007
[46] Gellerstedt, G. and Kleen, M., Influence of inorganic species on the formation of polysaccharide and lignin degradation products in the analytical pyrolysis of pulps, Journal of Analytical and Applied Pyrolysis, 35 (1), pp. 15-41, 1995. DOI: 10.1016/0165-2370(95)00893-J
[47] Yürüm, Y. and Öztaş, N., Pyrolysis of Turkish Zonguldak bituminous coal. Part 1. Effect of mineral matter, Fuel, 79 (10), pp. 1221-1227, 2000. DOI: 10.1016/S0016-2361(99)00255-0
[48] Mayer, Z.A., Apfelbacher, A. and Hornung, A., Effect of sample preparation on the thermal degradation of metal-added biomass, J. Anal. Appl. Pyrolysis, 94, pp. 170-176, 2012. DOI: 10.1016/j.jaap.2011.12.008
[49] Jensen, A., Dam-Johansen, K., Wójtowicz, M.A. and Serio, M.A., TG-FTIR study of the influence of potassium chloride on wheat straw pyrolysis, Energy & Fuels, 12 (5), pp. 929-938, 1998. DOI: 10.1021/ef980008i
[50] Pineda-Gomez, P., Bedoya-Hincapié, C.M. and Rosales-Rivera, A., Estimación de los parámetros cinéticos y tiempo de vida de la cascarilla de arroz y arcilla mediante la técnica de análisis termogravimétrico (TGA), DYNA, 76 (165), 2011.
[51] Hajaligol, M., Fisher, T., Waymack, B. and Kellogg, D., Pyrolysis behavior and kinetics of biomass derived materials, Journal of Analytical and Applied Pyrolysis, 62 (2), pp. 331-349, 2002. DOI: 10.1016/S0165-2370(01)00129-2
[52] Dauenhauer, P.J., Colby, J.L., Balonek, C.M., Suszynski, W.J. and Schmidt, L.D., Reactive boiling of cellulose for integrated catalysis through an intermediate liquid, Green Chem., 11, pp. 1555-1561, 2009. DOI: 10.1039/b915068b
[53] Bridgwater, A.V., Effect of experimental conditions on the composition of gases and liquids from biomass pyrolysis, in: Advances in Thermochemical Biomass Conversion, Glasgow, 1994.
[54] Scott, D.S., Piskorz, J., Bergougnou, M.A., Graham, R. and Overend, R.P., The role of temperature in the fast pyrolysis of cellulose and wood, Ind. Eng. Chem. Res., 27 (1), pp. 8-15, 1988. DOI: 10.1021/ie00073a003
[55] Fu, P., Hu, S., Xiang, J., Li, P., Huang, D., Jiang, L., Zhang, A. and Zhang, J., FTIR study of pyrolysis products evolving from typical agricultural residues, J. Anal. Appl. Pyrolysis, 88 (2), pp. 117-123, 2010.
[56] Baldwin, R.M., Magrini-Bair, K.A., Nimlos, M.R., Pepiot, P., Donohoe, B.S., Hensley, J.E. and Phillips, S.D., Current research on thermochemical conversion of biomass at the National Renewable Energy Laboratory, Appl. Catal. B Environ., 115-116, pp. 320-329, 2012. DOI: 10.1016/j.apcatb.2011.10.033
[57] Garcia-Perez, M., The formation of polyaromatic hydrocarbons and dioxins during pyrolysis: A review of the literature with descriptions of biomass composition, fast pyrolysis technologies and thermochemical reactions, Washington, 2008.
[58] Lédé, J., Biomass pyrolysis: Comments on some sources of confusions in the definitions of temperatures and heating rates, Energies, 3 (4), pp. 886-898, 2010. DOI: 10.3390/en3040886
[59] Di Blasi, C., Modelling the fast pyrolysis of cellulosic particles in fluid-bed reactors, Chemical Engineering Science, 55 (24), pp. 5999-6013, 2000. DOI: 10.1016/S0009-2509(00)00406-1
[60] Diebold, J. and Scahill, J., Ablative pyrolysis of biomass in solid-convective heat transfer environments, in: Overend, R.P., Milne, T.A. and Mudge, L.K., Eds., Fundamentals of Thermochemical Biomass Conversion, Springer Netherlands, 1985, pp. 539-555. DOI: 10.1007/978-94-009-4932-4_30
[61] Lin, Y., Cho, J., Tompsett, G.A., Westmoreland, P.R. and Huber, G.W., Kinetics and mechanism of cellulose pyrolysis, J. Phys. Chem. C, 113, pp. 20097-20107, 2009. DOI: 10.1021/jp906702p
[62] Narayan, R. and Antal, M.J.J., Thermal lag, fusion, and the compensation effect during biomass pyrolysis, Ind. Eng. Chem. Res., 35, pp. 1711-1721, 1996. DOI: 10.1021/ie950368i
[63] Pokol, G., Várhegyi, G. and Dollimore, D., Kinetic aspects of thermal analysis, CRC Crit. Rev. Anal. Chem., 19 (1), pp. 65-93, 1988. DOI: 10.1080/10408348808542808, 10.1080/10408348808085617
[64] Bridgwater, A., Fast pyrolysis of biomass: A handbook, 3rd Ed., PyNe, 2008.
[65] Kalgo, A.S., The development and optimisation of a fast pyrolysis process for bio-oil production, Aston University, 2011.
[66] Le Bras, M., Yvon, J., Bourbigot, S. and Mamleev, V., The facts and hypotheses relating to the phenomenological model of cellulose pyrolysis, Journal of Analytical and Applied Pyrolysis, 84 (1), pp. 1-17, 2009. DOI: 10.1016/j.jaap.2008.10.014
[67] Chaiwat, W., Hasegawa, I., Tani, T., Sunagawa, K. and Mae, K., Analysis of cross-linking behavior during pyrolysis of cellulose for elucidating reaction pathway, Energy & Fuels, 23 (12), pp. 5765-5772, 2009. DOI: 10.1021/ef900674b
[68] Agarwal, V., Dauenhauer, P.J., Huber, G.W. and Auerbach, S.M., Ab initio dynamics of cellulose pyrolysis: Nascent decomposition pathways at 327 and 600 °C, J. Am. Chem. Soc., 134 (36), pp. 14958-14972, 2012. DOI: 10.1021/ja305135u
[69] Onay, O., Influence of pyrolysis temperature and heating rate on the production of bio-oil and char from safflower seed by pyrolysis, using a well-swept fixed-bed reactor, Fuel Process. Technol., 88 (5), pp. 523-531, 2007. DOI: 10.1016/j.fuproc.2007.01.001
[70] Thangalazhy-Gopakumar, S., Adhikari, S., Gupta, R.B. and Fernando, S.D., Influence of pyrolysis operating conditions on bio-oil components: A microscale study in a pyroprobe, Energy & Fuels, 25 (3), pp. 1191-1199, 2011. DOI: 10.1021/ef101032s
[71] Kumar, G., Panda, A.K. and Singh, R., Optimization of process for the production of bio-oil from eucalyptus wood, J. Fuel Chem. Technol., 38 (2), pp. 162-167, 2010. DOI: 10.1016/S1872-5813(10)60028-X
[72] Açıkalın, K., Karaca, F. and Bolat, E., Pyrolysis of pistachio shell: Effects of pyrolysis conditions and analysis of products, Fuel, 95, pp. 169-177, 2012. DOI: 10.1016/j.fuel.2011.09.037
[73] Hoekstra, E., van Swaaij, W.P.M., Kersten, S.R.A. and Hogendoorn, K.J.A., Fast pyrolysis in a novel wire-mesh reactor: Design and initial results, Chem. Eng. J., 191, pp. 45-58, 2012. DOI: 10.1016/j.cej.2012.01.117
[74] Ozbay, N., Putun, A.E. and Putun, E., Bio-oil production from rapid pyrolysis of cottonseed cake: Product yields and compositions, Int. J. Energy Res., 30 (7), pp. 501-510, 2006. DOI: 10.1002/er.1165
[75] Lédé, J. and Authier, O., Characterization of biomass fast pyrolysis: Advantages and drawbacks of different possible criteria, Biomass Conv. Bioref., 1, pp. 133-147, 2011. DOI: 10.1007/s13399-011-0014-2



Fortran application to solve systems from NLA using compact storage

Wilson Rodríguez-Calderón a & Myriam Rocío Pallares-Muñoz b

a Programa de Ingeniería Civil, Universidad Cooperativa de Colombia, Neiva, Colombia. wilson.rodriguez@campusucc.edu.co
b Programa de Ingeniería Civil, Universidad Surcolombiana, Neiva, Colombia. myriam.pallares@usco.edu.co

Received: March 3rd, 2014. Received in revised form: February 22nd, 2015. Accepted: May 26th, 2015.

Abstract
A friendly educational software package for Numerical Linear Algebra (NLA) was developed, incorporating the latest developments in compact storage schemes, or compressed formats. This software creates the possibility of having in-house tools that can be handled and modified for teaching at the undergraduate level, for research, and for advanced training. Having an in-house computational tool for solving linear algebra problems brings important advantages, since all the details, hypotheses, restrictions, applications, and strengths of the software are known first hand, without the need for expensive support services. The incorporation of advanced elements of storage, programming, and acceleration of the solution of NLA problems into the software allows it to solve not only academic cases but also real problems, leading the user to a better understanding of the problems in an efficient and generalized way.
Keywords: Numerical Linear Algebra, NLA, sparse matrix, storage formats, compressed sparse row, compressed sparse column, modified compressed sparse row, modified compressed sparse column.

Aplicativo en Fortran para resolver sistemas de ALN usando formatos comprimidos Resumen Se desarrolló un software educativo de Álgebra Lineal Numérica (ALN) amigable y con los más recientes desarrollos en esquemas de almacenamiento compacto o formatos comprimidos. Este software genera la posibilidad de contar con herramientas propias, manipulables e intervenibles para procesos de docencia e investigación. Contar con una herramienta computacional propia para la solución de problemas de álgebra lineal genera ventajas muy importantes, toda vez que se pueden conocer todos los detalles, hipótesis, restricciones, aplicaciones y bondades del software de primera mano, sin la dependencia de costosos servicios de soporte. La incorporación de elementos avanzados de almacenamiento, programación y aceleración de la solución de problemas de ALN en el software, permite no sólo resolver casos académicos, si no también problemas reales que llevan al usuario a comprender mejor la resolución de sistemas de manera eficiente y generalizada. Palabras clave: Álgebra Lineal Numérica, ALN, matrices dispersas, esquemas de almacenamiento, formato comprimido por filas, formato comprimido por columnas, formato comprimido por filas modificado, formato comprimido por columnas modificado.

1. Introduction

Numerical Linear Algebra (NLA) has a great number of applications. Its development goes back to the appearance of computers, since with them the theory of error propagation and the concept of approximation took on great importance, and many numerical-method algorithms, among them those of NLA, were developed. Since the time computers appeared, the size and complexity of the problems have grown substantially, and

NLA problems have been no strangers to this situation. This has forced the search for algorithms, schemes, and strategies capable of solving very large problems with the least amount of storage, at very high speed, and in the safest, most stable, and most accurate way possible. The task has not been easy: although computers have evolved and currently offer better characteristics, they still show clear difficulties in the face of the demands for storage, speed, and control of error propagation. A good portion of NLA problems at the

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 249-256. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.42423



academic and professional level involve an amount of computation that is unreachable by hand. Such activity requires the use of expensive specialized software, whose learning curves can be long, especially on specialized topics. Moreover, these commercial programs offer a restricted level of help and theoretical information for their proper use. The development of in-house computational tools avoids the serious mistakes that usually arise when black boxes are handled; this expression refers to the little or null intervention of the user in the development of commercial programs. In general, the user trusts the algorithms of the packages being used while remaining totally or partially unaware of the details of the routines that solve the problem, not through any incapacity of the person operating the software, but because the programs do not give access to that information. This being so, the blind use of these packages can be questionable, since rigor is lost, above all at the academic level, where teaching-learning processes must be solid and well founded. One alternative for the computational treatment of moderately large linear systems is the use of spreadsheets designed specifically for a particular case, but a good amount of time would be required merely to set them up. For these reasons, the development of a friendly educational NLA software package is justified, incorporating the latest developments in compact storage schemes or compressed formats, whose objective is to store sparse matrices that may come from the numerical solution of PDEs by Finite Elements or Finite Differences, or from the solution of Markov chains. Educational software of these characteristics would largely overcome the drawbacks mentioned, since it creates the possibility of having in-house tools that can be handled and modified for undergraduate teaching, research, and advanced training. Having an in-house computational tool for the solution of linear algebra problems generates very important competitive and pedagogical advantages, since all the details, hypotheses, restrictions, applications, and strengths of the software can be known first hand, without depending on expensive support services. The incorporation of advanced elements of storage, programming, and acceleration of the solution of NLA problems into the software makes it possible to solve not only academic cases but also real problems that lead the user to a better understanding of the problems in an efficient and generalized way.

2. Methods

Owing to length restrictions, not all aspects of the methods are detailed here, but the most important ones that justify the pertinence of this work are, including the bibliographic references. Frequently, numerical methods, and especially those used to solve differential equations, lead to Numerical Linear Algebra problems in which the matrices have some structure and most of their elements are zero (sparse matrices). For these problems, NLA offers special methods on which work

Figure 1. Compact storage: CSR format. Source: [1]

is currently being done. In fact, the solution of linear systems is surely the most important computational focus of most science and engineering applications. Note that the matrices that appear when discretizing a Partial Differential Equation (PDE) by the Finite Element Method (FEM) or Finite Differences (FD) are sparse; current problems solved by FEM handle a considerable number of unknowns. If one tried to store these matrices as dense matrices, so many bytes would be required that this type of storage becomes impossible; consequently, different formats are required to store the matrix. Formerly, Band and Skyline formats were used, which had the defect of storing part of the zeros of the matrix, so they became inefficient. Compressed formats overcome the problem of storing the zeros of the matrix. In the Compressed Sparse Row (CSR) format of Fig. 1, three one-dimensional vectors AN, JA, IA are used to store the matrix A. The vector AN stores the nonzero coefficients of the matrix row by row, JA stores the column indices for each of the elements of AN, and the vector IA stores the starting indices of each row within the AN/JA structure. The total length of AN and JA is the total number of nonzero elements contained in the matrix A (NZ), and the length of the vector IA is the number of rows plus one (N+1). Within a row there is no specific order, but it is useful to place the diagonal element as the first element of the row. With this, the diagonal element of the i-th row is accessed with the expression AN(IA(i)) [1]. The Compressed Sparse Column (CSC) format is analogous to CSR. The same three one-dimensional vectors are used to store the matrix A. Now the vector AN stores the nonzero coefficients ordered by columns, JA stores the row indices for each element of AN, and the vector IA stores the column starting indices within AN/JA.
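As a minimal sketch of how this structure is used in practice (our illustration; the paper does not list its source code, and the routine name is ours), the product y = Ax over the AN/JA/IA arrays described above can be written in Fortran as:

subroutine csr_matvec(n, an, ja, ia, x, y)
  ! Computes y = A*x with A stored in CSR format:
  !   an - nonzero coefficients, stored row by row
  !   ja - column index of each entry of an
  !   ia - start of each row within an/ja; ia(n+1) = nz+1
  implicit none
  integer, intent(in)  :: n, ia(n+1), ja(*)
  real(8), intent(in)  :: an(*), x(n)
  real(8), intent(out) :: y(n)
  integer :: i, k
  do i = 1, n
     y(i) = 0.0d0
     do k = ia(i), ia(i+1) - 1
        y(i) = y(i) + an(k) * x(ja(k))
     end do
  end do
end subroutine csr_matvec

Only the NZ nonzeros are visited, which is what makes the iterative methods of Section 2.2, whose dominant cost is precisely this product, attractive for sparse systems.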


Figure 2. Compact storage: MSR format. Source: [1]

Since every nonsingular matrix can be factorized in the form A=LU, where L is a lower triangular matrix with ones on its diagonal and U an upper triangular matrix (U is the same matrix that results from Gaussian elimination, and L the one that results from storing the pivots of the factorization process), it is natural to store both factors together. The modified compressed sparse row (MSR) format of Fig. 2 stores the matrices L and U together. The main diagonal of the matrix L is not stored, and three one-dimensional vectors LUN, JLU, DLU are used to store the matrices L and U. The vector LUN stores all the nonzero coefficients of the matrices; that is, the first N positions store the inverted diagonal elements of U, position N+1 is not used, and from position N+2 onwards the coefficients of L are stored first and then those of U, row by row. The vector JLU stores, from position N+2 onwards, the column indices for each of the elements of LUN; the first N+1 positions of JLU store the indices where the rows begin within LUN and JLU. The vector DLU stores the indices where the elements of U begin within each row [1]. The modified compressed sparse column (MSC) format is analogous to MSR; the same three one-dimensional vectors are used to store the matrices L and U. In this case, the vector LUN stores by columns, the vector JLU stores the row indices, and the vector DLU stores the indices within each column.
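Under this layout, applying the factorization amounts to one forward and one backward substitution. The routine below is a sketch assuming exactly the MSR conventions just described (our code and names, not a listing from the paper):

subroutine msr_lusolve(n, lun, jlu, dlu, b, x)
  ! Solves L*U*x = b with L (unit lower) and U stored together in MSR:
  !   lun(1:n)   - inverted diagonal of U
  !   jlu(1:n+1) - start of each row within lun/jlu(n+2:)
  !   jlu(k)     - column index of lun(k) for k >= n+2
  !   dlu(i)     - position where the U part of row i begins
  implicit none
  integer, intent(in)  :: n, jlu(*), dlu(n)
  real(8), intent(in)  :: lun(*), b(n)
  real(8), intent(out) :: x(n)
  integer :: i, k
  ! Forward substitution with the unit lower triangle L
  do i = 1, n
     x(i) = b(i)
     do k = jlu(i), dlu(i) - 1
        x(i) = x(i) - lun(k) * x(jlu(k))
     end do
  end do
  ! Backward substitution with the upper triangle U
  do i = n, 1, -1
     do k = dlu(i), jlu(i+1) - 1
        x(i) = x(i) - lun(k) * x(jlu(k))
     end do
     x(i) = x(i) * lun(i)   ! multiply by the inverted diagonal
  end do
end subroutine msr_lusolve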

2.1. Direct methods

If the matrix A is symmetric, then U=DLᵀ, where D is a diagonal matrix. The factorization A=LDLᵀ produces a saving in the algorithm, since only one of the triangles of the matrix is needed to carry out the factorization and there are fewer arithmetic operations, because only L and a diagonal matrix are computed. If, in addition, the matrix A is positive definite, all the elements of D are also positive and A = L D^(1/2) D^(1/2) Lᵀ = (L D^(1/2))(L D^(1/2))ᵀ = GGᵀ. This factorization, called the Cholesky factorization, saves operations [3]. In dense-matrix factorization algorithms, the LU algorithm stores the factors L and U in the same variable that contains the matrix, defined as a two-dimensional array of size N × N. In the LDLᵀ algorithm, the lower triangle of the matrix A is replaced by L and the diagonal by D; the diagonal elements are used to multiply and divide simultaneously. Once the factorization is finished, the diagonal elements are only used to divide, which is why these elements are stored inverted in the matrix; during the process, an auxiliary vector stores the non-inverted values to avoid an excess of divisions. In the GGᵀ factorization algorithm, the matrix G is lower triangular and the factorization algorithm replaces the lower triangle of the matrix A with the matrix G. In sparse-matrix factorization algorithms, the algorithm must be adapted to the storage format, so that if the matrix is stored in CSR, row-oriented algorithms must be used, and in CSC, column-oriented algorithms. The procedure consists of taking the i-th row out of the matrix A, placing it in a working array on which the computations are done, and storing the nonzero elements of the array in the corresponding format of the output matrices. It is necessary to have an algorithm that makes it possible to know the exact place where the nonzero elements of the output matrices will be, that is, a symbolic factorization, which is based on observing how the Gaussian elimination process evolves on the graph associated with the matrix. This factorization has complexity of order O(N) [1]. Direct methods have the problem of the additional fill-in they introduce into the matrix, which makes their use unviable in a large portion of applications; in addition, they have a low degree of parallelism, which limits their scalability on parallel systems, and they can present problems of numerical accuracy [1]. The error propagation when using a direct method multiplies the relative error in the data by the condition number of the matrix; in other cases that do not require much accuracy, direct methods do unnecessary work.
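As an illustration of the dense GGᵀ algorithm described above (a sketch of the textbook column-oriented form, not the authors' implementation), the factor G can be computed in place:

subroutine cholesky_ggt(n, a)
  ! In-place dense Cholesky factorization A = G*G^T for a symmetric
  ! positive definite matrix. G overwrites the lower triangle of A;
  ! the strict upper triangle is left untouched.
  implicit none
  integer, intent(in)    :: n
  real(8), intent(inout) :: a(n,n)
  integer :: i, j, k
  do j = 1, n
     do k = 1, j - 1
        a(j,j) = a(j,j) - a(j,k)**2
     end do
     a(j,j) = sqrt(a(j,j))
     do i = j + 1, n
        do k = 1, j - 1
           a(i,j) = a(i,j) - a(i,k) * a(j,k)
        end do
        a(i,j) = a(i,j) / a(j,j)
     end do
  end do
end subroutine cholesky_ggt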


2.2. Iterative methods

The solution to the problems of direct methods is offered by iterative methods, which do not modify the structure of the coefficient matrix, since their main operation is the matrix-vector product. For this reason, they do not require as much memory as direct methods, nor do they present parallelization difficulties when solving large problems, since the matrix-vector operation is parallelizable. Although the convergence of an iterative method can become slow, it can always be adjusted to the desired accuracy [1]. Iterative methods can be classical or projection methods. The classical ones are Jacobi, Gauss-Seidel, SOR, SSOR, and Chebyshev; the projection methods, which are more effective, include GMRES, the Biconjugate Gradient (BiCG), BiCGstab, CGS, and the Conjugate Gradient (CG). The general methodology of projection methods is the following: given an n×n linear system Ax=b and a vector subspace K of dimension m<n generated by the basis V=[v1...vm], the vector x=Vy, where y is a vector of dimension m, is taken as an approximation of the solution. The most commonly used criterion for selecting y is to force the residual vector r=b−Ax to be orthogonal to another subspace L of dimension m generated by the basis W=[w1...wm]; that is, one imposes Wᵀ(b−AVy)=0. It follows that y=(WᵀAV)⁻¹Wᵀb, assuming that the matrix WᵀAV is nonsingular; that is, the original problem is reduced to solving an m×m linear system. Projection methods choose different subspaces K and L to build the approximation, and they are formulated on the error system rather than on the original system; that is, given a first approximation x0 to the solution, a correction vector e is sought that approximates the exact solution of the system Aẽ=r0, where ẽ=x̃−x0 is the error vector. The approximation to the solution is then given as x=x0+e. Although there are many options for choosing the subspaces K and L, the most common is for K and L to be Krylov subspaces of dimension m associated with the matrix A and the vector v, generated by the basis {v, Av, A²v, ..., A^(m-1)v} and denoted Km(A,v). Krylov subspaces are simple; they make it possible to invert the matrix WᵀAV and to generate an orthonormal basis of Km(A,v) (by the Arnoldi method). After m iterations of the Arnoldi method, one obtains a matrix H̄m of dimension (m+1)×m whose nonzero elements are the coefficients hij generated by the algorithm. The matrix Hm is the matrix H̄m from which row (m+1) has been deleted. The matrix Hm is simply the projection of the matrix A onto the subspace Km(A,v1) with ‖v1‖=1; that is, Hm=WmᵀAWm, where Wm=[w1...wm], and the matrix satisfies the relation AWm=Wm+1H̄m. Summarizing, the general algorithm of the projection method is:

While not converged
  1. Choose V=[v1...vm] and W=[w1...wm]
  2. Compute r = b − Ax
  3. Compute y = (WᵀAV)⁻¹ Wᵀ r
  4. Update x = x + Vy
End While
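A standard special case, spelled out here only as an illustration (it is not developed in the paper): taking m = 1 and V = W = [r] for a symmetric positive definite A reduces step 3 to a scalar, and the update becomes the steepest-descent step

y = \frac{r^{T} r}{r^{T} A r}, \qquad x \leftarrow x + \frac{r^{T} r}{r^{T} A r}\, r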

In the Conjugate Gradient Method, for a symmetric positive definite matrix A, the matrix Hm is tridiagonal instead of upper triangular [1]; this allows a series of important computational simplifications in the Arnoldi method, and the resulting algorithm (also called the symmetric Lanczos method) computes a symmetric tridiagonal n×n matrix such that T=WᵀAW, where W=[w1...wm] is an orthonormal basis and T has the form of expression (1):

T = \begin{pmatrix}
\alpha_1 & \beta_2  &         &          \\
\beta_2  & \alpha_2 & \ddots  &          \\
         & \ddots   & \ddots  & \beta_n  \\
         &          & \beta_n & \alpha_n
\end{pmatrix}    (1)

Rewriting T=WᵀAW as AW=WT gives A w_j = \beta_j w_{j-1} + \alpha_j w_j + \beta_{j+1} w_{j+1} for j=1, 2, ..., n, with the convention \beta_1 = \beta_{n+1} = 0. Multiplying this equation by w_jᵀ, one finds \alpha_j = w_j^{T} A w_j, and \beta_{j+1} w_{j+1} = (A - \alpha_j I) w_j - \beta_j w_{j-1}; since ‖w_{j+1}‖ = 1, it follows that \beta_{j+1} = ‖(A - \alpha_j I) w_j - \beta_j w_{j-1}‖. These relations lead to an iterative method for computing the coefficients \alpha_j and \beta_j. Lanczos is an algorithm for computing the eigenvalues of A: the coefficients \alpha_j and \beta_j are computed up to m<n, and the eigenvalues of A are approximated by the eigenvalues of T_m, which are trivial to obtain given that T_m is symmetric tridiagonal. The Conjugate Gradient, or symmetric Lanczos algorithm, applied to the solution of a system of equations is an optimization process that finds the minimum of the quadratic form f(x) = \tfrac{1}{2} x^{T} A x - b^{T} x. This algorithm reduces the A-norm of the error at each step, as shown in equation (2):

\| x - x_k \|_A \le 2 \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{k} \| x - x_0 \|_A    (2)

donde (A) es el número de condición de A. El algoritmo del Gradiente Conjugado es, Elegir x0; Calcular r0=b-Ax0 Mientras No convergencia 〈 〉 , 〉 〈 , . 〈 〉 , 〉 〈 , . . Fin Mientras

The slowness of convergence of iterative methods makes it necessary to add preconditioners to the numerical scheme to accelerate them; these are based on algebraic manipulations of the matrix to obtain an approximation of its inverse [5]. A preconditioner seeks a matrix M that is a good approximation of A and easier to invert; M⁻¹ is then used to correct the successive approximations of the iterative method. That is, if r_m is the residual at the m-th iteration, one computes e_m = M⁻¹ r_m, which is an approximation of the error at the m-th step, and the solution is corrected as x_m = x_m + e_m. There are numerous approaches for building preconditioners. The three most important are incomplete factorizations, polynomial preconditioners, and sparse approximate inverse (SPAI) preconditioners. Polynomial preconditioners are based on approximating the inverse of the matrix A by a polynomial, that is, A⁻¹ ≈ P_m(A) = q_0 I + q_1 A + … + q_m A^m. The coefficients of the polynomial are chosen so as to minimize \min \max_{\lambda \in E} | 1 - \lambda P_m(\lambda) |, where E is an interval that includes the spectrum of A. The minimization problem has different solutions depending on which norm is considered. If the infinity norm ‖f‖_∞ = \max_{\lambda \in E} |f(\lambda)| is taken, the solution is given by equation (3):

1 - \lambda P_m(\lambda) = \frac{T_m\left( \dfrac{c + d - 2\lambda}{d - c} \right)}{T_m\left( \dfrac{c + d}{d - c} \right)}    (3)

Donde Tm() es el polinomio de Chebyshev de primera clase de grado m, y E=[c, d] . El problema que presentan es la necesidad de disponer de un buen intervalo del espectro de A, lo contrario requeriría grados grandes del polinomio para una buena aproximación. Los precondicionadores SPAI buscan una matriz M tal que AM sea tan cercana a la identidad como sea posible. Esto es, minimizar ||AM-I|| en la norma de Frobenius, ‖ ‖ ‖ donde ei, es el vector i∑ ‖ ésimo de la base canónica. Cada columna de M se puede calcular en paralelo resolviendo el problema de mínimos ‖ cuadrados í ‖ 1, … , . La inversa de A generalmente es mucho más densa que A, pero los coeficientes significativos de A-1 son muy pocos, esto es,


Since the structure of M is not known a priori, one starts with a diagonal structure and fill-in is added until ‖A m_i − e_i‖ ≤ ε for i=1, ..., N with 0 < ε < 1, or until an allowed fill-in level is reached [1]. Incomplete factorizations are the only preconditioners that are general and do not require the estimation of any parameter. The idea is to factorize the matrix A (LU, Cholesky, LDLᵀ) without introducing all the fill-in produced in the factorization [3]. As a result, the preconditioner can be applied by solving two triangular systems whose sparsity and computational complexity depend on the type of incomplete factorization applied. The forms of incomplete factorization are: without fill-in (ILU(0), ICH(0)); with fill-in using the position within the matrix as the criterion (ILU(k), ICH(k)); and with fill-in using numerical thresholds as the criterion (ILU(τ), ICH(τ)) [4,7-11]. ILU(0) factorizations are used to solve PDEs; they are simple, since they introduce no fill-in. The incomplete factorization has the same number of nonzero elements, in the same positions, as the matrix A; this makes it possible to reuse for the preconditioner all the index vectors used for the matrix, with the consequent memory savings; however, they are not very powerful. The parameter k of an ILU(k) factorization, in the context of PDE problems discretized by finite differences, indicates the number of columns around the diagonal in which fill-in is allowed [1]. ILU(τ) factorizations decide whether or not to introduce a fill-in l_ij depending on whether it is above or below a given threshold, computed relative to the values of the elements of the i-th row of A using the parameter τ. The computation uses any suitable measure, such as the mean value of the elements of the i-th row or any norm of the i-th row. These factorizations are of general application [1]. The incomplete factorization ILUt(fil,τ), also of generic application, overcomes the lack of fine control of ILU(τ) over the total amount of fill-in allowed; it follows a double strategy in introducing fill-in: when factorizing the i-th row, the fill-ins that exceed a numerical threshold relative to the i-th row (parameter τ) are introduced, and once the factorization of the i-th row is finished, only the elements that the matrix A has in the i-th row plus twice fil (fil more in the L part and fil more in the U part) are stored in the output data structure, choosing the elements with the largest absolute value. Thus τ controls the numerical threshold of the computation, and the parameter fil the effective amount of fill-in. To implement all the methods described above, a Fortran application was developed to handle the four data structures for sparse matrices. Two cases are shown: one of LU factorization, with a symmetric, sparse input matrix in CSC format and the L and U matrices in MSC format; and another of the conjugate gradient with an ILUt(fil,tol) preconditioner, for an input matrix stored in CSR format and the output matrices in MSR format.
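For the second test case, the preconditioner enters the CG loop only as the triangular solve z = M⁻¹r. The following sketch of the preconditioned loop reuses csr_matvec and msr_lusolve from above, assuming lun/jlu/dlu come from an ILUt-type factorization computed elsewhere (our illustrative code, not the authors' listing):

subroutine pcg_msr(n, an, ja, ia, lun, jlu, dlu, b, x, tol, maxit)
  ! Preconditioned Conjugate Gradient: A in CSR, preconditioner
  ! M = L*U from an incomplete factorization stored in MSR.
  implicit none
  integer, intent(in)    :: n, ja(*), ia(n+1), jlu(*), dlu(n), maxit
  real(8), intent(in)    :: an(*), lun(*), b(n), tol
  real(8), intent(inout) :: x(n)
  real(8) :: r(n), z(n), p(n), q(n), alpha, beta, rho, rho_new
  integer :: it
  call csr_matvec(n, an, ja, ia, x, q)
  r = b - q
  call msr_lusolve(n, lun, jlu, dlu, r, z)     ! z = M^{-1} r
  p = z
  rho = dot_product(r, z)
  do it = 1, maxit
     call csr_matvec(n, an, ja, ia, p, q)      ! q = A*p
     alpha = rho / dot_product(p, q)
     x = x + alpha * p
     r = r - alpha * q
     if (sqrt(dot_product(r, r)) <= tol) exit  ! the vnr of Section 3
     call msr_lusolve(n, lun, jlu, dlu, r, z)
     rho_new = dot_product(r, z)
     beta = rho_new / rho
     p = z + beta * p
     rho = rho_new
  end do
end subroutine pcg_msr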

Table 1. LU factorization data (input CSC, output MSC)
an: 10, 2, 1, 4, 1, 3, 2, 1, 4, 2
ja: 1, 2, 3, 4, 2, 3, 1, 4, 1, 3
ia: 1, 5, 6, 8, 11
Source: The authors

Table 2. PCG data with ILUt(fil,tol) preconditioner (input CSR, output MSR from the ILUt)
an: 10, 1, 1, 10, 3, 8, 4, 3, 10, 9
ja: 1, 4, 3, 2, 3, 3, 2, 1, 4, 1
ia: 1, 4, 6, 9, 11
b: 1, 5, 7, 2
Source: The authors

The data in Table 1 translate into the following input matrix A for the LU factorization of the first problem:

A =
| 10  0  2  4 |
|  2  1  0  0 |
|  1  0  3  2 |
|  4  0  0  1 |

The data in Table 2 translate into the following input matrix A for the PCG with ILUt(fil,tol) preconditioner of the second problem:

A =
| 10  0  1  1 |
|  0 10  3  0 |
|  3  4  8  0 |
|  9  0  0 10 |
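As a check on the first case (our hand computation, not output reproduced from the paper), Gaussian elimination without pivoting on the first matrix gives the factors

L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.2 & 1 & 0 & 0 \\ 0.1 & 0 & 1 & 0 \\ 0.4 & 0 & -2/7 & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} 10 & 0 & 2 & 4 \\ 0 & 1 & -0.4 & -0.8 \\ 0 & 0 & 2.8 & 1.6 \\ 0 & 0 & 0 & -1/7 \end{pmatrix}

so that, in the MSR/MSC convention of Section 2, LUN(1:4) would hold the inverted diagonal 0.1, 1, 1/2.8, −7.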

3. Results

Fig. 3 shows the results of the LU factorization carried out in the CAS OCTAVE (a free Computer Algebra System used for validation), and Fig. 4 presents the window that displays the results of the LU factorization in MSC format using the software developed in Fortran. In turn, Fig. 5 shows the solution of the test linear system of equations in Octave using the CAS's own internal functions.




Figura 3. Resultado de la factorización LU en Octave evaluando la función interna lu(). Fuente: Los autores

Figura 4. Ventana de resultados de la factorización LU en formato MSC haciendo uso del software. Fuente: Los autores

Figura 5. Solución del sistema lineal de ecuaciones de prueba en Octave empleando funciones internas. Fuente: Los autores

Las Figs. 6, 7 y 8 muestran, respectivamente, la ventana que despliega los resultados del PCG con precondicionador ILUt haciendo uso del software desarrollado en Fortran, la convergencia obtenida del PCG con precondicionador ILUt y la factorización ILUt en formato MSR. En la notación, vnr es el valor de la norma del residuo y vnx es el valor de la norma de la diferencia entre soluciones consecutivas.

Figura 6. Respuesta obtenida del PCG con precondicionador ILUt. Fuente: Los autores

4. Conclusiones

Los resultados conseguidos haciendo uso del CAS Octave, aunque no están en formatos comprimidos, permitieron validar numéricamente las salidas del software desarrollado en Fortran para la factorización LU y para la solución de un sistema de ecuaciones lineales por el PCG precondicionado con ILUt; si bien las implementaciones son diferentes, las respuestas son muy similares.




Figura 7. Convergencia obtenida del PCG con precondicionador ILUt. Fuente: Los autores

Figura 8. Factorización ILUt en formato MSR. Fuente: Los autores

A pesar de que los sistemas usados para realizar las pruebas son pequeños, el verdadero poder de los algoritmos se encuentra en su aplicación a grandes sistemas procedentes de la discretización de EDP o de la resolución de cadenas de Markov. Se espera establecer en futuras pruebas comparaciones en tiempos de CPU (costo computacional) de la solución de sistemas de matrices dispersas grandes que permitan medir las ventajas de los formatos comprimidos y, con ello, muy seguramente se dispondrá de mayores argumentos para emplear estos desarrollos en aplicaciones de gran complejidad numérica o en la solución de modelos numéricos de tipo industrial.

Referencias

[1] CIMNE, Introducción al Cálculo Paralelo, 2000.
[2] Fortes, C., et al., Implementation of direct and iterative methods for a class of Helmholtz wave problems. Computers & Structures [Online], 82 (17-19), pp. 1569-1579, 2004. Disponible en: http://www.sciencedirect.com/science/article/pii/S0045794904001439. DOI: 10.1016/j.compstruc.2004.03.053
[3] Lin, Ch. and Moré, J., Incomplete Cholesky factorizations with limited memory. SIAM Journal on Scientific Computing [Online], 21 (1), pp. 24-45, 1999. [Consultado: 4 de septiembre de 2014]. Disponible en: http://epubs.siam.org/doi/abs/10.1137/S1064827597327334. DOI: 10.1137/S1064827597327334
[4] Rodríguez, W. and Pallares, M., Three-dimensional modeling of pavement with dual load using finite element. DYNA, 82 (189), pp. 30-38, 2015. DOI: 10.15446/dyna.v82n189.41872
[5] Saad, Y., ILUM: A multi-elimination ILU preconditioner for general sparse matrices. SIAM Journal on Scientific Computing [Online], 17 (4), pp. 830-847, 1996. [Consultado: 27 de noviembre de 2014]. Disponible en: http://epubs.siam.org/doi/abs/10.1137/0917054. DOI: 10.1137/0917054
[6] Saad, Y. and Van der Vorst, H., Iterative solution of linear systems in the 20th century. Journal of Computational and Applied Mathematics [Online], 123 (1-2), pp. 1-33, 2000. [Consultado: 27 de noviembre de 2014]. Disponible en: http://www.sciencedirect.com/science/article/pii/S037704270000412X. DOI: 10.1016/S0377-0427(00)00412-X
[7] Saad, Y. and Zhang, J., BILUM: Block versions of multielimination and multilevel ILU preconditioner for general sparse linear systems. SIAM Journal on Scientific Computing [Online], 20 (6), pp. 2103-2121, 1999. [Consulta: 1 de mayo de 2015]. Disponible en: http://epubs.siam.org/doi/abs/10.1137/S106482759732753X. DOI: 10.1137/S106482759732753X
[8] Saad, Y. and Zhang, J., BILUTM: A domain-based multilevel block ILUT preconditioner for general sparse matrices. SIAM Journal on Matrix Analysis and Applications [Online], 21 (1), pp. 279-299, 1999. [Consulta: 15 de marzo de 2014]. Disponible en: http://epubs.siam.org/doi/abs/10.1137/S0895479898341268. DOI: 10.1137/S0895479898341268
[9] Shen, Ch. and Zhang, J., Parallel two level block ILU preconditioning techniques for solving large sparse linear systems. Parallel Computing [Online], 28 (10), pp. 1451-1475, 2002. [Consulta: junio de 2010]. Disponible en: http://www.sciencedirect.com/science/article/pii/S0167819102001473. DOI: 10.1016/S0167-8191(02)00147-3
[10] Young, D., et al., Application of sparse matrix solvers as effective preconditioners. SIAM Journal on Scientific and Statistical Computing [Online], 10 (6), pp. 1186-1199, 1989. [Consulta: junio de 2004]. Disponible en: http://epubs.siam.org/doi/abs/10.1137/0910072. DOI: 10.1137/0910072
[11] Zhang, J., Preconditioned iterative methods and finite difference schemes for convection–diffusion. Applied Mathematics and Computation [Online], 109 (1), pp. 11-30, 2000. [Citado: agosto de 2014]. Disponible en: http://www.sciencedirect.com/science/article/pii/S0096300399000132. DOI: 10.1016/S0096-3003(99)00013-2



W. Rodríguez-Calderón, received his BSc. Eng. in Civil Engineering in 1998 from the UIS and his MSc. degree in Numerical Methods in Engineering from the UPC in 2002, and is a Dr. student at UFRGS, Brazil. He has worked as a professor and researcher for over fifteen years. Currently, he is a full professor in the Civil Engineering Program, Engineering Faculty, of the Universidad Cooperativa, Neiva, Colombia. His research interests include: simulation, modeling; statistical and computational intelligence techniques; and optimization using metaheuristics. ORCID: 0000-0001-9016-433X

M.R. Pallares-Muñoz, received her BSc. Eng. in Civil Engineering in 1998 from the UIS and her MSc. degree in Numerical Methods in Engineering in 2004 from the UPC. She has worked as a professor and researcher for over ten years. Currently, she is a full professor in the Civil Engineering Program, Engineering Faculty, of the Universidad Surcolombiana, Neiva, Colombia. Her research interests include: simulation and computational modeling. ORCID: 0000-0003-4526-2357

Área Curricular de Ingeniería Civil

Oferta de Posgrados

Especialización en Vías y Transportes
Especialización en Estructuras
Maestría en Ingeniería - Infraestructura y Sistemas de Transporte
Maestría en Ingeniería - Geotecnia
Doctorado en Ingeniería - Ingeniería Civil

Mayor información:
E-mail: asisacic_med@unal.edu.co
Teléfono: (57-4) 425 5172



Factors involved in the perceived risk of construction workers

Ignacio Rodríguez-Garzón a, Myriam Martínez-Fiestas b, Antonio Delgado-Padial c & Valeriano Lucas-Ruiz d

a Universidad Cientifica del Sur, Lima, Perú. irgarzon@ugr.es
b Universidad ESAN, Lima, Perú. mmartinezf@esan.edu.pe
c Universidad de Granada, Granada, España. adpadial@ugr.es
d Universidad de Sevilla, Sevilla, España. vlruiz@us.es

Received: September 15th, 2014. Received in revised form: January 14th, 2015. Accepted: May 19th, 2015.

Abstract
This article is an exploratory study of perceived risk in the construction sector. We used a sample of 514 workers from Spain, Peru and Nicaragua. The method used was the psychometric paradigm and, under its assumptions, we studied nine factors or qualitative attributes of risk. The main statistical analysis was carried out using a classification tree. The results show that four of the nine attributes studied significantly predict the perceived risk of the sample. The attribute concerning the delay of the consequences was the most important predictor in the model, followed by the attributes that explore the catastrophic potential of the risk and the severity of the consequences. Finally, the attribute related to personal vulnerability emerged. The implications of the results are discussed.
Keywords: risk; perceived risk; occupational safety; construction; psychometric paradigm.

Factores conformantes del riesgo percibido en los trabajadores de la construcción

Resumen
Este artículo es un estudio exploratorio acerca del riesgo percibido en el sector de la construcción. Se ha utilizado una muestra de 514 trabajadores de España, Perú y Nicaragua. El método utilizado ha sido el paradigma psicométrico y bajo sus premisas se han estudiado nueve factores o atributos cualitativos del riesgo. El análisis estadístico principal se ha desarrollado mediante un Árbol de clasificación. Como resultado se ha obtenido que cuatro de los nueve atributos estudiados predicen de manera significativa el riesgo percibido de la muestra. El atributo relativo a la demora de las consecuencias ha resultado ser el más importante en dicho modelo de predicción, seguido por el atributo que explora el potencial catastrófico del riesgo y el atributo que explora la gravedad de las consecuencias. Por último ha emergido el atributo relacionado con la vulnerabilidad personal. Las implicaciones de los resultados son expuestas.
Palabras clave: riesgo; riesgo percibido; seguridad ocupacional; construcción; paradigma psicométrico.

1. Introducción

El sector de la construcción es uno de los más peligrosos. Las tasas de muertes y lesiones se pueden describir como inaceptablemente altas [1]. Para intentar modificar esta situación han surgido diferentes propuestas. Así, los primeros intentos de reducir los accidentes laborales se orientaron hacia aspectos técnicos desde el punto de vista de la ingeniería. Sin embargo, a pesar de conocerse desde hace bastante tiempo que el comportamiento del trabajador es fundamental para evitar accidentes [2], la literatura pone de manifiesto que se ha influenciado poco sobre la actitud del

trabajador [3]. En este artículo se estudia la percepción del trabajador ante los riesgos de su trabajo, y qué factores son importantes en su conceptualización de dichos riesgos.

1.1. Riesgo

Existen numerosas definiciones de la palabra riesgo. Esto se debe a que es un constructo difícil de definir. Atendiendo a Slovic & Weber [4], un riesgo (risk) puede ser definido desde el punto de vista de un peligro (hazard), como una probabilidad, como una consecuencia o como una exposición potencial a la adversidad o amenaza. Sin embargo, es habitual definir un peligro

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 257-265. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online. DOI: http://dx.doi.org/10.15446/dyna.v82n192.44999



refiriéndose a la probabilidad del daño (valor cuantitativo), mientras que el riesgo se definiría como la posibilidad de que un daño ocurra (valor cualitativo). Hermansson [5] define el riesgo como algo negativo que puede suceder en el futuro. Sin embargo, Kunreuther & Slovic [6] argumentan que hay que contextualizar el riesgo. Según ellos, no se puede reducir a medidas científicas exclusivamente. El riesgo no es un concepto sólido y cerrado, sino más bien moldeable y abierto según la forma en la que se observe. De esta forma, el riesgo se relaciona habitual e indistintamente con palabras como oportunidad, posibilidad y probabilidad [7,8].

En la literatura se pueden encontrar numerosas clasificaciones del riesgo. El concepto de riesgo puede variar según el tipo de peligro e incluso según la naturaleza del mismo [9]. Por ejemplo, los negocios necesitan un conjunto de métodos, procedimientos y modelos diferentes a otras áreas como la medicina, aunque sí podemos decir que los elementos básicos para la comprensión de los riesgos engloban dos dimensiones: (a) las posibles consecuencias y (b) las incertidumbres asociadas [10].

1.2. Riesgo percibido

A la percepción del riesgo, al haber sido estudiada tradicionalmente desde el área de la psicología, la sociología y la antropología más que desde la ingeniería, no se le ha prestado mucha atención en el contexto del lugar de trabajo y menos en la construcción [11]. Por ello, es un campo de estudio que está sin apenas desarrollar, siendo aún muy limitada nuestra comprensión de cómo perciben el riesgo los individuos o los grupos [12]. Por ejemplo, dentro de la industria de la construcción, [8] estudiaron el flujo de información que se produce al analizar una situación basada en el riesgo, y [9,13] cómo la concepción de los gerentes y contratistas modula el riesgo percibido. Sin embargo, con respecto a los trabajadores, aunque se ha estudiado su percepción del riesgo (p. ej. [14-17,22,24]), no se ha hecho en la medida que esta industria lo demanda.

Hallowell [14] define el riesgo percibido como el juicio subjetivo que una persona hace sobre la frecuencia y la gravedad de un riesgo en particular. Según Rundmo [18], la percepción del riesgo se compone de una evaluación subjetiva que mide la probabilidad de experimentar un accidente o una enfermedad causados por la exposición a una fuente de riesgo. Sin embargo, muchos autores han hecho hincapié en que el riesgo se debe explicar a través de varios atributos, además de por la probabilidad y las consecuencias. Los mayores estudios sobre percepción del riesgo se llevan a cabo sobre temas relacionados con la energía nuclear, desastres naturales y terrorismo [19], aunque también se han hecho estudios acerca de muchos otros peligros como, por ejemplo, terremotos, inundaciones, huracanes, incendios forestales, deslizamientos de tierra, volcanes, emisiones de sustancias químicas tóxicas, desastres tecnológicos, terrorismo, etc. [20].

1.3. Riesgo percibido en el lugar de trabajo

El estudio acerca del riesgo percibido en el ámbito de la seguridad ocupacional ha sido mayoritariamente

desarrollado en las plataformas petrolíferas. Entre otras razones, en dichas plataformas la percepción del riesgo ha sido un aspecto estudiado para dar una respuesta a las conductas inseguras del trabajador. Y es que es lógico admitir que el comportamiento ante los riesgos a que están expuestos los trabajadores dependa en parte de su percepción del riesgo [24]. El riesgo percibido ha sido estudiado como reflejo de las verdaderas condiciones de trabajo y del estado de la labor preventiva tal y como lo percibe el personal [21,13]. También se ha afirmado que la actitud de los trabajadores hacia la seguridad en la obra está influenciada por su percepción del riesgo, la dirección y los procesos y las reglas de seguridad [22]. Por otro lado, se ha estudiado la relación positiva entre los accidentes laborales y el riesgo percibido de sufrir un daño [23].

Según Arezes & Bizarro [24], conocer la percepción del riesgo que tienen los trabajadores es fundamental para saber cómo enfocar la gestión de la seguridad. Por ejemplo, Harrell [25] encontró que el riesgo percibido se asocia con una disposición de los trabajadores a adoptar medidas de seguridad. Se ha estudiado la relación entre la cultura de la prevención y el riesgo percibido [15,16] o la relación entre el riesgo percibido de tener un accidente y el comportamiento seguro del trabajador [26]. En este sentido, Mullen [27] dice que la percepción de los trabajadores de poder hacerse daño es uno de los mejores indicadores de un comportamiento laboral seguro. En definitiva, conocer cómo los trabajadores perciben el riesgo puede ayudar a comprender y a gestionar el comportamiento laboral seguro. Este trabajo intenta ser un aporte a dicho estudio, debido a que la industria de la construcción es una de las más peligrosas [1].

2. Metodología

2.1. Objetivo de investigación

El presente estudio plantea como objetivo principal analizar cómo se genera una alta/baja percepción del riesgo por parte de los trabajadores de la construcción. Las razones fundamentales que llevan a abordar dicho objetivo de investigación se desprenden de la revisión de la literatura: (i) el sector objeto de análisis es un sector donde existe una falta de investigación de este tema [11]; (ii) el trabajo de los operarios de la construcción posee altas tasas de peligrosidad y accidentabilidad, siendo necesario investigar cómo reducir dicha situación [24]; (iii) ha quedado patente por multitud de autores que la percepción del riesgo puede desarrollar en el lugar de trabajo un rol importante para la generación de un ambiente seguro [23,25-27].

2.2. Instrumento de medida

Para medir la percepción del riesgo de los trabajadores de la construcción se optó por utilizar el paradigma psicométrico. Con este modelo, el riesgo se aborda como un constructo social que se caracteriza por ser multidimensional. Nacen así las distintas dimensiones del riesgo como atributos que generan una idea global a partir de varios valores o cualidades.




Es un modelo muy interesante usado recientemente para describir la percepción del riesgo [28]. Su uso es interesante debido a que mide diferentes atributos del riesgo y que es muy fácil de usar. Mediante este modelo, en este estudio se van a evaluar nueve atributos o dimensiones del riesgo y un atributo cuantitativo global basándonos en el estudio de Fischhoff y colaboradores [29] y la adaptación realizada por Portell y Solé [30].

Tabla 1. Preguntas del cuestionario relativas a las diferentes dimensiones cualitativas del riesgo percibido.
Atributo | Pregunta | Factor explorado
A1 | ¿Cree que posee suficientes conocimientos en temas relacionados con la seguridad? | Conocimiento del propio trabajador
A2 | ¿Considera que los responsables de seguridad de la empresa conocen los riesgos con los que trabaja usted cada día? | Conocimiento del responsable de seguridad y salud
A3 | ¿Cuánto teme al daño que le pueda ocurrir mientras realiza su trabajo? | Temor
A4 | ¿Qué probabilidad tiene usted de experimentar un daño como consecuencia de la realización de su trabajo? | Vulnerabilidad personal
A5 | En caso de producirse una situación de riesgo en su trabajo, ¿qué daño le podría producir a usted? | Gravedad de las consecuencias
A6 | ¿Qué puede hacer usted para evitar que haya un problema que pueda conducir a una situación de riesgo? | Acción preventiva (control fatalidad)
A7 | En una situación de riesgo que pueda producirse, ¿qué posibilidad tiene de intervenir para controlarla? | Acción protectora (control del daño)
A8 | ¿Es posible que se puedan producir situaciones de riesgo en las que se vea afectado un gran número de personas? | Potencial catastrófico
A9 | ¿Cree que su trabajo puede afectar a su salud a largo plazo? | Demora de las consecuencias
Fuente: Elaboración propia a partir de Fischhoff y colaboradores [29] y Portell & Solé [30]

2.3. Diseño de la investigación

La población objeto de estudio se centró en operarios de la construcción de diferentes países. Específicamente, la muestra estuvo compuesta por trabajadores de España, Perú y Nicaragua. La recogida de datos se realizó durante el año 2013. La muestra final estuvo formada por 514 sujetos. El tamaño de la muestra fue superior al requerido para una población infinita con un error muestral del 5% y un nivel de confianza del 95%, bajo los supuestos de muestreo aleatorio simple. De la muestra total, 204 sujetos pertenecieron a la muestra española, 213 a la peruana y 97 a la nicaragüense. No obstante, para algunos análisis la muestra finalmente válida fue de 505 sujetos, dada la existencia de algunos datos perdidos en determinadas variables consideradas. Los 9 casos perdidos fueron de las siguientes submuestras: 3 de la muestra de España, 5 de la de Perú y 1 de la de Nicaragua.

Los sujetos de la muestra fueron encuestados en los propios centros de trabajo (obras de edificación) o en centros de formación (con anterioridad al comienzo de los cursos). Se accedió a los centros de trabajo y formación que libremente aceptaron participar en el estudio. Aunque no hubo una elección de los centros por parte de los investigadores, la muestra no pudo obtenerse mediante muestreo aleatorio simple. Sin embargo, en ningún momento existió relación entre los autores y los sujetos de la muestra. Para la recogida de la información se utilizaron cuestionarios autoadministrados, con presencia en todo momento del mismo encuestador. Sjöberg [31] recomienda el uso del cuestionario por ser la herramienta más común en el estudio del riesgo percibido.

El cuestionario estuvo estructurado en dos bloques. El primer bloque recogía las diferentes dimensiones del paradigma psicométrico (A1-A9). Éstas se presentaron mediante escalas tipo Likert con valores comprendidos entre 1 y 7. Al igual que en los trabajos originales y réplicas posteriores, los nueve atributos cualitativos se acompañaron también de una pregunta cuantitativa general del riesgo (G1), que se colocó después de A9 e iba numerada para ser contestada de 0 a 100 en tramos de 5 puntos. Las preguntas realizadas y el factor o atributo del riesgo que explora cada una de ellas se exponen en la Tabla 1. El segundo bloque contenía preguntas de carácter sociodemográfico y variables propias del sector. En la Tabla 2 se muestran las principales variables sociodemográficas consideradas en el estudio, así como las medias y porcentajes obtenidos en la muestra global y por países.

Tabla 2. Variables sociodemográficas y variables del sector
Variables descriptivas (medias y %) | Muestra total | Operarios de España | Operarios de Perú | Operarios de Nicaragua
Edad | 35 | 38 | 34 | 33
Estado civil (casados) | 70% | 62% | 73% | 77%
N° de hijos | 1.5 | 1.2 | 1.6 | 2.0
Años de experiencia | 13 | 19 | 9 | 10
Tamaño de la empresa (N° de trabajadores) | 77 | 78 | 80 | 65
Fuente: Elaboración propia
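Como comprobación del tamaño muestral citado en la sección 2.3, bajo los supuestos estándar de muestreo aleatorio simple para una población infinita (p = 0.5, error e = 5%, z = 1.96 para el 95% de confianza; la fórmula es la habitual, no está tomada del artículo):

$$n_{\min} = \frac{z^2\, p\,(1-p)}{e^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.05^2} \approx 384.2 \;\Rightarrow\; 385 < 514$$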

En cada país analizado se realizó un pretest que comprendió dos etapas: en la primera etapa, el cuestionario fue completado por cinco profesionales de la construcción (ingenieros civiles y arquitectos) para asegurar que el lenguaje era adecuado; y en la segunda etapa, el cuestionario fue testado con una muestra de trabajadores similar a la que se utilizaría en el estudio definitivo. Los resultados de ambos




pretests fueron positivos en todos los países, realizándose solo pequeños cambios.

2.4. Análisis estadístico

Para dar respuesta a nuestro objetivo de investigación se procedió a realizar un Árbol de clasificación. Esta técnica puede generar reglas probabilísticas para predecir en qué situaciones y bajo qué circunstancias un operario de la construcción tendrá una alta o baja percepción del riesgo. Como variables independientes del modelo se incluyeron todas las dimensiones cualitativas de la percepción del riesgo (A1 a A9) y, como variable dependiente, la percepción del riesgo global (G1). Específicamente, las variables independientes se trataron como variables continuas, al estar medidas en escala Likert de 7 puntos. La variable dependiente fue recodificada en dos tramos como variable nominal; para ello se utilizó el valor de la mediana, para obtener grupos iniciales similares en tamaño. Concretamente, se agruparon en una categoría los valores de 0 a 70 y, en otra categoría, los valores mayores de 70, esto es: 75, 80, 85, 90, 95 y 100. De esta forma se podría analizar la diferencia entre los trabajadores que perciben una alta percepción del riesgo en su trabajo (75 o más sobre 100) y el resto.

Dado que la muestra estuvo compuesta por operarios de la construcción de diferentes países, con anterioridad a la realización del Árbol de clasificación se analizó si existían o no diferencias en las percepciones del riesgo global de los trabajadores según el país de procedencia. Para ello se llevó a cabo un análisis de la varianza para la percepción del riesgo global. El objetivo de este análisis previo era identificar si debería o no considerarse la variable nacionalidad en el modelo.

El Árbol de clasificación se llevó a cabo a través del método de crecimiento CHAID exhaustivo [32]. Este método supone una modificación del método de crecimiento de detección automática de interacciones mediante Chi cuadrado (CHAID), que deja fuera del modelo las dimensiones cualitativas de la percepción del riesgo que no son estadísticamente significativas para la determinación de G1. Los criterios de crecimiento del Árbol fueron los siguientes: la profundidad máxima del Árbol fue de 3 y el número de casos mínimo por nodo fue de 95 casos por nodo parental y de 35 por nodo filial. Estas decisiones se derivan de la elección del método de crecimiento seleccionado y del tamaño muestral del estudio. El método de cálculo del estadístico Chi cuadrado fue el de cociente de verosimilitudes, dado que es un método más robusto que el de Pearson y es adecuado dado el tamaño muestral disponible. Así mismo, las comparaciones múltiples y los valores de significación para los criterios de división y fusión se corrigieron utilizando el método de Bonferroni. El nivel de significación fue establecido en 0.05 para todos los análisis. Como intervalos de escala del análisis se seleccionó el valor de 7, dado que todas las variables independientes estuvieron medidas en una escala Likert de 7 puntos.

Se llevó a cabo una validación cruzada del modelo con un número de pliegues de la muestra igual a 10, con el objetivo

de evaluar la bondad de la estructura del Árbol en caso de generalizarse para una mayor población. Se optó por este método dado que la validación por división muestral no es recomendable para muestras pequeñas. En último lugar, y con antelación a la realización del análisis, se corroboró que los datos perdidos en las variables independientes no fueran en ningún caso superiores al 5%. Este aspecto es importante dado que el método CHAID exhaustivo trata los valores perdidos como una categoría que se combina con aquella categoría de datos válidos que más se le parece, una vez que el algoritmo la ha generado en el modelo. La codificación de las encuestas, el tratamiento de los datos y los análisis estadísticos se llevaron a cabo a través del software IBM SPSS Statistics 21 [33].

3. Resultados y discusión

Tal y como se indica en el apartado anterior, para corroborar si el país del trabajador estaba ejerciendo una influencia en la percepción del riesgo global, se llevó a cabo un análisis de la varianza donde la variable independiente fue el país de los trabajadores y, la variable dependiente, la percepción del riesgo global (G1). Dado que el test de Levene no pudo confirmar la inexistencia de heterocedasticidad (Levene=6.302, p-valor<0.05), se optó por realizar el análisis con el test de Welch para identificar la existencia o no de diferencias significativas entre las medias de G1 de los tres países. El test de Welch no reveló diferencias significativas (Welch=2.702, p-valor>0.05), por lo que pudo confirmarse que los trabajadores de la construcción no tuvieron una percepción del riesgo global diferente según el país. Por tanto, la variable nacionalidad no fue considerada como una variable de influencia en el modelo.

Posteriormente se llevó a cabo el análisis del Árbol de clasificación. El análisis (ver Fig. 1) muestra siete nodos finales: nodos 1, 2, 5, 6, 7, 9 y 10. De ellos, en los nodos 1, 2, 5 y 7 la probabilidad de que el trabajador tenga una percepción del riesgo baja es elevada, sobre todo en los tres primeros casos, que se encuentran por encima del 65%. Los nodos finales 6, 9 y 10 muestran el patrón contrario. Por tanto, para conseguir que los operarios de la construcción tengan un comportamiento seguro a través de su percepción del riesgo, será necesario conseguir que se posicionen en cualquiera de los nodos 6, 9 y 10, siendo este último en el que habrá una mayor probabilidad de éxito (86%).

Como se observa en la Fig. 1, el Árbol de clasificación mostró tres niveles para la configuración de los nodos. En el primer nivel se encontró la variable relativa a la demora de las consecuencias (A9) como la variable más discriminante del modelo (χ²=64.823; g.l.=3; p-valor<0.001) y que, por tanto, mejor predice la percepción del riesgo global de los operarios de la construcción. Según Harrell [25] y Mullen [27], a este atributo del riesgo los trabajadores le suelen prestar menos atención, priorizando antes la inmediatez de los efectos. Este atributo puede estar relacionado con conceptos como la higiene [13,34], la ergonomía o los riesgos psicosociales, ya que, generalmente, los efectos de estos aparecen más tarde y no de forma inmediata en el sujeto expuesto. Por tanto, en función de los resultados hallados,




será importante considerar estos aspectos para generar una elevada percepción del riesgo global. En el segundo nivel emergen las variables relativas al potencial catastrófico (A8) (χ²=9.286; g.l.=1; p-valor<0.05) y a la gravedad de las consecuencias (A5) (χ²=10.838; g.l.=1; p-valor<0.05) como variables predictoras del modelo. El atributo A8 (potencial catastrófico) explora cuánto puede afectar el riesgo en caso de materializarse. Este resultado implica que cuando los trabajadores de la construcción perciben que los riesgos derivados de su trabajo pueden afectar a un gran número de personas, su percepción del riesgo global aumenta. Es un atributo que clásicamente suele aparecer en riesgos cuya materialización suele perdurar en el tiempo [35]. Por otro lado, el atributo A5 (gravedad de las consecuencias) suele formar parte, junto con A4 (vulnerabilidad personal o

probabilidad de ocurrencia), de los sistemas de gestión del riesgo [36]. En el último nivel emerge, a partir de A5, el atributo A4 (vulnerabilidad personal o probabilidad de ocurrencia) (χ²=10.449; g.l.=1; p-valor<0.05). Esto implica que ambas dimensiones estarían conectadas, teniendo una mayor relevancia A5 frente a A4. En esta línea, Bohm & Harris [37], en su estudio sobre conductores de dumpers, encontraron que, en consonancia con el público en general, estos trabajadores daban más importancia a la gravedad de las consecuencias. Rundmo [21] describe la percepción del riesgo por parte de los trabajadores de una forma muy similar a Bohm & Harris [37], realzando más la gravedad de las consecuencias frente a la vulnerabilidad personal.
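Como ilustración de la mecánica descrita en la sección 2.4, el siguiente esbozo en Python es un supuesto ilustrativo: scikit-learn implementa árboles CART y no el CHAID exhaustivo de SPSS, y los datos son sintéticos, no la muestra real de 505 trabajadores; solo pretende mostrar la dicotomización de G1, los tamaños mínimos de nodo y la validación cruzada de 10 pliegues.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 505
X = rng.integers(1, 8, size=(n, 9)).astype(float)  # A1..A9 en escala Likert 1-7
g1 = 5 * rng.integers(0, 21, size=n)               # G1 de 0 a 100 en tramos de 5
y = (g1 > 70).astype(int)                          # 1 = alta percepción (75 o más)

# Profundidad máxima 3 y mínimos de 95 (nodo parental) y 35 (nodo filial),
# como en el artículo; CART usa Gini en lugar del chi-cuadrado de CHAID.
arbol = DecisionTreeClassifier(max_depth=3, min_samples_split=95,
                               min_samples_leaf=35, random_state=0)

# Validación cruzada con 10 pliegues.
print(cross_val_score(arbol, X, y, cv=10).mean())
```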

Figura 1. Árbol de clasificación. Fuente: Elaboración propia




Tabla 3. Ganancias para los nodos por categorías definidas de la percepción del riesgo global (G1).
Nodo | Muestra global: N | Muestra global: % | Categoría 0-70: N | % por nodo | % por categoría | Categoría 75-100: N | % por nodo | % por categoría
1 | 114 | 22.60% | 93 | 81.58% | 33.50% | 21 | 18.42% | 9.30%
5 | 38 | 7.50% | 27 | 71.05% | 9.70% | 11 | 28.95% | 4.80%
2 | 74 | 14.70% | 51 | 68.92% | 18.30% | 23 | 31.08% | 10.10%
7 | 73 | 14.50% | 39 | 53.42% | 14.00% | 34 | 46.58% | 15.00%
6 | 58 | 11.50% | 23 | 39.66% | 8.30% | 35 | 60.34% | 15.40%
9 | 98 | 19.40% | 38 | 38.78% | 13.70% | 60 | 61.22% | 26.40%
10 | 50 | 9.90% | 7 | 14.00% | 2.50% | 43 | 86.00% | 18.90%
Fuente: Elaboración propia
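Las columnas porcentuales de la Tabla 3 se derivan directamente de los conteos N de cada nodo; a modo de comprobación aritmética (esbozo ilustrativo con los datos del nodo 1):

```python
# Nodo 1 de la Tabla 3: 114 casos, 93 en la categoría 0-70 y 21 en la 75-100.
n_nodo, n_bajo, n_alto = 114, 93, 21
total_bajo = 93 + 27 + 51 + 39 + 23 + 38 + 7     # suma de la columna N (0-70) = 278
print(round(100 * n_bajo / n_nodo, 2))           # 81.58 -> "% por nodo"
print(round(100 * n_alto / n_nodo, 2))           # 18.42 -> "% por nodo"
print(round(100 * n_bajo / total_bajo, 2))       # 33.45 (el artículo redondea a 33.50)
```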

Específicamente, se extrae de lo anterior que, para poder conseguir que un operario de la construcción tenga una alta percepción del riesgo global de su trabajo (G1), es necesario que perciba de forma muy significativa que las actividades de su trabajo pueden generarle consecuencias negativas en el futuro. Así mismo, el potencial catastrófico y la gravedad de las consecuencias serán otras dos dimensiones de la percepción del riesgo que habrá que trabajar con los operarios de la construcción para conseguir el objetivo, una vez conseguido el primer aspecto. En ambos casos, una alta percepción de estas dos dimensiones garantizará una mayor probabilidad de conseguir una elevada percepción del riesgo global. En último lugar, una vez alcanzada una alta percepción del riesgo en relación con la gravedad de las consecuencias, sería recomendable realizar acciones que incrementen la percepción de vulnerabilidad, dado que cuanto mayor sea la vulnerabilidad personal que el trabajador perciba de que puede materializarse un riesgo existente en su ambiente laboral, mayor será el riesgo percibido global.

Un análisis más detallado se llevó a cabo a través de las ganancias para cada nodo por categorías de la variable G1. Como se observa en la Tabla 3, es necesario llevar a cabo acciones que reviertan la situación actual dado que, por ejemplo, en el nodo 1 se encuentra más del 20% de la muestra y, en el mismo, más del 80% tendrían una baja percepción del riesgo, representando más del 30% de los trabajadores con baja percepción. Así mismo, el nodo más beneficioso para alcanzar una alta percepción del riesgo global (nodo 10) representa menos del 10% de la muestra. En función de estos resultados, cabría citar los nodos terminales 9 y 6 como nodos a considerar, por la alta probabilidad de generar una percepción global del riesgo del trabajo elevada. Adicionalmente, la Tabla 3 mostró que el orden de los nodos en el que se encontraría un mayor porcentaje de trabajadores con menor percepción del riesgo era el siguiente: 1, 5, 2, 7, 6, 9, 10. El patrón contrario sería el aplicable en el caso del análisis de las ganancias para los percentiles de la categoría de G1 de 75 a 100.

En adición a lo anterior, para evaluar cada categoría de la percepción del riesgo global (G1) se tuvieron en consideración los gráficos del índice (que representa los valores de los índices de percentiles, calculados como: (% de respuestas de percentiles acumulados / % de respuestas total) × 100) y de la respuesta (que representa las respuestas por percentiles acumulados, calculadas como: (n de percentil de criterios acumulados / n total de percentiles acumulados) × 100) para cada nodo. Ambos gráficos se representan por percentiles acumulados. Como se observa en la Fig. 2, para las dos categorías de G1 el índice y la respuesta fueron adecuados. Los índices comenzaron por encima del 140%, se mantuvieron constantes algunos percentiles y posteriormente descendieron hasta alcanzar el valor de 100%. En el caso de las respuestas, para ambos casos se observó que estuvieron situadas entre el 80% y el 40% para todos los percentiles.

Figura 2. Gráficos de índices y respuestas para cada categoría de la percepción del riesgo global (G1). Fuente: Elaboración propia

En último lugar se procedió a estimar el riesgo, su error típico y la tabla de clasificación como medidas de precisión predictiva del modelo hallado. La estimación del riesgo y su error típico es la proporción de casos clasificados incorrectamente después de corregidos, respecto a las probabilidades previas y los costes de clasificación errónea. En el presente modelo la estimación del error fue de 0.311 y su error típico de 0.021. No obstante, al objeto de poder evaluar la bondad de la estructura del Árbol cuando se generaliza para una mayor población, se procedió a realizar una validación cruzada y calcular la estimación del error como promedio de los riesgos de todos los Árboles generados en dicha validación. Se comprobó que la estimación del error tras la validación no aumentó considerablemente, siendo su valor de 0.378 y su error típico de 0.022.

La tabla de clasificación muestra el número de casos clasificados correcta e incorrectamente para cada categoría de la variable dependiente, así como el porcentaje global de casos que han sido clasificados correctamente (ver Tabla 4).

Tabla 4. Tabla de clasificación.
Observado | Pronosticado: N en 0-70 | Pronosticado: N en 75-100 | Porcentaje correcto
N en 0-70 | 210 | 68 | 75.5%
N en 75-100 | 89 | 138 | 60.8%
Porcentaje global | 59.2% | 40.8% | 68.9%
Fuente: Elaboración propia

Por todos estos resultados se pudo concluir que el modelo generado tuvo una buena capacidad de predicción (aprox. 70%), comparándola con la posibilidad de clasificar correctamente a un individuo al azar (50%). Además, si se considera el error de estimación, se puede concluir que las posibilidades de clasificar correctamente a un trabajador con alta o baja percepción del riesgo son un 37.8% mayores utilizando el Árbol que sin él.

4. Conclusiones

Se ha analizado el riesgo percibido en una muestra de trabajadores de la construcción en España, Perú y Nicaragua. El método utilizado ha sido el paradigma psicométrico. Con este método se hace hincapié en la subjetividad del riesgo y en la multidimensionalidad de éste. Para abordar el objetivo de investigación se han analizado nueve atributos del riesgo y la importancia de cada uno en la composición del constructo denominado riesgo percibido. Se ha encontrado que cuatro de esos atributos tienen un valor estadísticamente significativo para la predicción del riesgo percibido. La capacidad predictiva del modelo hallado es cercana al 70%.

El atributo con más peso en la predicción del riesgo que perciben los trabajadores ha sido el relativo a la demora de las consecuencias. Específicamente, el modelo revela que los trabajadores que son conscientes de que los riesgos sufridos en el trabajo les pueden generar consecuencias negativas en el futuro tienen mayores probabilidades de poseer una alta percepción del riesgo global de su trabajo. Es el más importante en la contextualización del riesgo percibido. Esto es un resultado muy interesante, ya que tradicionalmente se le suele dar más importancia a la inmediatez de los efectos. El siguiente atributo en importancia se relaciona con el potencial catastrófico de los riesgos en el trabajo; es decir, el trabajador cree que los riesgos con los que convive en su trabajo diario pueden tener una repercusión importante para los demás. En el mismo nivel de importancia se encuentra el atributo relacionado con la gravedad de las consecuencias y, a partir de éste, surge, en último lugar, el atributo que explora la vulnerabilidad personal (o probabilidad de ocurrencia).

El modelo genera reglas probabilísticas útiles para poder predecir la percepción del riesgo global que tendrá el trabajador (percepción alta o percepción baja). Para una mejor comprensión, las Figs. 3 y 4 representan simplificaciones de rutas del Árbol de clasificación (Fig. 1). Las rutas que conducen a una alta percepción del riesgo global se pueden ver en la Fig. 3. En ella destaca la Ruta 1, con una probabilidad del 86% de que el trabajador que siga esa ruta posea una elevada percepción del riesgo. Por otro lado, las rutas que conducen a una baja percepción del riesgo global se pueden ver en la Fig. 4. En ellas destaca la Ruta 1, con una probabilidad del 82% de poseer una baja percepción del riesgo cuando un trabajador tenga una baja conciencia de las consecuencias futuras que puede generar un riesgo en su trabajo. En función de las reglas probabilísticas halladas en el modelo, para que los operarios de la construcción posean una alta percepción del riesgo de su trabajo han de considerarse diversas dimensiones cualitativas, si bien basta con que la dimensión relativa a la demora de las consecuencias sea baja para que el resultado en cuanto a la percepción del riesgo no sea el deseado (baja percepción del riesgo). A partir de estos resultados, puede entenderse que la percepción del riesgo en el sector de la construcción es multidimensional.

Figura 3. Secuencias y orden de las rutas con mayor probabilidad de obtener una alta percepción del riesgo global. Fuente: Elaboración propia



Figura 4. Secuencias y orden de las rutas con mayor probabilidad de obtener una baja percepción del riesgo global. Fuente: Elaboración propia

El estudio de las dos figuras anteriores permite alertar de la gravedad de la situación actual. El 22.6% de la población se encuentra en la ruta donde existe una elevada probabilidad de tener una percepción del riesgo global baja (ver Ruta 1 en la Fig. 4) y solo el 9.9% de la población está en la ruta donde existe una elevada probabilidad de tener una alta percepción del riesgo (ver Ruta 1 en la Fig. 3). Por ello se recomienda que se estudie más a fondo la posibilidad de crear procedimientos, estrategias y acciones preventivas para los trabajadores de forma que varíen su percepción del riesgo. Dicha formación debería focalizarse en los cuatro atributos del riesgo que se han determinado como predictivos, prestando una mayor atención al relativo a la demora de las consecuencias; posteriormente, a la gravedad de las consecuencias junto con la vulnerabilidad personal y, por último, al potencial catastrófico. La razón de estas prioridades se deriva de las probabilidades de conseguir una alta percepción del riesgo siguiendo las diferentes rutas halladas en el modelo. Este trabajo presenta un avance para el estado del arte ya que no solo determina las variables más importantes en la determinación de la percepción del riesgo global, sino que también establece cómo se va conformando secuencialmente dicha percepción, aspecto hasta el momento no abordado por la literatura existente. Por otro lado, como ya se ha apuntado, las secuencias y probabilidades de éxito o fracaso en cada nodo permiten a los gestores determinar focos de acción en sus estrategias.

Este estudio es exploratorio y tiene limitaciones. En primer lugar, no ha sido realizado mediante muestreo aleatorio simple. En segundo lugar, es un estudio transversal en el tiempo; es decir, no ha tenido continuidad. Y en tercer lugar, es un estudio realizado en tres países de habla hispana. Se proponen como futuras investigaciones ampliar la muestra a trabajadores de otros países y realizar estudios longitudinales que analicen las variaciones de la percepción del riesgo de los trabajadores tras la implementación de acciones concretas de gestión ocupacional siguiendo los resultados del presente estudio.

Referencias

[1] Konkolewsky, H.H., Actions to improve safety and health in construction. Luxembourg: European Agency for Safety and Health at Work, 2004.
[2] Heinrich, H.W., Industrial accident prevention. A scientific approach. Second Ed. New York & London: McGraw-Hill Book Company, Inc., 1941.
[3] Lund, J. and Aarø, L.E., Accident prevention. Presentation of a model placing emphasis on human, structural and cultural factors. Safety Science, 42 (4), pp. 271-324, 2004. DOI: 10.1016/S0925-7535(03)00045-6
[4] Slovic, P. and Weber, E.U., Perception of risk posed by extreme events. In: Regulation of toxic substances and hazardous waste. 2nd ed. Foundation Press, Forthcoming, 2002.
[5] Hermansson, H., Defending the conception of "Objective Risk". Risk Analysis, 32 (1), pp. 16-24, 2012. DOI: 10.1111/j.1539-6924.2011.01682.x
[6] Kunreuther, H. and Slovic, P., Science, values, and risk. Annals of the American Academy of Political and Social Science, pp. 116-125, 1996. DOI: 10.1177/0002716296545001012
[7] Botin, J.A., Guzman, R.R. and Smith, M.L., A methodological model to assist in the optimization and risk management of mining investment decisions. DYNA, 78 (170), pp. 221-226, 2011.
[8] Faber, M.H. and Stewart, M.G., Risk assessment for civil engineering facilities: Critical overview and discussion. Reliability Engineering & System Safety, 80 (2), pp. 173-184, 2003.
[9] Otway, H. and Winterfeldt, D., Expert judgment in risk analysis and management: Process, context, and pitfalls. Risk Analysis, 12 (1), pp. 83-93, 1992. DOI: 10.1111/j.1539-6924.1992.tb01310.x
[10] Aven, T. and Kristensen, V., Perspectives on risk: Review and discussion of the basis for establishing a unified and holistic approach. Reliability Engineering & System Safety, 90 (1), pp. 1-14, 2005. DOI: 10.1016/j.ress.2004.10.008
[11] MacDonald, G., Risk perception and construction safety. Proceedings of the Institution of Civil Engineers - Civil Engineering, 159 (6), pp. 51-56, 2006. DOI: 10.1680/cien.2006.159.6.51
[12] Lu, S. and Yan, H., A comparative study of the measurements of perceived risk among contractors in China. International Journal of Project Management, 31 (2), pp. 307-312, 2013. DOI: 10.1016/j.ijproman.2012.06.001
[13] Holmes, N., Lingard, H., Yesilyurt, Z. and De Munk, F., An exploratory study of meanings of risk control for long term and acute effect occupational health and safety risks in small business construction firms. Journal of Safety Research, 30 (4), pp. 251-261, 1999. DOI: 10.1016/S0022-4375(99)00020-1
[14] Hallowell, M., Safety risk perception in construction companies in the Pacific Northwest of the USA. Construction Management and Economics, 28 (4), pp. 403-413, 2010. DOI: 10.1080/01446191003587752
[15] Lopez-del Puerto, C., Clevenger, C.M., Boremann, K. and Gilkey, D.P., Exploratory study to identify perceptions of safety and risk among residential Latino construction workers as distinct from commercial and heavy civil construction workers. Journal of Construction Engineering and Management, 140 (2), pp. 1-7, 2013.
[16] Perlman, A., Sacks, R. and Barak, R., Hazard recognition and risk perception in construction. Safety Science, 64, pp. 22-31, 2014. DOI: 10.1016/j.ssci.2013.11.019
[17] Wu, X., Liu, Q., Zhang, L., Skibniewski, M.J. and Wang, Y., Prospective safety performance evaluation on construction sites. Accident Analysis & Prevention, 78, pp. 58-72, 2015. DOI: 10.1016/j.aap.2015.02.003
[18] Rundmo, T., Safety climate, attitudes and risk perception in Norsk Hydro. Safety Science, 34 (1-3), pp. 47-59, 2000. DOI: 10.1016/S0925-7535(00)00006-0
[19] Kaptan, G., Shiloh, S. and Önkal, D., Values and risk perceptions: A cross-cultural examination. Risk Analysis, 33 (2), pp. 318-332, 2013. DOI: 10.1111/j.1539-6924.2012.01875.x
[20] Bourque, L.B., Regan, R., Kelley, M.M., Wood, M.M., Kano, M. and Mileti, D.S., An examination of the effect of perceived risk on preparedness behavior. Environment and Behavior, 45 (5), pp. 615-649, 2013. DOI: 10.1177/0013916512437596
[21] Rundmo, T., Risk perception and safety on offshore petroleum platforms—Part II: Perceived risk, job stress and accidents. Safety Science, 15 (1), pp. 53-68, 1992. DOI: 10.1016/0925-7535(92)90039-3
[22] Mohamed, S., Ali, T.H. and Tam, W., National culture and safe work behaviour of construction workers in Pakistan. Safety Science, 47 (1), pp. 29-35, 2009. DOI: 10.1016/j.ssci.2008.01.003
[23] Gucer, P.W., Oliver, M. and McDiarmid, M., Workplace threats to health and job turnover among women workers. Journal of Occupational and Environmental Medicine, 45 (7), pp. 683-690, 2003. DOI: 10.1097/01.jom.0000071508.96740.94
[24] Arezes, P.M. and Bizarro, M., Alcohol consumption and risk perception in the Portuguese construction industry. Open Occupational Health & Safety Journal, 3, pp. 10-17, 2011. DOI: 10.2174/1876216601103010010
[25] Harrell, W.A., Perceived risk of occupational injury: Control over pace of work and blue-collar versus white-collar work. Perceptual and Motor Skills, 70 (3c), pp. 1351-1359, 1990. DOI: 10.2466/pms.1990.70.3c.1351
[26] Seo, D., An explicative model of unsafe work behavior. Safety Science, 43 (3), pp. 187-211, 2005. DOI: 10.1016/j.ssci.2005.05.001
[27] Mullen, J., Investigating factors that influence individual safety behavior at work. Journal of Safety Research, 35 (3), pp. 275-285, 2004. DOI: 10.1016/j.jsr.2004.03.011
[28] Rivers, L., Arvai, J. and Slovic, P., Beyond a simple case of black and white: Searching for the white male effect in the African-American community. Risk Analysis, 30 (1), pp. 65-77, 2010. DOI: 10.1111/j.1539-6924.2009.01313.x
[29] Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S. and Combs, B., How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sciences, 9 (2), pp. 127-152, 1978. DOI: 10.1007/BF00143739
[30] Portell, M. y Solé, M.D., Riesgo percibido: Un procedimiento de evaluación (NTP 578). Madrid: Instituto Nacional de Seguridad e Higiene en el Trabajo (INSHT), 2001.
[31] Sjöberg, L., Risk perception of alcohol consumption. Alcoholism: Clinical and Experimental Research, 22 (S7), pp. 277s-284s, 1998. DOI: 10.1111/j.1530-0277.1998.tb04380.x
[32] Berlanga-Silvente, V., Rubio-Hurtado, M. y Vilà-Baños, R., Com aplicar arbres de decisió en SPSS. REIRE - Revista d'Innovació i Recerca en Educació, 6 (1), pp. 65-79, 2013.
[33] IBM Corp., Released 2012. IBM SPSS Statistics for Windows, Version 21.0. Armonk, NY: IBM Corp.
[34] Garzón, I.R., Padial, A.D., Fiestas, M.M. and Ruiz, V.L., The delay of consequences and perceived risk: An analysis from the workers' view point. Revista Facultad de Ingeniería, 74, pp. 165-176, 2015.
[35] Slovic, P., Fischhoff, B. and Lichtenstein, S., Why study risk perception? Risk Analysis, 2 (2), pp. 83-93, 1982. DOI: 10.1111/j.1539-6924.1982.tb01369.x
[36] McNeill, I.M., Dunlop, P.D., Heath, J.B., Skinner, T.C. and Morrison, D.L., Expecting the unexpected: Predicting physiological and psychological wildfire preparedness from perceived risk, responsibility, and obstacles. Risk Analysis, 33 (10), pp. 1829-1843, 2013. DOI: 10.1111/risa.12037
[37] Bohm, J. and Harris, D., Risk perception and risk-taking behavior of construction site dumper drivers. International Journal of Occupational Safety and Ergonomics, 16 (1), pp. 55-67, 2010.

I. Rodríguez-Garzón, es Dr. e Ingeniero de Edificación por la Universidad de Sevilla, España. Actualmente es docente en la Universidad Peruana de Ciencias Aplicadas, Lima, Perú. Su interés en la investigación se centra en el riesgo y su gestión en la actividad laboral, así como en la tecnología de la construcción.

M. Martínez-Fiestas, es Dra. por la Universidad de Granada, España. Actualmente es profesora investigadora en la Universidad ESAN, Lima, Perú, donde además es docente en la Escuela de Postgrado de Negocios. Su interés en la investigación se centra en aspectos relacionados con la percepción del riesgo, neuromarketing, psicofisiología, marketing social y comportamiento del consumidor.

A. Delgado-Padial, es Dr. y Psicólogo por la Universidad de Granada, España. Es profesor a tiempo completo en el Departamento de Psicología Social y Metodología de las Ciencias del Comportamiento de la Universidad de Granada, España. Ha sido decano de la facultad y actualmente es el director de la Cátedra "SABIO - Salud y Bienestar Organizacional".

V. Lucas-Ruiz, es Dr., Arquitecto Técnico y Licenciado en Económicas por la Universidad de Sevilla, España. Es profesor a tiempo completo y Director del Departamento de Construcciones Arquitectónicas II en la Universidad de Sevilla, España. Además es coordinador del Master en Seguridad Ocupacional en dicha universidad.

Área Curricular de Ingeniería Administrativa e Ingeniería Industrial

Oferta de Posgrados

Especialización en Gestión Empresarial
Especialización en Ingeniería Financiera
Maestría en Ingeniería Administrativa
Maestría en Ingeniería Industrial
Doctorado en Ingeniería - Industria y Organizaciones

Mayor información:
E-mail: acia_med@unal.edu.co
Teléfono: (57-4) 425 52 02


Cost estimation in software engineering projects with web components development

Javier de Andrés a, Daniel Fernández-Lanvin b & Pedro Lorca c

a Accounting Department, University of Oviedo, Asturias, España. jdandres@uniovi.es
b Department of Computer Sciences, University of Oviedo, Asturias, España. dflanvin@uniovi.es
c Accounting Department, University of Oviedo, Asturias, España. plorca@uniovi.es

Received: May 7th, 2014. Received in revised form: April 16th, 2015. Accepted: May 4th, 2015.

Abstract
A variety of models for the prediction of effort costs in software projects have been developed, including some that are specific to Web applications. In this research we tried to determine whether the use of specific models is justified, by comparing the cost behavior of Web and non-Web projects. We focused on two aspects of the cost calculation: the diseconomies of scale in software development, and the impact of some project features that are used as cost drivers. We hypothesized that for these kinds of projects diseconomies of scale are higher, but the cost-increasing effect of the cost drivers is mitigated. We tested such hypotheses using a set of real projects. Our results suggest that both hypotheses hold. Thus, the present research's main contribution to the literature is that the development of specific models for the estimation of effort costs for the case of Web developments is justified.
Keywords: software projects; Web; costs; estimation.

Estimación de costes en proyectos de ingeniería software con desarrollo de componentes web

Resumen
Existen multitud de modelos propuestos para la predicción de costes en proyectos de software, algunos orientados específicamente a proyectos Web. Este trabajo analiza si los modelos específicos para proyectos Web están justificados, examinando el comportamiento diferencial de los costes entre proyectos de desarrollo software Web y no Web. Se analizan dos aspectos del cálculo de costes: las deseconomías de escala y el impacto de algunas características de estos proyectos que son utilizadas como cost drivers. Se enuncian dos hipótesis: (a) en estos proyectos las deseconomías de escala son mayores y (b) el incremento de coste que provocan los cost drivers es menor para los proyectos Web. Se contrastaron estas hipótesis analizando un conjunto de proyectos reales. Los resultados sugieren que ambas hipótesis se cumplen. Por lo tanto, la principal contribución a la literatura de esta investigación es que el desarrollo de modelos específicos para los proyectos Web está justificado.
Palabras clave: proyectos software; Web; costes; estimación.

1. Introduction The research question we try to address in the present paper is whether Web projects behave in a different way than non-Web software projects with respect to certain issues that have an impact on effort costs. Effort costs are the costs for paying software engineers, and they are harder to assess in the earlier stages of a project [1]. An interesting avenue of research is devoted to the development of models for the estimation of

effort costs. Some models use expert judgments. Some others rely on the use of heuristic procedures based on statistical/machine learning methods. Lastly, others rest on theoretical assumptions, for example those that stem from econometric theory. Despite these different approaches, most of the models do not take into consideration the specific features of the different types of projects. In this regard, the development of a Web application has specific features caused by both technical and social/cultural

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (192), pp. 266-275. August, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n192.43366



reasons. So, several specific measures have been proposed by different authors to design accurate effort estimations, specifically in Web projects. These authors propose and compare estimation techniques for Web application development using industrial and student-based datasets [2-11]. All these methods are based on the assumption that the effort costs of Web development projects follow patterns that are certainly different from those of traditional software developments [12] and, hence, require differently tailored measures for accurate effort estimation [6,13-16]. We try to shed some light on whether such an assumption is true. This question is important because, if the behavior of Web projects is not significantly different from that of non-Web projects, then the development of Web-specific methods is inadvisable, because these models are developed using only a subset of the available data. This will lead to less accurate models.

The accuracy of cost estimations when managing software projects is extremely important because good effort estimates lead to projects finished on time and within budget [17]. An estimate of the costs is desired even during the earlier stages of the development [1]. Hu et al. [18] indicate that a 200 to 300 percent cost overrun and a 100 percent schedule slippage would not be unusual in large software systems development projects. Millions of dollars have been wasted in projects that were abandoned because of severe cost overruns and schedule slippages [19]. Furthermore, the study of the specific case of Web projects is important because, during the last several years, Web developments have represented an increasing share of software projects. This has been caused by the diffusion of the Internet, which has generated a market and therefore a demand for Web applications.

We undertook a comparative analysis on a massive database made up of both Web and non-Web projects. Specifically, we assessed the effect of the Web nature of a project on a) economies/diseconomies of scale, and b) the effect on effort costs of multipliers usually considered in cost models, such as those capturing the product, hardware, personnel and project attributes. We conducted a series of regression analyses by means of which we tried to assess whether the coefficients defining the impact of such factors on costs are significantly different for Web and non-Web projects.

Our results indicate that a) the diseconomies of scale are significantly higher for the case of Web developments, and b) the specific features of Web developments have an influence on the aggregate effect of the cost drivers, which causes the cost of projects to be significantly lower. Thus, the main contribution of the present research to the literature is that we provide evidence that the development of specific models for the case of Web projects is clearly justified. This opens further avenues of research, such as the study of the effect of the Web nature of projects on each one of the cost drivers defining the features of a software project.

The remainder of this paper is structured as follows: Section 2 discusses why Web projects are different; Section 3 analyzes the effort implications of the specific features of Web projects; Section 4 is devoted to the design of the analysis; and Section 5 outlines the main results. Finally, a summary, conclusions and future avenues of research are detailed in Section 6.
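As a minimal sketch of the comparison just described (synthetic data and made-up coefficients, not the paper's dataset or its exact specification): a log-log OLS with a Web dummy and a size x Web interaction term, where a significant interaction coefficient would indicate different (dis)economies of scale for Web projects.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
size = rng.lognormal(4.0, 1.0, n)          # project size (e.g., function points)
web = rng.integers(0, 2, n)                # 1 = Web project, 0 = non-Web
# Simulated effort: a slightly larger scale exponent for Web projects.
effort = size ** (1.05 + 0.10 * web) * np.exp(rng.normal(0.0, 0.3, n))

# Regressors: log(size), Web dummy, and their interaction.
X = sm.add_constant(np.column_stack([np.log(size), web, np.log(size) * web]))
print(sm.OLS(np.log(effort), X).fit().summary())
```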

2. Web development is different

Practical experience in Web development shows that there are significant differences between traditional software applications and Web applications [9]. Many factors differentiate a Web development from common software development projects. The most evident are the strong requirements imposed by the technical environment. The use of the HTTP protocol, the complexity of Web user interfaces with regard to the user experience, or the heterogeneous pool of components, frameworks and protocols present in a common Web application [16,20] are just some of the factors that determine the complexity of the development. Furthermore, social and cultural factors must also be taken into account when estimating these kinds of projects: the usual skills of Web developers, the kind of projects and their features, and many others that can also influence the final cost of the project.

Regarding project management, Web developments are characterized by having a very fluid scope [21,22]. The development teams are small (from three to seven developers) [13,20,22,23] and they usually work intensively [24] and under high pressure [14]. Their members are mostly inexperienced, novice young programmers [16,20,23]. Another problematic aspect is related to the software requirements: in Web projects they are very volatile [14,23], and the development of Web applications is characterized by the fact that its requirements generally cannot be estimated beforehand, which means that project size and cost cannot be anticipated either.

In relation to the development processes, these are usually ad hoc or heuristic, although some organizations are starting to look into the use of agile methods [25] like Scrum or Extreme Programming (XP). This agile approach suits the unstable requirements environment that was previously pointed out [26], given that agile methods are based on the assumption that software processes should be based on adaptation to changes in the requirements, as these changes will happen from time to time [27].

From a more technical point of view, the diversity of technologies in these developments is high compared with the rest of software projects. They are usually highly component oriented (mashups, frameworks, application servers, etc.), and can be created using diverse technologies such as several varieties of Java (Java, servlets, Enterprise Java Beans, applets, and Java Server Pages), HTML, JavaScript, XML, XSL, etc. [16,20]. These two aspects can have positive and negative influences on the cost of the project. Integrating third-party tested components and elements can save a lot of resources, since we can expect them to work adequately, waiving the cost not only of building, but also of testing, these parts of the product. However, the integration can be complicated, and finding developers with the required skills is usually difficult.

3. Different features of the techniques proposed for cost and effort estimation

The features previously presented differentiate Web developments from the rest of software projects. But how


do they determine the cost of the project? We will now analyze what can be expected if any of the popular software cost estimation strategies were applied. The different techniques proposed for cost and effort estimation over the last 30 years fall into three general categories [16]: (i) expert judgment, (ii) machine learning and (iii) algorithmic theory-based models (AM). The latter is, to date, the most popular in the literature; these models attempt to represent the relationship between effort and one or more project characteristics. The main "cost driver" used in such a model is usually taken to be some notion of software size (e.g. the number of lines of source code, number of pages, number of links, functional points) [28]. Algorithmic models need calibration or adjustment to local circumstances. The most popular representative of the AM category is the Constructive Cost Model (COCOMO), developed by Boehm et al. [29]. COCOMO has three increasingly detailed and accurate forms. The basic form estimates effort using the following exponential function:

Y = a · KLOC^q    (1)

where Y is the effort in person-months, KLOC is the software size measured in terms of thousands of lines of code, and a and q are constants determined by the environment and the complexity of the application to be developed. The intermediate and detailed versions of COCOMO incorporate an adjustment multiplicative factor that depends on a subjective assessment of product, hardware, personnel and project attributes, which are understood as cost drivers.

It is interesting to highlight that COCOMO establishes an exponential relationship between software size and effort. Such an exponential relationship allows for the modeling of economies and diseconomies of scale. However, in COCOMO q is always bigger than 1, so it always hypothesizes diseconomies of scale in the software development process. The exponential approach has later been used in other models such as COCOMO II [30] (and its multiple adaptations, like the proposal of Patil et al. [31]) and the Constructive Systems Engineering Cost Model (COSYSMO) [32]. The main difference between COCOMO, COCOMO II and COSYSMO is the way they calculate the size of the project and the factors that must be considered to calibrate the adjustment. In this research we will consider COSYSMO to explain the behavior that Web development projects would exhibit when an algorithmic model is applied to estimate their costs. This is because in COSYSMO the assessment of the cost drivers resembles the current circumstances of software development more accurately. COSYSMO uses the following exponential function:

PM_NS = A · (Size)^E · ∏_j EM_j    (2)

where PM_NS is the effort in person-months, A is a calibration constant derived from historical project data, E represents the diseconomies of scale, and EM_j is the effort multiplier for each cost driver (their geometric product results in an overall effort adjustment factor applied to the nominal effort).
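To make the structure of these models concrete, the following minimal Python sketch implements equations (1) and (2). The organic-mode defaults a = 2.4 and q = 1.05 are Boehm's published constants for basic COCOMO; the multiplier list passed to the COSYSMO function is a placeholder, since both constants and multipliers must be calibrated to the local environment.

```python
import math

def cocomo_basic_effort(kloc: float, a: float = 2.4, q: float = 1.05) -> float:
    """Equation (1): effort in person-months from size in KLOC.

    Defaults are Boehm's 'organic mode' constants; in practice a and q
    are recalibrated to the development environment.
    """
    return a * kloc ** q

def cosysmo_effort(size: float, A: float, E: float, multipliers) -> float:
    """Equation (2): nominal effort A * size^E scaled by the geometric
    product of the effort multipliers (one per cost driver)."""
    return A * size ** E * math.prod(multipliers)

# A 50 KLOC organic-mode project under basic COCOMO:
print(round(cocomo_basic_effort(50), 1))  # ~145.9 person-months
```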

Table 1. Multipliers in COSYSMO models.

1. Requirements Understanding: The level of understanding of the system requirements by all stakeholders (a stakeholder in COSYSMO is any primary or secondary actor that could have any interest or collaboration in the development of the project), including the systems, software, hardware, customers, team members, users, etc.
2. Architecture Complexity: The relative difficulty of determining and managing the system architecture in terms of platforms, standards, components (COTS/GOTS/NDI/new), connectors (protocols), and constraints. This includes systems analysis, tradeoff analysis, modeling, simulation, case studies, etc.
3. Level of Service Requirements: Rates the difficulty and criticality of satisfying the ensemble of level-of-service requirements, such as security, safety, response time, interoperability, maintainability, Key Performance Parameters (KPPs), the "ilities" (which may include reliability, usability, performance, affordability, maintainability, and so forth; the "ilities" are imperatives of the external world as expressed at the boundaries with the internal world of the system [35]), etc.
4. Migration Complexity: The complexity of migrating the system from previous system components, databases, workflows, etc., due to new technology introductions, planned upgrades, increased performance, business process reengineering, etc.
5. Technology Maturity / Technology Risks: The relative readiness of the key technologies for operational use.
6. Documentation to Match Lifecycle Needs: The formality and detail of documentation required to be formally delivered based on the life cycle needs of the system.
7. # and Diversity of Installations/Platforms: The number of different platforms that the system will be hosted and installed on.
8. # of Recursive Levels in the Design: The number of applicable levels of the Work Breakdown Structure.
9. Stakeholder Team Cohesion: Leadership, frequency of meetings, shared vision, approval cycles, group dynamics (self-directed teams, project engineers/managers), IPT framework, and effective team dynamics.
10. Personnel Capability: Systems engineers' ability to perform their duties and the quality of the human capital.
11. Personnel Experience/Continuity: The applicability and consistency of the staff over the life of the project with respect to the customer, user, technology, domain, etc.
12. Process Maturity: Maturity per EIA/IS 731, SE CMM or CMMI.
13. Multisite Coordination: Location of stakeholders, team members and resources (travel).
14. Tool Support: Use of tools in the systems engineering environment.

Source: Adapted from Valerdi [36,37].


Table 2. Rating Scales for COSYSMO Effort Multipliers.

| Driver | Very Low | Low | Nominal | High | Very High | Extra High | EMR |
|---|---|---|---|---|---|---|---|
| Requirements Understanding | 1.87 | 1.37 | 1.00 | 0.77 | 0.60 | -- | 3.12 |
| Architecture Understanding | 1.64 | 1.28 | 1.00 | 0.81 | 0.65 | -- | 2.52 |
| Level of Service Requirements | 0.62 | 0.79 | 1.00 | 1.36 | 1.85 | -- | 2.98 |
| Migration Complexity | -- | -- | 1.00 | 1.25 | 1.55 | 1.93 | 1.93 |
| Technology Risk | 0.67 | 0.82 | 1.00 | 1.32 | 1.75 | -- | 2.61 |
| Documentation | 0.78 | 0.88 | 1.00 | 1.13 | 1.28 | -- | 1.64 |
| # and Diversity of Installations/Platforms | -- | -- | 1.00 | 1.23 | 1.52 | 1.87 | 1.87 |
| # of Recursive Levels in the Design | 0.76 | 0.87 | 1.00 | 1.21 | 1.47 | -- | 1.93 |
| Stakeholder Team Cohesion | 1.50 | 1.22 | 1.00 | 0.81 | 0.65 | -- | 2.31 |
| Personnel/Team Capability | 1.50 | 1.22 | 1.00 | 0.81 | 0.65 | -- | 2.31 |
| Personnel Experience/Continuity | 1.48 | 1.22 | 1.00 | 0.82 | 0.67 | -- | 2.21 |
| Process Capability | 1.47 | 1.21 | 1.00 | 0.88 | 0.77 | 0.68 | 2.16 |
| Multisite Coordination | 1.39 | 1.18 | 1.00 | 0.90 | 0.80 | 0.72 | 1.93 |
| Tool Support | 1.39 | 1.18 | 1.00 | 0.85 | 0.72 | -- | 1.93 |

Source: Adapted from Boehm et al. [29].
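As a worked illustration of how Table 2 is used, the sketch below computes the overall effort adjustment factor for a hypothetical Web project: each driver's subjective rating selects one multiplier from its row, and the selected values are multiplied together as in equation (2). Only three of the fourteen drivers are shown, and the ratings are invented for illustration.

```python
from math import prod

# Three rows of Table 2 (driver -> rating -> effort multiplier).
RATING_SCALES = {
    "requirements_understanding": {"VL": 1.87, "L": 1.37, "N": 1.00, "H": 0.77, "VH": 0.60},
    "technology_risk":            {"VL": 0.67, "L": 0.82, "N": 1.00, "H": 1.32, "VH": 1.75},
    "tool_support":               {"VL": 1.39, "L": 1.18, "N": 1.00, "H": 0.85, "VH": 0.72},
}

# Hypothetical Web project: poorly understood requirements and risky
# technology, but strong tool support (frameworks, application servers).
ratings = {
    "requirements_understanding": "L",
    "technology_risk": "H",
    "tool_support": "H",
}

adjustment = prod(RATING_SCALES[driver][rating] for driver, rating in ratings.items())
print(f"Overall effort adjustment factor: {adjustment:.2f}")  # 1.37 * 1.32 * 0.85 = 1.54
```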

The parameter E determines the behavior of the cost in relation to the size of the project. Typical determining factors are the learning effect (the bigger the project is, the longer the time that developers have to adapt to the technology, environment, etc.) and coordination costs. Regarding this, we could expect an important weight of the learning effect: working with novice developers and immature, heterogeneous technologies leads us to expect an important period of developer adaptation to the project, during which productivity would not be as high. This adaptation period does not depend on the functional size of the project, so its pernicious influence is smaller the bigger the project is, and this would be reflected as an economy-of-scale effect in the estimation. Furthermore, the fact that Web developments are usually driven by ad hoc and agile methods with small teams will probably have the opposite effect. Even their promoters state that agile methods perfectly suit small projects with small teams [33]. Part of their agility is based on the simplification of coordination tasks, something possible with small teams but risky when teams are bigger. For instance, Scrum suggests groups no bigger than nine people [34], while XP suggests that the smaller the team, the better [27]. Consequently, we could expect increasing coordination problems as the project size grows. This may cause diseconomies of scale to be more relevant for the case of Web projects.

In relation to the multipliers, Table 1 summarizes the factors considered in COSYSMO, and Table 2 shows the rating scales to be applied once the subjective assessment of the factors has been made. Some of these factors have a direct relationship with the effort required, while for others the relationship is inverse. The bigger the requirements and architecture understanding, the stakeholder and team cohesion, the team's capability and experience, the process capability, the multisite coordination or the tool support are, the smaller the cost of the project will be. However, the level of service requirements, migration complexity, technology diversity and risks, documentation and recursive levels in the design exert a direct influence on the cost of the project.

We will now consider how the facts previously discussed regarding Web projects determine these factors. At first sight, these features will probably lead to a higher cost of Web developments. The volatility of requirements involves a low level of requirements understanding (factor 1), and will probably also affect the architecture understanding (factor 2). The high technological complexity and heterogeneity of Web projects is directly represented in the # and diversity of installations/platforms factor (7), and increases the migration complexity factor (4) and the technology risk/maturity factor (5). The latter is also determined by the experience of the developers: novice developers will not be able to anticipate technological problems (such as integration issues) derived from the combination of different tools and standards. This fact also has an influence on the personnel capability and experience factors (10 and 11).

Furthermore, some features of Web development, such as the more frequent use of agile methods or the intensive component-oriented development, may mitigate the impact of some of the factors on the total cost. For instance, the pernicious effect of a low understanding of the requirements is not a problem in Scrum or XP, since both are designed for this kind of scenario. The way these methods organize the project encourages straight communication between developers and stakeholders (factor 9), as well as their physical organization (factor 13). For example, XP and Scrum state that customers and developers should work together and meet continuously throughout the project. Another relevant consequence of the use of agile methods is the simplicity of the deliverables, which reduces the documentation to the minimum (factor 6). Finally, the generalized use of frameworks and third-party components, and especially the deployment over application servers that provide an important part of the logic already implemented, supposes a high tool support (factor 14) that should reduce the cost of the project.

In summary, the specific features of Web developments have both positive and negative effects on the cost estimation. However, given the growing adoption of agile methods in the industry [38], and the mitigating effect they have on the penalizing factors, we think that there are reasons to suppose that the cost-reducing features of Web projects outweigh the cost-increasing ones.

4. Hypotheses

According to what was discussed in Sections 2 and 3, we can formulate two different hypotheses. On the one hand, the literature points out that Web projects are mainly undertaken by small teams that do not require advanced coordination mechanisms, which makes them vulnerable to any increment in the size of the project. This leads us to formulate the following hypothesis:


H1: The diseconomies of scale are significantly higher for the case of Web developments.

Moreover, even though some of the specific features of Web development would involve a negative effect on the cost, other features, such as the adoption of agile development methods in these projects, can have a mitigating effect on the former. Thus, a second hypothesis can be formulated:

H2: The specific features of Web developments have an influence on the aggregate effect of the cost drivers that causes the cost of projects to be significantly lower.

5. Methodology

In this section we explain the empirical model we used to test our hypotheses and we provide a description of the database we used.

5.1. The model

As we have previously noted, the COSYSMO model proposes an exponential equation for the computation of the effort. This equation can be summarized as:

Y = a · S^b · m(X)    (3)

where Y is the effort, S is the size of the project, b represents the diseconomies of scale and m(X) is the aggregate effect of the multiplier factors which act as cost drivers. In order to test H1 and H2, which state that for the case of Web applications both the scale effects and the aggregate effect of the multipliers are significantly different, we formulate a modified version of (3) which takes the following form:

Y = a · S^(b + c·web) · m(X) · (1 + d·web)    (4)

where web is a dummy that equals 1 for Web projects and 0 otherwise. If c and d are significantly different from zero and, respectively, positive and negative, then H1 and H2 hold. If these parameters are not significantly different from zero, then the Web nature of a project should have no influence on the cost estimation process. Equation (4) can be transformed into a linear equation by considering it in logarithmic form:

Ln(Y) = Ln(a) + (b + c·web)·Ln(S) + Ln[m(X)] + Ln(1 + d·web)    (5)

Rearranging the terms in (5) we obtain:

Ln(Y) = Ln(a) + Ln[m(X)] + b·Ln(S) + c·web·Ln(S) + Ln(1 + d·web)    (6)

If d = 0, then Ln(1 + d·web) = 0. Otherwise, the value of Ln(1 + d·web) depends on web, that is, on whether or not the considered project is a Web application. In order to reach an expression which can be estimated through linear regression, we replace Ln(1 + d·web) by K·web, where K is a parameter to be estimated. If K is significantly different from zero then d is also significantly different from zero, so K can be used to test H2.

As is detailed below, the database used for empirical testing does not have information on the specific assessment of the individual cost drivers for each project. So, we consider the effect of such cost drivers in a fixed form; in other words, in our model they form a constant. Then, we can add it to the prior constant in the model and define a new intercept term, Intercept = Ln(a) + Ln[m(X)]. So, the final equation to be estimated is:

Ln(Y) = Intercept + b·Ln(S) + c·web·Ln(S) + K·web    (7)

In equation (7) the parameters to be estimated are Intercept, b, c and K. As indicated above, c and K are the parameters of interest for the H1 and H2 tests, respectively.
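Equation (7) is an ordinary linear regression, so it can be estimated with any OLS routine. The following sketch shows one way to do it with statsmodels, assuming a DataFrame holding the Table 3 variables (EFFORT, Web, ADJFP); the column and function names are ours, as the paper does not publish estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_equation_7(df: pd.DataFrame):
    """Estimate Ln(Y) = Intercept + b*Ln(S) + c*web*Ln(S) + K*web by OLS."""
    y = np.log(df["EFFORT"])                     # Ln of effort in hours
    ln_size = np.log(df["ADJFP"])                # Ln of size in IFPUG function points
    X = pd.DataFrame({
        "ln_size": ln_size,                      # b: (dis)economies of scale
        "web_x_ln_size": df["Web"] * ln_size,    # c: differential scale effect for Web
        "web": df["Web"],                        # K: stands in for Ln(1 + d*web)
    })
    X = sm.add_constant(X)                       # Intercept = Ln(a) + Ln[m(X)]
    return sm.OLS(y, X).fit()

# H1 holds if the 'web_x_ln_size' coefficient is significantly positive;
# H2 holds if the 'web' coefficient is significantly negative.
```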


5.2. The database

For the test of our model we used the ISBSG (International Software Benchmarking Standards Group) Development & Enhancement Repository, dated June 2009. This is a huge database that comprises 5052 projects from twenty-four countries. The ISBSG Repository is a consolidated data set in the field of software metrics (further details can be found at http://www.isbsg.org/). It has been used in many papers dealing with the issue of effort estimation [39–42]. It has also been used for the study of other topics related to software metrics, such as software defect prediction [43], the determination of the optimum development team size [44] and the determination of the priorities of the Capability Maturity Model Integration (CMMI) process areas [45].

It is noticeable that the ISBSG Repository does not contain information on the values of the factors used as cost drivers in COCOMO, COSYSMO and others. As we indicated above, in our equation such factors are aggregated and included in the intercept term. So, in our study we are limited to assessing the impact of the Web nature of a project on the global effect of all the multipliers. However, there are two reasons that prevented us from considering databases containing the assessment of cost drivers for each of the individual projects: a) such databases do not contain information on whether the software projects are Web applications or not; in most cases they are old databases containing projects which were developed in years prior to the popularization of the Internet; and b) the size of most databases is too small and does not allow an accurate estimation of the parameters of individual cost drivers. It is interesting to remember that COSYSMO uses 14 cost drivers, and if we want to capture the differential effect of the Web nature of the project we need 14 additional coefficients. Modeling the intercept term and the diseconomies of scale requires another three parameters. So, 31 variables are needed for the development of a detailed model. This, under the usual standards, requires a sample size of 300 or more for an accurate estimation of the parameters. Some databases used for the development of effort estimation models can be found in the PROMISE Software Engineering Repository, which is publicly available at [46]; examples of databases in this repository which have one or more of the mentioned features that make them unsuitable for our study are COCOMO 81, COCOMO NASA, COCOMO NASA 2 and the Desharnais software cost estimation dataset.

An additional advantage of the ISBSG Repository is that it contains information on the quality of the data gathered on each project. A rating code from A to D is assigned to each project depending on the quality of the data; the submission is fundamentally sound only for the projects that are A- or B-rated. So, in order to ensure high reliability in our results, we discarded projects with a C or D rating code. This procedure is in accordance with prior studies that used the ISBSG Repository (see, e.g., Pendharkar et al. [47], among others).

Another issue which may bias the research results is the measurement of software size. The ISBSG Repository provides information on the functional sizing of each project, as other measures, such as the lines of code, can be influenced by the language used and the programmers' characteristics. However, there are different approaches for the measurement of the functional size: that proposed by the International Function Point Users Group (IFPUG), the functional sizing of the Common Software Measurement International Consortium (COSMIC), that of the Netherlands Software Metrics Association (NESMA) and the MK II method by the UK Software Metrics Association, among others. Many prior papers on effort estimation pool all the projects in the same database no matter which method was used for size measurement. This is a source of bias, since different methods generate non-equivalent estimations, even when restricting the study to those covered in ISO/IEC 14143-1:2007 (IFPUG, COSMIC, NESMA and MKII). In fact, particular attention over the past several years has been devoted to finding a mathematical function for converting IFPUG functional size units to the newer COSMIC ones [48]. So, we decided to restrict the study to those projects for which IFPUG functional sizing (the most popular one) was available.

Furthermore, the temporal scope considered for our study was the period 1999-2009. The reason for this selection is to make the comparison between Web and non-Web projects on a homogeneous basis, as Web technologies are of recent development. Web developments previous to 1999 were mainly implemented using now deprecated technologies like CGI or the older versions of PHP, and the roughness of these technologies strongly determined the development projects. In the same way, considering non-Web projects from years prior to the popularization of the Web could cause significant biases in our research. After this last filter, the final sample consists of 588 projects.

With regard to the rest of the variables (apart from software sizing) used to test our model, first we defined which projects are considered as Web developments. We only considered as Web projects those that were developed in a language using a virtual machine (Java, Java/similar, J2EE, C#, .Net). We discarded other kinds of projects, such as web page design or similar, because they cannot be considered as Web application developments and, therefore, they are beyond the scope of this research. As indicated in the prior subsection, the Web nature of the projects was represented through a dummy variable that equals 1 for Web applications and 0 otherwise. With respect to the effort, it was measured through the Summary Work Effort field of the ISBSG Repository. This variable provides the total effort in hours recorded against the project. Table 3 provides a summary of the variables used in our model and their codification hereafter.

Table 3. Variables used in the model.

| Variable | Definition |
|---|---|
| EFFORT | Number of hours needed for the completion of the project. |
| Web | Web nature of a project: 1 for Web applications and 0 otherwise. |
| ADJFP | Size of the project, measured in IFPUG functional points. |

Source: Elaborated by the authors.

Table 4. Sample breakdown by year and type of project.

| | New developments | | | Enhancements | | | Total | | |
|---|---|---|---|---|---|---|---|---|---|
| Year | Web=0 | Web=1 | Total | Web=0 | Web=1 | Total | Web=0 | Web=1 | Total |
| 1999 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| 2000 | 2 | 2 | 4 | 3 | 4 | 7 | 5 | 6 | 11 |
| 2001 | 2 | 7 | 9 | 3 | 0 | 3 | 5 | 7 | 12 |
| 2002 | 1 | 12 | 13 | 6 | 9 | 15 | 7 | 21 | 28 |
| 2003 | 4 | 2 | 6 | 3 | 5 | 8 | 7 | 7 | 14 |
| 2004 | 6 | 38 | 44 | 17 | 71 | 88 | 23 | 109 | 132 |
| 2005 | 9 | 11 | 20 | 3 | 9 | 12 | 12 | 20 | 32 |
| 2006 | 3 | 99 | 102 | 2 | 236 | 238 | 5 | 335 | 340 |
| 2007 | 3 | 3 | 6 | 5 | 3 | 8 | 8 | 6 | 14 |
| 2008 | 2 | 0 | 2 | 1 | 0 | 1 | 3 | 0 | 3 |
| 2009 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 |
| TOTAL | 33 | 174 | 207 | 44 | 337 | 381 | 77 | 511 | 588 |
| Mean | 2004.21 | 2004.94 | 2004.82 | 2003.89 | 2005.34 | 2005.17 | 2004.03 | 2005.20 | 2005.04 |
| Std. Dev. | 2.247 | 1.570 | 1.710 | 2.148 | 1.197 | 1.415 | 2.182 | 1.348 | 1.533 |
| t-stat | -1.773 | | | -4.395 | | | -4.597 | | |
| p-value | 0.084 | | | 0.000 | | | 0.000 | | |

Source: Elaborated by the authors.
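The year comparison reported at the bottom of Table 4 is a two-sample t-test on the year of completion. A sketch of the computation follows, using SciPy and assumed variable names (the paper does not state whether equal variances were assumed):

```python
from scipy import stats

def year_difference_test(years_nonweb, years_web):
    """t-test of the difference in mean completion year, Web vs. non-Web."""
    t_stat, p_value = stats.ttest_ind(years_nonweb, years_web, equal_var=False)
    return t_stat, p_value

# For the enhancement subsample, Table 4 reports t = -4.395 and p = 0.000:
# non-Web enhancement projects are significantly older than Web ones.
```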



6. Results

Now that the database and the variables have been defined, we must indicate that we carried out three separate analyses: one for the total sample, another considering only enhancement projects, and another in which only projects consisting of the development of new software were included. The reason for this is that enhancement projects can involve different circumstances (for example, lack of documentation, refactoring requirements and legacy-system compatibility restrictions) that could strongly determine the effort required by the project beyond its Web or non-Web condition.

Table 4 shows the sample breakdown by year and type of project. We also included the results of a t-test of the difference of means, for Web and non-Web projects, for the year of the completion of the project. It is noticeable that the majority of the projects are Web developments, both in the new developments and in the enhancement subsamples. Most of the projects are from the 2004-2006 period. The results of the t-tests evidence that, for the case of new developments, Web projects are not significantly older than non-Web ones. For enhancements and for the total sample, non-Web projects are significantly older. This can be a source of bias and poses a limitation on the research results. However, the majority of prior research papers have not paid attention to this issue. Table 5 contains the main descriptive statistics for the effort and for the software size indicators. We emphasize that new development projects are bigger than enhancements.

In this section we show the results of the proposed model. Table 6 provides the parameter estimation for all the projects, while Tables 7 and 8 show the results for enhancements and new developments, respectively. In each of the tables we provide the parameter estimates of the equation, the adjusted R2, the results of the F-test and the Cook-Weisberg / Breusch-Pagan test for heteroskedasticity, and the maximum of the condition indices (Max. CI), which we used to test the multicollinearity of the models.

Table 5. Effort and project size: descriptive statistics.

Panel A: New developments
| | EFFORT | ADJFP |
|---|---|---|
| Mean | 3710.647 | 344.5314 |
| Standard deviation | 6478.468 | 357.564 |
| 25% percentile | 965 | 146 |
| Median | 2321 | 250 |
| 75% percentile | 4067 | 372 |
| Minimum | 102 | 21 |
| Maximum | 73501 | 2550 |

Panel B: Enhancements
| | EFFORT | ADJFP |
|---|---|---|
| Mean | 1879.911 | 158.5564 |
| Standard deviation | 4267.063 | 239.8804 |
| 25% percentile | 200 | 43 |
| Median | 573 | 91 |
| 75% percentile | 1678 | 173 |
| Minimum | 8 | 4 |
| Maximum | 57811 | 2087 |

Panel C: Total
| | EFFORT | ADJFP |
|---|---|---|
| Mean | 2524.405 | 222.0272 |
| Standard deviation | 5223.204 | 300.0362 |
| 25% percentile | 333 | 63.5 |
| Median | 950 | 138.5 |
| 75% percentile | 2804 | 256.5 |
| Minimum | 8 | 4 |
| Maximum | 73501 | 2550 |

Source: Elaborated by the authors.

Table 6. Total: equation estimates.

| | Coefficient | Std. err. | t-statistic | p-value |
|---|---|---|---|---|
| Intercept | 6.807844 | 0.032979 | 206.43 | 0.000 |
| b | 5.03e-06 | 7.81e-07 | 6.44 | 0.000 |
| c | 6.14e-06 | 8.43e-07 | 7.28 | 0.000 |
| K | -0.0000411 | 4.74e-06 | -8.67 | 0.000 |

Other statistics: F test 530.12 (p=0.000); Adj. R2 0.731; Cook-Weisberg / Breusch-Pagan 0.140 (p=0.705); Max. CI 9.873.
Source: Elaborated by the authors.

Table 7. Enhancements: equation estimates.

| | Coefficient | Std. err. | t-statistic | p-value |
|---|---|---|---|---|
| Intercept | 6.78507 | 0.0447067 | 151.77 | 0.000 |
| b | 3.86e-06 | 1.07e-06 | 3.62 | 0.000 |
| c | 7.24e-06 | 1.14e-06 | 6.33 | 0.000 |
| K | -0.000048 | 6.16e-06 | -7.79 | 0.000 |

Other statistics: F test 334.87 (p=0.000); Adj. R2 0.725; Cook-Weisberg / Breusch-Pagan 0.24 (p=0.627); Max. CI 9.631.
Source: Elaborated by the authors.

Table 8. New developments: equation estimates.

| | Coefficient | Std. err. | t-statistic | p-value |
|---|---|---|---|---|
| Intercept | 6.902051 | 0.0747149 | 92.38 | 0.000 |
| b | 7.19e-06 | 1.18e-06 | 6.10 | 0.000 |
| c | 2.98e-06 | 1.44e-06 | 2.07 | 0.040 |
| K | -0.0000215 | 8.42e-06 | -2.55 | 0.011 |

Other statistics: F test 76.08 (p=0.000); Adj. R2 0.5223; Cook-Weisberg / Breusch-Pagan 0.20 (p=0.652); Max. CI 12.099.
Source: Elaborated by the authors.

7. Discussion

It is noticeable that none of the models shows a significant amount of heteroskedasticity or multicollinearity. According to the Cook-Weisberg / Breusch-Pagan test, heteroskedasticity is rejected in all cases. Condition indices are always below 15 which, according to Kutner et al. [49], is a sensible threshold for multicollinearity.

With regard to the parameter estimation, it is evidenced that, consistently across all samples, c and K are, respectively, significantly positive and negative. This suggests that both H1 and H2 hold. That is, for the case of Web projects the diseconomies of scale are higher than for non-Web ones. Also, the specific features of Web developments have an influence on the aggregate effect of the cost drivers, which makes the cost of a Web project significantly lower than that of a non-Web project with the same features (cost drivers). We can therefore assert that the development of effort prediction models specifically designed for Web applications is clearly justified.

Another important issue that stems from our results is that the features of a project have an influence on its cost patterns, both on the intensity of the diseconomies of scale and on the influence of each one of the cost drivers on the total cost of the project. Our results suggest that cost estimation in software engineering requires that projects with similar characteristics be grouped together. The present research shows evidence that diseconomies of scale are greater in Web projects; nevertheless, this result is unobservable when the costs of all projects are considered together (without splitting), and therefore the costs of Web development projects may be underestimated. This could be a reason for the high cost overruns that Hu et al. [18] report.

We should note that the limitations of the ISBSG Repository are present in this paper. These limitations are common to other papers, as Fernández-Diego and González-Ladrón-de-Guevara [50] have shown: (i) in ISBSG the best projects have been selected, so the dataset is likely subject to biases; (ii) the ISBSG data is collected from various worldwide organizations with dissimilar backgrounds, business cultures, levels of personnel experience, and development maturity; and (iii) a multi-company dataset such as ISBSG also suffers from the presence of more outliers in comparison to a single-company dataset.
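The diagnostics reported alongside Tables 6-8 can be reproduced with standard tools. The sketch below computes a Breusch-Pagan test and the condition indices on a fitted result such as the one returned by fit_equation_7() above; this is our reconstruction of the checks described in the text, not the authors' code.

```python
import numpy as np
from statsmodels.stats.diagnostic import het_breuschpagan

def regression_diagnostics(results):
    """Heteroskedasticity (Breusch-Pagan) and multicollinearity (condition indices)."""
    # Breusch-Pagan: H0 = homoskedastic residuals. Large p-values, as in
    # Tables 6-8, mean heteroskedasticity is not detected.
    _, bp_pvalue, _, _ = het_breuschpagan(results.resid, results.model.exog)

    # Condition indices: scale each regressor column to unit length, then take
    # sqrt(largest eigenvalue / each eigenvalue) of X'X. A maximum below 15 is
    # the threshold used in the paper.
    X = results.model.exog
    X = X / np.sqrt((X ** 2).sum(axis=0))
    eigenvalues = np.linalg.eigvalsh(X.T @ X)
    max_ci = float(np.sqrt(eigenvalues.max() / eigenvalues.min()))
    return bp_pvalue, max_ci
```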

8. Conclusions

In the present research we tested the hypothesis of whether the cost behavior of Web developments is significantly different. We focused on two aspects of effort cost behavior: the diseconomies of scale and the influence on the cost of a series of cost drivers that describe the main features of a software project. With regard to the diseconomies of scale, we hypothesized that they are higher for Web projects due to the relevance of the coordination issue for this kind of project. Regarding the influence of the cost drivers, we hypothesized that, on an aggregate basis, the Web nature of a project mitigates the cost-increasing effects of the cost drivers, mainly due to the more frequent use of agile development methods.

We tested our hypotheses on a huge database containing data from more than 5000 projects. This allowed us to apply a series of filters in order to achieve a high degree of homogeneity in the final database. The factors we controlled for were the quality of the data, the year of the completion of the project and the approach used for measuring the software size. These two last factors have seldom been taken into account in prior research efforts. Our results indicate that our hypotheses hold: for Web projects, diseconomies of scale are significantly higher and the cost-increasing effects of the multipliers which act as cost drivers are significantly lower. Thus, the main contribution of the present research to the literature is that we provide evidence suggesting that the development of effort prediction models specific to Web applications is justified.

Our findings also raise some questions that may constitute further avenues of research. First, due to data limitations, we have studied the effect of the Web nature of projects on the aggregate behavior of the cost drivers. Through a detailed survey of one or more organizations, detailed information on COSYSMO parameters could be gathered for a number of projects significant enough to conduct empirical research. This would allow us to test the influence of the Web nature of software projects on each one of the cost drivers. Second, the influence on the effort costs of other project features, like the type of organization (banking and financial organizations, manufacturing companies, etc.) or the function for which the software product is developed (e.g. accounting, stock control, management information systems, etc.), could also deserve detailed study.

Acknowledgments

This work has been funded by the European Union, through the European Regional Development Funds (ERDF); and the Principality of Asturias, through its Science, Technology and Innovation Plan (grant GRUPIN14-100).

References

[1] Shepperd, M., Cost prediction and software project management. In: Ruhe, G. and Wohlin, C., eds. Software Project Management in a Changing World, Berlin, Heidelberg: Springer, 2014, pp. 51–71. DOI:10.1007/978-3-642-55035-5
[2] Morisio, M., Stamelos, I., Spahos, V., Romano, D. and Torino, P., Measuring functionality and productivity in web-based applications: A case study. Proc. Sixth Int. Software Metrics Symposium, IEEE Computer Society, pp. 111–118, 1999. DOI:10.1109/METRIC.1999.809732
[3] Fewster, R. and Mendes, E., Measurement, prediction and risk analysis for Web applications. Proc. Seventh Int. Software Metrics Symposium, IEEE Computer Society, pp. 338–348, 2001. DOI:10.1109/METRIC.2001.915541
[4] Mendes, E., Counsell, S. and Mosley, N., Measurement and effort prediction for Web applications. In: Murugesan, S. and Deshpande, Y., eds. Web Engineering, vol. 2016, Berlin, Heidelberg: Springer, pp. 295–310, 2001. DOI:10.1007/3-540-45144-7
[5] Mendes, E., Mosley, N. and Counsell, S., Web metrics - estimating design and authoring effort. IEEE Multimedia, 8, pp. 50–57, 2001. DOI:10.1109/93.923953
[6] Mendes, E., Mosley, N. and Counsell, S., The application of case-based reasoning to early Web project cost estimation. Proc. 26th Annual Int. Computer Software and Applications Conference, pp. 393–398, 2002. DOI:10.1109/CMPSAC.2002.1045034
[7] Mendes, E., Watson, I., Triggs, C., Mosley, N., Counsell, S., et al., A comparison of development effort estimation techniques for Web hypermedia applications. Proc. Eighth IEEE Symposium on Software Metrics, IEEE Computer Society, pp. 131–140, 2002. DOI:10.1109/METRIC.2002.1011332
[8] Mendes, E., Mosley, N. and Counsell, S., Do adaptation rules improve web cost estimation? Proc. Fourteenth ACM Conference on Hypertext and Hypermedia (HYPERTEXT '03), New York: ACM Press, p. 173, 2003. DOI:10.1145/900051.900091
[9] Baresi, L., Morasca, S. and Paolini, P., Estimating the design effort of Web applications. Proc. 5th Int. Work. Enterp. Netw. Comput. Healthc. Ind. (IEEE Cat. No.03EX717), IEEE Computer Society, pp. 62–72, 2003. DOI:10.1109/METRIC.2003.1232456
[10] Ruhe, M., Jeffery, R. and Wieczorek, I., Cost estimation for Web applications. Proc. 25th Int. Conf. on Software Engineering, IEEE, pp. 285–294, 2003. DOI:10.1109/ICSE.2003.1201208
[11] Ruhe, M., Jeffery, R. and Wieczorek, I., Cost estimation for Web applications. Proc. 25th Int. Conf. on Software Engineering, IEEE, pp. 285–294, 2003. DOI:10.1109/ICSE.2003.1201208
[12] Srivastava, S.K., Prasad, P. and Varma, S.P., Evolving predictor variable estimation model for Web engineering projects. In: Rajesh, R. and Ranjan, P., eds. Comput. Sci. Eng., New Delhi: Narosa Publishing House, 2015, pp. 68–89.
[13] Reifer, D.J., Web development: Estimating quick-to-market software. IEEE Software, 17, pp. 57–64, 2000. DOI:10.1109/52.895169
[14] Ruhe, M., Jeffery, R. and Wieczorek, I., Using Web objects for estimating software development effort for Web applications. Proc. 5th Int. Work. Enterp. Netw. Comput. Healthc. Ind. (IEEE Cat. No.03EX717), pp. 30–37, 2003. DOI:10.1109/METRIC.2003.1232453
[15] Mendes, E. and Mosley, N., Bayesian network models for Web effort prediction: A comparative study. IEEE Trans. Softw. Eng., 34, pp. 723–737, 2008. DOI:10.1109/TSE.2008.64
[16] Mendes, E., A comparative study of cost estimation models for Web hypermedia applications. Empir. Softw. Eng., 8, pp. 163–196, 2003. DOI:10.1023/A:1023062629183
[17] Lagerström, R., von Würtemberg, L.M., Holm, H. and Luczak, O., Identifying factors affecting software development cost and productivity. Softw. Qual. J., 20, pp. 395–417, 2011. DOI:10.1007/s11219-011-9137-8
[18] Hu, Q., Plant, R.T. and Hertz, D.B., Software cost estimation using economic production models. J. Manag. Inf. Syst., 15, pp. 143–163, 1998.
[19] Sankhwar, S. and Pandey, D., Software project risk analysis and assessment: A survey. Global Journal of Multidisciplinary Studies, 3 (5), pp. 144–160, 2014.
[20] Mendes, E., Mosley, N. and Counsell, S., Early Web size measures and effort prediction for Web costimation. Proc. 5th Int. Work. Enterp. Netw. Comput. Healthc. Ind. (IEEE Cat. No.03EX717), pp. 18–39, 2003. DOI:10.1109/METRIC.2003.1232452
[21] Mendes, E. and Mosley, N., Further investigation into the use of CBR and stepwise regression to predict development effort for Web hypermedia applications. Proc. Int. Symposium on Empirical Software Engineering, pp. 79–90, 2002. DOI:10.1109/ISESE.2002.1166928
[22] Pressman, R.S., What a tangled Web we weave [Web engineering]. IEEE Software, 17 (1), pp. 18–21, 2000. DOI:10.1109/52.819962
[23] Reifer, D., Ten deadly risks in Internet and intranet software development. IEEE Software, 19 (2), pp. 12–14, 2002. DOI:10.1109/52.991324
[24] Ochoa, S.F., Bastarrica, M.C. and Parra, G., Estimating the development effort of Web projects in Chile. Proc. First Latin American Web Congress, pp. 114–122, 2003. DOI:10.1109/LAWEB.2003.1250289
[25] Ambler, S.W., Lessons in agility from Internet-based development. IEEE Software, 19 (2), pp. 66–73, 2002. DOI:10.1109/52.991334
[26] Pardo, C., Hurtado, J.A. and Collazos, C.A., Agile software process improvement with Agile SPI - Process. DYNA, 77 (164), pp. 251–263, 2010.
[27] Beck, K., Extreme programming explained: Embrace change. Addison-Wesley Professional, 2000.
[28] Fairley, R.E., The influence of COCOMO on software engineering education and training. Journal of Systems and Software, 80 (7), pp. 1201–1208, 2007. DOI:10.1016/j.jss.2006.09.044
[29] Boehm, B.W., Software engineering economics. 1st ed., Prentice Hall, 1981.
[30] Boehm, B., Clark, B., Horowitz, E., Westland, C., Madachy, R. and Selby, R., Cost models for future software life cycle processes: COCOMO 2.0. Ann. Softw. Eng., 1, pp. 57–94, 1995. DOI:10.1007/BF02249046
[31] Patil, L.V., Waghmode, R.M., Joshi, S.D. and Khanna, V., Generic model of software cost estimation: A hybrid approach. 2014 IEEE Int. Advance Computing Conference, IEEE, pp. 1379–1384, 2014. DOI:10.1109/IAdCC.2014.6779528
[32] Patil, P.K., A review on calibration factors in empirical software cost estimation (SCE) models. Int. J. Softw. Eng. Res. Pract., 3, pp. 1–7, 2014.
[33] Paulk, M.C., Extreme programming from a CMM perspective. IEEE Software, 18 (6), pp. 19–26, 2001. DOI:10.1109/52.965798
[34] Schwaber, K. and Beedle, M., Agile software development with Scrum. Prentice Hall, 2002.
[35] Rechtin, E., Systems architecting: Creating and building complex systems. Prentice Hall, 1991.
[36] Valerdi, R., The constructive systems engineering cost model (COSYSMO): Quantifying the costs of systems engineering effort in complex systems. VDM Publishing, 2008.
[37] Valerdi, R., Heuristics for systems engineering cost estimation. IEEE Syst. J., 5, pp. 91–98, 2011. DOI:10.1109/JSYST.2010.2065131
[38] Coram, M. and Bohner, S., The impact of agile methods on software project management. Proc. 12th IEEE Int. Conf. and Workshops on the Engineering of Computer-Based Systems, IEEE, pp. 363–370, 2005. DOI:10.1109/ECBS.2005.68
[39] Cuadrado, J., Sicilia, M., Garre, M. and Rodríguez, D., An empirical study of process-related attributes in segmented software cost-estimation relationships. Journal of Systems and Software, 73 (3), pp. 353–361, 2006. DOI:10.1016/j.jss.2005.04.040
[40] Aroba, J., Cuadrado-Gallego, J.J., Sicilia, M.-Á., Ramos, I. and García-Barriocanal, E., Segmented software cost estimation models based on fuzzy clustering. J. Syst. Softw., 81, pp. 1944–1950, 2008. DOI:10.1016/j.jss.2008.01.016
[41] Song, Q. and Shepperd, M., A new imputation method for small software project data sets. J. Syst. Softw., 80, pp. 51–62, 2007. DOI:10.1016/j.jss.2006.05.003
[42] Rodríguez, D., Sicilia, M.A., García, E. and Harrison, R., Empirical findings on team size and productivity in software development. J. Syst. Softw., 85, pp. 562–570, 2012. DOI:10.1016/j.jss.2011.09.009
[43] Bibi, S., Tsoumakas, G., Stamelos, I. and Vlahavas, I., Regression via classification applied on software defect estimation. Expert Syst. Appl., 34, pp. 2091–2101, 2008. DOI:10.1016/j.eswa.2007.02.012
[44] Heričko, M., Živkovič, A. and Rozman, I., An approach to optimizing software development team size. Inf. Process. Lett., 108, pp. 101–106, 2008. DOI:10.1016/j.ipl.2008.04.014
[45] Huang, S.-J. and Han, W.-M., Selection priority of process areas based on CMMI continuous representation. Inf. Manag., 43, pp. 297–307, 2006. DOI:10.1016/j.im.2005.08.003
[46] Sayyad-Shirabad, J. and Menzies, T.J., The PROMISE Repository of Software Engineering Databases. School of Information Technology and Engineering, University of Ottawa, Canada, 2005. Available at: http://promise.site.uottawa.ca/SERepository
[47] Pendharkar, P.C., Rodger, J.A. and Subramanian, G.H., An empirical study of the Cobb-Douglas production function properties of software development effort. Inf. Softw. Technol., 50, pp. 1181–1188, 2008. DOI:10.1016/j.infsof.2007.10.019
[48] Vogelezang, F. and Lesterhuis, A., Applicability of COSMIC full function points in an administrative environment: Experiences of an early adopter. Proc. 13th Int. Workshop on Software Measurement - IWSM, pp. 23–25, 2003.
[49] Kutner, M., Nachtsheim, C., Neter, J. and Li, W., Applied linear statistical models, 5th ed., McGraw-Hill/Irwin, 2004.
[50] Fernández-Diego, M. and González-Ladrón-de-Guevara, F., Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review. Inf. Softw. Technol., 56, pp. 527–544, 2014. DOI:10.1016/j.infsof.2014.01.003

J. de Andrés, received his BBA in 1993 and his PhD in 1998, both of them from the Universidad de Oviedo, Spain. He is an associate professor of Accounting and Finance at the University of Oviedo, Spain. His research interests include artificial intelligence applied to business organizations and technology management. Dr. De Andrés has published over 60 articles in



journals and refereed proceedings. He is Vice Dean of the School of Computer Science of the University of Oviedo. ORCID: orcid.org/0000-0001-6887-4087

D. Fernández-Lanvín, received his BSc. in Computing in 1998, his MSc. in Computing in 2002, and his PhD degree in 2007, all of them from the Universidad de Oviedo, Spain. From 1998 to 2003 he worked as a software engineer in several companies, developing mainly web applications. Since 2003 he has worked as an associate professor in the Department of Computing of the University of Oviedo, Spain. His research interests are Web development, HCI and project management. ORCID: orcid.org/0000-0002-5666-9809

P. Lorca, received his BBA in 1993 and his PhD in 2000, both from the Universidad de Oviedo, Spain. He is an associate professor of accounting and finance at the University of Oviedo, Spain. His research interests include ERP systems and technology management. Dr. Lorca has published over 50 articles in journals and refereed proceedings. He is a member of the Standing Committee on New Technologies and Accounting of the Spanish Association of Accounting and Business Administration. ORCID: orcid.org/0000-0002-7640-8286




DYNA

82 (192), August, 2015 is an edition consisting of 250 printed issues, which finished printing in the month of August of 2015 at Todograficas Ltda., Medellín - Colombia. The cover was printed on Propalcote C1S 250 g, the interior pages on Hanno Mate 90 g. The fonts used are Times New Roman and Imprint MT Shadow.


• How to promote distributed resource supply in a Colombian microgrid with economic mechanism?: System dynamics approach
• Loads characterization using the instantaneous power tensor theory
• Analysis of voltage sag compensation in distribution systems using a multilevel DSTATCOM in ATP/EMTP
• Estimating the produced power by photovoltaic installations in shaded environments
• Effects on lifetime of low voltage conductors due to stationary power quality disturbances
• Voltage regulation in a power inverter using a quasi-sliding control technique
• Distributed generation placement in radial distribution networks using a bat-inspired algorithm
• Representation of electric power systems by complex networks with applications to risk vulnerability assessment
• Vulnerability assessment of power systems to intentional attacks using a specialized genetic algorithm
• Spinning reserve analysis in a microgrid
• Methodology for relative location of voltage sag source using voltage measurements only
• An epidemiological approach to assess risk factors and current distortion incidence on distribution networks
• Generalities about design and operation of microgrids
• Energy considerations of social dwellings in Colombia according to the NZEB concept
• Measurement-based exponential recovery load model: Development and validation
• Complete power distribution system representation and state determination for fault location
• The impact of supply voltage distortion on the harmonic current emission of non-linear loads
• Design and construction of a reduced scale model to measure lightning induced voltages over inclined terrain
• Modeling dynamic procurement auctions of standardized supply contracts in electricity markets including bidders adaptation
• A new method to characterize power quality disturbances resulting from expulsion and current-limiting fuse operation
• Time-varying harmonic analysis of electric power systems with wind farms, by employing the possibility theory
• Water quality modeling of the Medellin river in the Aburrá Valley
• Compressive sensing: A methodological approach to an efficient signal processing
• Settlement analysis of friction piles in consolidating soft soils
• Influence of biomineralization on a profile of a tropical soil affected by erosive processes
• Analysis of the investment costs in municipal wastewater treatment plants in Cundinamarca
• Fast pyrolysis of biomass: A review of relevant aspects. Part I: Parametric study
• Fortran application to solve systems from NLA using compact storage
• Factors involved in the perceived risk of construction workers
• Cost estimation in software engineering projects with web components development
